Comparison of the amyloid plaque proteome in Down syndrome, early-onset Alzheimer's disease, and late-onset Alzheimer's disease

Down syndrome (DS) is the most prevalent chromosomal abnormality, characterized by the partial or complete triplication of chromosome 21 (Hsa21) . DS is strongly associated with Alzheimer's disease (AD) due to the triplication of the amyloid-β precursor protein ( APP ) gene on Hsa21 . Hsa21 also contains other genes of interest for AD, such as S100β (associated with astrocytes), DYRK1A (encoding a kinase that phosphorylates Tau), and SOD1 and BACE2 (related to oxidative stress) , which may play a role in AD in addition to APP . By age 40, virtually all individuals with DS exhibit AD pathological hallmarks, including extracellular amyloid-β (Aβ) accumulation and neurofibrillary tangles formed by hyperphosphorylated Tau . Brain atrophy, as well as elevated levels of Aβ42 in cerebrospinal fluid and of neurofilament light in plasma, have been observed in people with DS . These neuropathological features are qualitatively similar to other AD forms, such as early-onset (EOAD) and late-onset AD (LOAD) . Earlier investigations and more recent findings suggest that AD neuropathology extends beyond the Aβ and Tau proteins , implicating hundreds of associated proteins in biological dysfunctions such as synaptic transmission, immune response, mitochondrial metabolism, and oxidative stress . Proteomic comparisons between DS and EOAD Aβ plaques reveal common proteins enriched in both conditions, although differences in protein abundance have been observed . Despite recent progress, the molecular mechanisms of AD remain elusive, particularly regarding common pathophysiological mechanisms across AD subtypes and the specifics of AD neuropathogenesis in DS.
Individuals with DS develop AD neuropathology earlier than the general AD population, with Aβ and Tau accumulation patterns mirroring those in AD . However, the extent to which the protein composition of DS pathological lesions aligns with other AD subtypes remains uncertain . Identifying gene–phenotype associations in DS is also challenging due to the multiple triplicated genes . Given these complexities, DS is particularly relevant as an AD model: AD pathology is universally present in DS with increasing age, in contrast to the other autosomal dominant inherited forms of AD, and the pathology is more homogeneous and age-dependent than in LOAD . In light of these findings, this study aimed to characterize the proteomic differences among AD subtypes. In particular, we examined the Aβ plaque proteome in DS, EOAD, and LOAD, expanding on prior DS and EOAD comparisons . Our analysis revealed a substantial similarity of proteins enriched in Aβ plaques across all experimental groups, providing new evidence about the Aβ plaque-protein composition of individuals with DS in direct comparison with EOAD and LOAD. The proteomes also shared functional associations, revealing a consistent plaque-protein signature in DS, EOAD, and LOAD. Despite the enrichment of similar plaque proteins in all cohorts, we observed subtle differences in proteome composition, characterized by variations in protein abundance in each group. Corresponding observations were made in the proteomic composition of DS, EOAD, and LOAD non-plaque tissue compared to controls. These insights may contribute to identifying novel therapeutic targets or biomarkers tailored to the specific features of different AD subtypes.

Human brain tissue

Post-mortem formalin-fixed and paraffin-embedded (FFPE) brain tissues from DS, EOAD, LOAD, and cognitively normal age-matched controls ( n = 20 brain cases per cohort) were obtained from the National Institutes of Health NeuroBioBank (Maryland and Mt.
Sinai brain banks), the UK Brain Bank Network (South West Dementia brain bank), the IDIBAPS Biobank in Barcelona, the University of Pennsylvania, and NYU Grossman School of Medicine, including autopsy tissues from the NYU Alzheimer's Disease Research Center (ADRC), the Center for Biospecimen Research and Development (CBRD)/Department of Pathology, and the North American SUDEP Registry (NASR) at the NYU Comprehensive Epilepsy Center (CEC). FFPE tissue blocks containing the hippocampus and surrounding entorhinal and temporal cortex were used for the present study, as these regions contain a high amount of amyloid pathology. The cases were assessed by the brain repositories to confirm advanced AD by ABC neuropathological score . Further details about the cases are included in Table , and a detailed case history is provided in Supp. Table 1. Cases lacking information about α-synuclein and TDP-43 were stained by CBRD and assessed in the laboratory. Inclusion criteria for all cases included a tissue formalin fixation time below 3 years. We tolerated cases with TDP-43 (DS = 2, EOAD = 2, LOAD = 1) or α-synuclein (DS = 7, EOAD = 2, LOAD = 1) inclusions to increase the number of cases, as these co-pathologies are common in the elderly population. We performed one-way ANOVA followed by post hoc Tukey's multiple comparison test to evaluate age differences among the cohorts, and multiple-variable linear regression to determine the influence of the clinical traits age and sex on the proteomics results.

APOE genotyping

APOE genotyping was conducted for the cases where this information was not provided by the brain banks, following a previously established protocol . Briefly, DNA was extracted from FFPE tissue scrolls using the QIAamp DNA FFPE Advanced UNG Kit (Qiagen, cat. 56704) as indicated by the manufacturer. Two end-point PCRs were carried out using custom primers (forward primer 5ʹ AGGCCTACAAATCGGAACTGG 3ʹ; reverse primer 5ʹ CCTGTTCCACCAGGGGC 3ʹ; Sigma).
After the initial PCR, DNA was purified from the agarose gel using the QIAquick Gel Extraction Kit (Qiagen, cat. 28704), following the manufacturer's protocol. Subsequently, the gel-purified DNA was used for the second end-point PCR, followed by Sanger sequencing and sequence analysis using SnapGene 5.3.1 software.

Immunohistochemistry for Aβ and pTau

FFPE 8 µm tissue sections containing the hippocampus and adjacent temporal cortex were collected on glass slides. Sections underwent chromogenic immunohistochemistry for total Aβ (Aβ 17–24, clone 4G8, 1:1000, BioLegend, cat. 800710) and Tau pathology (PHF-1, 1:200, in-house developed mouse monoclonal antibody provided by Dr. Peter Davies, Albert Einstein University, NY, USA ). Sections were deparaffinized and rehydrated through a brief series of xylene and ethanol washes. Antigen retrieval consisted of a 7-min treatment with 88% formic acid followed by heat-induced citrate buffer treatment (10 mM sodium citrate, 0.05% Tween-20; pH 6). Endogenous peroxidase was quenched with 0.3% H2O2 solution for 20 min. Sections were blocked with 10% normal goat serum, followed by an overnight incubation with the primary antibody diluted in 4% normal goat serum. Sections were incubated for 1 h at room temperature with the appropriate secondary antibody (biotinylated anti-mouse IgG, 1:1000, Vector, cat. BA-2000). The staining signal was amplified using the VECTASTAIN Avidin–Biotin Complex (ABC) kit (Vector, cat. PK6100) for 30 min. The chromogen DAB was used to visualize the pathology. Sections were counterstained with hematoxylin and coverslipped using the appropriate mounting media. Aβ and Tau loads were quantified from whole-slide scans at 20X magnification using a Leica Aperio Versa 8 microscope. Five regions of interest (ROIs) in the temporal cortex and hippocampus (CA1, CA2, CA3) were used to calculate the percent positive pixel area.
We used a custom macro based on the 'Positive Pixel Count' algorithm in ImageScope v.12.4.3.5008, with the 'Color saturation threshold' set to 0 and the 'Upper limit of intensity for weak-positive pixels' (Iwp high) set to 190. Statistical differences between experimental groups were evaluated using one-way ANOVA followed by Tukey's post hoc multiple comparisons test in GraphPad Prism v 9.5.1. Data are shown as mean ± standard error of the mean (SEM).

Laser-capture microdissection

Unbiased localized proteomics was performed using the method outlined in Fig. a. FFPE autopsy tissues containing the hippocampus and adjacent entorhinal and temporal cortex were cut into 8 µm sections onto laser-capture microdissection (LCM)-compatible PET membrane slides (Leica, cat. 11505151). Amyloid-β deposits were visualized by immunohistochemistry using the pan-Aβ 4G8 antibody (1:1000, BioLegend, cat. 800710) with the chromogen 3,3′-diaminobenzidine (DAB, Thermo Scientific, cat. 34065). Classic cored, neuritic, and dense Aβ plaques (not diffuse or cotton-wool plaques) were targeted in the gray matter of the hippocampal formation, the adjacent subiculum and entorhinal cortex, and the gray matter of the temporal cortex in regions distant from the hippocampus, for a more homogeneous analysis. LCM was used to dissect a total area of 2 mm² of plaques and the same area of neighboring non-plaque tissue (Fig. b–c), at 10X magnification with an LMD6500 microscope equipped with a UV laser (Leica). We avoided diffuse amyloid aggregates in all cases to maintain sample consistency. Microdissected samples were centrifuged for 2 min at 14,000 g and stored at − 80 °C. We also microdissected adjacent plaque-free tissue from the same microscopic fields of view that contained microdissected amyloid plaques, at a sufficient distance from plaques to ensure that plaque-associated tissue was not collected (Fig. c).
These samples are henceforth referred to as 'non-plaque'. In addition, analogous non-plaque tissue from control cases was selected from hippocampal and temporal cortex regions matching those used in DS, EOAD, and LOAD, denoted as 'Control non-plaque'. The schematic diagrams for the figure were generated using BioRender.com.

Label-free quantitative mass spectrometry (MS) proteomics

The extraction and digestion of proteins from LCM-excised plaque and non-plaque tissue samples were performed using the SPEED sample prep workflow . Briefly, tissue sections were incubated in 10 μl of LC–MS grade formic acid (FA) for 5 min at 73 °C. The FA was then neutralized by a tenfold dilution with 2 M Tris containing 10 mM tris(2-carboxyethyl)phosphine (TCEP) and 20 mM 2-chloroacetamide (CAA), followed by incubation at 90 °C for 1 h. For enzymatic digestion, samples were diluted sixfold with water containing 0.2 μg of sequencing-grade trypsin. Digestion was carried out overnight at 37 °C and halted by acidification to 2% TFA. Liquid chromatography–tandem mass spectrometry (LC–MS/MS) was performed online on an Evosep One LC using a Dr. Maisch ReproSil-Pur 120 C18 AQ analytical column (1.9 μm bead, 150 μm ID, 15 cm long). Peptides were gradient-eluted from the column directly into an Orbitrap HF-X mass spectrometer using the 88-min extended Evosep method (SPD15) at a flow rate of 220 nl/min. The mass spectrometer was operated in data-independent acquisition (DIA) mode, acquiring MS/MS fragmentation across 22 m/z windows after every MS full-scan event. High-resolution full MS spectra were acquired with a resolution of 120,000, an Automatic Gain Control (AGC) target of 3e6, a maximum ion injection time of 60 ms, and a scan range of 350–1650 m/z.
Following each full MS scan, 22 data-independent higher-energy collisional dissociation (HCD) MS/MS scans were acquired at a resolution of 30,000, an AGC target of 3e6, and a stepped normalized collision energy (NCE) of 22.5, 25, and 27.5.

Proteomics computational analysis

The MS data were analyzed using Spectronaut software ( https://biognosys.com/shop/spectronaut ), searching in direct-DIA mode (without an experimental spectral library) against the Homo sapiens UniProt database ( http://www.uniprot.org/ ) combined with a list of common laboratory contaminants. The integrated search engine Pulsar was employed for the database search. Enzyme specificity was set to trypsin, allowing up to two missed cleavages. The search included oxidation of methionine as a variable modification and carbamidomethylation of cysteines as a fixed modification. The false discovery rate (FDR) for peptide, protein, and site identification was limited to 1%. Quantification was performed at the MS/MS level, utilizing the three most intense fragment ions per precursor. Independent quantification of Aβ was manually curated and incorporated into the search results, consistent with previous studies . The intensity of Aβ was quantified by integrating the area under the curve for the peptide LVFFAEDVGSNK, which corresponds to amino acids 17–28 of Aβ. This peptide does not differentiate between cleaved and full-length sequences but shows strong enrichment and correlation with Aβ pathology . Data were log-transformed and normalized using the median intensity across all samples. For subsequent data analysis, Perseus , the R environment ( http://www.r-project.org/ ), or GraphPad Prism was used for statistical computing and graphical representation.
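The log-transform and median-normalization step described above can be sketched as follows. This is a minimal numpy illustration only: the actual processing was performed in Spectronaut/Perseus/R, and the function name and matrix layout here are assumptions.

```python
import numpy as np

def median_normalize(intensities):
    """Log2-transform raw protein intensities and center each sample
    (column) on its median, ignoring missing values (NaN).

    intensities: 2D array of shape (proteins, samples).
    """
    log_int = np.log2(intensities)
    # Subtract each sample's median so sample medians align at zero.
    return log_int - np.nanmedian(log_int, axis=0)
```

After this step, every sample's median log2 intensity is zero, which removes sample-to-sample loading differences before downstream statistics.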
Proteomics statistical analyses

The protein expression matrix ( n = 2080) was filtered to remove common laboratory contaminants, non-human proteins, and proteins observed in fewer than half of the samples across all four groups evaluated ( n = 1995 proteins retained). For principal component analysis (PCA), missing values were imputed from a normal distribution with a width of 0.3 and a downshift of 1.8 (relative to the measured protein intensity distribution) using Perseus v 1.6.14.0 . We performed paired t tests to evaluate amyloid plaque enrichment relative to the adjacent non-plaque tissue. In addition, we performed unpaired t tests to compare the protein enrichment of non-plaque tissue from DS, EOAD, and LOAD against control tissue samples. Proteins were deemed significantly altered if they had a false discovery rate (FDR) below 5% (permutation-based FDR with 250 data randomizations). We further filtered the significant proteins by requiring a fold-change (FC) > 1.5 between the groups. The proteins of interest common to each pairwise comparison from 'plaques vs. non-plaque' and 'non-plaque vs. control non-plaque' tissue were evaluated with Venn diagrams generated in InteractiVenn . Pearson's correlation between the differentially abundant proteins identified in the DS, EOAD, and LOAD pairwise comparisons was evaluated using GraphPad Prism v 9.5.1. For this analysis, we considered proteins that were significantly altered in at least one of the groups, with FC > 1.5, in a given correlation.

Mapping protein-coding genes to human chromosomes

Genes coding for the proteins identified in the study were mapped to their respective chromosomes in R using the 'mapIds' function from the AnnotationDbi package v 1.62.2 with the genome-wide annotation for human, org.Hs.eg.db v 3.17.0.
The percentage of significantly altered proteins was calculated by dividing the number of significant proteins on each chromosome by the total number of proteins mapped to that chromosome. The location of each protein-coding gene on chromosome 21 ( Homo sapiens autosome 21, or Hsa21) was determined using the UCSC Human Genome Browser .

Gene Ontology functional annotation

Gene Ontology (GO) enrichment analysis was performed in R using the enrichGO function from the clusterProfiler package v 4.8.2, with the genome-wide annotation for human, org.Hs.eg.db v 3.17.0. GO terms were filtered to an FDR < 0.05 using the Benjamini–Hochberg method . Isoform labels were excluded from UniProt accession IDs for GO functional annotation. Duplicate proteins were removed, and the resulting list of 1980 proteins without isoforms was used as the background dataset. Functional annotation focused on GO biological process (GO BP) and GO cellular component (GO CC). Heavily redundant GO terms were reduced using the simplify function from clusterProfiler, with a cutoff of 0.7. The top ten significantly enriched GO terms for highly abundant proteins in 'plaques vs. non-plaque' and 'non-plaque vs. control non-plaque' for each experimental group were selected by adjusted p value (−log10 adj. p value) and compared using heatmaps generated in GraphPad Prism.

Protein–protein interaction networks

Protein–protein interaction (PPI) networks were built in Cytoscape v 3.10.0 using 'STRING: protein query' (STRING v 11.5 database ) with a (high) confidence score of 0.7. Networks reflect functional and physical protein associations for the differentially abundant proteins in DS, EOAD, and LOAD. Node size indicates the adjusted p value (−log10[ p value]) from the t tests, and node color indicates fold-change (log2[FC]). Disconnected nodes were not depicted in the final network. Dotted-line colored boxes highlight proteins clustered by functional similarity.
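The per-chromosome percentage calculation described above amounts to a simple ratio per chromosome. The following is a hedged Python sketch of that calculation; the original mapping was done with AnnotationDbi's mapIds in R, and the function and variable names here are illustrative.

```python
from collections import Counter

def percent_significant_per_chromosome(gene_chrom, significant_genes):
    """For each chromosome, the percentage of mapped proteins that were
    significantly altered.

    gene_chrom: {gene ID: chromosome} mapping (e.g. as returned by
                AnnotationDbi::mapIds in the original R workflow).
    significant_genes: set of significantly altered gene IDs.
    """
    total = Counter(gene_chrom.values())
    sig = Counter(c for g, c in gene_chrom.items() if g in significant_genes)
    return {chrom: 100.0 * sig.get(chrom, 0) / n for chrom, n in total.items()}
```

For example, with two Hsa21 genes of which one is significant, the function reports 50% for chromosome 21.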
Comparison with previous AD proteomics studies in human brain

Our data were compared to previous proteomic studies using the NeuroPro database (v1.12; https://neuropro.biomedical.hosting/ ) . NeuroPro is a combined analysis of differentially enriched proteins found in human AD brain tissues across 38 published proteomics studies (at the time of use for this study, February 2024). The NeuroPro database was filtered to include only proteins found in advanced AD proteomics studies (AD and AD/C). Alternatively, we applied a second filter to advanced AD to include proteomics studies in 'plaques' only. Protein lists obtained after filtering the NeuroPro database were manually curated to address currently 'obsolete deleted', 'merged', or 'demerged' UniProt accession IDs. This manual curation of the NeuroPro protein lists allowed an accurate comparison between the proteins identified in previous proteomics studies and our present study. The UniProt accession IDs and gene IDs of the proteins identified in the current study were matched to the IDs from NeuroPro to identify proteins that have not previously been associated with human AD and amyloid plaque proteomics. Additionally, as the NeuroPro database does not include DS proteomics data, we compared our current DS plaque dataset with our previous DS plaque proteomics study . We identified the common proteins using the whole data matrix of both studies, comparing the UniProt accession ID and the gene ID to account for any identifier differences. Then, we identified the significantly altered proteins in each study; for our dataset, significantly altered proteins were defined by FDR ≤ 5% and a fold-change ≥ 1.5. In our previous study, significantly altered proteins were defined by p < 0.05 and a fold-change ≥ 1.5. For the comparison, we included both the significantly abundant and the significantly decreased plaque proteins.
We evaluated common significant proteins from the datasets using Venn diagrams generated in InteractiVenn . In addition, we performed Pearson's correlation analysis between datasets using GraphPad Prism v 9.5.1. For the correlation analysis, we considered proteins that were significantly altered in at least one of the datasets.

Validation of proteins of interest

The proteins chloride voltage-gated channel 6 (CLCN6) and tripeptidyl peptidase 1 (TPP1, also known as CLN2), both enriched in Aβ plaques, were validated using immunohistochemistry (IHC). CLCN6 was selected because of its significantly high abundance in DS plaques, the limited evidence of its presence in plaques and of its role in AD, and its previously described function in the central nervous system . TPP1 was selected as another lysosomal protein that has been associated with Aβ plaques in previous human proteomics studies but has not been validated by IHC. For immunolabeling, 8 µm serial sections adjacent to those used for proteomic analysis were deparaffinized and rehydrated. Sections from six cases in each cohort were subjected to antigen retrieval in a microwave, using Tris–EDTA buffer (pH 9, Proteintech) diluted 1X for CLCN6, and sodium citrate buffer (pH 6) followed by formic acid treatment for TPP1. Primary antibodies against CLCN6 (1:350, Thermo Scientific, cat. OSC00147W-100UL), TPP1 (1:100, Sigma-Aldrich, cat. HPA037709-100UL), and the pan-Aβ 4G8 antibody (1:1000) were incubated overnight, followed by Alexa Fluor 488 and 647 secondary antibodies (Thermo Scientific). Additionally, we performed co-staining with MAP2 (1:200, BD Biosciences, cat. 556320) and CLCN6 to assess the cell specificity of CLCN6 expression. Whole-slide scans were acquired at 20X magnification using a Leica Aperio Versa 8 microscope. For CLCN6 quantification, ten regions of interest (ROIs) from the same anatomical areas used for LCM were analyzed using a custom macro in ImageJ 1.54f.
Briefly, a mask was generated to delineate the plaque areas in the field of view, which was then applied to the CLCN6 channel to measure fluorescence intensity (total fluorescence = integrated density − [area measured × background mean gray value]) or the area occupied by CLCN6-positive objects using the "Measure" function. The CLCN6-positive area was normalized to the total area of the plaques. Non-plaque CLCN6 area and fluorescence were measured with a modified macro, in which plaque ROIs were first subtracted from the CLCN6 channel before proceeding with the quantification described above. Significant differences were assessed using paired t tests (for comparisons between plaque and non-plaque tissue within the same case) or unpaired t tests (for comparisons between control non-plaque tissue and non-plaque tissue from DS, EOAD, or LOAD), with analyses performed in GraphPad Prism. TPP1 was quantified using QuPath v 0.5.1. Briefly, ten regions of interest (ROIs) were manually annotated in the gray matter of the hippocampal formation and temporal cortex. Aβ plaques were annotated using a pixel classifier with a Gaussian prefilter, a smoothing sigma of 2, and a threshold of 30. Objects below 350 μm² were filtered out of the final annotations. Adjacent non-plaque tissue was selected using the same classifier, but ignoring pixels above threshold and assigning the remaining detected pixels to the class "Non-plaques". TPP1-positive objects were annotated using a similar pixel classifier with a smoothing sigma of 1.5 and a threshold of 26. Objects below 20 μm² were filtered out of the final annotations. TPP1 density was calculated for positive immunolabeling inside plaques and for TPP1 in the non-plaque region, using the formula TPP1 density = (sum of TPP1-positive areas / sum of plaque areas) × 100. Statistical analyses (t tests) were performed in GraphPad Prism.
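The two quantification formulas above (background-corrected fluorescence and TPP1 density) translate directly to code. The sketch below uses illustrative function names; the actual measurements were made in ImageJ and QuPath.

```python
def background_corrected_fluorescence(integrated_density, area, background_mean):
    """Total fluorescence = integrated density - (area * mean background
    gray value), mirroring the ImageJ formula described in the text."""
    return integrated_density - area * background_mean

def marker_density(marker_areas, plaque_areas):
    """Density = (sum of marker-positive areas / sum of plaque areas) * 100,
    as used for TPP1-positive objects inside plaques."""
    return 100.0 * sum(marker_areas) / sum(plaque_areas)
```

For instance, two 10 μm² TPP1-positive objects inside 200 μm² of plaque area give a density of 10%.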
Weighted gene correlation network analysis

We used the WGCNA package (version 1.72.1) in the R environment to conduct a weighted correlation network analysis of protein expression, adapted from the WGCNA framework. First, the curated protein expression matrix from the proteomics analysis ( n = 1995) underwent quality control to identify samples with excessive missing values. Networks were then constructed with the blockwiseModules function for each cohort (DS, EOAD, and LOAD), creating separate networks for Aβ plaques and non-plaque tissue within each cohort. The networks were constructed as "signed networks", with the topological overlap matrix (TOM) also set to "signed". The TOMdenom parameter was specified as "mean" to facilitate the capture of tightly connected protein groups within the network. The soft-thresholding power was set to 9 for DS plaques and 10 for DS non-plaques, 7 for EOAD plaques and 11 for EOAD non-plaques, and 18 for LOAD plaques and 14 for LOAD non-plaques. Additional parameters included a minimum module size of 10, a mergeCutHeight of 0.07 to merge highly similar modules more stringently, and a deepSplit value of 4 to facilitate finer differentiation of modules. A minimum intramodular connectivity (kME) of 0.3 was required for proteins to remain in a given module, with a reassignment threshold of 0.05 allowing minor reallocation of proteins to more appropriate modules if necessary. The biweight midcorrelation (bicor) was used as the primary correlation measure, with a fallback to Pearson correlation for outlier adjustment where necessary (maxPOutliers = 0.1). Numeric module labels were employed for consistency, and to reduce the complexity of module visualization, the pamRespectsDendro option was set to FALSE.
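For intuition, the "signed network" construction rescales pairwise correlations into [0, 1] before raising them to the soft-thresholding power, so anti-correlated proteins get near-zero adjacency. A numpy sketch under those assumptions follows; WGCNA's blockwiseModules performs this internally, and the study's default correlation was bicor rather than the Pearson correlation used here for simplicity.

```python
import numpy as np

def signed_adjacency(expr, power):
    """Signed WGCNA-style adjacency: a_ij = ((1 + cor(i, j)) / 2) ** power.

    expr: (proteins, samples) expression matrix.
    power: soft-thresholding beta (e.g. 9 for the DS plaque network
           described in the text).
    """
    cor = np.corrcoef(expr)  # Pearson here; the study's default was bicor
    return ((1.0 + cor) / 2.0) ** power
```

Raising to the power suppresses weak correlations while preserving strong ones, which is what makes the resulting network approximately scale-free.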
After running the blockwiseModules function, we used the signedKME function within the WGCNA package to perform an iterative module cleanup and refine the module assignments in the protein correlation networks, as previously described . The iterative cleanup process involved creating a bicor correlation table to assess the relationship between each protein and the respective module eigenproteins, referred to as kME. Initially, proteins with an intramodular kME below 0.30 were removed. The reassignment process consisted of reallocating proteins in the gray module (those not assigned to any module) to any module with a maximum kME greater than 0.30, and reassigning proteins whose intramodular kME was more than 0.10 below their maximum kME relative to any other module. This procedure continued iteratively until the minimum kME of the proteins in a module was above the 0.30 threshold and the difference between the maximum kME and the intramodular kME was less than 0.1, or for up to 30 iterations if the module reassignment criteria were not met. After each reassignment, the module eigenproteins and the kME table were recalculated using the moduleEigengenes and signedKME functions, ensuring that all module assignments remained valid and appropriately ranked. Ultimately, this cleanup procedure reinforced the reliability of the module structure by systematically refining the assignment of proteins to their respective modules based on kME values. After the iterative module cleanup, correlations between module eigenproteins (MEs) and clinical variables ( APOE genotype, age, sex, co-pathologies, and Aβ and pTau levels) were calculated and plotted in a heatmap using the labeledHeatmap function of the WGCNA package.
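A single pass of the cleanup rules above can be sketched as follows. This is an illustrative simplification only: the study ran the procedure iteratively in R with WGCNA's signedKME, recalculating eigenproteins between passes, and all names here are hypothetical. A module label of -1 stands for the gray (unassigned) module.

```python
import numpy as np

def prune_by_kme(kme, labels, min_kme=0.30, max_gap=0.10):
    """One pass of kME-based module cleanup.

    kme: (proteins, modules) array of kME values.
    labels: per-protein module index (column of kme), -1 = gray module.

    Rules, following the text: drop proteins whose intramodular kME is
    below min_kme (send to gray); rescue gray proteins whose best module
    kME exceeds min_kme; reassign proteins whose intramodular kME trails
    their best module's kME by more than max_gap.
    """
    labels = labels.copy()
    for i, m in enumerate(labels):
        best = int(np.argmax(kme[i]))
        if m < 0:
            # Gray protein: rescue if some module fits well enough.
            if kme[i, best] > min_kme:
                labels[i] = best
            continue
        own = kme[i, m]
        if own < min_kme:
            labels[i] = -1  # too weakly connected: move to gray
        elif kme[i, best] - own > max_gap:
            labels[i] = best  # another module fits clearly better
    return labels
```

In the full procedure, this pass would be repeated (with eigenproteins and kME recomputed each time) until assignments stabilize or 30 iterations are reached.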
Subsequently, GO enrichment analysis was performed for each of the correlation networks using the enrichGO function from the clusterProfiler package, filtering GO terms to an FDR < 0.05 using the Benjamini–Hochberg method, followed by the simplify function with a cutoff of 0.7 to remove heavily redundant terms.
We performed one-way ANOVA analysis followed by post hoc Tukey’s multiple comparison test to evaluate age differences among the cohorts and multiple variable linear regression to determine the influence of clinical traits age and sex in the proteomics results. APOE genotyping was conducted for the cases where this information was not provided by the brain banks, following a previously established protocol . Briefly, DNA extraction from FFPE tissue scrolls was performed using the QIAamp DNA FFPE Advanced UNG Kit (Qiagen, cat. 56,704) as indicated by the manufacturer. Two end-point PCRs were carried out using custom primers (forward primer 5ʹ AGGCCTACAAATCGGAACTGG 3ʹ; reverse primer 5ʹ CCTGTTCCACCAGGGGC 3ʹ; Sigma). After the initial PCR, DNA purification from the agarose gel was accomplished using the QIAquick Gel Extraction Kit (Qiagen, cat. 28,704), following the manufacturer's protocol. Subsequently, the gel-purified DNA was used for the second end-point PCR, followed by Sanger sequencing and sequence analysis using SnapGene 5.3.1 software. FFPE 8 µm tissue sections that contain the hippocampus and adjacent temporal cortex were collected on glass slides. Sections underwent chromogenic immunohistochemistry for total Aβ (Aβ 17–24 clone 4G8, 1:1000, BioLegend, cat. 800,710) and Tau pathology (PHF-1, 1:200, in house developed mouse monoclonal antibody provided by Dr. Peter Davies, Albert Einstein University, NY, USA ). Sections were deparaffinized and rehydrated through a brief series of xylene and ethanol washes. Antigen retrieval methods performed include a 7-min treatment of 88% formic acid followed by heat-induced citrate buffer treatment (10 mM sodium citrate, 0.05% Tween-20; pH 6). Endogenous peroxidase was quenched with 0.3% H 2 O 2 solution for 20 min. Sections were blocked with 10% normal goat serum, followed by an overnight incubation with the primary antibody diluted in 4% normal goat serum. 
Sections were incubated for 1 h at room temperature with the appropriate secondary antibody (biotinylated HRP mouse IgG, 1:1000, Vector, cat. BA-2000). Staining signal was amplified using VECTASTAIN Avidin–Biotin Complex (ABC) kit (Vector, cat. PK6100) for 30 min. The chromogen DAB was used to visualize the pathology. Sections were counterstained with hematoxylin and coverslipped using the appropriate mounting media. Aβ and Tau quantities were quantified from whole slide scans at 20X magnification using a Leica Aperio Versa 8 microscope. Five regions of interest (ROIs) in the temporal cortex and hippocampus (CA1, CA2, CA3) were used to calculate the percent positive pixel area. We used a custom macro based on the ‘Positive Pixel Count’ algorithm in ImageScope v.12.4.3.5008, with a modification to the ‘Color saturation threshold’ = 0 and the ‘Upper limit of intensity for weak-positive pixels’ (Iwp high) = 190. Statistical differences between experimental groups were evaluated using one-way ANOVA followed by Tukey’s post hoc multiple comparisons test in GraphPad Prism v 9.5.1. Data are shown as mean ± standard error of the mean (SEM). Unbiased localized proteomics was performed using the method outlined in Fig. a. FFPE tissues were cut into 8 µm sections from autopsy hippocampal and adjacent entorhinal and temporal cortex tissues onto laser-capture microdissection (LCM) compatible PET membrane slides (Leica, cat. 11,505,151). Amyloid-β deposits were visualized by immunohistochemistry using the pan-Aβ 4G8 antibody (1:1000, BioLegend, cat. 800,710), using the chromogen 3,3-diaminobenzidine (DAB, Thermo Scientific, cat. 34,065) reaction. 
Classic cored, neuritic, and dense Aβ plaques (not diffuse or cotton-wool plaques) were targeted in the gray matter of the hippocampal formation and the adjacent subiculum and entorhinal cortex, as well as in the gray matter of the temporal cortex in regions distant from the hippocampus, for a more homogeneous analysis. LCM was used to dissect a total area of 2 mm² of plaques and an equal area of neighboring non-plaque tissue (Fig. b–c), at 10X magnification with an LMD6500 microscope equipped with a UV laser (Leica). We avoided diffuse amyloid aggregates in all cases to maintain sample consistency. Microdissected samples were centrifuged for 2 min at 14,000 g and stored at −80 °C. We also microdissected adjacent tissue free of plaques from the same microscopic fields of view that contained microdissected amyloid plaques, but at a sufficient distance from plaques to ensure that plaque-associated tissue was not collected (Fig. c). These samples are henceforth referred to as ‘non-plaque’. In addition, analogous non-plaque tissue from control cases was selected from the same hippocampal and temporal cortex regions as those used in DS, EOAD, and LOAD, denoted as ‘Control non-plaque’. The schematic diagrams for the figure were generated using BioRender.com. The extraction and digestion of proteins from LCM-excised plaque and non-plaque tissue samples were performed using the SPEED sample preparation workflow. Briefly, tissue sections were incubated in 10 μl of LC–MS-grade formic acid (FA) for 5 min at 73 °C. The FA was then neutralized by a tenfold dilution with 2 M Tris containing 10 mM tris(2-carboxyethyl)phosphine (TCEP) and 20 mM chloroacetamide (CAA), followed by an incubation at 90 °C for 1 h. For enzymatic digestion, samples were diluted sixfold with water containing 0.2 μg of sequencing-grade trypsin. Digestion was carried out overnight at 37 °C and halted by acidification to 2% TFA.
Liquid chromatography–tandem mass spectrometry (LC–MS/MS) was performed online on an Evosep One LC using a Dr. Maisch ReproSil-Pur 120 C18 AQ analytical column (1.9-μm bead, 150 μm ID, 15 cm long). Peptides were gradient eluted from the column directly into an Orbitrap HF-X mass spectrometer using the 88-min extended Evosep method (SPD15) at a flow rate of 220 nl/min. The mass spectrometer was operated in data-independent acquisition (DIA) mode, acquiring MS/MS fragmentation across 22 precursor isolation windows after every MS full-scan event. High-resolution full MS spectra were acquired with a resolution of 120,000, an Automatic Gain Control (AGC) target of 3e6, a maximum ion injection time of 60 ms, and a scan range of 350–1650 m/z. Following each full MS scan, 22 data-independent higher-energy collisional dissociation (HCD) MS/MS scans were acquired at a resolution of 30,000, an AGC target of 3e6, and a stepped normalized collision energy (NCE) of 22.5, 25, and 27.5. The analysis of the MS data was conducted using the Spectronaut software ( https://biognosys.com/shop/spectronaut ), searching in direct-DIA mode (i.e., without an experimental spectral library) against the Homo sapiens UniProt database ( http://www.uniprot.org/ ) combined with a list of common laboratory contaminants. The integrated search engine Pulsar was employed for the database search. Enzyme specificity was set to trypsin, allowing for up to two missed cleavages. The search included oxidation of methionine as a variable modification and carbamidomethylation of cysteines as a fixed modification. The false discovery rate (FDR) for peptide, protein, and site identification was limited to 1%. Quantification was performed at the MS/MS level, using the three most intense fragment ions per precursor. Independent quantification of Aβ was manually curated and incorporated into the search results, consistent with previous studies.
The intensity of Aβ was quantified by integrating the area under the curve for the peptide LVFFAEDVGSNK, which corresponds to amino acids 17–28 of Aβ. This peptide does not differentiate between cleaved and full-length sequences but shows strong enrichment and correlation with Aβ pathology. Data were log-transformed and normalized using the median intensity across all samples. For subsequent data analysis, Perseus, the R environment ( http://www.r-project.org/ ), or GraphPad Prism was used for statistical computing and graphical representation. The protein expression matrix (n = 2080) was filtered to remove common laboratory contaminants, non-human proteins, and proteins not detected in at least half of the cases in any of the four groups evaluated (n = 1995 retained). For principal component analysis (PCA), missing values were imputed from the normal distribution with a width of 0.3 and a downshift of 1.8 (relative to the measured protein intensity distribution) using Perseus v 1.6.14.0. We performed paired t tests to evaluate amyloid plaque enrichment relative to the non-plaque tissue adjacent to the amyloid plaques. In addition, we performed unpaired t tests to compare the protein enrichment of non-plaque tissue from DS, EOAD, and LOAD against control tissue samples. Proteins were deemed significantly altered if they had a false discovery rate (FDR) below 5% (permutation-based FDR with 250 data randomizations). We further filtered the significant proteins to those with a fold-change (FC) difference > 1.5-fold between the groups. The proteins of interest common to each pairwise comparison from ‘plaques vs. non-plaque’ and ‘non-plaque vs. control non-plaque’ tissue were evaluated by Venn diagrams generated with InteractiVenn. Pearson’s correlation between the DS, EOAD, and LOAD differentially abundant proteins identified in the pairwise comparisons was evaluated using GraphPad Prism v 9.5.1.
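As a concrete illustration of the testing scheme above, the paired design can be sketched in Python. This is a simplified, illustrative re-implementation, not the Perseus pipeline used in the study: it builds per-protein sign-flip permutation p values and applies Benjamini–Hochberg control as a stand-in for Perseus's permutation-based FDR, followed by the 1.5-fold-change filter.

```python
import numpy as np

def paired_plaque_test(plaque, nonplaque, n_perm=250, fdr=0.05, fc=1.5, seed=0):
    """plaque, nonplaque: (proteins x cases) log2 intensity matrices whose
    columns are paired per case. Returns (hit mask, log2 fold-changes)."""
    rng = np.random.default_rng(seed)
    d = plaque - nonplaque                        # paired log2 differences
    n_cases = d.shape[1]

    def tstat(mat):                               # one-sample t on paired differences
        return mat.mean(axis=1) / (mat.std(axis=1, ddof=1) / np.sqrt(n_cases))

    t_obs = tstat(d)
    exceed = np.zeros(d.shape[0])
    for _ in range(n_perm):                       # sign-flip permutations = paired null
        flips = rng.choice([-1.0, 1.0], size=n_cases)
        exceed += np.abs(tstat(d * flips)) >= np.abs(t_obs)
    p = (exceed + 1.0) / (n_perm + 1.0)

    # Benjamini-Hochberg step-up control at the chosen FDR level
    order = np.argsort(p)
    m = p.size
    passed = np.zeros(m, dtype=bool)
    below = p[order] <= fdr * np.arange(1, m + 1) / m
    if below.any():
        passed[order[: np.max(np.nonzero(below)[0]) + 1]] = True

    log2fc = d.mean(axis=1)                       # mean paired log2 fold-change
    return passed & (np.abs(log2fc) >= np.log2(fc)), log2fc
```

Note that with 250 permutations the smallest attainable p value is 1/251, which is one reason permutation-based FDR procedures pool information across proteins in practice.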
For this analysis, we considered proteins that were significantly altered and had an FC > 1.5 in at least one of the groups in a given correlation. Genes coding for the proteins identified in the study were mapped to their respective chromosomes in R using the function ‘mapIds’ from the AnnotationDbi package v 1.62.2 with the genome-wide annotation for human, org.Hs.eg.db v 3.17.0. The percentage of significantly altered proteins was calculated by dividing the number of significant proteins for each chromosome by the total number of proteins mapped to that chromosome. The location of each protein-coding gene on chromosome 21 (Homo sapiens autosome 21, or Hsa21) was determined using the UCSC Human Genome Browser. Gene Ontology (GO) enrichment analysis was performed in R using the function enrichGO from the package clusterProfiler v 4.8.2, with the genome-wide annotation for human, org.Hs.eg.db v 3.17.0. GO terms were filtered to an FDR < 0.05 using the Benjamini–Hochberg method. Isoform labels were excluded from UniProt accession IDs for GO functional annotation. Duplicate proteins were removed, and the resulting list of 1980 proteins lacking isoforms was utilized as the background dataset. Functional annotation focused on GO biological process (GO BP) and GO cellular component (GO CC). Heavily redundant GO terms were reduced using the simplify function from clusterProfiler, with a cutoff of 0.7. The top ten significantly enriched GO terms for highly abundant proteins in ‘plaques vs. non-plaque’ and ‘non-plaque vs. control non-plaque’ for each experimental group were selected by adjusted p value (−Log10 adj. p value) and compared using heatmaps generated in GraphPad Prism. Protein–protein interaction (PPI) networks were made in Cytoscape v 3.10.0 using ‘STRING: protein query’ (STRING v 11.5 database) with a high confidence score of 0.7.
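The per-chromosome percentages described earlier in this section reduce to a simple tally once each protein's gene has been mapped to a chromosome. A minimal Python sketch follows; the gene-to-chromosome assignments in the usage example are illustrative stand-ins for the AnnotationDbi/org.Hs.eg.db mapping used in the study.

```python
def pct_significant_per_chromosome(gene_to_chrom, significant_genes):
    """gene_to_chrom: {gene: chromosome} for all detected proteins.
    significant_genes: set of genes significantly altered in a comparison.
    Returns {chromosome: percent of its detected genes that are significant}."""
    totals, hits = {}, {}
    for gene, chrom in gene_to_chrom.items():
        totals[chrom] = totals.get(chrom, 0) + 1
        if gene in significant_genes:
            hits[chrom] = hits.get(chrom, 0) + 1
    return {chrom: 100.0 * hits.get(chrom, 0) / n for chrom, n in totals.items()}

# Illustrative usage with a tiny hand-made mapping (not study data):
mapping = {"APP": "21", "GART": "21", "APOE": "19", "CLU": "8"}
pcts = pct_significant_per_chromosome(mapping, {"APP", "APOE"})
# → {"21": 50.0, "19": 100.0, "8": 0.0}
```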
Networks reflect functional and physical protein associations for the differentially abundant proteins in DS, EOAD, and LOAD. Node size indicates the adjusted p value (−log10[p value]) from the t tests, and node color indicates fold-change (log2[FC]). Disconnected nodes were not depicted in the final network. Dotted-line colored boxes highlight proteins clustered by functional similarity. Our data were compared to previous proteomic studies using the NeuroPro database (v1.12; https://neuropro.biomedical.hosting/ ). NeuroPro is a combined analysis of differentially enriched proteins found in human AD brain tissues identified in 38 published proteomics studies (at the time of use for this study, February 2024). The NeuroPro database was filtered to include only proteins found in advanced AD proteomics studies (AD and AD/C). Alternatively, we applied a second filter to advanced AD to include proteomics studies in ‘plaques’ only. Protein lists obtained after filtering the NeuroPro database were manually curated to address currently ‘obsolete/deleted’, ‘merged’, or ‘demerged’ UniProt accession IDs, providing an accurate comparison between the proteins identified in previous proteomics studies and in our present study. The UniProt accession IDs and gene IDs of the proteins identified in the current study were matched to the IDs from NeuroPro to identify proteins that have not been previously associated with human AD and amyloid plaque proteomics. Additionally, as the NeuroPro database does not include DS proteomics data, we compared our current DS plaque dataset with our previous DS plaque proteomics study. We identified the common proteins using the whole data matrix of both studies, comparing both the UniProt accession ID and the gene ID to account for any identifier differences.
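Matching proteins across studies on either identifier, as described above, can be sketched as follows. The entries in the usage example are illustrative: P05067 and P02649 are the canonical APP and APOE accessions, while the replacement accession is hypothetical, standing in for a merged/demerged ID.

```python
def match_datasets(current, previous):
    """current, previous: lists of (uniprot_accession, gene_id) pairs.
    A protein counts as common if either identifier matches,
    with the accession taking priority over the gene ID."""
    prev_by_acc = {acc: (acc, gene) for acc, gene in previous}
    prev_by_gene = {gene: (acc, gene) for acc, gene in previous}
    common = {}
    for acc, gene in current:
        hit = prev_by_acc.get(acc) or prev_by_gene.get(gene)
        if hit is not None:
            common[(acc, gene)] = hit
    return common

# Illustrative: APOE still matches by gene ID despite a changed (hypothetical) accession.
overlap = match_datasets(
    [("P05067", "APP"), ("P02649", "APOE"), ("X99999", "NOVEL1")],
    [("P05067", "APP"), ("A0A000DEMO", "APOE")],
)
```

Falling back to the gene ID is what absorbs accession churn between UniProt releases, at the cost of occasionally conflating isoform-level entries that share a gene symbol.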
Then, we identified the significantly altered proteins in each study; for our dataset, significantly altered proteins were defined by FDR ≤ 5% and a fold-change ≥ 1.5, and in our previous study by p < 0.05 and a fold-change ≥ 1.5. For the comparison, we included both the significantly increased and the significantly decreased plaque proteins. Common significant proteins between the datasets were evaluated using Venn diagrams generated with InteractiVenn. In addition, we performed Pearson’s correlation analysis between datasets using GraphPad Prism v 9.5.1, considering proteins that were significantly altered in at least one of the datasets. The proteins chloride voltage-gated channel 6 (CLCN6) and tripeptidyl peptidase 1 (TPP1, also known as CLN2), both enriched in Aβ plaques, were validated using immunohistochemistry (IHC). CLCN6 was selected because of its significantly high abundance in DS plaques, the limited prior evidence of its presence in plaques or of its role in AD, and its previously described function in the central nervous system. TPP1 was selected as another lysosomal protein that has been associated with Aβ plaques in previous human proteomics studies but has not been validated by IHC. For immunolabeling, 8 µm serial sections adjacent to those used for proteomic analysis were deparaffinized and rehydrated. Sections from six cases in each cohort were subjected to microwave antigen retrieval, using Tris–EDTA buffer (pH 9, Proteintech) diluted to 1X for CLCN6, or sodium citrate buffer (pH 6) followed by formic acid treatment for TPP1. Primary antibodies against CLCN6 (1:350, Thermo Scientific, cat. OSC00147W-100UL), TPP1 (1:100, Sigma-Aldrich, cat. HPA037709-100UL), and the pan-Aβ 4G8 antibody (1:1000) were incubated overnight, followed by Alexa Fluor 488 and 647 secondary antibodies (Thermo Scientific).
Additionally, we performed a co-staining with MAP2 (1:200, BD Biosciences, cat. 556320) and CLCN6 to assess the cell specificity of CLCN6 expression. Whole-slide scans were acquired at 20X magnification using a Leica Aperio Versa 8 microscope. For CLCN6 quantification, ten regions of interest (ROIs) from the same anatomical areas used for LCM were analyzed using a custom macro in ImageJ 1.54f. Briefly, a mask was generated to delineate the plaque areas in the field of view, which was then applied to the CLCN6 channel to measure fluorescence intensity (total fluorescence = integrated density − [area measured × background mean gray value]) or the area occupied by CLCN6-positive objects using the "Measure" function. The CLCN6-positive area was normalized to the total area of the plaques. Non-plaque CLCN6 area and fluorescence were measured by modifying the macro so that plaque ROIs were first subtracted from the CLCN6 channel before proceeding with the quantification method described above. Significant differences were assessed using paired t tests (for comparisons between plaque and non-plaque tissue within the same case) or unpaired t tests (for comparisons between control non-plaque tissue and non-plaque tissue from DS, EOAD, or LOAD), with analyses performed in GraphPad Prism. TPP1 was quantified using QuPath v 0.5.1. Briefly, 10 ROIs were manually annotated from the gray matter of the hippocampal formation and temporal cortex. Aβ plaques were annotated using a pixel classifier with a Gaussian prefilter, a smoothing sigma of 2, and a threshold of 30. Objects below 350 μm² were filtered out from the final annotations. Non-plaque adjacent tissue was selected using the same classifier, but ignoring pixels above the threshold and assigning the remaining detected pixels to the class “Non-plaques”. TPP1-positive objects were annotated using a similar pixel classifier, with a smoothing sigma of 1.5 and a threshold of 26.
Objects below 20 μm² were filtered out from the final annotations. TPP1 density was calculated for positive immunolabeling inside plaques and for TPP1 present in the non-plaque region, using the formula TPP1 density = (sum of TPP1 areas / sum of plaque areas) × 100. Statistical analyses (t tests) were performed in GraphPad Prism. We used the WGCNA package (version 1.72.1) in the R environment to conduct a weighted correlation network analysis, adapted from the WGCNA framework to investigate protein co-expression. First, the curated protein expression matrix from the proteomics analysis (n = 1995) underwent quality control to identify samples with excessive missing values. Networks were then constructed using the blockwiseModules function for each cohort (DS, EOAD, and LOAD), creating separate networks for Aβ plaques and non-plaque tissue within each cohort. The networks were constructed as “signed networks”, with the topological overlap matrix (TOM) also set to “signed”. The TOMdenom parameter was specified as “mean” to facilitate the capture of tightly connected protein groups within the network. The soft-thresholding power was set to 9 for DS plaques and 10 for DS non-plaques, 7 for EOAD plaques and 11 for EOAD non-plaques, and 18 for LOAD plaques and 14 for LOAD non-plaques. Additional parameters included a minimum module size of 10, a mergeCutHeight of 0.07 to merge highly similar modules more stringently, and a deepSplit value of 4 to facilitate finer differentiation of modules. A minimum intramodular connectivity (kME) of 0.3 was required for proteins to remain in a given module, with a reassignment threshold of 0.05 allowing minor reallocation of proteins to more appropriate modules if necessary. The biweight midcorrelation (bicor) was used as the primary correlation measure, with a fallback to Pearson correlation for outlier adjustment where necessary (maxPOutliers = 0.1).
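The signed adjacency and signed topological overlap underlying this construction can be written out from first principles. The numpy sketch below is a didactic stand-in for WGCNA's blockwiseModules internals, not the code used in the study: it shows a signed adjacency raised to a soft-thresholding power and a TOM with the 'mean' denominator (the TOMdenom setting above), and it uses Pearson correlation rather than bicor for brevity.

```python
import numpy as np

def signed_tom(expr, power=9):
    """expr: (samples x proteins) matrix. Returns the signed topological
    overlap matrix using the 'mean' denominator variant."""
    corr = np.corrcoef(expr, rowvar=False)        # protein-protein correlations
    adj = ((1.0 + corr) / 2.0) ** power           # signed adjacency in [0, 1]
    np.fill_diagonal(adj, 0.0)
    k = adj.sum(axis=1)                           # connectivity of each protein
    shared = adj @ adj                            # shared-neighbor strength l_ij
    denom = (k[:, None] + k[None, :]) / 2.0       # TOMdenom = "mean"
    tom = (shared + adj) / (denom + 1.0 - adj)
    np.fill_diagonal(tom, 1.0)
    return tom
```

Raising the signed similarity to the soft-thresholding power suppresses weak correlations while preserving sign information (anti-correlated proteins receive near-zero adjacency), and the topological overlap rewards pairs that share neighbors, which is what lets modules of tightly co-expressed proteins emerge.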
Numeric module labels were employed for consistency, and to reduce the complexity of module visualization, the pamRespectsDendro option was set to FALSE. After running the blockwiseModules function, we used the signedKME function within the WGCNA package to perform an iterative module cleanup to refine the module assignments in the protein correlation networks, as previously described. The iterative cleanup process involved creating a bicor correlation table to assess the relationship between each protein and the respective module eigenproteins, referred to as kME. Initially, proteins with an intramodular kME below 0.30 were removed. The reassignment process consisted of reallocating proteins in the gray module (those not assigned to any module) to any module with a maximum kME greater than 0.30, and reassigning proteins whose intramodular kME was more than 0.10 below their maximum kME relative to any other module. This procedure continued iteratively until the minimum kME of the proteins in a module was above the threshold of 0.30 and the difference between the maximum kME and the intramodular kME was less than 0.1, or up to 30 iterations if the module reassignment criteria were not met. After each reassignment, the module eigenproteins and the kME table were recalculated using the moduleEigengenes and signedKME functions, ensuring that all module assignments remained valid and appropriately ranked. Ultimately, this cleanup procedure reinforced the reliability of the module structure by systematically refining the assignment of proteins to their respective modules based on kME values. After the iterative module cleanup was performed, correlations between module eigenproteins (MEs) and clinical variables (APOE genotype, age, sex, co-pathologies, and Aβ and pTau levels) were calculated and plotted in a heatmap using the labeledHeatmap function of the WGCNA package.
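The iterative cleanup logic can be made concrete with a small sketch. This is an illustrative numpy re-implementation of the procedure described above (module eigenprotein as the first principal component across samples, kME as the correlation of each protein with it), not the WGCNA signedKME/moduleEigengenes code; it uses Pearson correlation rather than bicor for simplicity.

```python
import numpy as np

def eigenprotein(block):
    """First principal component across samples of a (proteins x samples) block,
    oriented to correlate positively with the module's mean profile."""
    z = (block - block.mean(axis=1, keepdims=True)) / block.std(axis=1, keepdims=True)
    me = np.linalg.svd(z, full_matrices=False)[2][0]
    if np.corrcoef(me, z.mean(axis=0))[0, 1] < 0:
        me = -me
    return me

def kme_cleanup(X, labels, min_kme=0.30, max_gap=0.10, max_iter=30):
    """X: (proteins x samples); labels: module id per protein, 0 = unassigned ('gray')."""
    labels = np.asarray(labels).copy()
    for _ in range(max_iter):
        modules = sorted(set(labels.tolist()) - {0})
        mes = {m: eigenprotein(X[labels == m]) for m in modules}
        kme = np.array([[np.corrcoef(X[i], mes[m])[0, 1] for m in modules]
                        for i in range(X.shape[0])])
        new_labels = labels.copy()
        for i in range(X.shape[0]):
            best = int(kme[i].argmax())
            best_module, best_kme = modules[best], kme[i, best]
            if labels[i] == 0:
                if best_kme > min_kme:            # rescue gray proteins
                    new_labels[i] = best_module
            else:
                own_kme = kme[i, modules.index(labels[i])]
                if own_kme < min_kme:
                    new_labels[i] = 0             # drop weakly connected proteins
                elif best_kme - own_kme > max_gap:
                    new_labels[i] = best_module   # move to a better-fitting module
        if np.array_equal(new_labels, labels):
            break                                 # assignments stable: converged
        labels = new_labels
    return labels
```

Each pass either drops a weakly connected protein to the gray module, rescues a gray protein whose best kME clears 0.30, or moves a protein whose intramodular kME trails its maximum kME by more than 0.10; iteration stops when assignments are stable or after 30 passes.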
Subsequently, GO enrichment analysis was performed for each of the correlation networks using the function enrichGO from the package clusterProfiler, filtering GO terms to an FDR < 0.05 using the Benjamini–Hochberg method, followed by the simplify function with a cutoff of 0.7 to remove heavily redundant terms.

Amyloid-β and Tau pathologies are significantly increased in DS

AD pathology was assessed using the Braak and Thal staging or the equivalent ABC score for all cases used for proteomics analysis (Table , detailed case history in Supp. Table 1). Age was significantly different (p < 0.0001) in the LOAD cohort compared with the other experimental groups. However, we included eight controls ≤ 65 years old and the remaining 12 control cases ≥ 65 years old to compensate for the age gap between EOAD and LOAD (Supp. Table 1). In addition, multiple variable linear regression analysis showed that age (p = 0.97) and sex (p = 0.45) did not contribute significantly to the differences observed in the proteomics analysis (Supp. Table 2). Assessment of the regional distribution of Aβ and Tau pathology (Supp. Figure 1a, b) in all cases showed that Aβ levels in hippocampal and temporal regions were similar in DS and EOAD. However, Aβ quantities in DS were significantly higher (p = 0.013) compared to LOAD (Supp. Figure 1c). PHF-1-immunoreactive Tau pathology was significantly higher in DS compared to EOAD and LOAD (p = 0.0002 and p < 0.0001, respectively) (Supp. Figure 1d). Aβ and Tau pathology were not significantly different between EOAD and LOAD (Supp. Figure 1c–d). These results indicate exacerbated Aβ and Tau pathology in DS despite the advanced stage of AD in all cohorts evaluated.
Protein abundance in amyloid plaques and non-plaque tissue varies across DS, EOAD, and LOAD

Aβ plaque pairwise comparisons

Protein differential expression in Aβ plaques and adjacent AD non-plaque tissue was evaluated using LFQ-MS in the microdissected hippocampus and temporal cortex (Fig. ). LFQ-MS identified 1995 proteins (Supp. Tables 3–4) detected in at least 50% of the cases in any of the groups. PCA showed minimal segregation by group (DS, EOAD, LOAD, or control) or by sample type (plaques and non-plaque tissue). We identified 132 differentially abundant proteins in DS Aβ plaques compared to DS non-plaque tissue (Fig. b, d), 192 proteins in EOAD plaques vs. EOAD non-plaques (Fig. b, e), and 128 proteins in LOAD plaques vs. LOAD non-plaque tissue (FDR ≤ 5%, FC ≥ 1.5) (Fig. b, f). From these sets of proteins, 43 were shared among the three cohorts. We found 45 proteins differentially enriched in plaques only in DS, 97 only in EOAD, and 51 only in LOAD (Fig. b), indicating that the enrichment of some proteins in Aβ plaques varies by experimental group. We observed a consistent enrichment of AD-associated proteins such as the Aβ-specific peptide LVFFAEDVGSNK (the sequence corresponds to amino acids 17–28 of APP; Fig. d–f, j). This peptide does not discriminate between cleaved and full-length sequences; however, previous findings have shown a strong correlation with Aβ pathology. We also identified previously detected amyloid plaque proteins, such as HTRA1, GPC1, VIM, APOE, CLSTN1, and SYT11, within the top ten most significant proteins across groups (Table ). As expected, APP was within the top ten significantly abundant proteins in DS amyloid plaques (Fig. d) and was also significantly enriched in amyloid plaques in EOAD and LOAD (Fig. k). The plaque protein COL25A1 [collagen alpha-1(XXV) chain, also known as CLAC-P] was the most abundant protein in amyloid plaques in all experimental groups, showing greater enrichment in plaques than the Aβ peptide (Fig.
d–f, l). Interestingly, COL25A1 was below the mass spectrometry detection threshold in all control tissues (Fig. l), suggesting that this protein is highly correlated with Aβ plaque pathology. COL25A1 was increased 129.5-fold in DS, 29.9-fold in EOAD, and 71-fold in LOAD (Table ). In addition, COL25A1 was within the top ten significant proteins only in DS (Table ). Hyaluronan and proteoglycan link protein 2 (HAPLN2, also known as Bral1) was among the most significantly decreased plaque proteins in all three cohorts studied. In addition, we observed decreased levels of oligodendrocyte proteins in plaques: MOG was significantly decreased in all groups, and MAG and MBP were significantly decreased in EOAD and LOAD amyloid plaques, respectively (Supp. Table 3). MAG and MBP levels were also decreased in DS plaques, although they did not meet our significance criteria. The glucose transport facilitator SLC2A3 (also known as GLUT3) was decreased in amyloid plaques in all groups, yet this was significant only in EOAD and LOAD (Table ). Overall, we observed similar protein alterations in Aβ plaques in all groups evaluated. However, most proteins showed different abundance levels in DS, EOAD, and LOAD plaques, accounting for the differences observed among groups.

AD non-plaque tissue pairwise comparisons

We identified 263 differentially expressed proteins in DS non-plaque tissue compared to control non-plaque tissue (Fig. c, g), 269 proteins in EOAD non-plaque tissue vs. control non-plaque tissue (Fig. c, h), and 301 significantly altered proteins in LOAD non-plaque tissue vs. control non-plaque tissue (Fig. c, i). Of the altered non-plaque proteins, 65 were common to all cohorts evaluated (Fig. c). We also observed 138 proteins with differential enrichment levels only in DS non-plaque tissue, 76 only in EOAD, and 148 only in LOAD (Fig. c).
Notably, among the top ten enriched proteins in DS non-plaque tissue we identified CLU, VIM, HSPB6, and SYNM (Supp. Table 5), which we also found enriched in amyloid plaques in all disease groups. CLU was consistently enriched in non-plaque tissue in the three groups evaluated when compared to control tissue (Supp. Table 5). VIM and HSPB6 were also among the most enriched proteins in EOAD non-plaque tissue (Supp. Table 5). Conversely, the actin-binding protein destrin (DSTN) was the only protein among the top ten significantly decreased non-plaque proteins shared by the DS, EOAD, and LOAD cohorts compared to controls (Supp. Table 5). We also observed that parvalbumin (PVALB) was the most decreased protein in DS non-plaque tissue compared with controls (Fig. g), whereas PVALB levels in EOAD and LOAD were not significantly different from controls (Supp. Table 4). These findings show greater between-group differences in protein levels in non-plaque tissue than in plaques, highlighting the largely similar plaque proteome across AD subtypes despite differences in baseline, non-plaque protein expression.

Amyloid plaque proteomes of DS, EOAD, and LOAD are highly correlated

We performed correlation analyses to compare the proteomes of Aβ plaques and non-plaque tissues in DS, EOAD, and LOAD. Proteins included in the correlations were significant with FC > 1.5 in at least one of the groups evaluated. For amyloid plaques, there was a positive correlation between DS and EOAD (R² = 0.77, p < 0.0001). We observed 65.5% (164/250) of the proteins changing in the same direction (i.e., the fold-change for a protein is positive or negative in both groups), and 29.6% (74/250) of the proteins were significantly altered in both DS and EOAD plaques (Fig. a).
We only observed 4.8% (12/250) of the proteins changing in different directions (i.e., the fold-change for a protein is positive in one group and negative in the other) (Fig. a). The DS and LOAD plaque proteomes also correlated positively (R² = 0.73, p < 0.0001), with 66.2% (135/204) of the proteins showing the same fold-change direction and 27.5% (56/204) significantly altered in both groups (Fig. b). Similar to DS and EOAD, only 6.3% (13/204) of the proteins changed in opposite directions (Fig. b). There was also a positive correlation between EOAD and LOAD differentially abundant plaque proteins (R² = 0.67, p < 0.0001), similar to what we observed between DS and the AD subtypes evaluated. We identified 66.4% (234/256) of the proteins changing in the same direction, and 25% (64/256) of the proteins were significant in both groups (Fig. c). Proteins changing in opposite directions accounted for 8.6% (22/256) of the total (Fig. c). Our analysis shows high similarity among the proteins altered in Aβ plaques vs. non-plaques in DS, EOAD, and LOAD, with the majority of proteins changing in the same direction. Correlation analyses of the DS, EOAD, and LOAD non-plaque differentially abundant proteins showed a positive correlation between DS and EOAD (R² = 0.59, p < 0.0001) and a weaker correlation between DS and LOAD (R² = 0.33, p < 0.0001) (Fig. d–e). We observed 65.9% (275/417) of the proteins changing in the same direction in DS and EOAD non-plaque tissue, where 27.6% (115/417) of the proteins were significantly altered in both groups, and 6.5% (27/417) changed in the opposite direction (Fig. d). Similarly, 67.1% (328/489) of the proteins in DS and LOAD changed in the same direction (Fig. e). We observed that 15.3% (75/489) of the proteins were significant in both groups, whereas 17.6% (86/489) had opposite fold-changes (Fig. e). Moreover, we observed a higher positive correlation between EOAD vs.
LOAD non-plaque proteomes (R² = 0.79, p < 0.0001), with 63.9% (273/427) of the proteins changing in the same direction and 33.5% (143/427) also significant in both groups (Fig. f). Only 2.6% (11/427) of the proteins changed in opposite directions (Fig. f). Overall, we observed a similar ‘amyloid plaque protein signature’ across the experimental groups. Nonetheless, correlations of the non-plaque tissue proteomes suggest a higher similarity between the EOAD and LOAD differentially enriched proteins than with DS.

Protein-coding genes present in Hsa21 are not associated with protein enrichment in Aβ plaques

We performed chromosomal mapping of the significantly altered proteins identified through proteomic analysis across all human chromosomes using the UCSC Human Genome Browser to evaluate the distribution of these proteins across DS, EOAD, and LOAD. The Supplemental Figure illustrates the percentage of significantly altered proteins for each group. The overall percentage of proteins from each chromosome was below 20%, and no single chromosome exhibited a markedly overrepresented protein expression pattern. This suggests that proteins from all chromosomes, not just Hsa21, contribute to the molecular differences observed in both DS and AD. Of the 1995 proteins identified in this study, 22 were encoded on Hsa21 (Fig. ). We compared these proteins with those reported in a previous DS plaque proteomics study, identifying a total of 26 Hsa21 proteins between the two studies. A substantial proportion, 69.2% (18/26), of these proteins was shared between the current and previous studies (Fig. ). Among the proteins identified, APP was significantly altered in Aβ plaques in all cohorts (Fig. ). GART was significantly abundant in LOAD and DS non-plaque tissue (Fig. a, c), and PCP4 was differentially expressed in LOAD and EOAD non-plaque tissue (Fig. a, b). CXADR was differentially expressed in EOAD amyloid plaques (Fig. b).
APP was also significantly enriched in DS non-plaque tissue (FDR < 0.05, Fig. a). NCAM2, CBR1, CBR3, PDXK, CSTB, and COL6A1 were significantly enriched in DS non-plaque tissue (Fig. a). Taken together, these results, along with the chromosomal mapping of all significantly altered proteins, suggest that Hsa21 triplication does not necessarily lead to the enrichment of those gene products in Aβ plaques or in the surrounding non-plaque tissue.

Aβ plaque protein signature is related to APP processing, immunity, and lysosomes

Aβ plaques functional analyses

We identified functional associations for the significantly abundant proteins in Aβ plaques and AD non-plaque tissue by performing GO enrichment analysis (FDR < 0.05, Supp. Tables 6–13). Top enriched biological process (BP) GO terms in DS included lytic vacuole organization, lysosome organization, and lysosomal transport (for the three terms, p = 1.29 × 10⁻⁵; Fig. a, Supp. Table 6). We also identified the terms cell activation (p = 0.00024), regulation of immune system process (p = 0.00027), and leukocyte activation (p = 0.00016), which were also observed in EOAD (Fig. a). For cellular component (CC), the top terms were vacuole, lysosome, and lytic vacuole (p = 9.56 × 10⁻¹⁴), and endosome (p = 9.71 × 10⁻¹⁴) (Fig. a, Supp. Table 10), mirroring the BP GO terms. In contrast, the most enriched BP terms in EOAD were regulation of immune system process, B-cell-mediated immunity, immunoglobulin-mediated immune response, and lymphocyte-mediated immunity (p = 4.33 × 10⁻⁵; Fig. a, Supp. Table 6). Top CC GO terms in EOAD were secretory granule (p = 1.13 × 10⁻⁶), vacuolar lumen, and collagen-containing extracellular matrix (both p = 8.75 × 10⁻⁷) (Fig. a, Supp. Table 10). LOAD also showed BP GO terms related to lysosomes, as observed in DS, yet with lower significance; for instance, lysosomal transport and organization and lytic vacuole organization (p = 0.0288; Fig. a, Supp. Table 6).
CC GO terms included lysosome and lytic vacuole (p = 2.47 × 10⁻⁷), collagen-containing extracellular matrix (p = 9.41 × 10⁻⁶), and endosome (p = 0.00063) (Fig. a, Supp. Table 10), highlighting functional similarities of plaque-associated proteins between DS and LOAD. We also evaluated the physical and functional interactions of the significantly abundant proteins in Aβ plaques using Cytoscape and the STRING database (Fig. b–d). The amyloid plaque protein networks for all cohorts evaluated showed a significant degree of protein–protein interaction (PPI enrichment p = 1 × 10⁻¹⁶). We observed a consistent group of proteins in all forms of AD evaluated, which were grouped based on functional enrichment (Fig. b–d). For instance, we identified proteins related to APP and Aβ metabolism (APP, APOE, CLU, CLSTN1, NCSTN, APLP2, and SPON1), immune response and inflammation (HLA-DRB1, HLA-DRB5, C1QC, C4A, and C3, consistent in DS and EOAD; CD44, ICAM1, and MSN in EOAD and LOAD), and lysosomal-related functions (PPT1, TPP1, LAMP1, PSAP, and CTSD). APOE was more abundant in Aβ plaques in DS and LOAD (Fig. b, d) than in EOAD, reaching its highest significance in DS (Fig. b). We also identified a group of glial-related proteins in the EOAD network, namely VIM, DES, and GFAP (Fig. c). Overall, our findings suggest a similar plaque protein signature in the three groups, functionally associated mainly with APP and Aβ processing, immunity-related responses, and lysosomal functions. In addition, an analysis of the ten most abundant proteins (ranked by FC) differentially enriched in Aβ plaques in DS, EOAD, or LOAD further showed the relationship of Aβ plaque-associated proteins with lysosomal and immune-related functions (Supp. Table 14). According to the GO annotation, the significantly enriched amyloid plaque proteins in DS predominantly relate to endo/lysosomal functions, including CLCN6, ATG9A, and VAMP7 (Fig.
, Supp. Table 14). The oligodendrocyte protein MOG was significantly decreased in plaques in all cohorts, but the fold changes suggest a greater reduction in DS (Supp. Table 3, Fig. a) than in the other groups. We identified the protein ITM2C, which is involved in Aβ peptide production (Fig. b). We also observed proteins with functions linked to presynaptic signaling and axon guidance, namely RUNDC3A and NTN1 (Fig. ). The calcium-binding protein and marker of inhibitory neurons PVALB was significantly enriched in DS plaques but was unaltered in EOAD and LOAD (Fig. f). In contrast, we observed that Aβ plaque proteins significantly abundant in EOAD are mostly related to immune and immunoglobulin-mediated immune responses (S100A7, HPX, and IL36G), as well as to the vacuolar lumen and secretory vesicles (GGH, TTR). The protein EPPK1 is linked to cytoskeletal organization functions such as epithelial cell proliferation and intermediate filament organization (Supp. Table 14). In LOAD, we observed a series of proteins associated with the terms bounding membrane of organelle, collagen-containing extracellular matrix, and vesicle membrane (CYB5B, VWF, and PTPRN2). Although we did not observe a particular association with GO terms, other LOAD amyloid plaque proteins were identified, including TIMM8A, ACSS3, and SFXN5 (linked to mitochondrial functions), THUMPD1 and RPS7 (related to RNA-binding activity and ribosomes), and NRXN2 (protein–protein interactions at the synapses) (Supp. Table 14). These observations support our findings in the GO functional enrichment and protein interaction networks, providing evidence that some of the most abundant proteins in DS plaques are primarily linked to lysosomal pathways.

Non-plaque tissue functional analyses

GO terms for abundant non-plaque proteins showed chromatin remodeling as the top BP term for all experimental groups (DS p = 0.00128, EOAD p = 5.79 × 10 −9 , LOAD p = 1.69 × 10 −10 , Supp. Figure 3a, Supp. Table 8).
Importantly, top BP GO terms in DS were associated with integrin-mediated signaling, extracellular structure, and extracellular matrix organization ( p = 0.00684, Supp. Figure 3a, Supp. Table 8). In contrast, EOAD and LOAD top BP GO terms included protein–DNA complex assembly ( p = 4.74 × 10 −6 and p = 1.14 × 10 −8 , respectively), regulation of gene expression (EOAD p = 5.08 × 10 −5 , LOAD p = 1.68 × 10 −8 ), and nucleosome assembly (EOAD p = 4.74 × 10 −6 , LOAD p = 3.25 × 10 −8 ) (Supp. Figure 3a, Supp. Table 8). Top CC GO terms for DS were collagen-containing extracellular matrix, which was also observed in EOAD and LOAD, external encapsulating structure, and extracellular matrix ( p = 3.52 × 10 −8 , Supp. Figure 3a, Supp. Table 12). The top CC GO term for EOAD was nucleosome ( p = 4.44 × 10 −6 ), which was also identified in DS and LOAD. Other EOAD top CC GO terms were DNA packaging complex ( p = 8.01 × 10 −6 ) and protein–DNA complex ( p = 2.23 × 10 −5 ) (Supp. Figure 3a, Supp. Table 12). In a similar fashion, LOAD top CC GO terms were DNA packaging complex, protein–DNA complex (both p = 3.78 × 10 −14 ), and nucleosome ( p = 1.71 × 10 −12 ) (Supp. Figure 3a, Supp. Table 12). We also created protein interaction networks of the DS, EOAD, and LOAD non-plaque tissue proteomes, which showed a highly significant degree of protein–protein interactions (PPI enrichment p = 1 × 10 −16 , Supp. Figure 3b–d). We observed groups of RNA-binding proteins, such as SRSF4, eukaryotic initiation factors (eIF4), and the heterogeneous nuclear ribonucleoprotein (hnRNP) family, primarily in the EOAD and LOAD networks (Supp. Figure 3c, d). We also observed a set of intermediate filament and glial proteins, such as GFAP, AQP4, DES, VIM, ALDH1L1, and GART (Supp. Figure 3b–d). Additionally, there were groups of histone proteins related to the nucleosome, such as the H2A, H2B, and H1 protein families (Supp. Figure 3b–d).
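Grouping network proteins into functional clusters, as done above with STRING and Cytoscape, amounts to partitioning a protein–protein interaction graph. A toy pure-Python sketch using connected components over a hypothetical edge list (illustrative interactions only, not STRING output):

```python
def connected_components(edges):
    """Partition the nodes of an undirected PPI graph into connected
    components (a crude stand-in for network clustering)."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    seen, components = set(), []
    for node in adj:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)  # visit unexplored neighbors
        seen |= comp
        components.append(comp)
    return components

# Hypothetical STRING-style edges for illustration only
edges = [("APP", "APOE"), ("APOE", "CLU"), ("APP", "CLSTN1"),
         ("TPP1", "CTSD"), ("CTSD", "PSAP"),
         ("C1QC", "C3"), ("C3", "C4A")]
clusters = connected_components(edges)
assert {"APP", "APOE", "CLU", "CLSTN1"} in clusters  # Aβ metabolism group
```

STRING's own clustering algorithms and its PPI enrichment p-value are more elaborate, but the underlying idea of grouping densely interacting proteins is the same.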
Particularly, the DS protein interaction network exhibited a set of collagens, laminins, cell adhesion proteins, proteoglycans, and heparan sulfate proteins (Supp. Figure 3b), as well as proteasome and chaperone proteins also involved in the regulation of gene expression, including SQSTM1, PSMB4, PSMD4, and HSPB6 (Supp. Figure 3b). Our findings highlight a pivotal role of extracellular matrix (ECM) and structural components in DS, beyond the proteins associated with Aβ plaque pathology.

Comparative analysis with previous human AD proteomics and identification of novel plaque proteins

We compared the differentially abundant proteins found in Aβ plaques and AD non-plaque tissue with previous human AD proteomics studies compiled in the NeuroPro database . We observed that 77.7% of the altered proteins identified in amyloid plaques in our study were also identified in previous AD plaque proteomics studies (Fig. a). Of the 301 significantly altered plaque proteins that we identified in the present study, 13.6% have not been found in previous plaque proteomics studies and have only been reported as significantly altered in bulk brain tissue proteomics studies (Fig. a). Similarly, 85.2% of the proteins we identified in the non-plaque tissue have been described in previous plaque and bulk tissue proteomics studies, whereas 10.9% have been identified in bulk human brain tissue but not in plaque proteomics studies (Fig. a). Interestingly, we identified 34 proteins in our study that have not been described previously in any human AD proteomics study, either in plaques or in bulk tissue (Fig. a, Supp. Tables 15–16). In DS specifically, we identified seven amyloid plaque proteins and eight non-plaque tissue proteins significantly altered in our study that have not been found in past AD brain tissue proteomics studies (Fig. b, Supp. Table 17).
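Overlap figures such as the 77.7%, the 13.6% "bulk-only" fraction, and the 34 novel proteins above reduce to set operations over protein identifiers. A sketch with hypothetical toy sets (not the study's or NeuroPro's actual lists):

```python
def overlap_summary(ours, prior_plaque, prior_bulk):
    """Classify significant proteins against prior studies: already seen
    in plaque proteomics, seen only in bulk tissue, or entirely novel."""
    def pct(subset):
        return 100 * len(subset) / len(ours)
    in_plaque = ours & prior_plaque
    bulk_only = (ours & prior_bulk) - prior_plaque
    novel = ours - prior_plaque - prior_bulk
    return {"plaque_%": pct(in_plaque),
            "bulk_only_%": pct(bulk_only),
            "novel_%": pct(novel)}

# Hypothetical toy sets for illustration only
ours = {"APP", "APOE", "CLU", "LAMTOR4", "HLA-DRB5"}
prior_plaque = {"APP", "APOE", "CLU"}
prior_bulk = {"APP", "APOE", "CLU", "HLA-DRB5"}
summary = overlap_summary(ours, prior_plaque, prior_bulk)
assert summary["novel_%"] == 20.0  # the previously undescribed case
```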
Similarly, we identified in EOAD 21 significantly altered proteins in plaques and eight in non-plaque tissue that have not been described previously (Fig. b, Supp. Table 17). In the case of LOAD, we observed four significantly altered proteins in amyloid plaques and 15 in non-plaque tissue that have not been identified in previous AD plaque or bulk brain tissue proteomics studies (Fig. b, Supp. Table 17). From this group of proteins, LAMTOR4 (late endosomal/lysosomal adaptor and MAPK and MTOR activator 4) was significantly enriched in Aβ plaques in all the cohorts analyzed (Fig. c). The proteins HLA-DRB5, ALOX12B, and SERPINB4 were significantly enriched in DS and EOAD amyloid plaques (Fig. c). In contrast, LAMA2 was significantly decreased in DS and EOAD amyloid plaques (Fig. c). On the other hand, we observed the histone protein H2BC11, the basal cell adhesion protein BCAM, and the DNA-binding protein FUBP3 significantly enriched in non-plaque tissue in DS, EOAD, and LOAD (Fig. c). The centrosomal protein of 290 kDa (CEP290) showed a marked decrease in DS Aβ plaques compared to DS non-plaque tissue; however, it was detected in only a few of the 20 cases evaluated in that cohort (Supp. Table 3), which is why it did not reach FDR < 0.05 (Fig. c). The protein FAM171A2 was significantly enriched only in EOAD and LOAD, whereas the protein DCAKD was significantly decreased in EOAD and LOAD non-plaque tissue (Fig. c). Overall, our proteomics findings are consistent with previous proteomics studies. Notably, our comparative analysis allowed us to identify novel proteins in human AD proteomics.

Validation of the Aβ plaque-protein signature in DS and novel plaque proteins in human DS proteomics

The NeuroPro database is a powerful tool to investigate proteomic changes in AD human brains. However, at the time of writing, the database did not include DS proteomics data.
Therefore, we compared our DS amyloid plaque proteomics findings with our previous study (Drummond et al., 2022 ), where unbiased localized proteomics was used to interrogate the DS amyloid plaque proteome. In the study led by Drummond and colleagues, any Aβ plaque detected by IHC was sampled regardless of plaque morphology. We observed 2522 proteins across both DS plaque proteomics datasets, comprising 1981 proteins in the present study and 2258 proteins in our previous work (excluding isoforms). We observed 68.1% (1717/2522) of proteins overlapping between both studies, with a total of 228 significantly altered plaque proteins in either dataset. Among these, 21.9% (50/228) were common to both studies (Fig. a). Particularly, 36% (82/228) of the significantly altered proteins in the present study were not significant in Drummond et al., and conversely, 42.1% (96/228) of the proteins identified in the previous study were not detected in the current dataset (Fig. a, Supp. Table 18). This variance may reflect differences in statistical thresholds and the increased sample size, which gave this study higher power to identify plaque-enriched proteins in DS with greater confidence. For instance, 35 proteins that were significantly enriched in the Drummond study but not significant in ours were nonetheless observed in our dataset, with many showing increased abundance trends that nearly reached significance. In addition, of the proteins that differed between both studies (Fig. a), only 12 had a different direction of change, suggesting that most of the differences observed between the datasets are due to the different stringency applied and the number of samples. Despite these differences, we observed a significant positive correlation between the Aβ plaque proteomes of the DS cohorts ( p < 0.0001, R 2 = 0.60, Fig. b).
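Cross-study agreement of this kind is a correlation of per-protein fold changes over the shared identifications, optionally combined with a direction-concordance count. A minimal Pearson/R² sketch over hypothetical log2 fold-change pairs (illustrative values, not the actual data):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical log2 fold changes (plaque vs. non-plaque) for proteins
# quantified in both datasets
fc_study_a = [7.0, 3.5, 2.1, 1.2, -0.8, -1.5]
fc_study_b = [6.1, 2.9, 2.5, 0.9, -0.4, -1.9]
r = pearson_r(fc_study_a, fc_study_b)
r_squared = r * r
same_direction = sum((a > 0) == (b > 0) for a, b in zip(fc_study_a, fc_study_b))
assert r_squared > 0.9 and same_direction == 6
```

The same calculation underlies the R² values reported below for the plaque and non-plaque comparisons between cohorts.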
In fact, the 50 common proteins between both studies changed in the same direction (48 enriched and 2 decreased in plaques, Fig. b). Within this set of amyloid plaque proteins, we identified the Aβ peptide, APP, COL25A1, and a set of previously described plaque proteins, such as APOE, SMOC1, CLU, C3, and CLCN6, among others (extended data in Supp. Table 18), thus validating a plaque-protein signature also observed in DS Aβ pathology. Interestingly, of the seven DS plaque proteins that were novel with respect to the NeuroPro database (Supp. Table 17), only ACP2 was also observed in the previous DS plaque proteomics study (Supp. Table 18). Our study is consistent with previous similar proteomics studies on AD brains and further expands the set of proteins known to be present at these pathological lesions.

Validation of CLCN6 and TPP1 in Aβ plaques by immunohistochemistry

We performed immunofluorescence to validate the late endosome protein CLCN6, as it emerged as the most abundant plaque protein among the top ten significantly altered proteins in DS Aβ plaques (Supp. Table 14). Previously, CLCN6 was identified within plaques solely through our proteomics study , without histochemical evidence of its presence in Aβ plaques. Immunofluorescence staining showed CLCN6 localized in the cytoplasm of cells adjacent to intracellular 4G8 anti-Aβ-positive staining (Fig. a). Within plaques, Aβ appears to encapsulate CLCN6 + cells, with the highest colocalization between CLCN6 and Aβ occurring intracellularly. Moreover, CLCN6 + /4G8 + cells were observed on the periphery of amyloid plaques, suggesting a potential role for CLCN6 + cells in either releasing Aβ species into plaques or participating in a phagocytic process (Fig. a). Quantification of CLCN6 fluorescence and area, normalized by plaque area, showed a significant increase in Aβ plaques in DS, EOAD, and LOAD compared to non-plaque tissue (Fig. b–c).
Interestingly, CLCN6 area was significantly reduced in non-plaque tissue across all cohorts relative to control non-plaque tissue (Fig. b–c). These histochemical results are consistent with trends observed in the proteomic data (Fig. d). Further co-staining with MAP2 indicated that most CLCN6 + cells are neurons, with a minority of smaller MAP2-negative cells also displaying CLCN6 staining (Fig. e). Overall, these findings suggest that CLCN6 may be involved in storing and transporting Aβ, which could be released extracellularly in the AD pathogenic context, contributing to amyloid plaque formation. TPP1 is a lysosomal protein that was identified in previous human proteomics , but has not been characterized in Aβ plaques by immunohistochemistry. Our validation revealed a distinctive punctate expression pattern typical of lysosome-associated proteins. These bright puncta were consistently observed both within Aβ plaques and in the surrounding non-plaque regions (Fig. a). In addition to the punctate signal, TPP1 expression appeared to be widespread and highly abundant throughout the tissues, with immunoreactivity present diffusely in the cytoplasmic regions of what are presumably neurons and glial cells (Fig. a). We observed TPP1-positive staining in Aβ plaques, with a pattern suggesting that the protein is not directly colocalized with Aβ. Instead, TPP1 appears to occupy spaces within the plaques that are less densely packed with amyloid or is embedded within denser amyloid aggregates while remaining distinguishable (bottom panel, Fig. a). Our proteomics analysis showed that TPP1 is significantly enriched in plaques of DS, EOAD, and LOAD (Fig. b). However, the enrichment of TPP1 in amyloid plaques is modest (fold-change of 1.62 in DS, 1.51 in EOAD, and 1.69 in LOAD; Supp. Table 3). We did not observe significant differences in TPP1 levels by IHC (Fig. c).
Notably, the density and intensity of TPP1 staining within plaques were qualitatively similar to those in the non-plaque areas, consistent with proteomic findings indicating subtle enrichment of TPP1 in plaques. Overall, our observations suggest that TPP1 is not exclusively localized to plaques but is instead distributed throughout the brain parenchyma.

Correlation of protein changes to clinical traits

WGCNA allowed us to identify correlations between clusters of co-expressed proteins and clinical traits, including APOE genotype, sex, age, TDP-43 and α-synuclein co-pathologies, and regional levels of Aβ and pTau pathology. Top GO BP and CC annotations associated with each module are presented (FDR < 0.05), with additional information about module sizes and extended functional annotation provided in Supplementary Tables 19–26. Notably, Module 1 from DS plaques, containing multiple highly abundant plaque proteins (e.g., CLCN6, MDK, ITM2C, ARL8B, and C1QC), correlated significantly only with pTau levels ( R = 0.5, p = 0.024) (Supp. Figure 4). In EOAD, we observed modules negatively correlated with the APOE3 and APOE4 genotypes, as well as with age. Functional annotation indicated that modules correlated with APOE genotype are related to synaptic signaling and mitochondrial metabolic processes (Supp. Figure 5). Additionally, Module 5, including the astrocytic proteins DES, VIM, GFAP, GJA1, and ALDH1L1, was positively correlated with APOE3 and negatively correlated with APOE4 ( R = 0.54, p = 0.014 and R = − 0.52, p = 0.02), underscoring the relevance of astrocytes in AD neuropathology (Supp. Figure 5). On the other hand, LOAD plaque co-expression networks revealed a significant correlation between Module 58, functionally associated with the axonal myelin sheath and containing multiple oligodendrocyte proteins (MOG, MBP, MAG, CNP, HAPLN2, and PLP1), and Aβ neuropathology ( R = − 0.51, p = 0.021) (Supp. Figure 6).
In addition, Module 30, comprising proteins COL25A1, C3, and fibrinogens (FGA, FGB, FGG), was positively correlated with APOE4 and Tau ( R = 0.45, p = 0.048 and R = 0.56, p = 0.01, respectively), and negatively correlated with age ( R = − 0.63, p = 0.011) (Supp. Figure 6), suggesting potential age-dependent alterations in some of the proteins associated with Module 30. Age correlated significantly with multiple modules in all cohorts, but it is noteworthy that the LOAD cohort is inherently older than the DS and EOAD cohorts. In non-plaque tissue co-expression networks, Modules 15, 29, and 44 in DS non-plaque tissue showed opposing correlations with APOE3 and APOE4 (Supp. Figure 7), with Module 15 also associated with "Cytoplasmic translation" and "Ribosomal subunit" functions. EOAD non-plaque networks contained the most modules significantly correlated with APOE genotype (Supp. Figure 8). Functional enrichment included terms related to neuron differentiation, axon structure, presynapse, cytoskeletal organization, and GTPase regulation in modules negatively correlated with APOE4 (Supp. Figure 8). Module 55 was negatively correlated with APOE4 and positively with Tau ( R = − 0.57, p = 0.085 and R = 0.5, p = 0.025) (Supp. Figure 8), and included proteins C3 and fibrinogens (FGA, FGB, FGG), similar to Module 30 in LOAD plaques. This observation suggests that common proteins may have distinct roles in AD pathology across subtypes. LOAD non-plaque correlation networks showed a few modules significantly correlated with the APOE4 genotype, similar to the LOAD plaque correlations (Supp. Figure 9). In particular, Module 23 was associated with "response to unfolded protein," comprising multiple heat shock proteins, such as HSPE1, HSPD1, HSPA8, HSPA9, and HSP90AA1 (Supp. Figure 9).
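The module–trait values above ( R , p ) are Pearson correlations between a module's summary profile across cases and a clinical trait. WGCNA summarizes each module by its first principal component (the module eigenprotein); the sketch below approximates that with the mean of z-scored abundances, using hypothetical data:

```python
from math import sqrt

def zscore(values):
    """Population z-scores of a sequence."""
    n = len(values)
    m = sum(values) / n
    sd = sqrt(sum((v - m) ** 2 for v in values) / n)
    return [(v - m) / sd for v in values]

def module_eigenprotein(abundance_by_protein):
    """Approximate a WGCNA eigenprotein as the per-sample mean of
    z-scored protein abundances (true WGCNA uses the first PC)."""
    z = [zscore(row) for row in abundance_by_protein]
    n_samples = len(z[0])
    return [sum(row[j] for row in z) / len(z) for j in range(n_samples)]

def pearson(x, y):
    """Pearson correlation via standardized variables."""
    zx, zy = zscore(x), zscore(y)
    return sum(a * b for a, b in zip(zx, zy)) / len(x)

# Hypothetical abundances for a 3-protein module across 6 cases,
# with a matching pTau burden per case (illustrative values only)
module = [[1.0, 1.2, 2.0, 2.4, 3.1, 3.0],
          [0.5, 0.9, 1.4, 1.8, 2.2, 2.6],
          [2.0, 2.1, 2.9, 3.3, 3.8, 4.1]]
ptau = [0.2, 0.3, 0.9, 1.1, 1.6, 1.5]
r = pearson(module_eigenprotein(module), ptau)
assert r > 0.9  # module abundance tracks pTau burden in this toy example
```

The p-values reported for each module–trait pair come from the WGCNA implementation's correlation test; this sketch only illustrates the correlation step.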
Overall, our WGCNA analysis revealed that each cohort evaluated has distinct clusters of co-expressed proteins that correlate with clinical variables, such as APOE genotype, pTau, and Aβ pathology, suggesting that AD pathology progresses through different mechanisms in DS, EOAD, and LOAD. The interplay between the multiple proteins identified in each experimental group and clinical traits may inform the development of therapies and biomarkers tailored to each form of AD.

AD pathology was assessed using Braak and Thal staging or the equivalent ABC score for all cases used for proteomics analysis (Table , detailed case history in Supp. Table 1). Age was significantly different ( p < 0.0001) in the LOAD cohort in comparison to the other experimental groups. However, we included eight controls ≤ 65 years old and the remaining 12 cases ≥ 65 to compensate for the age gap between EOAD and LOAD (Supp. Table 1). In addition, multivariable linear regression analysis showed that age ( p = 0.97) and sex ( p = 0.45) did not contribute significantly to the differences observed in the proteomics analysis (Supp. Table 2). Assessment of the regional distribution of Aβ and Tau pathology (Supp. Figure 1a, b) in all cases showed that Aβ levels in hippocampal and temporal regions were similar in DS and EOAD. However, Aβ quantities in DS were significantly higher ( p = 0.013) compared to LOAD (Supp. Figure 1c). PHF-1 immunoreactive Tau pathology was significantly higher in DS compared to EOAD and LOAD ( p = 0.0002 and p < 0.0001, respectively) (Supp. Figure 1d). Aβ and Tau pathology were not significantly different between EOAD and LOAD (Supp. Figure 1c–d). These results suggest an exacerbated Aβ and Tau pathology in DS despite the advanced stage of AD for all the cases in the cohorts evaluated.
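The covariate check above (age p = 0.97, sex p = 0.45) corresponds to a multivariable linear regression of protein abundance on age and sex. A sketch of the coefficient-estimation step via the normal equations, with hypothetical data; significance testing additionally requires standard errors and a t-distribution, which statistical software supplies:

```python
def ols(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y,
    solved with Gaussian elimination; rows of X are [1, age, sex]."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for i in range(k):  # forward elimination with partial pivoting
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            A[r] = [a - f * c for a, c in zip(A[r], A[i])]
            b[r] -= f * b[i]
    coef = [0.0] * k
    for i in reversed(range(k)):  # back substitution
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, k))) / A[i][i]
    return coef

# Hypothetical data: protein abundance vs. age (years) and sex (0/1);
# a near-zero age coefficient mirrors a non-significant contribution
X = [[1, 55, 0], [1, 60, 1], [1, 62, 0], [1, 70, 1], [1, 75, 0], [1, 80, 1]]
y = [2.0, 2.1, 1.9, 2.2, 2.0, 2.1]
intercept, age_coef, sex_coef = ols(X, y)
assert abs(age_coef) < 0.05
```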
Aβ plaque pairwise comparisons

Protein differential expression in Aβ plaques and adjacent AD non-plaque tissue was evaluated using LFQ-MS in the microdissected hippocampus and temporal cortex (Fig. ). LFQ-MS identified 1995 proteins (Supp. Tables 3–4), detected in at least 50% of the cases in any of the groups. PCA showed minimal segregation by groups (DS, EOAD, LOAD, or control) or by sample type (plaques and non-plaque tissue). We identified 132 differentially abundant proteins in DS Aβ plaques compared to DS non-plaque tissue (Fig. b, d), 192 proteins in EOAD plaques vs. EOAD non-plaques (Fig. b, e), and 128 proteins in LOAD plaques vs. LOAD non-plaque tissue (FDR ≤ 5%, FC ≥ 1.5) (Fig. b, f). From these sets of proteins, 43 were shared between the three cohorts. We found 45 proteins with differential enrichment in plaques in DS, 97 proteins in EOAD, and 51 proteins in LOAD (Fig. b), indicating that enrichment of some proteins in Aβ plaques is variable in each experimental group. We observed a consistent enrichment of AD-associated proteins such as the Aβ-specific peptide LVFFAEDVGSNK (sequence corresponding to amino acids 17–28 of Aβ, Fig. d–f, j). This peptide does not discriminate between cleaved and full-length sequences. However, previous findings have shown a strong correlation with Aβ pathology . We also identified previously detected amyloid plaque proteins, such as HTRA1, GPC1, VIM, APOE, CLSTN1, and SYT11 within the top ten most significant proteins across groups (Table ). As expected, APP was within the top ten significantly abundant proteins in DS amyloid plaques (Fig. d) and was also significantly enriched in amyloid plaques in EOAD and LOAD (Fig. k). The plaque-protein COL25A1 [collagen alpha-1(XXV) chain, also known as CLAC-P] was the most abundant protein in amyloid plaques in all experimental groups, showing greater enrichment in plaques than the Aβ peptide (Fig. d–f, l).
Interestingly, COL25A1 was below the mass spectrometry detection threshold in all control tissues (Fig. l), suggesting that this protein is highly correlated with Aβ plaque pathology. COL25A1 was increased 129.5-fold in DS, 29.9-fold in EOAD, and 71-fold in LOAD (Table ). In addition, COL25A1 was within the top ten significant proteins only in DS (Table ). Hyaluronan and proteoglycan link protein 2 (HAPLN2, also known as Bral1) was among the most significantly decreased proteins in plaques in the three cohorts studied. In addition, we observed decreased plaque levels of oligodendrocyte proteins. MOG was significantly decreased in all groups, and MAG and MBP were significantly decreased in EOAD and LOAD amyloid plaques, respectively (Supp. Table 3). MAG and MBP levels were also decreased in plaques in DS, although they did not meet our significance criteria. The glucose transport facilitator SLC2A3 (also known as GLUT3) was decreased in amyloid plaques in all groups, yet it was significant only in EOAD and LOAD (Table ). Overall, we observed similar proteins altered in Aβ plaques in all groups evaluated. However, most of the proteins showed different abundance levels in plaques of DS, EOAD, and LOAD, accounting for the differences observed among groups.

AD non-plaque tissue pairwise comparisons

We identified 263 differentially expressed proteins in DS non-plaque tissue compared to control non-plaque tissue (Fig. c, g), 269 proteins in EOAD non-plaque tissue vs. control non-plaque tissue (Fig. c, h), and 301 significantly altered proteins in LOAD non-plaque tissue vs. control non-plaque tissue (Fig. c, i). We identified 65 altered non-plaque proteins compared to control tissue that were common between all cohorts evaluated (Fig. c). We also observed 138 proteins with differential enrichment levels in DS non-plaque tissue, 76 proteins in EOAD, and 148 proteins in LOAD (Fig. c).
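The differential-abundance calls in these comparisons use dual thresholds: a multiple-testing-adjusted p-value (FDR ≤ 5%, e.g., via Benjamini–Hochberg) and a fold change of at least 1.5 in either direction. A sketch with hypothetical p-values and abundance ratios (not the study's values):

```python
def benjamini_hochberg(pvals):
    """Benjamini–Hochberg adjusted p-values (q-values)."""
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])
    adjusted = [0.0] * n
    running_min = 1.0
    for rank_from_end, i in enumerate(reversed(order)):
        rank = n - rank_from_end            # 1-based rank of p-value i
        running_min = min(running_min, pvals[i] * n / rank)
        adjusted[i] = running_min
    return adjusted

def differential(proteins, pvals, fold_changes, fdr=0.05, fc=1.5):
    """Keep proteins passing both the FDR and the fold-change cutoff."""
    q = benjamini_hochberg(pvals)
    return [prot for prot, qi, f in zip(proteins, q, fold_changes)
            if qi <= fdr and (f >= fc or f <= 1 / fc)]

# Hypothetical values for five proteins
proteins = ["APP", "COL25A1", "MOG", "GAPDH", "ACTB"]
pvals = [1e-6, 1e-8, 0.001, 0.40, 0.03]
fcs = [4.0, 120.0, 0.5, 1.05, 1.1]   # plaque / non-plaque ratio
hits = differential(proteins, pvals, fcs)
assert hits == ["APP", "COL25A1", "MOG"]  # ACTB fails the FC cutoff
```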
Notably, we identified among the top ten enriched proteins in DS non-plaque tissue CLU, VIM, HSPB6, and SYNM (Supp. Table 5), which we also found enriched in amyloid plaques in all disease groups. CLU was consistently enriched in non-plaque tissue in the three groups evaluated when compared to control tissue (Supp. Table 5). VIM and HSPB6 were also among the most enriched proteins in EOAD non-plaque tissue (Supp. Table 5). Conversely, we identified the actin-binding protein destrin (DSTN) as the only protein among the top ten significantly decreased proteins in non-plaque tissue from DS, EOAD, and LOAD cohorts compared to controls (Supp. Table 5). We also observed that parvalbumin (PVALB) was the most decreased protein in DS non-plaque tissue compared with controls (Fig. g), whereas the levels of PVALB in EOAD and LOAD were not significantly different from controls (Supp. Table 4). Our proteomics findings in non-plaque tissue showed that there were more differences in protein levels in non-plaque tissue between groups, in comparison to the more consistent protein levels in plaques, highlighting the largely similar plaque proteome between AD subtypes despite differences in baseline, non-plaque-protein expression.
We performed correlation analyses to compare the proteomes of Aβ plaques and non-plaque tissues in DS, EOAD, and LOAD. Proteins included in the correlations were significant and FC > 1.5 at least in one of the groups evaluated. For amyloid plaques, there was a positive correlation between DS and EOAD ( R 2 = 0.77, p < 0.0001). We observed 65.5% (164/250) of the proteins changing in the same direction (i.e., fold-change for a protein is positive or negative in both groups), where 29.6% (74/250) of the proteins were significantly altered in DS and EOAD plaques (Fig. a). We only observed 4.8% (12/250) of the proteins changing in different directions (i.e., fold-change for a protein is positive in one group and negative in the other) (Fig. a). DS and LOAD plaque proteomes also correlated positively ( R 2 = 0.73, p < 0.0001), with 66.2% (135/204) of the proteins with same fold-change direction and 27.5% (56/204) of the proteins significantly altered in both groups (Fig. b). Similar to DS and EOAD, only 6.3% (13/204) of the proteins were changing in opposite direction (Fig. b). There was also a positive correlation between EOAD and LOAD differentially abundant plaque proteins ( R 2 = 0.67, p < 0.0001), similar to what we observed between DS vs. the AD subtypes evaluated. We identified 66.4% (234/256) of the proteins changing in the same direction, and 25% (64/256) of the proteins were significant in both groups (Fig. c). The proteins changing in opposite direction accounted for 8.6% (22/256) of the total (Fig. c). Our analysis shows high similarity among the proteins altered in Aβ plaques vs.
non-plaques of DS, EOAD, and LOAD, with the majority of the proteins changing in the same direction. Correlation analyses of DS, EOAD, and LOAD non-plaque differentially abundant proteins showed positive correlations between DS and EOAD ( R 2 = 0.59, p < 0.0001) and a weaker correlation between DS and LOAD ( R 2 = 0.33, p < 0.0001) (Fig. d–e). We observed 65.9% (275/417) of the proteins changing in the same direction in DS and EOAD non-plaque tissue, where 27.6% (115/417) of the proteins were significantly altered in both groups. We observed 6.5% (27/417) of proteins changing in the opposite direction (Fig. d). Similarly, 67.1% (328/489) of the proteins in DS and LOAD were changing in the same direction (Fig. e). We observed that 15.3% (75/489) of the proteins were significant in both groups, whereas 17.6% (86/489) of proteins had opposite fold changes (Fig. e). Moreover, we observed a higher positive correlation between the EOAD and LOAD non-plaque proteomes ( R 2 = 0.79, p < 0.0001), with 63.9% (273/427) of the proteins changing in the same direction and 33.5% (143/427) also significant in both groups (Fig. f). Only 2.6% (11/427) of the proteins were changing in opposite directions (Fig. f). Overall, we observed a similar ‘amyloid plaque-protein signature’ across the experimental groups. Nonetheless, correlations of the non-plaque tissue proteomes suggest a higher similarity between the EOAD and LOAD differentially enriched proteins than with DS. We performed chromosomal mapping of significantly altered proteins identified through proteomic analysis across all human chromosomes using the UCSC Human Genome Browser to evaluate the distribution of these proteins across DS, EOAD, and LOAD. The Supplemental Figure illustrates the percentage of significantly altered proteins for each group. The overall percentage of proteins from each chromosome was below 20%, and no single chromosome exhibited a markedly overrepresented protein expression pattern.
This suggests that proteins from all chromosomes, not just Hsa21, contribute to the molecular differences observed in both DS and AD. Of the 1995 proteins identified in this study, 22 were from Hsa21 (Fig. ). We compared these proteins with those reported in a previous DS plaque proteomics study, identifying a total of 26 Hsa21 proteins between the two studies. A significant portion of these proteins, 69.2% (18/26), was shared between the current and previous studies (Fig. ). Among the proteins identified, APP was significantly altered in Aβ plaques in all cohorts (Fig. ). GART was significantly abundant in LOAD and DS non-plaque tissue (Fig. a, c), and PCP4 was differentially expressed in LOAD and EOAD non-plaque tissue (Fig. a, b). CXADR was differentially expressed in EOAD amyloid plaques (Fig. b). APP was also significantly enriched in DS non-plaque tissue (FDR < 0.05, Fig. a). NCAM2, CBR1, CBR3, PDXK, CSTB, and COL6A1 were significantly enriched in DS non-plaque tissue (Fig. a). Taken together, these results, along with the chromosomal mapping of all significantly altered proteins, suggest that Hsa21 triplication does not necessarily lead to the enrichment of those gene products in Aβ plaques or in the surrounding non-plaque tissue.

Aβ plaques functional analyses

We identified functional associations for the significantly abundant proteins in Aβ plaques and AD non-plaque tissue by performing ‘GO enrichment analysis’ (FDR < 0.05, Supp. Tables 6–13). Top enriched biological process (BP) GO terms in DS included lytic vacuole organization, lysosome organization, and lysosomal transport (for the three terms, p = 1.29 × 10⁻⁵, Fig. a, Supp. Table 6). We also identified the terms cell activation (p = 0.00024), regulation of immune system process (p = 0.00027), and leukocyte activation (p = 0.00016), which were also observed in EOAD (Fig. a).
For cellular component (CC), the top terms we identified were vacuole, lysosome, lytic vacuole (p = 9.56 × 10⁻¹⁴), and endosome (p = 9.71 × 10⁻¹⁴, Fig. a, Supp. Table 10), mirroring the BP GO terms. In contrast, the most enriched BP terms in EOAD were regulation of immune system process, B-cell-mediated immunity, immunoglobulin-mediated immune response, and lymphocyte-mediated immunity (p = 4.33 × 10⁻⁵, Fig. a, Supp. Table 6). Top CC GO terms in EOAD were secretory granule (p = 1.13 × 10⁻⁶), vacuolar lumen, and collagen-containing extracellular matrix (both p = 8.75 × 10⁻⁷, Fig. a, Supp. Table 10). LOAD also showed BP GO terms related to lysosomes, as observed in DS, albeit with lower significance; for instance, we identified lysosomal transport and organization and lytic vacuole organization (p = 0.0288, Fig. a, Supp. Table 6). CC GO terms included lysosome and lytic vacuole (p = 2.47 × 10⁻⁷), collagen-containing extracellular matrix (p = 9.41 × 10⁻⁶), and endosome (p = 0.00063) (Fig. a, Supp. Table 10), highlighting functional similarities of plaque-associated proteins between DS and LOAD. We also evaluated the physical and functional interactions of the proteins significantly abundant in Aβ plaques using Cytoscape and the STRING database (Fig. b–d). The networks of amyloid plaque proteins for all the cohorts evaluated showed a significant degree of protein–protein interactions (PPI enrichment p = 1 × 10⁻¹⁶). We observed a consistent group of proteins in all forms of AD evaluated, which were grouped based on functional enrichment (Fig. b–d). For instance, we identified proteins related to APP and Aβ metabolism (APP, APOE, CLU, CLSTN1, NCSTN, APLP2, and SPON1), immune response and inflammation (HLA-DRB1, HLA-DRB5, C1QC, C4A, and C3, consistent in DS and EOAD; CD44, ICAM1, and MSN in EOAD and LOAD), and lysosomal-related functions (PPT1, TPP1, LAMP1, PSAP, and CTSD). APOE was highly abundant in Aβ plaques in DS and LOAD (Fig.
b, d) compared to EOAD, with the enrichment being most significant in DS (Fig. b). We also identified a group of glial-related proteins in the EOAD network, namely VIM, DES, and GFAP (Fig. c). Overall, our findings suggest a similar plaque-protein signature in the three groups, functionally associated mainly with APP and Aβ processing, immunity-related responses, and lysosomal functions. In addition, an analysis of the ten most abundant proteins (ranked by FC) differentially enriched in Aβ plaques in DS, EOAD, or LOAD further supported the relationship of Aβ plaque-associated proteins with lysosomal and immune-related functions (Supp. Table 14). According to the GO annotation, the significantly enriched amyloid plaque proteins in DS predominantly relate to endo/lysosomal functions, including CLCN6, ATG9A, and VAMP7 (Fig. , Supp. Table 14). The oligodendrocyte protein MOG was significantly decreased in plaques in all cohorts, but the fold-change suggests a greater reduction in DS than in the other groups (Supp. Table 3, Fig. a). We also identified ITM2C, a protein involved in Aβ peptide production (Fig. b), as well as proteins with functions linked to presynaptic signaling and axon guidance, namely RUNDC3A and NTN1 (Fig. ). The calcium-binding protein and inhibitory neuron marker PVALB was significantly enriched in DS plaques but was unaltered in EOAD and LOAD (Fig. f). In contrast, we observed that the Aβ plaque proteins significantly abundant in EOAD are mostly related to the immune response and immunoglobulin-mediated immune response (S100A7, HPX, and IL36G), as well as to the vacuolar lumen and secretory vesicles (GGH, TTR). The protein EPPK1 is linked to cytoskeletal organization functions such as epithelial cell proliferation and intermediate filament organization (Supp. Table 14).
In LOAD, we observed a series of proteins involved in the bounding membrane of organelle, collagen-containing extracellular matrix, and vesicle membrane terms (CYB5B, VWF, and PTPRN2). Although not associated with particular GO terms, other LOAD amyloid plaque proteins were identified, including TIMM8A, ACSS3, and SFXN5 (linked to mitochondrial functions); THUMPD1 and RPS7 (related to RNA-binding activity and ribosomes); and NRXN2 (protein–protein interactions at the synapse) (Supp. Table 14). These observations support our findings from the GO functional enrichment and protein interaction networks, providing evidence that some of the most abundant proteins in DS plaques are primarily linked to lysosomal pathways.

Non-plaque tissue functional analyses

GO terms for abundant non-plaque proteins showed chromatin remodeling as the top BP term for all experimental groups (DS p = 0.00128, EOAD p = 5.79 × 10⁻⁹, LOAD p = 1.69 × 10⁻¹⁰, Supp. Figure 3a, Supp. Table 8). Importantly, top BP GO terms in DS were associated with integrin-mediated signaling, extracellular structure, and extracellular matrix organization (p = 0.00684, Supp. Figure 3a, Supp. Table 8). In contrast, EOAD and LOAD top BP GO terms included protein–DNA complex assembly (p = 4.74 × 10⁻⁶ and p = 1.14 × 10⁻⁸, respectively), regulation of gene expression (EOAD p = 5.08 × 10⁻⁵, LOAD p = 1.68 × 10⁻⁸), and nucleosome assembly (EOAD p = 4.74 × 10⁻⁶, LOAD p = 3.25 × 10⁻⁸) (Supp. Figure 3a, Supp. Table 8). Top CC GO terms for DS were collagen-containing extracellular matrix, which was also observed in EOAD and LOAD, external encapsulating structure, and extracellular matrix (p = 3.52 × 10⁻⁸, Supp. Figure 3a, Supp. Table 12). The top CC GO term for EOAD was nucleosome (p = 4.44 × 10⁻⁶), which was also identified in DS and LOAD. Other EOAD top CC GO terms were DNA packaging complex (p = 8.01 × 10⁻⁶) and protein–DNA complex (p = 2.23 × 10⁻⁵) (Supp. Figure 3a, Supp. Table 12).
In a similar fashion, LOAD top CC GO terms were DNA packaging complex, protein–DNA complex (both p = 3.78 × 10⁻¹⁴), and nucleosome (p = 1.71 × 10⁻¹²) (Supp. Figure 3a, Supp. Table 12). We also created protein interaction networks of the non-plaque tissue DS, EOAD, and LOAD proteomes, which showed a highly significant degree of protein–protein interactions (PPI enrichment p = 1 × 10⁻¹⁶, Supp. Figure 3b–d). We observed groups of RNA-binding proteins, such as SRSF4, eukaryotic initiation factors (eIF4), and the heterogeneous nuclear ribonucleoprotein (hnRNP) family, primarily in the EOAD and LOAD networks (Supp. Figure 3c, d). We also observed a set of intermediate filament and glial proteins, such as GFAP, AQP4, DES, VIM, ALDH1L1, and GART (Supp. Figure 3b–d). Additionally, there were groups of histone proteins related to the nucleosome, such as the H2A, H2B, and H1 protein families (Supp. Figure 3b–d). In particular, the DS protein interaction network exhibited a set of collagens, laminins, cell adhesion proteins, proteoglycans, and heparan sulfate proteins (Supp. Figure 3b), as well as proteasome and chaperone proteins also involved in the regulation of gene expression, including SQSTM1, PSMB4, PSMD4, and HSPB6 (Supp. Figure 3b). Our findings highlight a pivotal role of extracellular matrix (ECM) and structural components in DS beyond the proteins associated with Aβ plaque pathology.
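The GO enrichment p values quoted in these sections come from an over-representation test: given how many detected proteins carry a GO term, how surprising is the number of term-carrying proteins in the differentially abundant list? A minimal sketch of the hypergeometric upper-tail statistic commonly used by GO enrichment tools; all counts below are invented for illustration and do not correspond to the study's actual background or hit lists:

```python
from math import comb

def go_enrichment_p(N, K, n, k):
    """Hypergeometric upper tail P(X >= k): N background proteins,
    K annotated with the GO term, n proteins in the hit list,
    k of the hits carrying the annotation."""
    total = comb(N, n)
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / total

# e.g., 2000 detected proteins, 60 annotated "lysosome",
# 300 plaque-enriched proteins, 25 of them lysosomal (invented counts)
p = go_enrichment_p(N=2000, K=60, n=300, k=25)
print(p)
```

With an expected ~9 lysosomal hits by chance, observing 25 yields a very small p value; in practice, tools additionally correct such p values across all tested terms (e.g., FDR < 0.05, as applied here).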
We compared the differentially abundant proteins found in Aβ plaques and AD non-plaque tissue with previous human AD proteomics studies compiled in the NeuroPro database. We observed that 77.7% of the altered proteins identified in amyloid plaques in our study were also identified in previous AD plaque proteomics studies (Fig. a). Of the 301 significantly altered plaque proteins that we identified in the present study, 13.6% had not been found in previous plaque proteomics studies and were only reported as significantly altered in bulk brain tissue proteomics studies (Fig. a).
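This comparison amounts to classifying each significantly altered protein against two reference sets: proteins previously reported in plaque studies and proteins reported only in bulk-tissue studies. A toy sketch of that classification with placeholder protein lists; none of these set memberships reflect the actual NeuroPro contents or the study's hit list:

```python
# Illustrative placeholder sets only
ours = {"APP", "APOE", "SMOC1", "LAMTOR4", "HLA-DRB5", "ACP2", "MDK", "CLU"}
neuropro_plaque = {"APP", "APOE", "SMOC1", "MDK", "CLU", "C3"}  # plaque studies
neuropro_bulk = {"APP", "APOE", "HLA-DRB5", "C4A"}              # bulk-tissue studies

in_plaque = ours & neuropro_plaque                    # previously seen in plaques
bulk_only = (ours & neuropro_bulk) - neuropro_plaque  # bulk tissue only
novel = ours - neuropro_plaque - neuropro_bulk        # not reported before

pct = lambda s: round(100 * len(s) / len(ours), 1)
print(pct(in_plaque), pct(bulk_only), pct(novel))
```

The three percentages partition the significant proteins the same way as the reported 77.7% / 13.6% / novel split for plaques.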
Similarly, 85.2% of the proteins we identified in the non-plaque tissue have been described in previous plaque and bulk tissue proteomics studies, whereas 10.9% have been identified in bulk human brain tissue but not in plaque proteomics studies (Fig. a). Interestingly, we identified 34 proteins that have not been described previously in any human AD proteomics study, either in plaques or in bulk tissue (Fig. a, Supp. Tables 15–16). In DS specifically, we identified seven amyloid plaque proteins and eight non-plaque tissue proteins significantly altered in our study that have not been found in past AD brain tissue proteomics studies (Fig. b, Supp. Table 17). Similarly, in EOAD we identified 21 significantly altered proteins in plaques and eight in non-plaque tissue that have not been described previously (Fig. b, Supp. Table 17). In the case of LOAD, we observed four significantly altered proteins in amyloid plaques and 15 in non-plaque tissue that have not been identified in previous AD plaque or bulk brain tissue proteomics studies (Fig. b, Supp. Table 17). From this group of proteins, LAMTOR4 (late endosomal/lysosomal adaptor and MAPK and MTOR activator 4) was significantly enriched in Aβ plaques in all the cohorts analyzed (Fig. c). The proteins HLA-DRB5, ALOX12B, and SERPINB4 were significantly enriched in DS and EOAD amyloid plaques (Fig. c). In contrast, LAMA2 was significantly decreased in DS and EOAD amyloid plaques (Fig. c). On the other hand, we observed the histone protein H2BC11, the basal cell adhesion protein BCAM, and the DNA-binding protein FUBP3 significantly enriched in non-plaque tissue in DS, EOAD, and LOAD (Fig. c). Centrosomal protein of 290 kDa (CEP290) showed a marked decrease in DS Aβ plaques compared to DS non-plaque tissue; however, it was detected in only a few of the 20 cases evaluated in that cohort (Supp. Table 3), which is why it did not reach FDR < 0.05 (Fig. c).
The protein FAM171A2 was significantly enriched only in EOAD and LOAD, whereas DCAKD was significantly decreased in EOAD and LOAD non-plaque tissue (Fig. c). Overall, our proteomics findings are consistent with previous proteomics studies. Notably, our comparative analysis allowed us to identify novel proteins in human AD proteomics. The NeuroPro database is a powerful tool to investigate proteomic changes in AD human brains; however, at the time of writing, it did not include DS proteomics data. Therefore, we compared our DS amyloid plaque proteomics findings with our previous study (Drummond et al., 2022), in which unbiased localized proteomics was used to interrogate the DS amyloid plaque proteome. In that study, any Aβ plaque detected by IHC was sampled regardless of plaque morphology. We observed 2522 proteins across both DS plaque proteomics datasets, comprising 1981 proteins in the present study and 2258 proteins in our previous work (excluding isoforms). We observed 68.1% (1717/2522) of proteins overlapping between both studies, with a total of 228 significantly altered plaque proteins in either dataset. Among these, 21.9% (50/228) were common to both studies (Fig. a). In particular, 36% (82/228) of the significantly altered proteins in the present study were not significant in Drummond et al., and conversely, 42.1% (96/228) of the proteins identified in the previous study were not significant in the current dataset (Fig. a, Supp. Table 18). This variance may reflect differences in statistical thresholds and the increased sample size, which provided higher power in this study to identify more plaque-enriched proteins in DS with greater confidence. For instance, 35 proteins that were significantly enriched in the Drummond study but not significant in ours were nonetheless detected in our dataset, with many showing increased abundance trends that nearly reached significance.
In addition, of the proteins that differed between both studies (Fig. a), only 12 had a different direction of change, suggesting that most of the differences observed between the datasets are due to the differential stringency applied and the number of samples. Despite these differences, we observed a significant positive correlation between the Aβ plaque proteomes of the DS cohorts (p < 0.0001, R² = 0.60, Fig. b). In fact, all 50 proteins common to both studies changed in the same direction (48 enriched and 2 decreased in plaques, Fig. b). Within this set of amyloid plaque proteins, we identified the Aβ peptide, APP, COL25A1, and a set of previously described plaque proteins, such as APOE, SMOC1, CLU, C3, and CLCN6, among others (extended data in Supp. Table 18), thus validating a plaque-protein signature also observed in DS Aβ pathology. Interestingly, of the seven DS plaque proteins that were novel with respect to the NeuroPro database (Supp. Table 17), only ACP2 was also observed in the previous DS plaque proteomics study (Supp. Table 18). Our study is consistent with previous similar proteomics studies on AD brains and further expands the catalog of proteins present at these pathological lesions. We performed immunofluorescence to validate the late endosome protein CLCN6, as it emerged as the most abundant plaque protein among the top ten significantly altered proteins in DS Aβ plaques (Supp. Table 14). Previously, CLCN6 was identified within plaques solely through our proteomics study, without histochemical evidence of its presence in Aβ plaques. Immunofluorescence staining showed CLCN6 localized in the cytoplasm of cells adjacent to intracellular 4G8 anti-Aβ-positive staining (Fig. a). Within plaques, Aβ appears to encapsulate CLCN6+ cells, with the highest intracellular colocalization between CLCN6 and Aβ.
Moreover, CLCN6+/4G8+ cells were observed on the periphery of amyloid plaques, suggesting a potential role for CLCN6+ cells in either releasing Aβ species into plaques or participating in a phagocytic process (Fig. a). Quantification of CLCN6 fluorescence and area, normalized by plaque area, showed a significant increase in Aβ plaques in DS, EOAD, and LOAD compared to non-plaque tissue (Fig. b–c). Interestingly, CLCN6 area was significantly reduced in non-plaque tissue across all cohorts relative to control non-plaque tissue (Fig. b–c). These histochemical results are consistent with trends observed in the proteomic data (Fig. d). Further co-staining with MAP2 indicated that most CLCN6+ cells are neurons, with a minority of smaller MAP2− cells also displaying CLCN6 staining (Fig. e). Overall, these findings suggest that CLCN6 may be involved in storing and transporting Aβ, which could be released extracellularly in the AD pathogenic context, contributing to amyloid plaque formation. TPP1 is a lysosomal protein that was identified in previous human proteomics studies but has not been characterized in Aβ plaques by immunohistochemistry. Our validation revealed a distinctive punctate expression pattern common to lysosome-associated proteins. These bright puncta were consistently observed both within Aβ plaques and in the surrounding non-plaque regions (Fig. a). In addition to the punctate signal, TPP1 expression appeared widespread and highly abundant throughout the tissue, with immunoreactivity present diffusely in the cytoplasmic regions of presumptive neurons and glial cells (Fig. a). We observed TPP1-positive staining in Aβ plaques, with a pattern suggesting that the protein does not directly colocalize with Aβ. Instead, TPP1 appears to occupy spaces within the plaques that are less densely packed with amyloid, or is embedded within denser amyloid aggregates while remaining distinguishable (bottom panel, Fig. a).
Our proteomics analysis showed that TPP1 is significantly enriched in plaques in DS, EOAD, and LOAD (Fig. b). However, the enrichment of TPP1 in amyloid plaques is modest (fold-change of 1.62 in DS, 1.51 in EOAD, and 1.69 in LOAD; Supp. Table 3), and we did not observe significant differences in TPP1 levels by IHC (Fig. c). Notably, the density and intensity of TPP1 staining within plaques were qualitatively similar to those in non-plaque areas, consistent with the proteomic findings indicating subtle enrichment of TPP1 in plaques. Overall, our observations suggest that TPP1 is not exclusively localized to plaques but is instead distributed throughout the brain parenchyma. Weighted gene co-expression network analysis (WGCNA) allowed us to identify correlations between clusters of co-expressed proteins and clinical traits, including APOE genotype, sex, age, TDP-43 and α-synuclein co-pathologies, and regional Aβ and pTau pathology levels. Top GO BP and CC annotations associated with each module are presented (FDR < 0.05), with additional information about module sizes and extended functional annotation provided in Supp. Tables 19–26. Notably, Module 1 from DS plaques, containing multiple highly abundant plaque proteins (e.g., CLCN6, MDK, ITM2C, ARL8B, and C1QC), correlated significantly only with pTau levels (R = 0.5, p = 0.024) (Supp. Figure 4). In EOAD, we observed negative correlations between the APOE3 and APOE4 genotypes, as well as between APOE and age. Functional annotation indicated that the modules correlated with APOE genotype are related to synaptic signaling and mitochondrial metabolic processes (Supp. Figure 5). Additionally, Module 5, including the astrocytic proteins DES, VIM, GFAP, GJA1, and ALDH1L1, was positively correlated with APOE3 and negatively correlated with APOE4 (R = 0.54, p = 0.014 and R = −0.52, p = 0.02), underscoring the relevance of astrocytes in AD neuropathology (Supp. Figure 5).
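The module–trait correlations reported in this WGCNA analysis pair a per-module summary profile (the module eigengene) with a clinical trait vector such as pTau load. A minimal sketch of that step, approximating the eigengene by the average z-scored protein profile rather than the exact first principal component used by WGCNA; all abundance and pTau values are invented for illustration:

```python
import math

def zscore(xs):
    m = sum(xs) / len(xs)
    sd = math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))
    return [(x - m) / sd for x in xs]

def pearson(xs, ys):
    zx, zy = zscore(xs), zscore(ys)
    return sum(a * b for a, b in zip(zx, zy)) / len(zx)

# rows: proteins assigned to one module; columns: samples (invented values)
module = [
    [5.1, 6.0, 7.2, 8.1, 9.0],   # e.g., a CLCN6-like profile
    [4.8, 5.5, 6.9, 7.7, 8.8],   # e.g., an MDK-like profile
    [3.9, 4.1, 5.0, 6.2, 7.1],   # e.g., an ITM2C-like profile
]
ptau = [0.2, 0.9, 0.7, 1.6, 2.2]  # per-sample pTau load (arbitrary units)

z_rows = [zscore(row) for row in module]
# approximate eigengene: average z-scored profile per sample
eigengene = [sum(col) / len(col) for col in zip(*z_rows)]
r = pearson(eigengene, ptau)
print(round(r, 2))
```

On the toy data the eigengene tracks pTau closely, analogous to a positive module–trait correlation such as the R = 0.5 reported for Module 1 in DS plaques.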
On the other hand, LOAD plaque co-expression networks revealed a significant correlation between Module 58, functionally associated with the axonal myelin sheath and containing multiple oligodendrocyte proteins (MOG, MBP, MAG, CNP, HAPLN2, and PLP1), and Aβ neuropathology (R = −0.51, p = 0.021) (Supp. Figure 6). In addition, Module 30, comprising the proteins COL25A1, C3, and fibrinogens (FGA, FGB, FGG), was positively correlated with APOE4 and Tau (R = 0.45, p = 0.048 and R = 0.56, p = 0.01, respectively) and negatively correlated with age (R = −0.63, p = 0.011) (Supp. Figure 6), suggesting potential age-dependent alterations in some of the proteins associated with Module 30. Age correlated significantly with multiple modules in all cohorts, although it is noteworthy that the LOAD cohort is inherently older than the DS and EOAD cohorts. In the non-plaque tissue co-expression networks, Modules 15, 29, and 44 in DS showed opposing correlations with APOE3 and APOE4 (Supp. Figure 7), with Module 15 also associated with "Cytoplasmic translation" and "Ribosomal subunit" functions. The EOAD non-plaque networks showed the largest number of modules significantly correlated with APOE genotype (Supp. Figure 8). Functional enrichment included terms related to neuron differentiation, axon structure, the presynapse, cytoskeletal organization, and GTPase regulation in modules negatively correlated with APOE4 (Supp. Figure 8). Module 55 was negatively correlated with APOE4 and positively with Tau (R = −0.57, p = 0.085 and R = 0.5, p = 0.025) (Supp. Figure 8) and included the protein C3 and fibrinogens (FGA, FGB, FGG), similar to Module 30 in LOAD plaques. This observation suggests that common proteins may have distinct roles in AD pathology across subtypes. LOAD non-plaque correlation networks showed a few modules significantly correlated with the APOE4 genotype, similar to the LOAD plaque correlations (Supp. Figure 9).
In particular, Module 23 was associated with "response to unfolded protein," comprising multiple heat shock proteins, such as HSPE1, HSPD1, HSPA8, HSPA9, and HSP90AA1 (Supp. Figure 9). Overall, our WGCNA analysis revealed that each cohort evaluated has distinct clusters of co-expressed proteins that correlate with clinical variables, such as APOE genotype, pTau, and Aβ pathology, suggesting that AD pathology progresses through different mechanisms in DS, EOAD, and LOAD. The interactions of the multiple proteins identified in each experimental group with clinical traits may inform the development of therapies and biomarkers tailored to each form of AD. We conducted a comparative analysis of Aβ plaque and non-plaque proteomes in individuals with DS, EOAD, and LOAD, identifying 43 proteins consistently altered in Aβ plaques across all cohorts. The Aβ plaque proteomes showed a high degree of correlation among DS and the AD subtypes, although certain proteins showed differential abundance across the groups. GO functional enrichment and protein–protein interaction analyses indicated predominant associations of Aβ plaque proteins with APP metabolism, lysosomal functions, and immune responses. Our findings suggest a shared "Aβ plaque protein signature" across the evaluated groups, underscoring a notable similarity between the DS plaque proteome and those of EOAD and LOAD. In contrast, the non-plaque proteome showed group-specific variations in protein abundance, leading to distinct functional associations. These results highlight physiological differences in the brains of individuals with DS compared to those with EOAD and LOAD. Our unbiased localized proteomics approach enabled the identification of hundreds of proteins associated with Aβ plaques, including HTRA1, CLU, CLSTN1, GPC1, and VIM, which have been linked to protective roles against Aβ neuropathology or the regulation of amyloid production.
Additionally, we confirmed the presence of proteins in Aβ plaques that are less studied in the context of AD, such as CLCN6, ARL8B, TPP1, VAMP7, and SMOC1, suggesting a potentially important role for these proteins in AD pathology. We previously demonstrated a strong colocalization of SMOC1 with diffuse and neuritic plaques, with a higher proportion in the hippocampus than in the neighboring cortex. More recent findings have shown colocalization of SMOC1 and PDGFRα, indicating that SMOC1 expression is highest in OPCs, as expected from RNA-seq datasets. Furthermore, our findings include several previously unreported plaque-enriched proteins in human AD and DS proteomics, expanding on earlier studies. These novel proteins are linked to critical processes in AD pathology and DS, such as lysosomal function (ACP2, LAMTOR4), immune response (HLA-DRB5, IL36G), and ubiquitination (RBX1), and have been implicated in AD through genetic studies. Thus, our results provide evidence supporting the involvement of these proteins in AD pathophysiology. Our network analysis revealed a functional pattern among plaque proteins, with an increased level of predicted protein–protein interactions observed across all experimental groups. Notably, proteins such as NTN1, NCSTN, SPON1, and CLSTN1 were present in all cohorts and have known associations with APP/Aβ processing. While APP metabolism is well recognized in AD, with the APP gene located on chromosome 21, these APP-related proteins remain understudied in DS. Our proteomics data also highlighted the presence of immune- and inflammation-related proteins, including C1QC, C4A, C3, MDK, CLU, HLA-DRB1, and HLA-DRB5. These proteins clustered near the APP node in the protein networks, suggesting potential interactions with Aβ. This observation aligns with prior studies linking complement proteins, CLU, and MDK to senile plaques. Specifically, murine studies indicate that CLU may contribute to neurotoxicity and fibrillar Aβ deposition.
Conversely, MDK has been shown to bind Aβ, with transgenic mouse studies indicating a reduction in Aβ deposition, although the underlying mechanisms remain unclear. Co-expression network analyses in murine AD models and human AD brain samples showed a strong association of MDK with Aβ plaques and cerebrovascular amyloid (CAA), as well as an increase in both parenchymal amyloid plaques and CAA, suggesting that MDK directly impacts amyloid deposition. Furthermore, studies using AD mouse models suggest that complement proteins may contribute to synapse loss, dystrophic neurite formation, and increased Aβ aggregation, potentially through microglia–astrocyte crosstalk in response to amyloid pathology (reviewed by Batista and colleagues). Additionally, our findings reveal the enrichment of HLA-DRB1 and the novel plaque protein HLA-DRB5 in Aβ plaques. Previous single-cell transcriptomic studies of human AD prefrontal cortex have correlated HLA-DRB1 and HLA-DRB5 expression in microglia with AD pathology, although the mechanisms of HLA proteins in Aβ neuropathology remain largely unknown. Our Aβ plaque proteomics data highlighted the enrichment of multiple proteins associated with the endo/lysosomal pathway, supporting prior findings that lysosomal dysfunction is a fundamental mechanism in AD. We identified TPP1, PPT1, LAMP1, and ARL8B, and confirmed VAMP7, previously identified as a novel amyloid plaque protein; these proteins are involved in lysosomal trafficking, vesicle fusion, and protein degradation. ARL8B is associated with Niemann–Pick disease type C and may also have a neuroprotective role against amyloid pathology. In addition, we showed that ARL8B is associated with plaques, specifically in areas that were not brightly stained for Aβ, and we identified ARL8B expression in a subset of reactive plaque-associated astrocytes.
ARL8B has also been detected at altered levels in the cerebrospinal fluid of AD patients compared to controls and Huntington's disease patients, indicating that altered ARL8B levels are AD-specific. LAMP1 has been found to be enriched in Aβ plaques, and studies using AD murine models have shown that plaque-associated LAMP1 is particularly increased in axons and dystrophic neurites. Additionally, there is an enrichment of LAMP1 in reactive microglia within senile plaques, which has been implicated in amyloid removal. TPP1 is a lysosomal matrix protein that is ubiquitously expressed in the human brain. TPP1 has been shown to destabilize Aβ through endoproteolytic cleavage, and deficiencies in TPP1, together with PPT1, are linked to the neurodegenerative lysosomal storage disease neuronal ceroid lipofuscinosis (NCL). TPP1 has been identified in previous human proteomics studies, but our current work is the first to provide a preliminary characterization of its role in the context of AD plaque pathology. Label-free mass spectrometry is a highly sensitive technique, which explains our observation of a subtle but significant enrichment of TPP1 in plaques even though we did not observe the same pattern by histochemistry. Although our preliminary validation of TPP1 did not reveal significant differences between Aβ plaques and non-plaque tissue, we observed a punctate expression pattern throughout the brain parenchyma, with notable association with Aβ plaques. These findings are similar to observations for other lysosomal proteins, such as ARL8B, LAMP1, cathepsin D and lipofuscin, and CLCN6, which associate with plaques but do not directly colocalize with Aβ. This suggests that TPP1 may not interact directly with Aβ but is instead localized to small pockets within amyloid plaques where Aβ is either absent or undergoing degradation.
CLCN6 is predominantly expressed in neurons within the central and peripheral nervous systems and is localized in the late endosomes of neuronal cell bodies. Our proteomics and immunohistochemical analyses confirmed the presence of CLCN6 in the neuronal cytoplasm, specifically surrounding intracellular Aβ, and revealed its enrichment in amyloid plaques compared to non-plaque tissue. Notably, CLCN6 has not been studied previously in the context of AD or DS, highlighting the novelty of these findings. Previous studies have demonstrated that CLCN6 disruption leads to lysosomal storage disease with behavioral abnormalities, resembling NCL. This pathology may be linked to a CLCN6 mutation impairing late endosome acidification, thereby compromising protein degradation and the autophagosomal pathway, a defect associated with late-onset NCL. Late endosomes play a critical role in forming intraluminal vesicles and serve as reservoirs for sorting ubiquitinated proteins destined for lysosomal degradation. Disruption of CLCN6 may therefore impede the degradation of key proteins such as TDP-43 and Tau, potentially contributing to intracellular protein accumulation and drawing parallels with other neurodegenerative disorders. Additionally, our WGCNA analysis in DS plaques highlighted a co-expression network module, including CLCN6 and other highly abundant plaque proteins, associated with Tau neuropathology levels. Altogether, our data suggest that CLCN6 may play a substantial role in the aggregation of neurotoxic proteins associated with AD neuropathology through its function in the endo/lysosomal pathway.

A closer examination of the most significant functional associations in the DS Aβ plaque proteome revealed a substantial enrichment of lysosomal-related GO terms, followed by those linked to the immune system and cell activation. Both lysosomal and immune processes are integral to AD pathophysiology.
Strong evidence suggests that endo/lysosomal alterations in DS are associated with APP and the βCTF fragment produced after BACE-1 cleavage of APP, potentially explaining early changes in DS. Increased systemic inflammation, possibly exacerbated by Aβ accumulation, is also evident in individuals with DS. Interestingly, the functional associations observed in the DS plaque proteome appear to be a combination of those found in EOAD and LOAD, further highlighting the similarity of the Aβ plaque proteome across cohorts.

Significantly enriched plaque proteins were shared across all cohorts, while some proteins were specifically enriched in certain groups. These observations may help elucidate AD pathogenesis and the unique mechanisms operating in DS and the AD subtypes. Interestingly, COL25A1 (CLAC-P) was the most enriched protein in plaques, especially in DS compared to EOAD and LOAD. Previous studies in mice suggested that CLAC, derived from COL25A1, is crucial in converting diffuse Aβ deposits into senile plaques. This finding may partially account for the heightened amyloid pathology observed in DS. Moreover, previous research has shown that the interaction between CLAC and Aβ is determined by negatively charged residues in the central region. Given recent discoveries about Aβ filaments in DS and Aβ fibril variation in different AD subtypes, structural differences in Aβ fibrils may result in unique interactions with COL25A1. Further investigation is required to comprehend the binding affinity of COL25A1 in DS and other forms of AD. However, our previous study indicated similar levels of COL25A1 in DS and EOAD plaques. It is plausible that the observed differences between our current and past studies are due to technical factors, such as sample preparation, data acquisition, and cohort size.
Our proteomics analysis revealed a significant reduction of oligodendrocyte proteins, including HAPLN2, PLP1, MOG, MAG, MBP, and BCAS1, within Aβ plaques and also in the non-plaque proteome across all cohorts compared to controls. Additionally, WGCNA analysis identified a co-expression module of these oligodendrocytic proteins that negatively correlates with Aβ neuropathology, suggesting that Aβ accumulation may impact oligodendrocyte function and myelin stability. Previous studies in the AD murine model 5xFAD reported loss of myelin-associated lipids and disruption of the myelin sheath associated with Aβ plaques in the brain parenchyma. Zhan and colleagues, using the superior temporal gyrus of human AD brains, provided evidence of increased levels of degraded MBP protein that colocalized with Aβ42 in plaque cores and also aggregated adjacent to the plaques. Given the interaction between MBP and Aβ42, the authors suggest that degraded MBP and other damaged myelin components may have a role in plaque development. These findings indicate that oligodendrocyte disruption may worsen neurodegeneration in the context of Aβ pathology and highlight a potential therapeutic target. A study in rhesus monkeys linked myelin degeneration to normal aging and cognitive decline. Recent studies using transgenic mice and human AD tissues have shown that myelin defects promote Aβ plaque formation and cause transcriptional changes in oligodendrocytes seen in AD and other degenerative diseases. Given that individuals with DS often exhibit age-associated disorders earlier than euploid individuals, myelin damage may be an early characteristic in DS, potentially exacerbating amyloid pathology. Further studies are warranted to understand how oligodendrocytes are impacted in DS and AD.

The analysis of the non-plaque tissue proteome in DS, EOAD, and LOAD highlighted two primary altered components in AD: the ECM and chromatin structure.
In the DS non-plaque proteome, we observed a cluster of ECM-related proteins that was not evident in EOAD and LOAD, although it was suggested by functional annotation analysis. Early studies using human AD brain samples showed ECM proteins (collagen, laminin, and HSPG) colocalizing with neuritic plaques. Subsequent findings in transgenic mice and human AD brain samples indicated increased mRNA levels of collagen type VI proteins proportional to APP and Aβ expression, suggesting protective roles against Aβ neurotoxicity. Our data indicate that the ECM in DS is more significantly affected than in EOAD and LOAD. Recent studies using trisomy 21 iPSCs at different stages of neuronal induction suggested aberrant ECM pathways and increased cell–cell adhesion, affecting neural development. Proteomics studies of human AD brain tissues have correlated cell–ECM interaction pathways and matrisome components with AD neuropathological and cognitive traits, and ECM components were observed in pre-clinical AD cases, suggesting early ECM alterations in AD. These observations support a more significant and earlier alteration of ECM proteins in DS, possibly exacerbated by AD neuropathology. Additionally, proteins linked to chromatin structure were consistently altered in non-plaque tissue in all groups, most prominently in LOAD and EOAD. Our observations align with previous research suggesting structural changes in chromatin accessibility and altered gene expression in AD. Studies using murine models of DS and trisomy 21 iPSCs have shown reduced global transcription activity and changes resembling those in senescent cells, such as chromosomal introversion, nuclear lamina disruption, and altered chromatin accessibility. This evidence may explain the differences observed in the protein interaction networks and functional annotation analyses between the non-plaque proteomes of DS and the AD subtypes studied.
While our study sheds light on the molecular mechanisms behind Aβ plaque pathology in DS and various forms of AD, it is essential to recognize certain limitations. Bottom-up proteomics identifies proteins from detected peptides, reflecting only the trypsin-digestible proteome. Proteins are assembled as the smallest set explaining all observed peptides, with specific proteoforms reported only if unique peptides are detected. Despite this limitation, bottom-up proteomics offers higher sensitivity than other methods and avoids the need for pre-selecting protein targets, making label-free mass spectrometry ideal for discovery proteomics. Our findings highlight significant proteome changes, providing a foundation for future hypothesis generation and further investigation into the mechanisms driving these protein alterations. However, future studies should apply additional validation and characterization methods to candidate proteins, such as two-dimensional (2D) electrophoresis and Western blotting in addition to immunohistochemistry, which could further substantiate our findings. These top-down proteomic technologies will be helpful for quantifying the levels of specific proteins, thereby complementing the discovery-based approach of bottom-up proteomics and providing a more comprehensive view of protein isoforms and post-translational modifications. Our analysis was also restricted to classic cored plaques and dense aggregates from DS and AD cases primarily at advanced disease stages, constraining our conclusions to an 'end-point' proteome profile. Nonetheless, we identified notable neuropathological distinctions between DS and the other cohorts, potentially associated with the observed proteomic alterations in plaque and non-plaque tissues. Future studies targeting different morphological types of plaques (e.g., diffuse or cotton-wool plaques) would be valuable. Our analysis was also limited to brain regions vulnerable in AD.
Future investigations should encompass broader age ranges and include a more detailed analysis of brain subregions, such as those within the hippocampus, entorhinal cortex, and adjacent temporal cortex. This approach could help create a more detailed 'proteomics landscape' of AD neuropathology, enhancing our understanding of disease progression and resilience mechanisms. Furthermore, membrane proteins, particularly integral membrane proteins, are often underrepresented in proteomics studies due to detection challenges. Finally, while our research is unbiased, it remains susceptible to variability arising from unknown genetic factors in each case. Subsequent research should integrate genetic details, such as familial AD mutations and other known genetic variables, and expand sampling across APOE genotypes, to gain deeper insights into their impact on AD.

Our study provides novel insights into the amyloid plaque proteome of DS, highlighting key functional aspects and contrasting them with EOAD and LOAD. We observed a notable similarity among the plaque proteomes of DS, EOAD, and LOAD, with predominant associations of plaque proteins with endo/lysosomal pathways, immunity, and APP metabolism. Specifically, the identification of CLCN6 underscores its potential role in AD pathology through its involvement in the endo/lysosomal pathway and warrants further investigation as a potential therapeutic target. The analysis of the non-plaque proteome revealed significant differential alterations in ECM and chromatin structure, emphasizing the nuanced differences between DS, EOAD, and LOAD. Our unbiased proteomics approach not only identifies enriched plaque proteins but also suggests potential therapeutic targets and biomarkers for AD, offering promising avenues for future research and clinical applications. Below is the link to the electronic supplementary material.
Supplementary file1 (TIF 14649 KB) Supplementary file2 (TIF 302 KB) Supplementary file3 (TIF 1541 KB) Supplementary file4 (TIF 1685 KB) Supplementary file5 (TIF 1660 KB) Supplementary file6 (TIF 2137 KB) Supplementary file7 (TIF 1859 KB) Supplementary file8 (TIF 2150 KB) Supplementary file9 (TIF 2075 KB) Supplementary file10 (XLSX 8418 KB)
Adapting the ACMG/AMP variant classification framework: A perspective from the ClinGen Hemoglobinopathy Variant Curation Expert Panel

INTRODUCTION

Burgeoning demand for genetic screening and correspondingly expanded diagnostic sequencing efforts have dramatically increased the number of sequence variants, many of unknown significance, that require clinical annotation. The collection, assessment, and evaluation of the variant evidence required to determine clinical actionability is a resource-intensive process, influenced by expert opinion and differences in methodologies and thresholds across clinical laboratories (Harrison et al., ). In an effort to establish a common framework for variant classification based on a standardized and transparent assessment of different lines of evidence, in 2015, the American College of Medical Genetics and Genomics (ACMG) and the Association for Molecular Pathology (AMP) published joint recommendations for the interpretation of variants in genes associated with Mendelian disorders (Richards et al., ). The ACMG/AMP framework defined 28 evidence criteria, organized by type and strength, and developed a five-tier scheme to classify variants as pathogenic, likely pathogenic, of uncertain significance, likely benign, or benign. This framework was designed for general use across different genes, diseases, and inheritance patterns, thus necessitating the application of expert judgment when evaluating and weighing evidence for the interpretation of variants. In response to the need for standardized evidence-based methods to characterize the clinical relevance of gene- and disease-specific sequence variants, the Clinical Genome Resource (ClinGen) assembles Variant Curation Expert Panels (VCEPs) to develop specifications for the ACMG/AMP framework (Rehm et al., ).
In addition, ClinGen established the Sequence Variant Interpretation Working Group (SVI WG) to provide general refinement of the ACMG/AMP guidelines for criteria that are applicable across diverse domains and to harmonize guideline specifications made by individual VCEPs (Harrison et al., ). The SVI WG systematically reviews the ACMG/AMP guidelines and has already published recommendations for the specification of multiple evidence types (https://www.clinicalgenome.org/svi/). The ClinGen Hemoglobinopathy VCEP (www.clinicalgenome.org/affiliation/50052/) was created collaboratively between the ITHANET Portal (https://www.ithanet.eu/) and the Global Globin Network of the Human Variome Project (http://www.humanvariomeproject.org/gg2020/) to perform gene- and disease-specific modifications to the ACMG/AMP framework for variants related to hemoglobinopathies.

Hemoglobinopathies are the commonest monogenic disorders worldwide, presenting an extremely diverse clinical spectrum of conditions of varying severity. They can be broadly classified into the thalassemia syndromes, characterized by a reduction in protein synthesis, and the structural hemoglobin variants, characterized by changes in protein stability and structure. Hemoglobinopathies are caused by both short nucleotide variants (SNVs) and copy number variants in the two globin gene clusters, namely the α-globin locus (NG_000006), including genes HBA1, HBA2, and HBZ, and the β-globin locus (NG_000007), including genes HBB, HBD, HBG1, HBG2, and HBE1, and locus-specific regulatory elements (Higgs et al., ). They predominantly have a recessive mode of inheritance, although dominantly inherited phenotypes have also been described, and a large number of genetic modifiers are known to affect disease expressivity and penetrance (Stephanou et al., ).
An epidemiological complication of hemoglobinopathies in regions historically endemic for malaria is the resistance of heterozygotes for established pathogenic variants to malaria (heterozygote advantage), which over time has led to an atypical enrichment of disease-causing alleles in corresponding populations (Kountouris et al., ; Roberts & Williams, ). Normal adults have a complement of four alpha (αα/αα) and two beta (β/β) globin genes, which encode the globin chains constituting the main adult tetrameric α2β2 hemoglobin molecule. Correspondingly, the underlying HBA1, HBA2, and HBB genes are primary pathology determinants in adults and are, therefore, the focus of initial guideline specification efforts for the hemoglobinopathies. The spectrum of phenotypes and disease severity depend on the properties of the protein variant in the case of structural defects (Thom et al., ), and on the number of genes that are lost, abnormally expressed, or, in some cases, duplicated in the case of the thalassemias (Farashi & Harteveld, ; Thein, ). For the thalassemias, the severity of the co-inherited mutant alleles, from mild to moderate to absolute (0) deficiency of globin expression, affects survival (Kountouris et al., ) and determines disease severity through anemia, that is, an overall reduced level of hemoglobin, and through the toxicity of homotetramers formed by unaffected, excess globin chains. The degree of globin chain imbalance is thus central to thalassemia pathophysiology, with most thalassemia alleles causing observable changes in the hematological indices of heterozygotes (Kohne, ). These gene-disease characteristics pose challenges for current ACMG/AMP variant interpretation guidelines and require customized criteria for accurate variant interpretation.
The Hemoglobinopathy VCEP is tasked with providing expert review of all globin gene variants and resolution of conflicting interpretations in the ClinVar variant database using the specified ACMG/AMP guidelines. Table shows current summary data for hemoglobinopathy variants available on ClinVar (accessed on May 14, 2021). Accordingly, a total of 794 sequence variants affecting HBB are annotated in ClinVar, and a smaller number of 199 and 242 sequence variants affecting HBA1 and HBA2, respectively. Most importantly, only a fraction of these variants has a review status of two stars in ClinVar, denoting two or more submissions with assertion criteria and evidence (or a public contact) providing the same interpretation. Specifically, the percentage of these variants with a two-star review status in ClinVar is 26.4%, 8%, and 10.7% for HBB, HBA1, and HBA2, respectively, highlighting the need for expert review of variants in these genes. The Hemoglobinopathy VCEP specifications were approved by ClinGen in April 2021 (Step 2 approval), which initiated the process of further validation and adaptation with known globin gene variants in a pilot study (toward Step 3 approval). Correspondingly, this report avoids detailed presentation of the current specifications and instead uses the perspective of the Hemoglobinopathy VCEP to describe the process of ACMG/AMP guideline adaptation for SNVs with recessive inheritance in HBB, HBA2, and HBA1. Owing to the involvement of two loci, the unusual epidemiology, and the complexity of allele interaction and phenotypes for hemoglobinopathies, our observations highlight the challenges generally encountered during variant curation and interpretation, and during the specification of ACMG/AMP guidelines by future VCEPs.
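Converted to absolute counts, the percentages above underline how few globin variants currently carry expert-concordant interpretations. A quick sketch of the arithmetic, using only the figures quoted above:

```python
# ClinVar counts and two-star percentages reported above (accessed May 14, 2021).
variants = {"HBB": 794, "HBA1": 199, "HBA2": 242}
two_star_pct = {"HBB": 26.4, "HBA1": 8.0, "HBA2": 10.7}

for gene, total in variants.items():
    n_two_star = round(total * two_star_pct[gene] / 100)
    print(f"{gene}: ~{n_two_star} of {total} variants have a two-star review status")
```

This works out to roughly 210 of 794 HBB variants, 16 of 199 HBA1 variants, and 26 of 242 HBA2 variants, leaving the large remainder as candidates for expert curation.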
The ACMG/AMP framework constitutes a classification system for Mendelian variants based on evidence criteria that assess variant frequency in the general population, variant types with disease causality, protein domains and mutational hotspots implicated in disease, disease/trait phenotypes in probands and families with observed segregation, in silico predictions, and functional evidence. Evidence criteria are divided into those that support a benign and those that support a pathogenic classification, with the intermediate classification categories being likely pathogenic, of uncertain significance, and likely benign; each criterion carries a suggested measure of strength, namely supporting, moderate, strong, or very strong (Richards et al., ). The Hemoglobinopathy VCEP has a total of 31 unique evidence codes, shown in Table , some of which are assigned at different levels of strength depending on the amount of evidence that is available. These criteria can be broadly grouped into five categories that are discussed in the subsequent sections of this article.

POPULATION DATA (PM2, BA1, AND BS1)

The frequency of a variant in the general population can be informative for its pathogenicity, as variants of high frequency in any large general population or control cohort are unlikely to be disease-causing. For this reason, a stand-alone benign criterion (BA1) was introduced in the original ACMG/AMP guidelines for all variants with a frequency of at least 5% in a general or control population. This threshold is very conservative, is selected for use across different genes and diseases, and is adjusted by VCEPs to reflect known allele frequencies in the genes of interest. In addition, BS1 is used for variants with a frequency higher than expected for the disorder, and it is also VCEP-specific (Ghosh et al., ). A statistical framework has been developed to facilitate the estimation of these thresholds and is available at the Allele Frequency App (Whiffin et al., ).
The framework accounts for disease prevalence, genetic and allelic heterogeneity, inheritance mode, penetrance, and sampling variance in reference data sets. Currently, gnomAD is the largest available population database and is widely used as a reference data set for the calculation of these thresholds (Karczewski et al., ). In the case of hemoglobinopathies, the high frequencies of some established pathogenic variants in historical malaria regions directly affect the application of criteria BA1 and BS1. Hence, to ensure the effective use of minor allele frequency evidence, the VCEP compiled a list of established variants that are excluded from the criteria requiring population data (i.e., BA1, BS1, and PM2_Supporting) based on their frequency in different populations globally. The variant frequencies were derived from the IthaMaps database (Kountouris et al., , ), which manually curates relative allele frequencies of specific globin gene variants at the country and regional level. An extremely low frequency of a genetic variant in the general population is considered moderate evidence for pathogenicity under the original ACMG/AMP guidelines (PM2). After further analysis and modeling, the ClinGen SVI subsequently recommended downgrading the strength of this evidence to supporting. The threshold for PM2_Supporting is often defined as an order of magnitude lower than the BS1 threshold; alternatively, it can be defined by analyzing the frequency of established pathogenic or benign variants for the gene of interest and calculating likelihood ratios for different thresholds. Using variants with established pathogenicity, the Hemoglobinopathy VCEP has selected the threshold that maximizes the likelihood ratio for globin gene variants, and this approach will be validated and adjusted as required during the ongoing pilot study.
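As a rough illustration of how such frequency thresholds can be derived and compared, the sketch below implements a simplified recessive-disease version of the maximum credible allele frequency calculation (in the spirit of the Whiffin et al. framework) together with a likelihood-ratio comparison of candidate PM2_Supporting cutoffs. All numeric inputs and the toy variant frequency lists are hypothetical placeholders, not VCEP-approved values, and the real framework additionally models sampling variance in the reference data set.

```python
import math

def max_credible_af(prevalence, genetic_contribution, allelic_contribution, penetrance):
    """Simplified recessive-disease threshold: under Hardy-Weinberg
    assumptions, prevalence * genetic_contribution ~ penetrance * q**2
    for the total pathogenic allele frequency q in the gene, and a single
    variant is capped at the fraction `allelic_contribution` of q."""
    q = math.sqrt(prevalence * genetic_contribution / penetrance)
    return q * allelic_contribution

def likelihood_ratio(cutoff, pathogenic_afs, benign_afs):
    """Likelihood ratio for the rule 'allele frequency below cutoff',
    estimated from established pathogenic and benign variant frequencies."""
    tpr = sum(af < cutoff for af in pathogenic_afs) / len(pathogenic_afs)
    fpr = sum(af < cutoff for af in benign_afs) / len(benign_afs)
    return tpr / fpr if fpr > 0 else float("inf")

# Hypothetical placeholder inputs: prevalence 1/10,000, the gene accounts
# for all cases, no single allele exceeds 10% of pathogenic alleles,
# full penetrance.
bs1_cutoff = max_credible_af(1e-4, 1.0, 0.1, 1.0)
print(f"BS1-style cutoff: {bs1_cutoff:.1e}")  # 1.0e-03

# Toy allele frequencies for established variants (placeholders).
pathogenic = [0.0, 1e-6, 5e-6, 2e-5, 1e-4]
benign = [5e-3, 1e-2, 2e-2, 2e-4, 8e-5]
for cutoff in (1e-3, 1e-4):
    lr = likelihood_ratio(cutoff, pathogenic, benign)
    print(f"PM2 cutoff {cutoff:.0e}: LR = {lr:.1f}")
```

In this toy example the lower cutoff yields the higher likelihood ratio; in practice the cutoff maximizing the likelihood ratio on established globin variants would be adopted and then re-evaluated during the pilot study, with frequency-based criteria suppressed entirely for variants on the heterozygote-advantage exclusion list.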
VARIANT TYPE AND LOCATION (PVS1, PS1, PM1, PM4, AND PM5)

The interpretation of a sequence variant requires an understanding of its effect on the structure and function of the gene product and prior knowledge of the molecular mechanism of disease. While some variants have a deleterious impact on protein production and/or function, others may cause partial or no discernible changes in phenotype. In contrast to variants that can lead to loss of function owing to premature termination of translation and protein synthesis, missense variants are difficult to assess for their pathogenicity, which largely depends on the variant position in the protein sequence and the biochemical consequence of the amino acid change. In addition, variants may act as benign bystanders to disease, or they may contribute to disease in the presence of another variant in the same gene. The molecular consequences of sequence variants, which depend on the variant type and genomic location, are evaluated by several rules in the ACMG/AMP framework (i.e., PVS1, PS1, PM4, and PM5). Loss of function is an established primary disease mechanism for hemoglobinopathies; hence, the PVS1 criterion for null variants (e.g., nonsense, frameshift, canonical ±1,2 splice sites, initiation codon, single-exon, or multi-exon deletion) would apply, particularly in the case of the thalassemia syndromes. In fact, there is currently no null globin gene variant with a benign or likely benign effect reported in ClinVar, further highlighting the role of loss of function as a primary disease mechanism. Nevertheless, in line with ClinGen SVI recommendations (Abou Tayoun et al., ), the Hemoglobinopathy VCEP is currently working on PVS1 modification for different null variant types. Specifically, the VCEP will evaluate existing evidence on variant pathogenicity for each null variant type, as well as alternative splicing and alternate routes of nonsense-mediated decay for globin genes (Peixeiro et al., ).
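The kind of type-dependent strength assignment under evaluation can be caricatured as follows. This is a deliberately simplified sketch, not the SVI decision tree itself; the strengths shown for splice, deletion, and initiation-codon variants are illustrative placeholders pending the VCEP's gene-specific review.

```python
def pvs1_strength(variant_type, predicted_nmd=True):
    """Illustrative-only sketch of PVS1 strength assignment for null
    variants. The actual ClinGen SVI decision tree (Abou Tayoun et al.)
    has many more branches (e.g. rescue transcripts, functional relevance
    of the truncated region), and the Hemoglobinopathy VCEP's adaptation
    may assign different strengths."""
    null_types = {"nonsense", "frameshift", "canonical_splice",
                  "initiation_codon", "exon_deletion"}
    if variant_type not in null_types:
        return None  # PVS1 does not apply to non-null variants
    if variant_type in {"nonsense", "frameshift"}:
        # Truncations predicted to escape nonsense-mediated decay are
        # typically downgraded in strength.
        return "very_strong" if predicted_nmd else "strong"
    if variant_type == "initiation_codon":
        return "moderate"  # possible alternative start codons limit confidence
    return "strong"  # splice/deletion: pending transcript-level review

print(pvs1_strength("nonsense"))                         # very_strong
print(pvs1_strength("frameshift", predicted_nmd=False))  # strong
print(pvs1_strength("missense"))                         # None
```

Encoding the decision logic this way also makes it easy to audit which branches of the tree the final VCEP specification modifies for the globin genes.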
In addition, PM4 (protein length changing variant) will be applied for in‐frame deletions/insertions and losses of stop codons that disrupt protein function, such as the widespread stop‐loss variant NM_000517.4:c.427T>C (Hb Constant Spring). Furthermore, as the globin genes are affected by both pathogenic and benign missense variants, standard ACMG/AMP criteria PP2 (missense variants are a common cause of disease with little benign variation) and BP1 (truncating variants are the only known mechanism of variant pathogenicity) do not apply to hemoglobinopathies. Likewise, globin genes do not contain a repetitive region without known function, as would be a prerequisite for applying BP3 (in‐frame indels in a repetitive region without known function). The location of a variant within a protein can impart changes to the protein structure, function, and other properties. Expert opinion is necessary for specifying the important regions of a protein in the context of the molecular mechanisms of disease, also acknowledging that these regions must have a low rate of benign variants. The hemoglobin molecule is among the best‐characterized proteins, for which several structure‐ and function‐critical domains, such as α1β1 and α1β2 interfaces, the heme‐binding pocket, and other biologically relevant sites, have already been associated with molecular mechanisms, including the Bohr effect, 2,3‐DPG binding or AHSP binding (Thom et al., ). The ACMG/AMP criteria PM1 (variant in a critical domain/mutational hotspot), PS1 (variant creates same amino acid change as a known pathogenic variant), and PM5 (novel missense variant at the same position as known pathogenic variant) evaluate the variant location and similarly argue that variants affecting critical domains are more likely to cause functional disruption and, thus, a pathogenic effect. 
In hemoglobinopathies, the HBA2 and HBA1 genes are paralogous with identical structure and function, and can thus be incorporated in these criteria to provide additional evidence for variant hotspots in critical functional domains (Moradkhani et al., ). The ACMG/AMP criteria can be further adapted to accommodate variants found in regions that affect the expression or splicing of globin genes.

COMPUTATIONAL DATA (PP3, BP4, AND BP7)

Computational (in silico) tools predicting the effect of sequence variants can also facilitate variant interpretation. A plethora of tools is available with important differences in their algorithmic approach and the type of sequence variants they can predict. Some tools, such as SIFT (Kumar et al., ) and PolyPhen-2 (Adzhubei et al., ), predict the impact of missense variants, and others, such as MaxEntScan (Yeo & Burge, ) and SpliceAI (Jaganathan et al., ), are focused on predicting the variant effect on splicing, while more recent tools can predict the effect of both coding and noncoding variants (Kircher et al., ). With the accuracy of in silico tools in the range of 65%–80% (Thusberg et al., ), the ACMG/AMP framework recommends the use of these predictions as supporting evidence for variant interpretation (PP3, BP4, and BP7). Nevertheless, the original framework does not recommend the use of specific in silico tools and, therefore, ClinGen VCEPs use different approaches to provide gene-specific recommendations based on the predictive performance of multiple tools. Some VCEPs require an agreement in variant effect prediction among multiple tools. Other VCEPs opt for simplicity by using a meta-predictor, such as REVEL (Ioannidis et al., ), which combines the results of multiple in silico tools, thus providing a single prediction for each sequence variant.
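To make the two approaches concrete, here is a hedged sketch of how a meta-predictor score or a panel of tools might be converted into PP3/BP4 evidence. The cutoffs and the example tool panel are placeholders (not calibrated values), and the tool names are only examples of real splice predictors.

```python
def missense_evidence(revel_score, path_cutoff=0.75, benign_cutoff=0.25):
    """PP3/BP4 from a single meta-predictor (e.g. REVEL) score.
    The cutoffs are placeholders; in practice they are calibrated
    against established pathogenic and benign variants."""
    if revel_score >= path_cutoff:
        return "PP3"
    if revel_score <= benign_cutoff:
        return "BP4"
    return None  # score falls in the uninformative middle range

def multi_tool_evidence(predictions):
    """PP3 when at least half of the tested tools agree the variant is
    damaging; `predictions` maps tool name -> True/False."""
    damaging = sum(predictions.values())
    return "PP3" if damaging >= len(predictions) / 2 else None

print(missense_evidence(0.91))  # PP3
print(missense_evidence(0.10))  # BP4
print(multi_tool_evidence({"SpliceAI": True, "MaxEntScan": True,
                           "NNSPLICE": False, "GeneSplicer": True}))  # PP3
```

A design point worth noting: the meta-predictor route collapses tool disagreement into one calibrated score, whereas the concordance route makes disagreement explicit, which is why threshold calibration against known variants (discussed next) matters in both cases.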
Rather than using the default threshold recommended by the in silico tool, several VCEPs have used data from established pathogenic and benign variants to identify the threshold that maximizes the predictive performance in the genes of interest. More recently, a quantitative approach (Johnston et al., ) has been proposed to calibrate different thresholds for benign and pathogenic computational evidence by using a Bayesian Classification Framework (Tavtigian et al., ). The Hemoglobinopathy VCEP recognizes the varying strengths and weaknesses of computational tools and recommends the use of REVEL for evaluating the effect of missense variants in the globin genes. For assessing the impact of variants in splicing, the specified criteria require concordant predictions across at least 50% of the tested tools. The Hemoglobinopathy VCEP is currently conducting a large‐scale study with over 1000 annotated globin gene variants to compare the performance of computational tools, adjust the prediction thresholds and, thus, further specify the criteria that use computational data (manuscript under preparation). Notably, the use of computational evidence is not allowed for loss‐of‐function variants that meet the PVS1 rule, to avoid accounting for the same evidence in different criteria. CASE LEVEL/SEGREGATION DATA (PS2, PS4, PM3, PM6, PP1, BP2, BP5, BS2, AND BS4) Case‐level data capture information about individuals who carry the variant of interest and can satisfy several components of the ACMG/AMP framework, such as PP4 (phenotype specific for disease), PS2/PM6 (de novo with/without parental testing), PM3/BP2 (in‐ trans or in‐ cis with a pathogenic variant), PP1/BS4 (cosegregation in affected family members, or lack thereof), PS4/BS2 (variant observation in cases or controls), and BP5 (alternate locus observations). Disease‐specific phenotype information is necessary to ensure that all affected individuals meet uniform diagnostic criteria. 
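The threshold logic described above can be sketched in a few lines. The REVEL cut-offs below are hypothetical placeholders (the calibrated thresholds are still being derived in the VCEP's ongoing study); only the ≥50% splicing-concordance rule is taken from the text:

```python
def missense_evidence(revel_score, path_thr=0.75, benign_thr=0.25):
    """Map a REVEL score to ACMG/AMP computational evidence.

    The thresholds are illustrative placeholders, not the calibrated
    values the Hemoglobinopathy VCEP is still deriving.
    """
    if revel_score >= path_thr:
        return "PP3"   # supporting evidence of pathogenicity
    if revel_score <= benign_thr:
        return "BP4"   # supporting evidence of benign impact
    return None        # score is uninformative

def splicing_evidence(tool_predictions):
    """Apply PP3 when at least 50% of the tested splicing predictors
    agree on a splicing impact, per the VCEP specification above."""
    impacted = sum(1 for p in tool_predictions if p)
    return "PP3" if impacted >= len(tool_predictions) / 2 else None
```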
The inactivation of two β‐globin genes (β‐thalassemia) or three α‐globin genes (Hb H disease) results in disease with both hematological and clinical phenotypes. The absence of all four α‐globin genes causes the hydrops fetalis syndrome, which results in death in utero or shortly after birth. Furthermore, the inactivation of one β‐globin gene or two α‐globin genes characterizes the trait state of β‐ and α‐thalassemia, respectively, and produces a hematological phenotype that includes the change of red blood cell indices, hemoglobin pattern, and globin chain synthesis ratio. Such readily detectable clinical signs are usually uncovered by routine laboratory diagnostic screening since trait individuals are clinically asymptomatic and, thus, often unaware of their carrier status. Accordingly, prevention strategies for thalassemias and other hemoglobinopathies depend in large part on population and newborn screening, which has allowed diagnostic laboratories across different countries to amass evidence about common and rare variants in the heterozygous state. To utilize this large volume of data on heterozygous trait individuals in hemoglobinopathies as a resource for variant annotation, the Hemoglobinopathy VCEP has adapted the ACMG/AMP framework to capture the phenotype of heterozygous trait individuals as additional evidence for variant pathogenicity. In light of many rare variants only ever being detected in the compound heterozygous or carrier state, this step made many more globin variants accessible to formal annotation. Due to the lack of case‐control studies with hemoglobinopathy variants, PS4 has been adapted to count individuals with the trait phenotype. In contrast to β‐thalassemia, α‐thalassemia cannot be discerned from iron deficiency based on hematological parameters, which will prompt differential strength‐level adjustments for corresponding data in both major thalassemias. 
In addition, the ACMG/AMP criterion that pertains to phenotypic correlation (PP4) will not be applied as it would double‐count evidence collected in PS4 (observation in heterozygotes) and PM3 (in‐ trans occurrence in an individual with disease). The Hemoglobinopathy VCEP follows ClinGen SVI recommendations for elevating the weight of in‐ trans occurrence (PM3) and implements quantitative thresholds to modulate the strength of segregation evidence based on the number of individuals examined (PP1). Furthermore, specifications are provided to guide expert curation of variants with alternate locus observations, such as a β‐thalassemia phenotype caused by heterozygous β‐thalassemia in combination with duplication of the α‐globin locus and a correspondingly aggravated imbalance of the α‐globin/β‐globin ratio (Clark et al., ). FUNCTIONAL DATA (PS3 AND BS3) Functional assays are powerful tools to provide variant‐level evidence of the effect on protein function and splicing to meet PS3 (damaging effect) or BS3 (no effect). The Hemoglobinopathy VCEP reviewed functional assays used by multiple investigators and selected those that reflect the pathophysiological mechanism of disease for the assessment of thalassemia and structural hemoglobin variants. Well‐recognized assays include those that evaluate globin chain biosynthesis, red cell inclusions (e.g., denatured β 4 tetramers), and the stability, solubility, and oxygen affinity of the hemoglobin molecule. Functional criteria are also applied for evidence of abnormal RNA or protein expression of the variant allele as a consequence of a null or splicing effect. Other assays involve in vitro transcription assays, which are mainly used in research and, thus, do not conform to diagnostic laboratory standards. Functional evidence has a strong level of strength in the ACMG/AMP framework, yet not all assays are consistent predictors of a certain variant effect or uniformly evaluated across clinical laboratories. 
The SVI WG provides recommendations based on the validation, reproducibility, and robustness of data for individual assays, so as to advise on the appropriate level of strength of evidence to apply (Brnich et al., ). However, as validation controls and replicates are rarely documented for functional assays in hemoglobinopathies, the functional data will initially be considered as supporting level evidence in favor of pathogenicity or of benign interpretation in the ongoing pilot study, pending evaluation by the Hemoglobinopathy VCEP of applying increased weight during annotation of variants with established pathogenicity. A PILOT FOR SPECIFIED CRITERIA AND THEIR EVALUATION Table lists the draft VCEP‐specified criteria organized by evidence type and strength, which are currently being tested in a pilot variant curation study comprising an informative mixture of established structural and thalassemia mutations in the HBA1 , HBA2 , and HBB genes. In the process, the ACMG/AMP framework provides rules for combining criteria to arrive at a classification; however, it does not guide the interpretation of variants with conflicting evidence. By contrast, interpretation within the Bayesian Classification Framework (Tavtigian et al., ) provides a quantitative approach to the combination of rules and, thus, allows refining the strength of evidence and combining rules for the classification of variants that have contradictory benign and pathogenic evidence. In the evaluation and refinement of its draft criteria, the Hemoglobinopathy VCEP will use the standard ACMG/AMP framework in parallel to the application of the Bayesian Classification Framework, thus additionally testing the impact of a quantitative approach in sequence variant interpretation. 
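The quantitative combination of conflicting evidence can be illustrated with the point-based recasting of the ACMG/AMP rules derived from the Bayesian Classification Framework (Tavtigian et al.), in which each strength level carries exponentially scaled points and benign evidence simply subtracts from pathogenic evidence. The point values and category boundaries below follow that published recasting; treat this as a sketch rather than the VCEP's final rules:

```python
# Points per evidence strength level (point-based ACMG/AMP recasting).
POINTS = {"supporting": 1, "moderate": 2, "strong": 4, "very_strong": 8}

def classify(pathogenic_evidence, benign_evidence):
    """Combine evidence strengths into a five-tier classification.

    Arguments are lists of strength labels, e.g. ["strong", "moderate"]
    for a variant meeting one strong and one moderate criterion.
    """
    score = (sum(POINTS[s] for s in pathogenic_evidence)
             - sum(POINTS[s] for s in benign_evidence))
    if score >= 10:
        return "Pathogenic"
    if score >= 6:
        return "Likely pathogenic"
    if score >= 0:
        return "Uncertain significance"
    if score >= -6:
        return "Likely benign"
    return "Benign"
```

Conflicting evidence resolves arithmetically: one strong benign criterion (−4) against one strong plus one moderate pathogenic criterion (+6) yields +2, an uncertain classification rather than a hard contradiction.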
CONCLUSION The ClinGen Hemoglobinopathy VCEP is a group of experts and biocurators with diverse specialties tasked with the adaptation of the 2015 ACMG/AMP guidelines for the classification of genetic variants in the HBA1 , HBA2 , and HBB genes for hemoglobinopathies. This report provides insights into the challenges and considerations of specifying the ACMG/AMP criteria to evaluate all available evidence relevant to hemoglobinopathies and the globin genes, with the aim to standardize the curation and interpretation of variants in different conditions. The current test by the ClinGen Hemoglobinopathy VCEP of its specifications in a small set of globin gene variants with known pathogenicity will lead to further specifications and minor adjustments of the rules described in this report. Once approved by ClinGen, the resulting final set of classification rules will be the first standardized framework for the interpretation of sequence variants in the globin genes. Most importantly, following the 2018 recognition of ClinGen by the Food and Drug Administration, assertions in the framework of ClinGen VCEPs are considered to be valid scientific evidence and can be used for test development and validation processes. An ever‐accelerating accumulation of diagnostic sequencing data for the globin loci, with global relevance of reliable variant interpretation for genetic counseling, diagnosis, and prognosis for hemoglobinopathies, means that both diligence and speed are of the essence in the current refinement and application of specified Hemoglobinopathy VCEP criteria. ITHANET Portal: https://www.ithanet.eu/ Genome Aggregation Database (gnomAD): https://gnomad.broadinstitute.org/ ClinVar: https://www.ncbi.nlm.nih.gov/clinvar/ Allele Frequency App: https://cardiodb.org/allelefrequencyapp/ The authors declare that there are no conflict of interests.
A comparative proteomic-based study identifies essential factors involved in hair follicle growth in Inner Mongolia cashmere goats Cashmere is a product of cashmere goat skin with high economic value in the textile industry because it is softer, finer, and lighter than other animal fibres. Unlike wool, which is produced by primary hair follicles (PHFs), cashmere fibres grow from secondary hair follicles (SHFs) and show annual cyclic growth, during which they undergo the anagen (a period of cell proliferation), catagen (a period of apoptosis) and telogen (a period of relative mitotic quiescence) phases each year. The cashmere fibre begins to grow after transitioning from the comparatively dormant telogen stage to the actively proliferating anagen stage, and when it enters the catagen phase, the hair follicle (HF) fully stops growing and decreases in diameter and length. Histological studies of SHFs of Inner Mongolia cashmere goats have shown that anagen occurs between April and September, catagen occurs between October and November, and telogen continues until the end of March. The surrounding skin microenvironment, or niche, strictly controls and regulates this sequence of periodic alterations. The transition from the telogen phase to the anagen phase is a complex morphogenetic process of HFs that involves a sequence of reciprocal signals between mesenchymal and epithelial tissues. Exploring the growth process of HFs in more detail is beneficial not only for revealing the mechanism of their formation but also for providing insight into the HF cycle. The development of HFs is a complex process that necessitates accurate coordination of signals from various cell types in the microenvironment of the skin, and some factors have been confirmed to regulate the HF growth process directly or indirectly.
For example, these factors may dictate whether an HF will develop on the basis of various combinations of signals, including those released from the hedgehog, Wnt/wingless, FGF, BMP, TGF-β, and TNF pathways. These signals control epidermal-dermal communication. Beta-catenin is considered a crucial factor in determining the fate of HFs because HF formation is induced when epidermal beta-catenin is expressed. Moreover, the apoptosis suppressor BCL-2 may participate in the extension of the growth phase, and TGF-α may be involved in the control of the HF shape. However, it is also essential to study these factors as a whole, as they do not function alone in the skin. A total of 12,927 mRNAs and 12,865 miRNAs are differentially expressed in cashmere goat skin during the SHF transition from telogen to anagen, and they act in the form of an mRNA‒miRNA network. Proteins are the ultimate embodiment of life activities, and exploration at the proteomic level can provide new insight into the molecular mechanism driving HF regression. Sequential windowed acquisition of all theoretical fragment ions (SWATH™) is a fast data-independent MS/MS acquisition technique that captures all identifiable fragment ions from peptide precursors found in a biological sample in a comprehensive and enduring manner. SWATH-MS allows reproducible large-scale protein measurements across diverse cohorts. For a deeper understanding of the intrinsic molecular mechanism of SHF growth, we performed a SWATH-based proteomic analysis to decipher the proteomic signature and its interaction relationships in the skin microenvironment during SHF morphogenesis. The results not only provide a novel protein repository of skin but also help elucidate the relationship between telogen and anagen in SHFs.
Animals The experimental cashmere goats belonging to the Inner Mongolia cashmere breed and animals used for research were provided by the Aerbasi White Cashmere Goat Breeding Farm located in Inner Mongolia, China. All procedures in this study were performed after the required consent was obtained, and the experiment followed the International Guiding Principles for Biomedical Research involving animals and was approved by the Special Committee on Scientific Research and Academic Ethics of Inner Mongolia Agricultural University, which is responsible for the approval of Biomedical Research Ethics of Inner Mongolia Agricultural University (Approval No: (2020)056, project title: the International Guiding Principles for Biomedical Research involving animals; approval date: May 6th, 2020). For this study, three 2-year-old female Inner Mongolia cashmere goats were randomly selected as biological replicates, and analyses were performed in triplicate. The three goats were in good health, with similar developmental, physiological, and feeding conditions. During anagen (September) and telogen (March), samples of dorsal skin were collected approximately 10 to 15 cm from the scapula. Previous studies revealed that the annual development of SHFs in Inner Mongolia cashmere goats raised by the Aerbasi White Cashmere Goat Breeding Farm was the same. Prior to sampling, the site underwent shearing, shaving, and local anaesthesia with 2% lidocaine. Skin samples (1 cm 2 ) were collected using a single-use skin biopsy punch. The samples were transported in liquid nitrogen and preserved at -80 °C in the laboratory. After the study was finished, the animals were released because the procedures were minimally invasive and euthanasia was not needed. Total protein extraction The same quantity of each skin sample was crushed in liquid nitrogen, and then 500 µl of 1% sodium dodecyl sulphate (SDS) was added to the lysate.
After that, the mixture was incubated at ambient temperature for 20 min before being sonicated for 2 min and then centrifuged at 4 °C and 12,000 rpm for 15 min. A BCA protein assay kit (Bioteke, Beijing, China) was used to determine the total protein content in the supernatant. The results of total protein extraction are shown in S1. Tryptic digestion of total protein One hundred micrograms of denatured total protein was treated with a mixture of 200 µl of 10 mM DL-dithiothreitol (DTT) and 8 M urea at 37 ℃ for 1 h, followed by centrifugation for 40 min at 12,000 rpm. The samples were then treated with 200 µl of 50 mM iodoacetamide (IAA) and incubated at room temperature in the dark for 30 min before centrifugation for 30 min at 12,000 rpm. To cleave the proteins into peptides, the proteins were digested with trypsin (Promega, USA). Following digestion, 100 mM ammonium bicarbonate was added for elution, and the samples were centrifuged at 15,000 rpm for 30 min with a 10 kDa size-exclusion membrane (Sartorius, Germany). The eluates were dried in vacuo. SWATH-based LC‒MS/MS analysis An Eksigent NanoLC Ultra 2D Plus HPLC system was linked to a 5600 TripleTOF mass spectrometer for LC‒MS/MS analysis (Sciex, Framingham, MA, USA). For data acquisition, two distinct approaches, information-dependent acquisition (IDA) and SWATH acquisition, were employed. Approximately 2 µg of peptides was injected and separated on a C18 HPLC column (inner diameter: 75 μm × 15 cm). The peptides were separated using a linear gradient of 0.1% formic acid in water (A) and 0.1% formic acid in acetonitrile (B) for 120 min at 500 nL/min. An MS TOF scan was collected from 350 to 1800 m/z for IDA, followed by MS/MS with automated collision energy selection scanned from 40 to 1800 m/z at 0.05 s per spectrum. The resolving power was 30,000.
SWATH-MS interrogated the MS1 mass range of 150–1200 m/z, and MS2 spectra were collected from 100 to 1500 m/z. The MS1 and MS2 scans had nominal resolving powers of 30,000 and 15,000, respectively. Analyst software (Sciex, Framingham, MA, USA) was used to automatically determine the collision energy by considering the m/z window range. Data analysis To acquire a classified spectral collection, we conducted peptide identification with the ProteinPilot 4.5 application (Sciex, Framingham, MA, USA) utilizing the UniProt/SWISS-PROT Capra hircus database (obtained from https://www.uniprot.org ; 9925 proteins) with the following configurations: sample type, identification; cysteine alkylation, iodoacetamide; digestion, trypsin; instrument, Triple TOF 5600; ID focus, biological modification; search effort, thorough ID. Peptide detections by ProteinPilot were filtered at a 1% false discovery rate (FDR). In this study, skin peptides regulated at the HF developmental stages of telogen (March) and anagen (September) were detected, and the total peptides in both periods were identified with ProteinPilot. The information from ProteinPilot (telogen and anagen) was imported into PeakView software (Sciex, Framingham, MA, USA) to analyse the SWATH data with the ion library created in ProteinPilot. PeakView processed both targeted and nontargeted data to generate extracted ion chromatograms (XICs). After that, the data were conveyed to MarkerView software (Sciex, Framingham, MA, USA) for interpretation and quantitative analysis. MarkerView enables rapid analysis of the data to identify protein expression changes (up- and downregulation). Principal component analysis (PCA) via MarkerView was used to process the data, and the screening criteria for differentially regulated proteins in this study were a P value < 0.05 and a fold change > 2 or < 0.5.
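The 1% FDR filter applied above to the peptide identifications is typically estimated from a target-decoy search; ProteinPilot's exact estimator is not described here, but a minimal sketch of the common decoy-based estimate is:

```python
def decoy_fdr(target_scores, decoy_scores, threshold):
    """Estimate FDR at a score threshold as decoys / targets passing it.

    A simple target-decoy estimate; production software applies
    refinements (e.g., q-value monotonization) not shown here.
    """
    targets = sum(1 for s in target_scores if s >= threshold)
    decoys = sum(1 for s in decoy_scores if s >= threshold)
    return decoys / targets if targets else 0.0

def threshold_at_fdr(target_scores, decoy_scores, max_fdr=0.01):
    """Find the lowest score threshold keeping the estimated FDR <= max_fdr."""
    for t in sorted(set(target_scores)):
        if decoy_fdr(target_scores, decoy_scores, t) <= max_fdr:
            return t
    return None
```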
GO and KEGG pathway analysis To visually examine the role of proteins with altered expression, we used the DAVID gene functional classification tool ( https://david.ncifcrf.gov ) and the CluGo plug-in of Cytoscape software to perform Gene Ontology (GO) analysis and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analyses. All of the bioinformatic analyses were deemed significant at a corrected P value of 0.05. A bar graph is used to represent the results of the GO enrichment analysis performed on differentially regulated proteins. This graph can be used to visualize the various components of the biological process (BP), cellular component (CC), and molecular function (MF) categories. Construction of the PPI network STRING ( https://string-db.org/ ), a web-based bioinformatic platform dedicated to protein‒protein interactions (PPIs), provides extensive data on the interactions of proteins. Differentially regulated proteins were uploaded to STRING 11.5 to obtain information on PPIs. The medium confidence interaction score (0.4) was established as the minimal requirement. To generate a visual PPI network, the interactions were imported into the Cytoscape 3.9.0 application. The value of proteins was evaluated by calculating their betweenness centrality. Western blotting A total of 30 µg of protein was separated from each group via 12% SDS‒polyacrylamide gel electrophoresis (SDS‒PAGE) ( n = 3 samples/group). After the proteins were transferred to a PVDF membrane (PALL, New York, USA) using a semidry membrane transfer method, they were blocked for 2 h at room temperature with 5% skim milk. The membrane was then incubated overnight at 4 °C with rabbit polyclonal anti-keratin 25 (Abcam, 1:500), rabbit polyclonal anti-keratin 71 (Abcam, 1:500), and mouse monoclonal anti-tubulin (Abcam, 1:1000) antibodies. 
The membrane was then washed with PBST and incubated for 1 h at 37 °C with a fluorescent-labelled goat anti-mouse secondary antibody and a goat anti-rabbit secondary antibody (LI-COR Biosciences, Inc., Lincoln, NE, USA; 1: 3,000). The immunoreactive bands were examined with a LI-COR ® Odyssey near-infrared imager (LI-COR Biosciences, Inc.) after the membranes were washed. ImageJ software was used to quantify the immunoblots, and statistical analyses were conducted using SPSS 23.0 (Chicago, USA). Parametric one-way ANOVA was applied for data examination. The experimental data are expressed as the means ± SDs. Statistical significance was considered at P < 0.05. Immunohistochemistry Fresh skin samples from Inner Mongolia cashmere goats were obtained during the telogen and anagen phases. The samples were first fixed for 24 h in 4% paraformaldehyde. Before being embedded in paraffin, the sections were dehydrated in pure alcohol, and the alcohol was then replaced with benzene. The tissue sample slices, which were 8 μm thick, were subjected to a series of incubations, including xylene for dewaxing, gradient alcohol hydration, and 3% hydrogen peroxide (H 2 O 2 ) at room temperature to inactivate endogenous catalase. The samples were then flushed with phosphate-buffered saline (PBS), and antigen retrieval was carried out in citrate buffer, after which the samples were blocked at room temperature for 1 h with 5% bovine serum albumin (BSA). The samples were subsequently incubated with rabbit polyclonal anti-KRT25 antibody (Abcam, 1:50), rabbit polyclonal anti-KRT71 antibody (Abcam, 1:1000), and rabbit polyclonal anti-KRT82 antibody (Affinity, 1:500) overnight at 4 °C. 5% BSA was used as a negative control. The samples were subsequently washed with PBS and incubated with HRP-labelled goat anti-mouse secondary antibody (Beyotime, 1:500) for 1 h at 37 °C. Haematoxylin was used for final staining of the sections, and the results were observed using light microscopy. 
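The parametric one-way ANOVA used above for the immunoblot quantification reduces to a ratio of between-group to within-group variance; a minimal pure-Python sketch (the band intensities below are made-up illustrative values, not data from this study):

```python
def one_way_anova_f(groups):
    """Compute the one-way ANOVA F statistic for a list of groups.

    F = (between-group mean square) / (within-group mean square); the
    p value would then come from the F distribution with (k - 1, N - k)
    degrees of freedom (e.g., via scipy.stats.f.sf).
    """
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n_total - k))

# Hypothetical normalized band intensities (n = 3 per group, matching the study design):
telogen = [1.0, 1.2, 0.9]
anagen = [2.1, 2.4, 1.9]
```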
Proteomic changes in Inner Mongolia cashmere goat SHFs during anagen and telogen From January to March, the coat of the Inner Mongolia cashmere goat undergoes a period of relative mitotic quiescence (telogen), followed by the initiation of growth in April.
The peak period of cashmere growth occurs in August and September . To examine potential proteins associated with the growth phase of cashmere goat skin SHFs, we performed IDA/SWATH-MS proteomic analysis of cashmere goat skin in anagen (September) and telogen (March). A total of 2414 proteins were detected and quantified (S1), and principal component analysis (PCA) of those proteins revealed that the anagen and telogen proteins were distributed in different intervals, indicating a major difference in the proteome between these two stages (Fig. a). We compared the proteomic signatures from anagen and telogen and considered proteins with fold changes greater than 2 or less than 0.5 to be differentially regulated. Volcano plots of the differentially regulated proteins are displayed using the values of log2 (fold change) and -log10 ( P value) (Fig. b). This comparison revealed 503 proteins whose expression differed at least twofold between telogen and anagen, representing the proteins that were upregulated in anagen, and 128 proteins with a fold change of less than 0.5, representing the proteins that were downregulated in anagen. The differentially regulated proteins (DRPs) included mainly ribosomal proteins (r-proteins), eukaryotic translation initiation factors (EIFs), keratin (KRT) family members, and S100 family proteins. Comprehensive information on the DRPs is presented in Supplementary Data and .

Functional enrichment analyses of differentially regulated proteins

To assess the biological importance of these DRPs, we performed GO term and KEGG pathway enrichment analyses. We analysed the GO clustering of the DRPs in BP, CC, and MF terms between anagen and telogen.
The results revealed that among the main enriched GO BP terms, cell adhesion was the most significantly enriched process, and most of the DRPs were enriched in protein transport and folding and in RNA processing and splicing. In addition, the DRPs were enriched in processes directly related to HF growth, such as hair follicle morphogenesis, cell ageing, keratinocyte differentiation, and keratinization (Fig. a). Among the CC terms, these DRPs were enriched primarily in the extracellular exosome, cytoplasm, and nucleus (Fig. b). With respect to MF terms, the DRPs were enriched mainly in poly(A) RNA binding, protein binding, and RNA binding (Fig. c). In terms of KEGG pathways, which are important for understanding the functions of DRPs, metabolism-related pathways, such as the TCA cycle, propanoate metabolism, and protein export, were the top associated pathways (Fig. d). In addition, fatty acid-related pathways, such as fatty acid degradation, biosynthesis of unsaturated fatty acids, and fatty acid elongation, were enriched. Furthermore, the DRPs were enriched in several pathways related to hair follicle growth, such as the TGF-beta, VEGF, and Wnt signalling pathways .

Protein‒protein interaction (PPI) analysis

To analyse the interplay among the DRPs, all of these proteins were uploaded to STRING 11.5 to identify their interrelations. The PPI network was then constructed using Cytoscape. As illustrated in Fig. a, the interaction network consisted of 433 nodes and 2741 edges. In this network, according to the filter criteria, five modules were identified with the MCODE plugin (Fig. b-e). Cluster 1 had the highest cluster score of 18.222, consisting of 19 nodes and 164 edges. Cluster 2 was second, with a cluster score of 12.833, 13 nodes, and 77 edges.
Cluster 3 had a score of 11.500, 13 nodes, and 69 edges, and Cluster 4 had a score of 8.833, 13 nodes, and 53 edges. MCODE analysis of the PPI prediction results revealed that the cluster with the highest score was mainly composed of r-proteins and EIFs (Fig. b). The cluster with the second highest score was mainly composed of KRT family members, including KRT35, KRT73, KRT4, KRT82, KIFI, KRT71, KRT32, KRT85, KRT25, KRT39, KRT28, KRT74, and KRT27 (Fig. c).

Validation by Western blotting

The keratin cluster (Cluster 2) had the second highest cluster score, and KRT25 and KRT71 are involved in the BP of hair follicle morphogenesis. Therefore, these two KRTs were selected for Western blotting (WB) verification. The grey-level ratio between the target proteins and internal controls was used as the index of the relative content of the target proteins. The expression of KRT25 and KRT71 in the skin during anagen and telogen is shown in Fig. a, and the difference in expression was very notable. The relative contents of the target proteins are shown in Fig. b and c. The findings demonstrated that KRT25 and KRT71 expression in anagen was greater than that in telogen, and the trends were similar to those found by SWATH.

Localization of KRT25, KRT71 and KRT82 by immunohistochemistry

The expression sites of KRT25, KRT71, and KRT82 in the skin of cashmere goats were determined by immunohistochemistry. Under light microscopy, brown-stained cells were considered positive. The negative controls showed no yellow or brown staining (Figs. a and c, a and c and a and c), and all the immunohistochemistry results revealed that the background staining was light blue or colourless. In contrast, the experimental group exhibited distinct yellow or brown staining, indicating that the immunohistochemistry approach produced a specific immune response to KRT25, KRT71, and KRT82 (Figs. a and c, a and c and a and c).
As shown in Figs. a and c, a and c and a and c, positive immunohistochemical staining for KRT25, KRT71, and KRT82 was observed in the SHFs of Inner Mongolia cashmere goat skin, with KRT25 in the IRS (Fig. a and c), KRT82 in the ORS (Fig. a and c), and KRT71 visible in both the IRS and ORS (Fig. a and c). The above three members of the KRT family can be found in both telogen and anagen, and the difference in their expression levels may be because the SHFs in telogen are much shorter than those in anagen.

The skin, which serves as the external barrier for animals, envelops the whole organism and is dispersed in anatomically distinct niches that provide it with a specialized microenvironment . HFs are important components of the skin, and the development and growth mechanisms of HFs have focused on mammals with single coats, such as mice and humans . The primary, wool-producing follicles of cashmere goats are typically larger, whereas the secondary follicles are smaller and produce softer and finer cashmere. Cashmere goat skin contains more than 90% SHFs, and in addition to PHFs, SHFs follow their internal clock with a noticeable photoperiod-based cycle . Clearly, then, there must be some signals involved in cycling in SHFs, which has no effect on PHFs. A comparative analysis based on differential proteomics of these two periods of anagen and telogen in this study could provide a better understanding of the cyclical growth mechanism of cashmere goats and offer novel strategies for increasing fine cashmere quality through breeding.
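The differential-regulation criterion used in the Results (an anagen/telogen fold change above 2 counted as upregulated in anagen, below 0.5 as downregulated) can be sketched as follows; the protein names and intensity values here are hypothetical, not data from the study:

```python
import math

def classify_proteins(abundance, up=2.0, down=0.5):
    """Classify proteins by their anagen/telogen fold change.

    abundance maps protein name -> (anagen, telogen) intensities.
    Returns per-protein fold change, log2 fold change, and a label.
    """
    results = {}
    for protein, (anagen, telogen) in abundance.items():
        fc = anagen / telogen
        if fc > up:
            label = "up in anagen"
        elif fc < down:
            label = "down in anagen"
        else:
            label = "unchanged"
        results[protein] = {"fc": fc, "log2fc": math.log2(fc), "label": label}
    return results

# Hypothetical SWATH intensities, not values from the study.
demo = {"KRT25": (40.0, 10.0), "ANXA1": (30.0, 10.0),
        "S100A4": (5.0, 20.0), "ACTB": (12.0, 10.0)}
classified = classify_proteins(demo)
```

The same log2(fold change) values, paired with -log10(P), are what a volcano plot displays.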
The results revealed that 631 proteins were significantly differentially expressed: 503 upregulated and 128 downregulated in anagen. Mammalian HFs enter a lifelong cyclical growth mode at birth, and for cashmere goats, SHFs show annual periodicity. Apoptosis of keratinocytes in the matrix, inner root sheath (IRS), and outer root sheath (ORS) leads to rapid degeneration of the lower two-thirds of HFs during catagen, whereas bulge HF stem cells evade apoptosis. At the end of catagen, the lower HF undergoes a transformation and forms an epithelial strand, which brings the dermal papilla close to the bulge . In anagen, the regeneration of HFs is driven by stem cells in the bulge and coordinated by signal exchange from the dermal papilla niche . The signal between the bulge and dermal papilla is transported by extracellular exosomes in the skin . In this study, the most enriched CC term was the extracellular exosome, likely for the reasons mentioned above. Extracellular exosomes have also been identified as a route for releasing cellular waste products ; another possible reason for the strong enrichment of this CC term is therefore that the DRPs are involved mainly in BPs associated with metabolism. The importance of metabolism in the HF growth process was also confirmed through the KEGG pathway analysis in this study. Metabolic regulation is a major driver of extracellular matrix production and degradation in fibroblasts . In addition, adipocytes can drive HF growth by promoting the skin stem cell niche . Fatty acids, which serve as important indicators of lipid breakdown, also have crucial functions in the development of HFs . Furthermore, the DRPs identified in this study were enriched in fatty acid metabolism pathways, such as fatty acid degradation, fatty acid elongation, and unsaturated fatty acid biosynthesis.
In the KEGG pathway analysis, the TCA cycle, oxidative phosphorylation, and PPAR signalling pathways were linked to fatty acid metabolism. Fatty acid oxidation products are ultimately catabolized in the TCA cycle through oxidative phosphorylation in the mitochondria, and mitochondria were also enriched in the CC category according to the GO enrichment analysis. The PPAR signalling pathway participates in fatty acid oxidation and mitochondrial oxidative metabolism, and the four DRPs involved in the PPAR signalling pathway in our study were upregulated in anagen, which could promote the degradation of fatty acids and the TCA cycle . These findings suggest that the abovementioned pathways are crucial for promoting the cyclic development of cashmere goat SHFs. Most molecular processes within a cell involve many proteins connected by highly specific physical contacts; proteins rarely act alone, and these interactions are termed PPIs. MCODE analysis of the PPI prediction in our study revealed that the cluster with the highest score was mainly composed of r-proteins and EIFs. During cellular translation, r-proteins make up the subunits of the ribosome that work with rRNA , and EIFs also play an important role in this process; for example, the EIF3 family is critical in controlling translation initiation during the cell cycle, and the EIF4 family is related to protein synthesis . Therefore, our results indicated that transcription and protein translation in the skin strongly increased during the HF growth period. The cluster with the second highest score was mainly composed of KRT family members, including KRT35, KRT73, KRT4, KRT82, KIFI, KRT71, KRT32, KRT85, KRT25, KRT39, KRT28, KRT74, and KRT27. KRTs are the main cytoskeletal proteins; they not only provide support and mechanical resilience to the cell but also play a very important role in HF growth .
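The MCODE cluster scores reported earlier are consistent with MCODE's density-based definition: for an undirected cluster with N nodes and E edges, score = density × N = 2E/(N - 1). A quick check against the four reported clusters:

```python
def mcode_score(nodes, edges):
    """MCODE cluster score: graph density times node count.

    For an undirected cluster, density = 2E / (N * (N - 1)),
    so score = density * N = 2E / (N - 1).
    """
    return 2 * edges / (nodes - 1)

# (nodes, edges) for the four clusters reported in the Results.
clusters = {1: (19, 164), 2: (13, 77), 3: (13, 69), 4: (13, 53)}
scores = {k: round(mcode_score(n, e), 3) for k, (n, e) in clusters.items()}
# scores == {1: 18.222, 2: 12.833, 3: 11.5, 4: 8.833}
```

The computed values reproduce the reported scores exactly, confirming that the clusters are ranked by density-weighted size rather than by raw edge count.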
The HF structure is composed of hair fibres, an IRS, an ORS, and a connective tissue sheath from the inside to the outside . In human HFs, KRT25, KRT27, KRT28, KRT71, KRT73, and KRT74 are IRS proteins . The immunohistochemical results of this study revealed that KRT25 was also expressed in the IRS of SHFs in the skin of Inner Mongolia cashmere goats. However, KRT71 was expressed in both the IRS and ORS, which differs from its expression in single-coated mammals. In single-coated animals, such as humans and mice, KRT71 is not only essential for the proper formation of the IRS but also indispensable for the correct moulding and growth of the hair shaft . In the secondary hair follicles of Inner Mongolia cashmere goat skin, KRT71 may have more profound importance because of its wider expression, but its specific role still needs further research. In human HFs, KRT25, KRT27, KRT28, KRT32, KRT35, KRT82, and KRT85 are present in the hair-forming compartment, indicating that these keratins are involved in the synthesis and growth of human HFs . Our results revealed that the main site of KRT82 expression is the ORS of SHFs. The molecular mechanisms of SHF development in cashmere goat skin thus differ from those of single-coated animal HFs; nevertheless, the KRT family undoubtedly plays an irreplaceable role in HF growth. In addition to the high-scoring protein clusters mentioned above, some individual proteins warrant special attention. For example, ANXA1, a calcium-dependent phospholipid-binding protein, was upregulated during anagen; in mouse skin HFs, ANXA1 participates in hair growth by influencing the proliferation of HF stem cells and the density of HFs. The S100 family proteins also merit attention: S100 proteins are a subgroup of Ca 2+ -binding proteins, some of which play an active role in HF growth.
For example, blockade of S100A3, an anagen-upregulated protein, can delay the entry of mouse HFs into anagen, reduce hair elongation, and decrease the number of subcutaneous HFs . In our research, S100 family members were found among both the anagen-upregulated and anagen-downregulated proteins, and transcriptome studies of cashmere goat skin SHFs have revealed that although S100 gene family members participate in the growth process of SHFs, they have biological functions different from those in humans and mice . Our proteomic data also revealed that the S100 family plays an indispensable role in the periodic growth of cashmere goat skin SHFs. Cashmere, often known as soft gold, is produced in excess of 20,000 tons in China every year, and as a consequence, cashmere goats have become a major source of income for farmers and herders in northern China . We compared anagen and telogen, the two periods of vigorous variation in Inner Mongolia cashmere goat SHFs, through proteomic analysis, revealing the proteins involved in the annual periodic growth of cashmere goat SHFs from the perspective of the proteome. Our data showed that the keratin cluster had a relatively high cluster score, and the immunohistochemical localization of KRT25, KRT71, and KRT82 revealed that these proteins were expressed in the SHFs of cashmere goat skin, although the specific expression sites were slightly different from those in humans and mice. The molecular mechanisms of SHF development in cashmere goat skin differ from those of single-coated animal HFs. Overall, these data provide new insight into the SHFs of cashmere goats; that is, the main SHF region and the proteins involved in the initiation and continuation of SHF growth vary seasonally.

Below is the link to the electronic supplementary material.

Supplementary Material 1

Supplementary Material 2

Supplementary Material 3

Supplementary Material 4
Problem-based learning is helpful in encouraging academic institutions to strive for excellence: perceptions of Sudanese physiologists as an illustration

Problem-based learning (PBL) is a popular teaching technique that promotes self-directed learning (SDL) in many medical programs throughout the world . PBL is an educational method in which students are assigned a problem or trigger, which might take the form of a picture, statement, video, or case. They are then permitted to work together to determine their learning needs while attempting to comprehend the problem; gather, synthesise, and apply information to the problem; and begin working constructively to learn from each other and group tutors . Since its establishment at McMaster University's Faculty of Medicine in Canada in 1969, PBL has been adopted by many educational institutions throughout the world, replacing the old, often lecture-based methodology with a student-centred learning approach . Previous research on the usefulness of PBL for student learning can be classified into two categories . The first line of inquiry might be described as exploratory research on instructors' and students' opinions of and experiences with the use of and/or transition to PBL, as well as the suitability of specific study materials for PBL. These studies, which often employ qualitative research approaches and (satisfaction) questionnaire data, concentrate on the experiences of students and teachers . As a result, these studies do not directly address the effectiveness of PBL; rather, they contribute to the overall picture of PBL by addressing attitudes and experiences that might help users overcome potential challenges and difficulties while adopting or utilising PBL. The second line of research looked at how PBL affected the acquisition of knowledge, skills, and competencies.
These studies usually employed a comparative methodology, comparing and contrasting PBL with traditional, lecture-based curricula. From this line of research, it appears reasonable to conclude that PBL is an effective and satisfactory methodology for medical education and that graduates of PBL-enhanced curricula have certain advantages over graduates of traditional curricula (for example, in knowledge, medical professionalism, problem-solving capacity, communication, on-time graduation, lower dropout rates, and the ability to think critically) . The main challenge that students face in a PBL classroom is the transition from a familiar, old mode of learning to a new, unfamiliar methodology . As a result, instructors' roles in the classroom have to change in parallel with those of their students. Overall, it is clear that external factors can pose challenges for teachers, even when they understand what is required to effect such a transition in the classroom . Several Sudanese medical schools have recently begun to adopt PBL, as part of an integrated, community-oriented curriculum approach, to increase students' knowledge and professional skills. A culture of quality is believed to propel institutes towards excellence through feedback, and multiple feedback surveys at educational institutes help provide a better understanding of institutional performance and curriculum execution. Because problem-based learning is an important component of the integrated curriculum, the stakeholders' (students and tutors) perspectives on its implementation are worth discussing. This study aimed to investigate trends in Sudanese physiologists' perceptions of PBL over time. The study's findings help decision-makers plan, organise, and ensure the successful implementation of such an approach in the future.

Study design and duration

This descriptive cross-sectional study was conducted between February and March 2023.
Study area, population and eligibility criteria

The study was conducted at the Sudanese Physiological Society (SPS), the only nonprofit body for physiologists residing in the Republic of Sudan. The SPS was established in 1994 and is a member of the African Association of Physiological Sciences . All physiologists registered with the SPS who were master's students or holders of master's or PhD degrees and currently working were included. Those who were on their annual vacation or taking sick leave were excluded.

Sample size and sampling technique

Our study sample covered all Sudanese physiologists (117 in total, as reported by the general SPS secretary's office) who were registered with the Sudanese Physiological Society. As the anticipated sample size was less than 200, census sampling was used.

Data collection tool and procedure

The questionnaire was pretested and validated by seven lecturers in a similar study . The Cronbach's alpha test, which was used to determine the questionnaire's internal reliability, found that each questionnaire category had an internal reliability of more than 0.7, indicating that the items were appropriate for the study. Before data collection began, three medical education professors from the Master of Health Professions Education (MHPE) Board Committee of the International University of Africa's Faculty of Medicine evaluated and approved the questionnaire. The questionnaire has four sections. The first section acquires demographic data from participants. The second section includes ten items that assess lecturers' attitudes towards problem-based learning in comparison to other educational approaches. The third section consists of ten statements that ask participants to rate how PBL influences students' learning from their point of view.
In the final section, participants were asked to answer five questions about common difficulties that influenced their usage of PBL at their institutes. The participants were asked to rate the items on a five-point Likert scale, with 1 indicating strongly disagree, 2 disagree, 3 undecided, 4 agree, and 5 strongly agree. The survey was distributed in English via multiple emails and social media platforms.

Data management and analysis

The data were analysed using SPSS software version 25.0 (IBM Corp. SPSS Inc. Released 2017). The Shapiro-Wilk test was used to assess the normality of the distribution. The qualitative data were analysed using percentages and frequencies (N). The quantitative data were characterised using interquartile ranges (IQRs), means, standard deviations, medians, and ranges (minimums and maximums). The Kruskal-Wallis test was used to assess the statistical significance of differences among multiple ordinal variables, and the Mann-Whitney test was used to compare categorical variables. Spearman correlation analysis was conducted to determine the correlation between two quantitative variables. Multiple linear regression was employed to determine the relationship between the outcome variables and one or more independent variables. The results were assessed using a 5% significance level. An operational methodology was applied to categorise participants' perceptions of the study domains into good, moderate, and poor levels.

Operational definitions

Participants who scored at or above 80% on domain-related questions were classified as having good perceptions of that domain. Participants who scored 60-79% were classified as having moderate perceptions, and participants who scored at or below 59% were classified as having poor perceptions.
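The operational definitions above amount to a three-band classification of a domain percentage score. A minimal sketch follows; note that scores strictly between 59% and 60% are not covered by the stated definitions, so the sketch assigns them to the poor band:

```python
def perception_level(pct):
    """Map a domain percentage score to the study's operational categories.

    >= 80% -> good, 60-79% -> moderate, otherwise -> poor.
    """
    if pct >= 80:
        return "good"
    if pct >= 60:
        return "moderate"
    return "poor"

# Boundary behaviour of the three operational bands.
levels = [perception_level(p) for p in (85.0, 80.0, 79.0, 60.0, 59.0, 40.0)]
# levels == ["good", "good", "moderate", "moderate", "poor", "poor"]
```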
Out of 117 physiologists, 82 agreed to participate in the study, yielding a 70% response rate. With a female-to-male ratio of 1.2:1, the majority of participants were in their fourth decade of life.
Over 50% of them possessed a master's degree and were employed as lecturers in the Department of Physiology. The most commonly used curriculum type was an integrated/community-based curriculum, utilised by half of the participants in their PBL implementation at work. A minority had been familiar with PBL since their own college days, and participants had been using PBL as a teaching strategy for a median of five years. The majority of physiologists hold advanced degrees or certificates in health professions education (Tables and ).

Participants' perceptions of the study domains

Physiologists' attitude toward PBL versus other approaches

Based on their attitudes, the majority of participants firmly agreed that PBL generates interest in topics, that it is more effective than traditional techniques, that it enables students to think independently and learn for themselves, and that it is a more scientific approach to teaching. Furthermore, more than two-thirds of the respondents said they would be interested in implementing PBL and thought it promoted contextual learning and student engagement. The overall attitude percentage score was 76.9 ± 23.97% (Table ).

Physiologists' perceptions of how PBL affects students' learning process

In this domain, the majority of participants strongly agreed that PBL would improve students' comprehension of the material, encourage self-learning, increase their engagement in the learning process, bolster their motivation, strengthen their ability to solve problems, assist them in identifying their strengths and weaknesses, and increase their participation in learning activities. Participants also strongly agreed that PBL would improve students' communication, collaboration, and critical thinking abilities. This domain had an overall percentage score of 77.1 ± 25.13% (Table ).
Physiologists' perceptions of the typical challenges influencing their application of PBL

Regarding this domain, the majority of participants firmly agreed that the primary reasons physiologists do not employ PBL in their courses are the large number of students and inadequate classroom infrastructure. A lower percentage of participants thought that PBL was irrelevant to their courses or that they lacked the necessary understanding to implement it, and a far smaller percentage said that PBL was not supported by the curriculum in terms of teaching and learning activities. The total percentage score for this domain was 56.93 ± 19.43% (Table ).

Operational cutoff points

Based on the operational cutoff points, participants were categorised into three levels corresponding to the three study domains. Approximately half of the participants had positive attitudes, positive perceptions of PBL effects, and poor perceptions regarding the common problems encountered when applying PBL at their institutions (Fig. ). According to the Mann-Whitney U and Kruskal-Wallis tests, PBL application at the current workplace (within or outside of Sudan) was significantly associated with physiologists' perceptions of the common factors impacting PBL application (Table ). The attitude and PBL-effects domains did not correlate significantly with participant characteristics ( p > 0.05). To investigate how various parameters contributed to the variance in the three research domains, multiple linear regression analyses were run. The attitude score was significantly associated with the current workplace classification (private/governmental). Additionally, the score for common challenges affecting PBL application was significantly associated with the current workplace and the use of PBL in that workplace (Table ).
Spearman correlation analysis revealed a significant positive correlation between the participants' attitude toward PBL and their age (r = 0.233, p = 0.005), as well as with the perception score of PBL effects on students' learning processes (r = 0.788, p < 0.001) (Figs. and ).
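For readers who wish to reproduce this style of analysis, the nonparametric tests named in the methods are available in scipy. The sketch below uses invented scores for illustration, not the study's data.

```python
from scipy.stats import kruskal, mannwhitneyu, spearmanr

# Hypothetical challenge-domain scores for PBL users vs. non-users
users = [72, 80, 65, 78, 85, 74]
non_users = [50, 55, 48, 60, 52, 58]
u_stat, p_two_groups = mannwhitneyu(users, non_users)

# Kruskal-Wallis generalises the two-group comparison to three or more groups
h_stat, p_multi = kruskal([72, 80, 65], [50, 55, 48], [61, 66, 59])

# Spearman rank correlation between two quantitative scores
attitude = [60, 70, 75, 80, 90, 95]
effect_scores = [55, 68, 72, 83, 88, 97]
rho, p_rho = spearmanr(attitude, effect_scores)

print(p_two_groups < 0.05, round(rho, 2))  # True 1.0 (groups fully separated; ranks agree perfectly)
```

Because the two fabricated groups do not overlap, the Mann-Whitney p-value is small, and because both score lists increase together, the Spearman coefficient is exactly 1.0; real survey data would of course give intermediate values such as the r = 0.233 and r = 0.788 reported above.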
This study investigated physiologists' perspectives on problem-based learning (PBL) in comparison to other teaching methodologies, how they perceive PBL's effects on students' learning processes, and the challenges they faced when implementing PBL in their institutions. It also investigated whether the characteristics of the participants influenced their responses in any meaningful way. Our study revealed that physiologists had a positive attitude towards PBL. Most believe that PBL makes topics more interesting, is superior to and more effective than traditional techniques, enables students to think and learn independently, and is a more scientific way to educate. These results were consistent with those of Van Den Bossche, P., et al., who discovered that PBL students outperformed students in traditional lecture-based education in terms of performance . Additionally, Mahmood, S. U., et al. found that PBL promotes higher-order thinking , and Katwa, J. K., et al. concluded that one of the elements of PBL, self-directed learning, helped students evolve into lifelong learners . Likewise, more than two-thirds of the participants reported an interest in implementing PBL in their classrooms because they believed it improved student engagement and contextual learning. These findings are comparable with those of Orfan, S. N., et al., Aboonq, Ahmed, Z., and Malik, who reported that their participants had positive perceptions of the use of PBL in various kinds of instructional and learning activities . According to our study, the majority of physiologists were well informed about the ways in which PBL influences students' learning processes.
They also held the view that PBL would improve students' comprehension of the material, encourage self-learning, boost their engagement in the process, strengthen their motivation, enhance their ability to solve problems, assist them in identifying their strengths and weaknesses, and increase their participation in learning activities. These results are consistent with those of Torp, L., and Sage, S., who stated that PBL fosters a learning environment in which teachers mentor students' inquiry and coach their thinking, enabling them to comprehend the material at a deeper level . Additionally, Watson, G. H., discovered that PBL helps students learn new material and develop their critical thinking, reasoning, and self-evaluation abilities . Furthermore, Mahmood, S. U., et al., reported that PBL increased students' motivation , Argaw, A. S., et al., reported that students had better problem-solving skills when taught through PBL than through traditional lectures , and Watson, G. H. reported that PBL helps students develop their communication skills . Physiologists believe that PBL enhances students' critical thinking, collaboration, and communication skills. These results are consistent with those of Azman, N., and Shin, L. K., who found that PBL fosters students' ability to collaborate with others , and with those of Ghimire, S. R., and Bhandary, S., who found that PBL enhances generic abilities that are important for securing future employment . Additionally, research by Aziz, A., et al. and Abdelkarim, A., et al., documented the broad benefits of PBL for students' learning processes . According to our study, a significant portion of physiologists had negative perceptions of the typical difficulties encountered when implementing PBL in their institutions. Moreover, more than 50% of them thought that the main barriers to implementing PBL in the classroom were a large student body and inadequate classroom infrastructure.
These outcomes are comparable to those of Orfan, S. N., et al., who reported identical findings . Furthermore, our research showed that fewer participants thought that PBL was irrelevant to their courses or that they lacked the necessary understanding to implement it. These results contrast with those of Orfan, S. N., et al., who stated that the majority of study participants were unable to apply PBL since they were unaware of its use in courses . Additionally, a far smaller percentage of physiologists thought that the use of PBL was not supported by the curriculum. This may be explained by the small number of physiologists who felt that they lacked the necessary skills to apply PBL. The attitudes of physiologists and their current workplace classification (governmental vs. private) were found to be significantly correlated. Additionally, the use of PBL at these workplaces and the common factors affecting PBL application at the relevant institutes were strongly correlated with the current area of practice (inside/outside of Sudan). These results might stem from the necessity of holding more professional development activities (symposia, workshops, etc.), particularly for government employees and those working inside Sudan, who are likely to have fewer opportunities than others to engage in PBL. This study not only sheds light on physiologists' satisfaction with PBL over time but also highlights the significance of considering tutors' and students' perspectives when implementing new curriculum changes in any educational institution globally. In reality, medical schools that foster a strong culture of responding to feedback keep their curricula up to date and have satisfied stakeholders, which helps them rank among the best higher education institutions.

Strengths, limitations, and future prospects

One of our study's advantages is that it is the first to explore how Sudanese physiologists perceive PBL. However, our study has a few limitations.
First, the study's conclusions may not be generalisable due to the limited sample size. Second, because only physiologists participated in this study, the findings may not apply to educators in other departments. Response bias and changes in perception over time are two further constraints. In the near future, professional development activities (such as extra training, workshops, and mentorship) that address commonly encountered challenges in the use of PBL and how to overcome them should be planned and executed. It is also recommended that future studies use a more diversified sample of physiologists from other areas and departments to increase the findings' generalisability. The findings of this study may also be valuable to the Federal Ministry of Higher Education in deciding whether to apply this technique to improve teaching methods.
The study found that physiologists had a positive attitude towards PBL in comparison to other teaching approaches and good perceptions of its benefits for student learning. Furthermore, physiologists had first-hand experience of the common challenges that hinder PBL implementation at their respective institutions. Gender, qualifications, job position, and the curriculum used by physiologists had no significant effect on their responses. As PBL has proven to be an effective educational approach for equipping students and assisting them in their future careers and employment, both established and new medical schools need to develop and implement PBL-integrated curricula and regularly address the challenges encountered in the use of PBL, and how to overcome them, to ensure a quality educational process.
Effects of oil pollution on the growth and rhizosphere microbial community of Calamagrostis epigejos

Oil pollution has emerged as a pervasive global environmental issue with profound consequences for the vitality and equilibrium of ecosystems . The discharge of petroleum hydrocarbons can alter the physical and chemical attributes of soil, with detrimental consequences for plant growth and soil microbial communities . Soil microorganisms are pivotal constituents of soil ecosystems and play an unequivocal role in soil health and plant growth . The configuration and functional diversity of microbial communities are indispensable for upholding the physical, chemical, and biological functions of soils . They not only participate in organic matter decomposition, nutrient cycling, and the formation of soil structure but also interact intricately with plants to influence their growth and well-being . Soil microbial communities encompass diverse microorganisms such as bacteria, fungi, archaea, actinomycetes, and viruses, which operate synergistically through intricate ecological relationships to facilitate nutrient transformation and channel energy flow in soils . For instance, bacteria and fungi serve as primary drivers of organic matter degradation; they break down complex organic molecules into simpler compounds that can be assimilated by plants or other microorganisms . Archaea play a pivotal role in the global carbon cycle by efficiently consuming methane, thereby potentially mitigating its atmospheric concentration . Soil viruses infect bacteria, archaea, and fungi in soil, profoundly impacting the structure and functionality of soil microbial communities and thereby exerting a significant influence on ecological equilibrium and biodiversity .
Research has demonstrated that different microorganisms exhibit varying capacities to degrade petroleum hydrocarbons, with specific bacteria and fungi adept at utilizing these compounds as carbon sources . Consequently, in soils contaminated with crude oil, the abundance of such microorganisms often increases, leading to the formation of a distinct microbial community structure that indirectly influences the growth and health of Calamagrostis epigejos through alterations in its rhizosphere environment. Calamagrostis epigejos, a perennial herbaceous plant of the Poaceae family, is widely distributed in temperate and tropical regions of Asia and is particularly abundant in dry and semi-arid areas of China . Owing to its remarkable adaptability and drought tolerance, Calamagrostis epigejos plays a pivotal role not only in natural ecosystems but also in soil stabilization and ecological restoration initiatives . Investigating the growth performance of Calamagrostis epigejos in crude oil-contaminated soil and its interaction with rhizosphere microorganisms is therefore essential for comprehending the intricate relationships among plants, microorganisms, and pollutants. Rhizosphere microorganisms are microbial communities residing around plant roots that establish close ecological associations with root systems, thereby influencing plant growth, development, and adaptation to environmental stressors . In oil-polluted environments, rhizosphere microorganisms may actively participate in the decomposition and transformation of crude oil through diverse biochemical processes, consequently contributing to the remediation of contaminated soils . Previous studies have revealed that increasing crude oil concentrations significantly diminish the height, biomass, leaf number, and root length of Calamagrostis epigejos . Plant-microbe interactions are highly complex.
Plants regulate the composition of rhizosphere microorganisms through root exudates and their immune system, while rhizosphere microorganisms influence plant development, nutrient absorption, and stress responses through their metabolic activities. For example, plants can selectively promote the growth of beneficial microorganisms, such as nitrogen-fixing and phosphate-solubilizing bacteria, while inhibiting pathogenic microorganisms by modulating the composition of root exudates , . Although an increasing number of rhizosphere microorganisms are reported to have the potential to act as soil conditioners, the interaction mechanisms between microorganisms and plants in the soil remain unclear . With the advancement of metagenomics, the molecular-level interaction mechanisms between rhizosphere microorganisms and plants have gradually been revealed. Rhizosphere microorganisms enhance the availability of phosphorus in the soil by secreting secondary metabolites that dissolve phosphate rock, converting it into plant-available phosphorus, which is crucial for promoting plant growth . Rhizosphere microorganisms are capable of fixing nitrogen and converting it into ammonia, which plants can readily utilize. This process is a key mechanism by which plants acquire nitrogen. Additionally, microorganisms decompose organic matter, releasing nutrients that plants can absorb . Furthermore, rhizosphere microorganisms produce plant hormones, such as indoleacetic acid (IAA) and cytokinins, which are crucial for plant growth and development . In summary, the interaction between plants and rhizosphere microorganisms is multifaceted, involving the exchange of chemical signals, nutrient cycling, hormone synthesis, and immune system regulation. These interactions not only influence plant growth and development but also play a critical role in the health and stability of soil ecosystems. 
The Xi'an Botanical Garden, situated in Xi'an City, Shaanxi Province, was chosen as the experimental site for this study. The objective was to investigate the influence of varying concentrations of crude oil pollution on the growth of Calamagrostis epigejos and on the structure of its rhizosphere microbial communities. By implementing different levels of crude oil pollution (0 g/kg, 10 g/kg, 40 g/kg) and employing techniques such as biomass measurement, analysis of rhizosphere soil enzyme activity, and metagenomic sequencing, we comprehensively assessed the impact of crude oil pollution on Calamagrostis epigejos and its associated rhizosphere microorganisms. Furthermore, this study aimed to explore the mechanisms underlying interactions between Calamagrostis epigejos and rhizosphere microorganisms, particularly how these microorganisms regulate plant physiological and biochemical responses under crude oil pollution and how these processes affect soil remediation efficiency. Through these efforts, the study is expected to provide a theoretical foundation and technical support for the ecological restoration of crude oil-polluted soil, and scientific guidance for utilizing Calamagrostis epigejos in ecological restoration.

Soil physicochemical factors

The addition of crude oil did not significantly affect the physicochemical properties of the soil, including pH, available potassium content, and alkaline nitrogen content, as shown in Fig. . However, it markedly increased the levels of petroleum hydrocarbons, polyphenol oxidase (PPO) activity, catalase (CAT) activity, acid phosphatase (ACP) activity, N-acetylglucosaminidase (NAG) activity, and the aboveground to belowground biomass ratio. Conversely, it decreased available phosphorus levels and affected β-glucosidase (BG) activity, cellulose-hydrolyzing enzyme (cellobiohydrolase, CBH) activity, and the tiller number of Calamagrostis epigejos.
Composition of soil microbial community structure

Metagenomic sequencing (Fig. ) identified thirteen bacterial phyla, primarily including Proteobacteria, Actinobacteria, Acidobacteria, Chloroflexi, Candidatus, Verrucomicrobia, and Gemmatimonadetes. Bacteria from the first three phyla were dominant, with total relative abundances of 84.33% (CK), 85.33% (F10), and 87.33% (F40) at the different levels of crude oil addition. The relative abundance of Actinobacteria was highest in the CK treatment at 55.96% and lowest in the F40 treatment at 40.00%, whereas Proteobacteria exhibited its lowest relative abundance in the CK treatment at 27.00% and higher levels in the F10 and F40 treatments at 39.33% and 40.00%, respectively. Bacteroidetes, Cyanobacteria, Firmicutes, Nitrospirae, and Elusimicrobia had relative abundances below 1%, indicating their rarity, whereas the remaining five bacterial phyla had relative abundances between 1% and 10%. Moreover, the differences observed among treatments for Proteobacteria, Acidobacteria, Chloroflexi, and Verrucomicrobia were statistically significant ( P < 0.05). The fungal kingdom comprised eight phyla, namely Ascomycota, Basidiomycota, Mucoromycota, Blastocladiomycota, Chytridiomycota, Cryptomycota, Microsporidia, and Zoopagomycota. Fungi belonging to the first three phyla were predominant, with relative abundances of 99.50% (CK), 97.44% (F10), and 98.09% (F40) under the different levels of crude oil addition. Basidiomycota exhibited its lowest relative abundance, 37.88%, in the CK treatment and its highest, 50.03%, in the F40 treatment, while Ascomycota showed its lowest relative abundance, 6.40%, in the CK treatment and its highest, 43.61%, in the F40 treatment. The remaining five phyla had relative abundances below 1%, indicating their rarity.
Furthermore, statistically significant differences ( P < 0.05) were observed among the treatments for the Ascomycota, Mucoromycota, and Blastocladiomycota phyla. The archaeal domain comprised five main phyla: Thaumarchaeota, Euryarchaeota, Candidatus, Crenarchaeota, and Nanoarchaeota. Thaumarchaeota was the predominant archaeal phylum, with total relative abundances of 91.24% (CK), 87.99% (F10), and 83.77% (F40) under the varying levels of crude oil addition. Nanoarchaeota was rare, with a relative abundance below 1%, while the remaining three phyla ranged from 1% to 10%. In addition, seven viral phyla were detected: Uroviricota, Preplasmiviricota, Phixviricota, Nucleocytoviricota, Lenarviricota, Hofneiviricota, and Cossaviricota. Uroviricota dominated the viral community, with relative abundances of 92.31% (CK), 92.52% (F10), and 92.59% (F40). The other six viral phyla had relative abundances below 1%, indicating their rarity. Significant differences in Phixviricota were also observed among the treatments ( P < 0.05). The study investigated differences in soil microbial community composition after the addition of different amounts of crude oil using PCA (Fig. ). The total explanatory power for the differences in archaeal, viral, bacterial, and fungal communities was above 80% in all cases, with R > 0, indicating variations in soil microbial community structure under the different levels of crude oil addition. The ANOSIM test revealed that crude oil addition significantly influenced the viral, bacterial, and fungal communities.

Soil microbial community diversity

The diversity indices of soil archaea, viruses, bacteria, and fungi (Fig. ) exhibited significant differences among the treatments ( P < 0.05). Moreover, the Shannon and Simpson indices of soil archaea, viruses, bacteria, and fungi showed an upward trend as the amount of crude oil increased.
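Both indices can be computed directly from taxon relative abundances. Below is a minimal numpy sketch; the example communities are invented, and the Gini-Simpson form 1 − Σp² is assumed, since the text does not state which Simpson variant was used.

```python
import numpy as np

def shannon(counts):
    """Shannon index H' = -sum(p_i * ln p_i) over nonzero proportions."""
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def simpson(counts):
    """Gini-Simpson index 1 - sum(p_i^2); higher values mean more diversity."""
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    return float(1.0 - (p ** 2).sum())

# A perfectly even four-taxon community vs. one dominated by a single taxon:
even = [25, 25, 25, 25]
skewed = [97, 1, 1, 1]
print(shannon(even) > shannon(skewed))   # True: evenness raises the index
print(simpson(even) > simpson(skewed))   # True
```

Both indices rise with richness and evenness, which is why the dominance shifts described above (e.g. the expansion of Proteobacteria and Ascomycota under oil addition) can coincide with higher overall diversity scores.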
The F40 treatment showed significantly higher diversity indices than the other treatments ( P < 0.05).

Co-occurrence patterns of soil microbial communities

Different additions of crude oil produced significant variations in the co-occurrence network characteristics of soil microorganisms (Fig. ; Table ). Compared with the control group (CK), crude oil addition significantly increased the number of edges and nodes in the co-occurrence network, indicating a more complex microbial community network with more intricate interactions among species. However, there was a significant reduction in average weighted degree, graph density, and modularity, suggesting a transition of the co-occurrence network from a highly organized and tightly connected state to a looser and more evenly distributed one. Furthermore, CK had a higher proportion of positively correlated edges and a lower proportion of negatively correlated edges than the other treatments, indicating strong cooperation among microbial community species in the CK treatment but stronger competition among species under crude oil addition.

KEGG functional annotation and analysis

The soil microbial functional genes were annotated using the KEGG database ( http://www.genome.jp/kegg/ ) . As shown in Fig. a, Metabolism-related genes had the highest abundance (49.99%) among the six subsystems corresponding to KEGG primary pathways, followed by Genetic information processing (15.92%) and Environmental information processing (13.31%). Other abundant categories included Cellular processes (10.15%) and Human Diseases (5.61%). With increasing crude oil addition, Environmental information processing showed an upward trend, whereas Metabolism, Genetic information processing, Cellular processes, and Human Diseases showed downward trends.
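The network-level summary statistics used in the co-occurrence analysis above (nodes, edges, graph density, average weighted degree, modularity) can be reproduced with networkx. This is an illustrative sketch on a toy graph; the taxa and edge weights are invented, not the study's co-occurrence data.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

# Toy co-occurrence network: nodes are taxa, edge weights stand in for
# hypothetical correlation strengths between their abundances.
G = nx.Graph()
G.add_weighted_edges_from([
    ("Proteobacteria", "Actinobacteria", 0.8),
    ("Proteobacteria", "Acidobacteria", 0.6),
    ("Actinobacteria", "Chloroflexi", 0.7),
    ("Ascomycota", "Basidiomycota", 0.9),
    ("Ascomycota", "Mucoromycota", 0.5),
])

n_nodes = G.number_of_nodes()
n_edges = G.number_of_edges()
density = nx.density(G)  # 2m / (n * (n - 1)) for an undirected graph
avg_weighted_degree = sum(d for _, d in G.degree(weight="weight")) / n_nodes
communities = greedy_modularity_communities(G, weight="weight")
mod = modularity(G, communities, weight="weight")

print(n_nodes, n_edges)  # 7 5
```

In a real analysis, the edges would typically be significant pairwise correlations between taxon abundances across samples, and the same statistics would be compared between the CK, F10, and F40 networks.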
Further annotation of KEGG secondary functional genes (Fig. b) showed similar results across all treatments, with the mainly annotated pathways including Carbohydrate metabolism, Amino acid metabolism, Energy metabolism, and Metabolism of cofactors and vitamins. With increasing crude oil addition, Aging, Carbohydrate metabolism, Amino acid metabolism, and Energy metabolism exhibited decreasing trends, while Cell motility, Digestive system, Lipid metabolism, Endocrine system, Immune system, and Sensory system showed increasing trends. Redundancy analysis of the KEGG primary pathway functions and soil enzyme activities (Fig. c) revealed significant positive correlations of Genetic Information Processing, Organismal Systems, and Human Diseases with CBH and BG, and of Metabolism and Environmental Information Processing with PPO, CAT, NAG, and ACP. Among these, soil ACP emerged as the primary environmental factor influencing soil microbial function. Similarly, redundancy analysis of the KEGG secondary pathway functions and soil enzyme activities (Fig. d) indicated significant positive correlations of Signaling Molecules and Interaction, Cancer: Specific Types, Drug Resistance: Antineoplastic, Xenobiotics Biodegradation and Metabolism, Circulatory System, and Sensory System with PPO, ACP, NAG, and CAT, and of Transcription, Cellular Community - Eukaryotes, Cellular Community - Prokaryotes, and Carbohydrate Metabolism with BG and CBH. Once more, soil ACP was identified as the key environmental factor affecting soil microbial function.

Relationship between soil microbial community and environmental factors

Through correlation analysis (Fig. a), tiller number exhibited a significant negative correlation with ACP and a significant positive correlation with CBH ( P < 0.05).
Additionally, aboveground biomass was significantly negatively correlated with ACP and significantly positively correlated with tiller number, CBH, and β-glucosidase (BG) ( P < 0.05). Furthermore, belowground biomass was significantly positively correlated with ACP and N-acetylglucosaminidase (NAG) but significantly negatively correlated with tiller number ( P < 0.05). Moreover, available potassium (AK) showed a significant negative association with polyphenol oxidase (PPO) and catalase (CAT) activities ( P < 0.05). Similarly, available phosphorus (AP) displayed a significant positive relationship with tiller number and aboveground biomass but a significant negative association with ACP and belowground biomass ( P < 0.05). Lastly, petroleum hydrocarbons showed substantial positive correlations with ACP, NAG, and belowground biomass but an evident inverse relationship with tiller number ( P < 0.05). Mantel analysis (Fig. b) showed that ACP, tiller number, aboveground biomass, and petroleum hydrocarbons exerted significant influences on the fungal community ( P < 0.05); belowground biomass significantly impacted the archaeal community ( P < 0.05); and ACP, CBH, tiller number, and petroleum hydrocarbons had a significant impact on the viral community ( P < 0.05). However, crude oil addition did not have a significant effect on the bacterial community, suggesting that soil bacterial communities are influenced by multiple factors acting in concert. Further analysis employing structural equation modeling revealed a significant influence of soil enzyme activity on the bacterial community ( P < 0.05) (Fig. c).
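The Mantel analysis used here correlates two distance matrices (e.g., community dissimilarity vs. environmental-factor distance) and assesses significance by jointly permuting the rows and columns of one matrix. A minimal stdlib sketch of such a permutation test, on toy matrices rather than the study's data:

```python
import random
from math import sqrt

def _upper(m, order=None):
    """Upper-triangle values of a square matrix, optionally row/column permuted."""
    n = len(m)
    idx = order or list(range(n))
    return [m[idx[i]][idx[j]] for i in range(n) for j in range(i + 1, n)]

def _pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def mantel(d1, d2, n_perm=999, seed=1):
    """Mantel test between two symmetric distance matrices.

    Rows and columns of d1 are permuted together, preserving its internal
    distance structure; p is the fraction of permutations at least as extreme."""
    r_obs = _pearson(_upper(d1), _upper(d2))
    rng = random.Random(seed)
    order = list(range(len(d1)))
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(order)
        if abs(_pearson(_upper(d1, order), _upper(d2))) >= abs(r_obs):
            hits += 1
    return r_obs, (hits + 1) / (n_perm + 1)

# Toy 4-sample distance matrices; d2 is a rescaling of d1, so r is ~1.
d1 = [[0, 1, 4, 3], [1, 0, 2, 5], [4, 2, 0, 6], [3, 5, 6, 0]]
d2 = [[2 * v for v in row] for row in d1]
r, p = mantel(d1, d2)
```

Production analyses would typically use an established implementation (e.g., vegan's `mantel` in R or scikit-bio in Python); this sketch only illustrates the permutation logic.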
Additionally, fungal community composition, viral community composition, and soil enzyme activity were significantly positively correlated with plant growth, whereas the bacterial and archaeal communities were significantly negatively correlated with plant growth ( P < 0.05). Redundancy analysis of the archaea (Fig. d) showed that Nanoarchaeota and Candidatus were positively correlated with BG, NAG, PPO, CAT, and ACP; Thaumarchaeota was positively correlated with AP and CBH, while Crenarchaeota and Euryarchaeota were positively correlated with pH. Redundancy analysis of the viruses (Fig. e) revealed that Nucleocytoviricota , Phixviricota , and Preplasmiviricota were positively correlated with pH, ACP, NAG, and PPO; Uroviricota and Hofneiviricota were positively correlated with CAT, AP, and BG; and Cossaviricota and Lenarviricota were positively correlated with CBH. Redundancy analysis of the bacteria (Fig. f) indicated that Candidatus , Planctomycetes , and Verrucomicrobia were positively correlated with ACP, NAG, CAT, and PPO; Gemmatimonadetes , Nitrospirae , and Acidobacteria were positively correlated with BG, AP, and CBH; and Chloroflexi , Firmicutes , Elusimicrobia , and Bacteroidetes were positively correlated with pH. Redundancy analysis of the fungi (Fig. g) revealed that Cryptomycota was positively correlated with BG, AP, and CBH; Ascomycota and Chytridiomycota were positively correlated with ACP, NAG, CAT, and PPO; and Basidiomycota , Zoopagomycota , and Blastocladiomycota were positively correlated with pH.

The addition of crude oil did not significantly affect the physicochemical properties of the soil, including pH, available potassium content, and alkaline nitrogen content, as shown in Fig. .
However, it markedly increased the levels of petroleum hydrocarbons, polyphenol oxidase (PPO) activity, catalase activity, acid phosphatase (ACP), and N-acetylglucosaminidase (NAG), as well as the aboveground-to-belowground biomass ratio in the soil. Conversely, it decreased available phosphorus levels, β-glucosidase (BG) and cellobiohydrolase (CBH) activities, and the tiller number of Calamagrostis epigejos . Metagenomic sequencing of the soil microorganisms (Fig. ) identified thirteen bacterial phyla, primarily including Proteobacteria , Actinobacteria , Acidobacteria , Chloroflexi , Candidatus , Verrucomicrobia , and Gemmatimonadetes . The first three phyla were dominant, with total relative abundances of 84.33% (CK), 85.33% (F10), and 87.33% (F40) at the different levels of crude oil addition. The relative abundance of Actinobacteria was highest in the CK treatment at 55.96% and lowest in the F40 treatment at 40.00%, whereas Proteobacteria exhibited its lowest relative abundance in the CK treatment at 27.00% but higher levels in the F10 and F40 treatments at 39.33% and 40.00%, respectively. Bacteroidetes , Cyanobacteria , Firmicutes , Nitrospirae , and Elusimicrobia were rare, each with a relative abundance below 1%, whereas the remaining five bacterial phyla ranged from 1 to 10%. Moreover, the differences in Proteobacteria , Acidobacteria , Chloroflexi , and Verrucomicrobia among treatments were statistically significant ( P < 0.05). The fungal kingdom comprised eight phyla, namely Ascomycota , Basidiomycota , Mucoromycota , Blastocladiomycota , Chytridiomycota , Cryptomycota , Microsporidia , and Zoopagomycota .
Among these phyla, the first three were predominant, with relative abundances of 99.50% (CK), 97.44% (F10), and 98.09% (F40) under the different levels of crude oil addition. Basidiomycota exhibited its lowest relative abundance (37.88%) in the CK treatment and its highest (50.03%) in the F40 treatment, whereas Ascomycota showed its lowest relative abundance (6.40%) in the CK treatment and its highest (43.61%) in the F40 treatment. The remaining five phyla were rare, each with a relative abundance below 1%. Furthermore, statistically significant differences ( P < 0.05) were observed among the treatments for the Ascomycota , Mucoromycota , and Blastocladiomycota phyla. The archaeal domain comprised five main phyla: Thaumarchaeota , Euryarchaeota , Candidatus , Crenarchaeota , and Nanoarchaeota . Thaumarchaeota was the predominant archaeal phylum, with total relative abundances of 91.24% (CK), 87.99% (F10), and 83.77% (F40) under varying levels of crude oil addition. Nanoarchaeota was rare, with a relative abundance below 1%, while the remaining three phyla ranged from 1 to 10%. Additionally, seven viral phyla were detected: Uroviricota , Preplasmiviricota , Phixviricota , Nucleocytoviricota , Lenarviricota , Hofneiviricota , and Cossaviricota . Uroviricota dominated the viral community, with relative abundances of 92.31% (CK), 92.52% (F10), and 92.59% (F40); the other six viral phyla had relative abundances below 1%, indicating their rarity. Significant differences in Phixviricota were also observed among the treatments ( P < 0.05). Differences in soil microbial community composition under the different amounts of added crude oil were investigated using principal component analysis (PCA) (Fig. ). The total explanatory power of the differences in the archaeal, viral, bacterial, and fungal communities was above 80% in all cases, with R > 0, indicating variations in soil microbial community structure under different levels of crude oil addition.
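The explanatory power quoted for the PCA ordinations comes from the explained-variance ratios of the leading components, i.e., the eigenvalues of the covariance matrix. A minimal two-variable sketch using the closed-form eigenvalues of a 2 × 2 covariance matrix (illustrative only, not the study's computation):

```python
from math import sqrt

def pca2_explained_variance(xs, ys):
    """Explained-variance ratio of PC1 for two variables.

    The eigenvalues of the 2x2 covariance matrix [[a, b], [b, c]] have the
    closed form lambda = ((a + c) +/- sqrt((a - c)**2 + 4*b**2)) / 2."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) ** 2 for x in xs) / (n - 1)                    # var(x)
    c = sum((y - my) ** 2 for y in ys) / (n - 1)                    # var(y)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)  # cov(x, y)
    disc = sqrt((a - c) ** 2 + 4 * b ** 2)
    lam1, lam2 = (a + c + disc) / 2, (a + c - disc) / 2
    return lam1 / (lam1 + lam2)

# Perfectly collinear variables: PC1 captures all of the variance.
r_collinear = pca2_explained_variance([1, 2, 3, 4], [2, 4, 6, 8])
# Uncorrelated variables with equal variance: PC1 captures exactly half.
r_even = pca2_explained_variance([0, 1, 0, 1], [0, 0, 1, 1])
```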
ANOSIM test revealed that crude oil addition significantly influenced viral, bacterial, and fungal communities. The diversity indices of soil archaea, viruses, bacteria, and fungi (Fig. ) exhibited significant differences among the treatments ( P < 0.05). Moreover, there was an upward trend in the Shannon and Simpson indices for soil archaea, viruses, bacteria, and fungi as the amount of crude oil increased.
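The Shannon and Simpson indices used to summarize community diversity have simple closed forms (H' = -Σ pᵢ ln pᵢ and 1 - Σ pᵢ², respectively). A minimal sketch on hypothetical phylum counts, not the study's data:

```python
from math import log

def shannon(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over nonzero taxa."""
    total = sum(counts)
    ps = [c / total for c in counts if c > 0]
    return -sum(p * log(p) for p in ps)

def simpson(counts):
    """Gini-Simpson diversity 1 - sum(p_i**2); higher means more diverse."""
    total = sum(counts)
    return 1.0 - sum((c / total) ** 2 for c in counts)

# Hypothetical read counts for four phyla in two samples.
even = [25, 25, 25, 25]    # perfectly even community: H' = ln(4), Simpson = 0.75
skewed = [85, 5, 5, 5]     # one dominant phylum: both indices are lower
```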
Effects of crude oil addition on soil environmental factors and microbial community characteristics in Calamagrostis epigejos

Soil is a crucial component of the ecosystem, and its physicochemical properties have significant impacts on plant growth and microbial activity.
In the context of soil pollution, the impact of crude oil addition on soil physicochemical properties has received extensive attention , . The findings from this study indicate that the addition of crude oil does not significantly alter soil pH, available potassium levels, or alkaline nitrogen content, which may reflect the relatively limited effect of crude oil contamination on soil acid-base balance and potassium-nitrogen status . However, it is noteworthy that adding crude oil substantially increases petroleum hydrocarbon concentrations in the soil. This may be because crude oil addition inhibits the biodegradation of petroleum hydrocarbons in the soil, leading to their accumulation . Furthermore, adding crude oil significantly enhances polyphenol oxidase and catalase activities in the soil, which play pivotal roles in organic matter decomposition and plant growth; crude oil pollution may have triggered oxidation reactions of soil organic matter . However, there was a significant decrease in β-glucosidase and cellulase activities, indicating that crude oil pollution had a detrimental impact on soil enzyme activity. β-glucosidase and cellulase are vital soil enzymes, and their reduced activity may suggest an inhibitory effect of crude oil addition on soil microbial metabolism . In summary, the influence of crude oil addition on soil physical and chemical properties is complex, encompassing both promoting and inhibiting effects, and likely depends on the properties and quantity of the added crude oil as well as on characteristics specific to the soil. Further research is warranted to investigate how crude oil addition affects soil microbial community structure and to elucidate the role these microorganisms play in biodegradation processes.
Soil microorganisms are small, widely distributed, relatively short-lived, and diverse in species, and they respond rapidly to environmental disturbances. Because they can quickly adapt to environmental change and help maintain ecosystem stability, they serve as important indicators for assessing soil quality and tracking changes in the soil environment , . The results of this study revealed that Proteobacteria and Actinobacteria were the dominant bacterial phyla, which showed an increasing trend with higher levels of crude oil addition. In contrast, Acidobacteria and Chloroflexi exhibited a decreasing trend with increasing concentrations of crude oil. This can be attributed to the superior crude oil degradation capabilities typically possessed by Proteobacteria and Actinobacteria in their natural habitat; the addition of crude oil may provide these bacteria with additional carbon sources and energy, thereby promoting their growth and reproduction . Acidobacteria and Chloroflexi, on the other hand, may not possess similar advantages in crude oil degradation, so their relative abundance decreases when crude oil is added . As for fungi, Ascomycota , Basidiomycota , and Mucoromycota were identified as the dominant phyla, with Ascomycota increasing and Mucoromycota declining as the level of crude oil addition rose. This may be attributed to Ascomycota 's robust capacity for decomposing organic matter, including complex polycyclic aromatic hydrocarbons (PAHs) and petroleum hydrocarbons; these fungi can effectively utilize the carbon sources present in crude oil and are therefore likely to increase .
Conversely, Mucoromycota might be sensitive to specific chemical components of crude oil and less adaptable to environments contaminated by it, limiting their growth when crude oil is added . Archaea were predominantly represented by Thaumarchaeota , which showed a decreasing trend as the level of crude oil addition rose. This decline could be because Thaumarchaeota generate substantial amounts of reactive oxygen species during ammonia oxidation that are normally counteracted by intracellular antioxidant systems; the introduction of crude oil might induce oxidative stress exceeding their antioxidant capacity . Additionally, Thaumarchaeota may rely on symbiotic relationships with algae for energy acquisition, and crude oil could disrupt these associations, constraining their growth . The dominant viral phylum was Uroviricota , whose relative abundance remained relatively stable across the different levels of added crude oil. This suggests that Uroviricota may tolerate crude oil pollution to some degree, or may be able to colonize suitable hosts and sustain their survival in crude-oil-contaminated environments . Through multiple comparative analyses, we identified between-treatment differences in microbial taxa, including Proteobacteria , Acidobacteria , Chloroflexi , Ascomycota , Mucoromycota , Candidatus , and Phixviricota , most of which are known to play crucial roles in enhancing soil health and promoting plant growth , . The observed variations in microbial phyla under the influence of crude oil imply that different levels of crude oil addition could potentially serve as an effective strategy for enhancing plant productivity, reducing disease, and improving the soil biotic environment.
With an increase in the amount of crude oil added, the Shannon and Simpson indices of soil archaea, viruses, bacteria, and fungi exhibited an upward trend, indicating that crude oil addition enhances community diversity. This can be attributed to crude-oil-induced changes in soil environmental conditions that allow microorganisms previously unable to thrive to establish, thereby increasing microbial community diversity . Crude oil addition significantly impacts soil microbial community diversity, with an appropriate amount promoting microbial community health. These findings can guide ecological restoration of crude-oil-contaminated soils: the impact of crude oil on microbial communities should be considered during soil remediation to achieve optimal restoration results. PCA visually displays differences between sample points while suggesting possible relationships between grouping categories and actual sample distributions . The results indicate a clear separation of the soil bacterial, viral, and fungal communities among the different treatments. Previous studies on soybean and rapeseed oil additions have likewise found distinct differences in composition between microbial communities from soils with and without added oil .

The impact of crude oil addition on the interaction between soil microorganisms and Calamagrostis epigejos

In studies of soil microbial co-occurrence networks, the numbers of edges and nodes serve as crucial indicators of network complexity . The introduction of crude oil enhanced both the frequency and complexity of species interactions within the microbial community. This can be attributed to crude oil acting as an exogenous carbon and energy source, attracting a greater diversity of microbial species involved in degradation processes and thereby promoting microbial interactions .
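The network properties reported in the Results (edge and node counts, graph density, average weighted degree, and the balance of positive vs. negative correlation edges) can all be derived from a thresholded correlation matrix. A minimal sketch on a hypothetical four-taxon matrix; modularity is omitted because it additionally requires community detection:

```python
def network_stats(corr, labels, r_min=0.6):
    """Summarize a co-occurrence network built from a correlation matrix.

    An edge links taxa i and j when |r_ij| >= r_min; the edge weight is
    |r_ij|. Assumes at least one edge passes the threshold."""
    n = len(labels)
    edges = [(i, j, corr[i][j]) for i in range(n) for j in range(i + 1, n)
             if abs(corr[i][j]) >= r_min]
    nodes = {i for i, j, _ in edges} | {j for _, j, _ in edges}
    wdeg = [sum(abs(r) for i, j, r in edges if v in (i, j)) for v in range(n)]
    return {
        "nodes": len(nodes),
        "edges": len(edges),
        "density": len(edges) / (n * (n - 1) / 2),
        "avg_weighted_degree": sum(wdeg) / n,
        "positive_edge_fraction": sum(r > 0 for _, _, r in edges) / len(edges),
    }

# Hypothetical 4-taxon correlation matrix (symmetric, unit diagonal).
corr = [[1.0, 0.8, -0.7, 0.1],
        [0.8, 1.0, 0.2, 0.9],
        [-0.7, 0.2, 1.0, 0.0],
        [0.1, 0.9, 0.0, 1.0]]
stats = network_stats(corr, ["A", "B", "C", "D"])
# Three pairs pass |r| >= 0.6: (A,B), (A,C), (B,D); two of the three are positive.
```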
Average weighted degree, graph density, and modularity are metrics used to assess node significance, network connectivity tightness, and structural differentiation within co-occurrence networks . The addition of crude oil results in a significant reduction in these parameters, indicating that the co-occurrence network undergoes structural changes following exposure to crude oil from a highly organized and tightly connected state to a more loosely distributed state. This alteration may arise due to crude oil disrupting energy and nutrient balance within the microbial community, intensifying competition among microorganisms and consequently leading to modifications in network structure . Compared to other treatments, the CK treatment demonstrates a higher proportion of positive correlations and a lower proportion of negative correlations among microbial community species. This observation suggests a robust cooperative relationship among microorganisms in the CK treatment, potentially facilitated by the provision of more stable and conducive environmental conditions. Consequently, microorganisms are able to collaborate effectively and optimize resource utilization. In contrast, the addition of crude oil treatment exhibits a pronounced competitive relationship among microbial community species. This can be attributed to alterations in resource availability caused by crude oil addition, thereby intensifying competition for limited resources among microorganisms . From an ecological perspective, the addition of crude oil to soil can have significant implications for the stability and functionality of soil ecosystems. On one hand, it has the potential to enhance microbial diversity and activity in the soil, thereby facilitating the biodegradation process of crude oil and mitigating its impact on soil pollution . 
On the other hand, it may induce alterations in both structure and functionality of microbial communities, potentially leading to a decline in sensitive or beneficial microorganisms that could negatively affect the health and sustainability of soil ecosystems . In practical applications, comprehending and harnessing dynamic changes within microbial co-occurrence networks is crucial for developing effective bioremediation strategies. By carefully adjusting both quantity and timing of crude oil addition, it becomes possible to optimize both structure and functionality of microbial communities, thus enhancing efficiency in crude oil degradation as well as effectiveness in soil remediation efforts. Furthermore, investigating shifts within microbial co-occurrence networks can provide valuable theoretical foundations for elucidating underlying mechanisms governing soil microbial ecology while contributing towards improved protection and restoration practices for soil environments.

The impact of crude oil addition on the functional diversity of soil microbial communities in Calamagrostis epigejos

Microorganisms play crucial roles in soil ecosystems and exhibit diverse functions, including nutrient cycling and metabolism . In this study, the introduction of crude oil significantly influenced the metabolic pathways of soil microorganisms. Genes associated with metabolism exhibited the highest abundance, suggesting that microbial communities actively adjust their metabolic pathways to adapt to environments contaminated with crude oil . As the amount of crude oil increased, genes related to environmental information processing, cell motility, and digestive system pathways showed an increase in abundance. This response may be attributed to microorganisms adapting to external environmental changes and enhancing their ability to cope with crude oil pollution .
Conversely, genes linked to metabolism, genetic information processing, cellular processes, human diseases, aging, and carbohydrate metabolism pathways displayed a decrease in abundance. This decline might reflect the inhibitory effect of adding crude oil on microbial community metabolic function and potential ecological pressure . Redundancy analysis revealed a significant positive correlation between soil acid phosphatase (ACP) activity and multiple KEGG pathways, highlighting the crucial role of ACP in regulating microbial metabolism. ACP plays a vital role in the synthesis and degradation of microbial cell walls, which is essential for their survival and adaptation to environmental changes . Furthermore, pathways such as Signaling molecules and interaction and Cancer: specific types exhibited a positive correlation with the activities of polyphenol oxidase (PPO), ACP, N-acetylglucosaminidase (NAG), and catalase (CAT). This suggests that these enzymes play an important role in the microbial response to environmental stress and in signal transduction . In conclusion, KEGG pathway analysis reveals the impact of crude oil addition on soil microbial community functionality while emphasizing the critical role of soil enzyme activity. These findings provide significant scientific evidence for comprehending and addressing issues related to crude oil pollution while offering novel insights into the protection and restoration of soil microbial functionality. Future research can further explore the response mechanisms of microbial communities to crude oil addition and promote the recovery of soil microbial functionality through regulation of key enzyme activities for biodegradation.
In the context of soil pollution, the impact of adding crude oil to soil physicochemical properties has received extensive attention , . The findings from this study indicate that the addition of crude oil does not significantly alter soil pH, available potassium levels, and alkaline nitrogen content. This observation may be attributed to the relatively limited effect of crude oil contamination on soil acid-base balance and potassium-nitrogen content . However, it is noteworthy that adding crude oil substantially increases petroleum hydrocarbon concentrations in the soil. This phenomenon could be attributed to the inhibitory effect of adding crude oil on the biodegradation process of petroleum hydrocarbons in the soil, leading to their accumulation . Furthermore, it is worth mentioning that adding crude oil significantly enhances polyphenol oxidase and hydrogen peroxide enzyme activities in the soil, which play pivotal roles in organic matter decomposition and plant growth processes. This may have triggered oxidation reactions of organic matter in the soil due to crude oil pollution . However, there was a significant decrease in the content of -glucosidase and cellulase, indicating that crude oil pollution had a detrimental impact on soil enzyme activity. -glucosidase and cellulase are vital enzymes in the soil, and their reduced activity may suggest an inhibitory effect of adding crude oil on microbial metabolism in the soil . In summary, the influence of adding crude oil on soil physical and chemical properties is complex, encompassing both promoting and inhibiting effects. This influence could be attributed to the properties and quantity of added crude oil as well as specific characteristics unique to the soil. Further research is warranted to investigate how adding crude oil affects microbial community structure in soils and elucidate the role played by these microorganisms during biodegradation processes. 
Soil microorganisms are small in size, widely distributed, relatively short-lived, diverse in species, and exhibit a rapid response to environmental disturbances. They possess the ability to quickly adapt to changes in the environment and maintain ecosystem stability. Consequently, they serve as important indicators for assessing soil quality and indicating changes within the soil environment , . The results of this study revealed that Proteobacteria and Actinobacteria were the dominant bacterial phyla which showed an increasing trend with higher levels of crude oil addition. In contrast, Acidobacteria and Chloroflexi exhibited a decreasing trend with increasing concentrations of crude oil. This observation can be attributed to the superior crude oil degradation capabilities typically possessed by Proteobacteria and Actinobacteria in their natural habitat. The addition of crude oil may provide these bacteria with additional carbon sources and energy, thereby promoting their growth and reproduction . On the other hand, Acidobacteria and Chloroflexi may not possess similar advantages in crude oil degradation processes; hence their relative abundance decreases under conditions where crude oil is added . As for fungi, Ascomycota , Basidiomycota , and Mucoromycota were identified as the dominant phyla. Ascomycota demonstrates an increasing trend with the elevation of crude oil addition levels, while Mucoromycota exhibits a declining trend. This phenomenon may be attributed to Ascomycota’s robust capacity for decomposing organic matter, including complex polycyclic aromatic hydrocarbons (PAHs) and petroleum hydrocarbons. Consequently, these fungi are likely to exhibit an upward trend as they can effectively utilize carbon sources present in crude oil . 
Conversely, Mucoromycota might display sensitivity towards specific chemical components in crude oil and possess weaker adaptability to environments contaminated by it, thereby limiting their growth in the presence of added crude oil . Archaea are predominantly represented by Thaumarchaeota and manifest a decreasing trend with the escalation of crude oil addition levels. This decline could be attributed to Thaumarchaeota generating substantial amounts of reactive oxygen species during ammonia oxidation process that would typically be counteracted by intracellular antioxidant systems. However, the introduction of crude oil might induce heightened oxidative stress surpassing Thaumarchaeota’s antioxidant capacity . Additionally, Thaumarchaeota may rely on symbiotic relationships with algae for energy acquisition; hence, the inclusion of crude oil could disrupt these symbiotic associations leading to constraints on its growth . The dominant viral phylum is Uroviricota and its relative abundance remains relatively stable across different levels of added crude oil. This observation suggests that Uroviricota may possess a certain degree of tolerance towards crude oil pollution or have the ability to colonize suitable hosts and sustain their survival in environments contaminated with crude oil . Through multiple comparative analyses, we identified differences in microbial communities, including Proteobacteria , Acidobacteria , Chloroflexi , Ascomycota , Mucoromycota , Candidatus Phixviricota etc., most of which are known to play crucial roles in enhancing soil health and promoting plant growth , . The observed variations in microbial phyla under the influence of crude oil imply that different levels of crude oil addition could potentially serve as an effective strategy for enhancing plant productivity, reducing diseases, and improving the soil biotic environment. 
With an increase in the amount of crude oil addition, the Shannon and Simpson indices of soil archaea, viruses, bacteria, and fungi exhibit an upward trend, indicating that crude oil addition enhances community diversity. This can be attributed to alterations in soil environmental conditions caused by crude oil addition, which enable microorganisms that previously could not thrive to do so, increasing microbial community diversity . Crude oil addition significantly affects soil microbial community diversity, with an optimal amount promoting microbial community health. These findings are significant for guiding ecological restoration efforts in crude oil-contaminated soils, as they suggest that the impact of crude oil on microbial communities should be considered during soil remediation to achieve optimal restoration results. PCA visually displays differences between sample points while inferring possible relationships between grouping categories and actual sample distributions . The results indicate a clear separation among soil bacterial, viral, and fungal communities under the different treatments. Previous studies on soybean and rapeseed oil additions have also found distinct compositional differences between microbial communities from soils with and without added oil . In the study of soil microbial co-occurrence networks, the number of edges and nodes in the network serves as a crucial indicator of network complexity . The introduction of crude oil enhances both the frequency and complexity of species interactions within the microbial community. This can be attributed to crude oil acting as an exogenous carbon and energy source, attracting a greater diversity of microbial species involved in degradation processes and thereby promoting increased interactions among microorganisms . 
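The Shannon and Simpson indices discussed above can be computed directly from taxon counts; a minimal sketch (function names and counts are illustrative, and note that pipelines differ in whether Simpson is reported as D or as the 1-D form used here):

```python
import math

def shannon(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over nonzero taxa."""
    total = sum(counts)
    ps = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in ps)

def simpson(counts):
    """Simpson diversity 1 - sum(p_i^2); higher values = more diverse."""
    total = sum(counts)
    return 1.0 - sum((c / total) ** 2 for c in counts)

# Hypothetical phylum-level read counts for one soil sample
counts = [500, 300, 150, 50]
print(round(shannon(counts), 3))
print(round(simpson(counts), 3))
```

Both indices rise as reads spread more evenly over more taxa, which is the upward trend reported for the crude-oil treatments.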
Average weighted degree, graph density, and modularity are metrics used to assess node significance, network connectivity tightness, and structural differentiation within co-occurrence networks . The addition of crude oil resulted in a significant reduction in these parameters, indicating that the co-occurrence network underwent structural changes following exposure to crude oil, from a highly organized and tightly connected state to a more loosely distributed state. This alteration may arise because crude oil disrupts the energy and nutrient balance within the microbial community, intensifying competition among microorganisms and consequently leading to modifications in network structure . Compared to the other treatments, the CK treatment demonstrated a higher proportion of positive correlations and a lower proportion of negative correlations among microbial community species. This observation suggests a robust cooperative relationship among microorganisms in the CK treatment, potentially facilitated by more stable and conducive environmental conditions; consequently, microorganisms are able to collaborate effectively and optimize resource utilization. In contrast, the crude oil addition treatments exhibited a pronounced competitive relationship among microbial community species. This can be attributed to alterations in resource availability caused by crude oil addition, thereby intensifying competition for limited resources among microorganisms . From an ecological perspective, the addition of crude oil to soil can have significant implications for the stability and functionality of soil ecosystems. On the one hand, it has the potential to enhance microbial diversity and activity in the soil, thereby facilitating the biodegradation of crude oil and mitigating its impact on soil pollution . 
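The network metrics named above (nodes, edges, density, average weighted degree, modularity) follow directly from the weighted co-occurrence graph; a self-contained sketch on a toy network (edges, weights, and the two-module partition are invented and supplied by hand rather than detected, so this only illustrates the formulas):

```python
# Toy co-occurrence network: (taxon_a, taxon_b, |correlation|) edges
edges = [
    ("Proteobacteria", "Actinobacteria", 0.8),
    ("Proteobacteria", "Ascomycota", 0.6),
    ("Acidobacteria", "Chloroflexi", 0.7),
    ("Acidobacteria", "Mucoromycota", 0.5),
]
nodes = sorted({v for e in edges for v in e[:2]})
n, m = len(nodes), len(edges)

density = 2 * m / (n * (n - 1))        # fraction of possible links realized

wdeg = {v: 0.0 for v in nodes}         # weighted degree per taxon
for a, b, w in edges:
    wdeg[a] += w
    wdeg[b] += w
avg_wdeg = sum(wdeg.values()) / n

# Weighted modularity Q for a candidate partition into two modules
part = {"Proteobacteria": 0, "Actinobacteria": 0, "Ascomycota": 0,
        "Acidobacteria": 1, "Chloroflexi": 1, "Mucoromycota": 1}
W = sum(w for _, _, w in edges)        # total edge weight
Q = sum(
    sum(w for a, b, w in edges if part[a] == c and part[b] == c) / W
    - (sum(wdeg[v] for v in nodes if part[v] == c) / (2 * W)) ** 2
    for c in (0, 1)
)
print(n, m, round(density, 3), round(avg_wdeg, 3), round(Q, 3))
```

A drop in density, average weighted degree, and Q is the quantitative signature of the "tightly connected to loosely distributed" shift described in the text.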
On the other hand, it may induce alterations in both the structure and functionality of microbial communities, potentially leading to a decline in sensitive or beneficial microorganisms that could negatively affect the health and sustainability of soil ecosystems . In practical applications, comprehending and harnessing dynamic changes within microbial co-occurrence networks is crucial for developing effective bioremediation strategies. By carefully adjusting both the quantity and timing of crude oil addition, it becomes possible to optimize the structure and functionality of microbial communities, thus enhancing the efficiency of crude oil degradation as well as the effectiveness of soil remediation efforts. Furthermore, investigating shifts within microbial co-occurrence networks can provide valuable theoretical foundations for elucidating the underlying mechanisms governing soil microbial ecology while contributing towards improved protection and restoration practices for soil environments. Microorganisms play crucial roles in soil ecosystems and exhibit diverse functions, including nutrient cycling and metabolism . In this study, the introduction of crude oil significantly influenced the metabolic pathways of soil microorganisms. Genes associated with metabolism exhibited the highest abundance, suggesting that microbial communities actively adjust their metabolic pathways to adapt to environments contaminated with crude oil . As the amount of crude oil increased, genes related to environmental information processing, cell motility, and digestive system pathways increased in abundance. This response may be attributed to microorganisms adapting to external environmental changes and enhancing their ability to cope with crude oil pollution . Conversely, genes linked to metabolism, genetic information processing, cellular processes, human diseases, aging, and carbohydrate metabolism pathways decreased in abundance. 
This decline might reflect the inhibitory effect of crude oil addition on the metabolic function of the microbial community and the resulting ecological pressure . Redundancy analysis revealed a significant positive correlation between soil acid phosphatase (ACP) activity and multiple KEGG pathways, highlighting the crucial role of ACP in regulating microbial metabolism. ACP plays a vital role in the synthesis and degradation of microbial cell walls, which is essential for their survival and adaptation to environmental changes . Furthermore, pathways such as “Signaling molecules and interaction” and “Cancer: specific types” exhibited a positive correlation with the activities of polyphenol oxidase (PPO), ACP, N-acetylglucosaminidase (NAG), and catalase (CAT). This suggests that these enzymes play an important role in the microbial response to environmental stress and in signal transduction . In conclusion, the KEGG pathway analysis reveals the impact of crude oil addition on soil microbial community functionality while emphasizing the critical role of soil enzyme activity. These findings provide significant scientific evidence for understanding and addressing issues related to crude oil pollution while offering novel insights into the protection and restoration of soil microbial functionality. Future research can further explore the response mechanisms of microbial communities to crude oil addition and promote the recovery of soil microbial functionality through the regulation of key enzyme activities for biodegradation. This study enhances our understanding of the mechanisms underlying variations in soil bacterial, fungal, archaeal, and viral communities under different levels of crude oil addition. The dominant phyla for bacteria, fungi, archaea, and viruses were Proteobacteria , Actinobacteria , Ascomycota , Basidiomycota , Mucoromycota , Thaumarchaeota , and Uroviricota , respectively. 
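Enzyme–pathway associations like those summarized above are typically screened with rank correlation before or alongside RDA; a hedged sketch using Spearman's coefficient on invented values (not the study's data):

```python
def rank(xs):
    """Average ranks (1-based), with ties sharing their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rho = Pearson correlation of the two rank vectors."""
    rx, ry = rank(x), rank(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

# Hypothetical ACP activity vs. one KEGG pathway's abundance across 9 samples
acp = [12.1, 13.4, 15.0, 16.2, 18.5, 19.1, 21.0, 22.4, 25.3]
pathway = [0.8, 1.1, 1.0, 1.4, 1.6, 1.5, 1.9, 2.2, 2.1]
print(round(spearman(acp, pathway), 3))
```

A coefficient near +1, as here, corresponds to the "significant positive correlation" language used for ACP and the KEGG pathways.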
With an increase in crude oil addition levels, the diversity of soil bacterial, fungal, archaeal, and viral communities exhibited an upward trend, with significantly higher microbial community diversity observed in the F40 treatment compared to the other treatments. Ecological network analysis revealed a transition from cooperative interactions to competitive relationships among soil bacterial, fungal, archaeal, and viral communities as crude oil addition levels increased. Furthermore, the fungal and viral communities as well as soil enzyme activity demonstrated a significant positive correlation with plant growth, while the bacterial and archaeal communities displayed a significant negative correlation. This experiment provides a theoretical foundation for remediation strategies against crude oil pollution using Calamagrostis epigejos by investigating changes in bacterial, fungal, archaeal, and viral communities under varying levels of crude oil exposure and their driving factors. Experimental materials The seeds of Calamagrostis epigejos utilized in this experiment were collected from Sihemu Nature Reserve in Wuhai City, Inner Mongolia, in June 2020. The crude oil was obtained from Ansai Oilfield located in northern Shaanxi Province. Soil samples for testing purposes were extracted from the newly established area of Xi’an Botanical Garden in Shaanxi Province. Topsoil (0–20 cm) was gathered, dried, sieved to eliminate impurities, and mixed with river sand at a volume ratio of 3:1. After thorough mixing, crude oil dissolved in petroleum ether at a volume ratio of 1:1 was added, thoroughly blended, and poured into the soil samples. After vigorous stirring and kneading to ensure uniform distribution of the pollutant within the soil, which served as the stressor applied to Calamagrostis epigejos , the contaminated soil was further combined with the remaining soil. 
To simulate recently polluted surface soil conditions, the contaminated soil sample underwent natural homogenization treatment for 30 days under shade. Experimental design Based on the actual soil oil pollution situation in the production area of Yan’an Oilfield in northern Shaanxi and incorporating previous research findings, this experiment established three levels of crude oil contamination at 0 g/kg, 10 g/kg, and 40 g/kg, with three replicates for each treatment level. The Calamagrostis epigejos plants used in this study were initially cultivated in a seedling tray and subsequently transplanted into plastic flower pots measuring 10 cm × 10 cm upon reaching the four-leaf stage. Three plants exhibiting consistent growth were selected for each pot. Once they reached approximately 10 cm in height, the plants were rinsed with running water before being transplanted into plastic flower pots measuring 20 cm × 15 cm to undergo crude oil stress treatment. Each pot contained 1.6 kg of treated soil, and there were three biological replicates for each treatment level. The cultivation took place in a greenhouse under regular moisture management without fertilization or weed removal over a period of five months, from June 12th, 2023 to November 2nd, 2023. Collection of soil samples The plant samples were manually divided into aboveground parts and underground parts (root system). The soil samples were divided into three portions: one portion was poured into a sterilized centrifuge tube, placed in a foam box with ice packs, transported to the laboratory, and stored in a freezer at -80 °C for subsequent extraction of total DNA from the soil; another portion was brought back to the laboratory and kept in a 4 °C refrigerator for determination of soil enzyme activity; the remaining portion was air-dried and used for measuring the physicochemical properties of the soil. Soil enzyme determination method The determination of soil enzyme activity was conducted using the microplate fluorescence method . 
4-methylumbelliferone (MUB) was employed as a standard control for β-glucosidase (BG), N-acetylglucosaminidase (NAG), cellobiohydrolase (CBH), and acid phosphatase (ACP) activities. Frozen soil samples stored at -20 °C were thawed in a refrigerator at 4 °C for 5 days. Subsequently, 1.5 g of fresh soil was added to 125 mL of sodium acetate solution and mixed thoroughly on a magnetic stirrer for approximately one minute to obtain a homogeneous soil slurry. Using an eight-channel pipette, 200 µL of the soil slurry was transferred into each well of a 96-well microplate for the sample group, followed by the addition of 50 µL of the corresponding enzyme substrate. For the control groups, acetic acid, standard control substances, and the specific substrates were used, respectively. The microplate containing the samples was incubated at 25 °C for three hours before the readings were promptly measured at 365 nm excitation and 450 nm emission wavelengths using a microplate reader (Synergy H1). Soil polyphenol oxidase activity was determined using the pyrogallol colorimetric method, while catalase activity was measured through potassium permanganate titration , . Metagenomic sequencing of soil microorganisms Total genomic DNA was extracted from the samples using the E.Z.N.A. ® Soil DNA Kit (Omega Bio-tek, Norcross, GA, USA). The concentration and purity of the extracted DNA were determined using TBS-380 and NanoDrop 2000 spectrophotometers, respectively. The quality of the DNA extract was assessed by 1% agarose gel electrophoresis. Subsequently, the extracted DNA was fragmented to an average size of approximately 400 bp using a Covaris M220 instrument (Gene Company Limited, China) for construction of paired-end libraries. 
Paired-end libraries were prepared using the NEXTflex™ Rapid DNA-Seq kit (Bioo Scientific, Austin, TX, USA), whereby adapters containing the complete sequencing primer hybridization sites were ligated to the blunt ends of the fragments. Paired-end sequencing was performed on Illumina NovaSeq/HiSeq X Ten instruments (Illumina Inc., San Diego, CA, USA) with the NovaSeq Reagent Kit/HiSeq X Reagent Kit at Aijibaike Biotechnology Co., Ltd. in Wuhan, China. Sequence quality control and genome assembly: The fastp tool ( https://github.com/OpenGene/fastp , version 0.20.0), accessed online via the Majorbio Cloud platform (cloud.majorbio.com), was used to remove adapter sequences and to trim and filter reads containing N bases, reads shorter than 50 bp, or reads below a quality threshold of 20, in order to generate clean reads. Subsequently, MEGAHIT ( https://github.com/voutcn/megahit , version 1.1.2; parameters kmer_min = 47, kmer_max = 97, step = 10) was employed for de Bruijn graph-based assembly of these high-quality reads into contigs. The final assembly consists of contigs with lengths equal to or exceeding 300 bp. Species and functional annotation: The representative sequences of the non-redundant gene catalog were annotated using blastp with DIAMOND (version 0.9.19), based on the NCBI NR database. Classification annotation was performed using DIAMOND ( http://www.diamondsearch.org/index.php , version 0.8.35) with an e-value cutoff of 1e-5. KEGG annotation was conducted using DIAMOND (version 0.8.35) against the Kyoto Encyclopedia of Genes and Genomes database ( http://www.genome.jp/keeg/ , version 94.2), also with an e-value cutoff of 1e-5 . 
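The QC, assembly, and annotation steps just described correspond to command lines roughly like the following; the file names, database path, and exact flag selection are assumptions for illustration, not commands copied from the study:

```python
# Sketch of the described pipeline as command lists (run each step with
# subprocess.run(cmd, check=True) where the tools are installed).
fastp_cmd = ["fastp", "-i", "raw_R1.fq.gz", "-I", "raw_R2.fq.gz",
             "-o", "clean_R1.fq.gz", "-O", "clean_R2.fq.gz",
             "--length_required", "50",          # drop reads < 50 bp
             "--qualified_quality_phred", "20"]  # quality threshold Q20

megahit_cmd = ["megahit", "-1", "clean_R1.fq.gz", "-2", "clean_R2.fq.gz",
               "--k-min", "47", "--k-max", "97", "--k-step", "10",
               "--min-contig-len", "300",        # keep contigs >= 300 bp
               "-o", "megahit_out"]

diamond_cmd = ["diamond", "blastp", "--db", "nr.dmnd", "--query", "genes.faa",
               "--evalue", "1e-5",               # e-value cutoff 1e-5
               "--outfmt", "6", "--out", "nr_hits.tsv"]

for cmd in (fastp_cmd, megahit_cmd, diamond_cmd):
    print(" ".join(cmd))
```

The k-mer range and minimum-contig settings mirror the parameters stated in the text; everything else (paths, `nr.dmnd`, `genes.faa`) is a placeholder.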
Data analysis The data on soil physicochemical properties, bacterial, fungal, archaeal, and viral community composition were processed using SPSS 26.0 and Excel 2010. Significance analysis of differences was conducted through one-way analysis of variance (ANOVA) followed by multiple comparisons (LSD method, P = 0.05). Bacterial, fungal, archaeal, and viral community composition and diversity analyses were performed utilizing the Aigibike-Sanger cloud platform. Molecular ecological networks were calculated using R language and visualized with Gephi software. Redundancy analysis (RDA) was employed to explore the relationship between microbial communities and soil environmental factors for plotting purposes using CANOCO 5.0 software. Graphs were further refined using Adobe Illustrator CS6. 
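The one-way ANOVA with LSD multiple comparisons can be reproduced from the standard formulas; a self-contained sketch on invented Shannon-index replicates (three treatments × three replicates; the critical t value 2.447 is the two-tailed 0.05 quantile at the 6 error degrees of freedom this design yields):

```python
# Hypothetical Shannon-index replicates per treatment (3 biological replicates)
groups = {
    "CK":  [2.10, 2.20, 2.15],
    "F10": [2.40, 2.50, 2.45],
    "F40": [2.80, 2.90, 2.85],
}

k = len(groups)                                  # number of treatments
n = sum(len(v) for v in groups.values())         # total observations
grand = sum(x for v in groups.values() for x in v) / n

ss_between = sum(len(v) * (sum(v) / len(v) - grand) ** 2
                 for v in groups.values())
ss_within = sum((x - sum(v) / len(v)) ** 2
                for v in groups.values() for x in v)
ms_between = ss_between / (k - 1)
ms_within = ss_within / (n - k)
F = ms_between / ms_within                       # one-way ANOVA F statistic

# Fisher's LSD at alpha = 0.05: t(0.975, df = n - k = 6) = 2.447
lsd = 2.447 * (ms_within * (2 / 3)) ** 0.5       # 3 replicates per group
print(round(F, 1), round(lsd, 3))
```

Two treatment means differing by more than the LSD value are declared significantly different, which is how the pairwise letters in such analyses are assigned.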
The pathological anatomical collection of the Natural History Museum Vienna | 4a06d780-5616-403e-844a-08ae4c9ad930 | 9893974 | Pathology[mh] | Short history on the collection and origin of objects The history of the pathological-anatomical collection in Vienna (PASW-pathologisch-anatomische Sammlung Wien) is intricately linked to that of Viennese anatomy and pathology as well as to the Museum of Human Anatomy; hence, we give a brief excursion on the complex development of these objects and institutions, with a strong personal reference to protagonists and curators (see among others ). Until around the middle of the 18th century, anatomy—like other theoretical subjects in medicine—was of only minor importance in the context of medical training . In 1718, the medical faculty decided to build an “anatomical theatre” in the citizen’s hospital to demonstrate anatomical operations. Dissection courses for physicians themselves were not integrated into training until the first anatomy chair was established in 1735. This was also the time when, due to an imperial decree, all deceased bodies in the civic hospital and other social institutions were to be made available for anatomical teaching. One of the leading protagonists of this period was Gerard van Swieten (1700–1772), private physician of Maria Theresia and later founder of the older Vienna Medical School, who finally extended this regulation to all hospitals. For the first time, human anatomical tissue specimens were produced as a source of education ; these specimens also formed the basis for an intended Museum of Anatomy . In the 1780s, this collection was recorded in a catalogue and significantly enlarged by the incorporation of anatomical pathological specimens of Ferdinand Leber (1727–1808) and by an intensified general collecting and preparation activity. These specimens were found to remain in an “anatomical theatre” within the university area, which was newly built under Josef II. 
Despite these promising developments, pathological anatomical science in Vienna could not (yet) establish itself institutionally, and the efforts to set up an anatomical museum were initially not successful. This was not to happen until 1795, as, firstly, the Lower Austrian sanitary consultant Josef Pasqual von Ferro (1753–1809) had applied for the creation of a pathological anatomical museum and issued an order to keep interesting specimens for demonstrative purposes, and, secondly, the German physician Johann Peter Frank (1745–1821), a pioneer of the public health service and of hygiene, was appointed to the Vienna General Hospital as director general . Frank’s intentions regarding the establishment of a “Pathological Anatomical Institute,” as well as an officially associated pathological anatomical collection, were already successful one year after his arrival in Vienna, in 1796 . Leaving aside Frank’s contribution in this regard, institutionalization was undoubtedly the result of a variety of contemporary circumstances, including political reforms and their consequences, that characterized the second half of the 18th century , as well as cultural and scientific factors, e.g., regulations and easier access to clinical and anatomical “teaching material,” the education of physicians, and “knowledge production” for general practitioners and clinicians (concerning the local conditions in Vienna, see the critical essays by and the perception of the patient as an “object of research” ). 
Infobox 1: Curators of the pathological anatomical collection 
1796 Aloys Rudolf Vetter 
1812 Lorenz Biermayer 
1829 Johann Wagner 
1834 Carl v. Rokitansky 
1875 Richard Ladislaus Heschl 
1882 Hans Kundrat 
1893 Anton Weichselbaum 
1916 Alexander Kolisko 
1920 Heinrich Albrecht 
1922 Rudolf Maresch 
1936 Hermann Chiari 
1946 Karl A. Portele 
1993 Beatrix Patzak 
2013 (continuing) Eduard Winter 
As prosector of the Vienna General Hospital and conservator of the museum, Frank appointed the young, highly motivated anatomist Aloys Rudolph Vetter (1765–1806), who waived personal benefits and a salary. In 1803, the collection already comprised 400 specimens; about 40 objects of today’s collection date to this early phase of object acquisition . Both protagonists remained connected to the institution in Vienna for only a few years: in 1804, Frank was appointed to the Imperial University in Vilnius and Vetter was appointed professor of anatomy and physiology in Kraków. The period that followed was characterized by an obvious disinterest of the General Hospital’s directors in pathology and apparently also by scientific-political controversies, which heated up over the question of the positioning of pathology as a medical field. A solution emerged in 1811 with the new head of the General Hospital, Valentin von Hildenbrand (1763–1818), who appointed Lorenz Biermayer (1778–1843) as pathological prosector and custos of the museum (from 1812) . He wrote the first museum catalogue, begun in 1813 and preserved to this day . At the same time, the medical and teaching issues in the monarchy were regulated by the authorities, which also included the handling of the bodies of the deceased. All those who died in the clinics of the General Hospital were now to be dissected by the pathologists, the findings recorded, and the most interesting specimens collected and documented, including their medical history. Biermayer’s first autopsy protocol dates to 1817 ; unfortunately, the specimen has not been preserved. Biermayer’s further professional work was judged ambivalently by contemporary witnesses and ended with his dismissal. However, it remains to his credit that the specimens taken could be integrated into the museum’s holdings, catalogued in detail, and used for teaching and research. 
After Biermayer’s retirement, the museum catalogue was continued by his two assistants Johann Wagner (1800–1832) and Carl von Rokitansky (1804–1878), who had already been accepted as an initially unpaid trainee in 1827. Since Johann Wagner died only a few years after taking over the management of the museum in 1832, Rokitansky was entrusted with the agenda. In 1843, he not only carried out a first revision of the collection, in which a large part of the specimens was removed , but also shifted the emphasis in the collection from macropathology to micropathology (histopathology). During this time, around the middle of the 19th century, the importance of anatomical science and especially pathology was recognized by clinicians, a development that led to the establishment of a first chair for pathological anatomy in 1844, to which Rokitansky was appointed. From then on, the extraction and preservation of organic (wet) specimens was at the center of activity. Many of them are attributable to Rokitansky, the most prestigious and important pathological anatomist of his time and cofounder of the young Vienna Medical School, and have been preserved to this day in the pathological anatomical collection, together with numerous original drawings he made of histological tissue slides. These specimens are relics of more than 60,000 autopsies and documents of the paradigmatic turn to micropathology and of his concept of a “disease process,” which he was able to reconstruct from the accumulation of different symptoms and disease stages in a “scientifically sound and systematic” way . It is therefore not surprising that he also played a decisive role in the successful establishment of the “Vienna Anthropological Society” in 1870, to which many physicians of different disciplines belonged; as its first president, he also helped shape the path for a “science of man” that researched and collected on a scientific basis. 
Thanks to his international reputation, a new pathological anatomical institute was constructed, which opened in 1862 and remained in use until its relocation to the new General Hospital in 1991. In this period, around the middle of the 19th century, the anatomist Joseph Hyrtl (1810–1894), known far beyond the borders of Austria-Hungary, had been active in Vienna since 1845 as ordinarius for anatomy; he should briefly be mentioned here because he had gained a reputation not only as a teacher and textbook author, but also as a “preparation artist,” as a creator of collections (in 1850, he founded the Museum of Comparative Anatomy in Vienna), and as director of the Museum of Human Anatomy (founded by Gerard van Swieten in 1745). Hyrtl is further noted for his contribution to the “Novara expedition” (1857–1859): as a member of the Academy of Sciences, he was involved in the selection of participants and in the scientific evaluation and publication of human relics (approximately 100 human skulls in his collection were much sought-after “non-European human varieties” and “atypia”) that were collected during this journey and at first kept at the Anatomical Institute. After a ministerial request to hand over the whole collection to the newly founded national research center of anthropology at the Natural History Museum Vienna, he retained some of them. In the 1980s, the museum at the Anatomical Institute was dissolved and the objects were integrated into the Federal Pathological-Anatomical Museum and “rediscovered” as part of the Novara and Natural History Museum Vienna collection in 2012 (see ). In 1875, Rokitansky’s successor in Vienna was his former student Richard Heschl (1824–1881), full professor of anatomy in Graz. Heschl had already founded and directed a pathological anatomical museum in Graz, was experienced as a curator, and increased the Viennese collection in the few years of his activity mainly with dry specimens, especially crania and cranial fragments . 
This collection of macroscopic objects prospered in the following years through the incorporation of items from, e.g., the Graz Institute of Pathology, which were brought to Vienna by Hanns Kundrat (1845–1893), formerly also an assistant to Rokitansky. His research was oriented toward cerebral malformations, but his passion was the pathological anatomical collection, which he continuously expanded; some parts of Josef Hyrtl’s considerable collection were also taken over during this time. For some of his successors—Anton Weichselbaum (1845–1920), Alexander Kolisko (1857–1918), and Heinrich Albrecht (1866–1922)—the pathological anatomical museum was less important: Anton Weichselbaum, for example, focused on microbiology and histopathology, probably the most innovative and promising field of research before and around the turn of the millennium; his experiences with Robert Koch (1843–1910) in Berlin may have also influenced the foundation of a Viennese microbiological laboratory. Here, significant bacteriological discoveries were made (among other things, the pathogen of pneumonia or epidemic meningitis was identified) and it is not surprising that Weichselbaum was on the spot at the outbreak of the last major plague epidemic in India (1897) as the initiator of a Viennese expedition to Bombay (commissioned by the Austrian Academy of Sciences) to study this disease and its transmission paths. Numerous treatises on the protagonists, the objectives, the course, and the results of this undertaking, which had not been without consequences for Vienna and left traces in the pathological anatomical collection, are available, but cannot be discussed in detail here (see, among others, ). 
Weichselbaum’s successors, Alexander Kolisko and Heinrich Albrecht, were also reported to have not continued the museum catalogue in the period “before and after the First World War,” which probably corresponds to a lack of interest in the collection , but possibly also to their only 2‑year term of curatorship (Kolisko 1916–1918; Albrecht 1920–1922). It was not until the 1920s that Rudolf Maresch (1868–1936), an expert in endocrinology and who was more interested in the collection, was appointed as director. He improved the institutional structure and fabricated additional histological reports of numerous objects of the collection. Hermann Chiari (1897–1969), already assistant at the Pathological-Anatomical Institute from 1926 onward, was full professor in the National Socialist era and afterwards (1936–1969); he focused strongly on pathological morphology and histopathology. His role as a Wehrmacht pathologist between 1938 and 1945 is still insufficiently analyzed. After Chiari, Heinrich Holzner (1924–2013) became full professor of the institute, a position he held for more than 20 years (1969–1993). Holzner realized that a renewal of the institution and the museum was necessary, that it was in a perilous state and had to struggle with a major space problem. The valuable collection of specimens there had been supervised since 1946 by Karl Alfons Portele (1912–1993), a pathologist already hired by Chiari as curator. Portele’s idea of an administrative separation of the museum from the institute as a solution for the spatial limitedness was supported by Holzner: in 1971, the division was completed with the relocation of the pathologic-anatomical collection to the former Narrenturm in the old General Hospital in Vienna; here 25 renovated rooms (25 “cells”) on the first floor were occupied. 
In 1974, the Federal Ministry of Science and Research changed the status of the collection by upgrading it to a federal museum (Federal Pathological-Anatomical Museum in Vienna) with a complete administrative, personnel, and financial autonomy. “Austria was now the only country to have a state museum for medical preparations, and the collection was secured” . At that time, the museum’s inventory amounted to about 14,000 specimens. Portele was known both nationally and internationally for “including every endangered collection.” In the few years of its independent existence, this collection has experienced a significant increase in different object categories (e.g., dry and wet specimens, moulages, medical devices, microbiological and histological specimens, historical wall charts, a photo archive, and anatomical teaching records). Since the 1980s, when many institutions and other stakeholders were unable to guarantee the appropriate care of a collection of pathologically altered human body parts for spatial or financial reasons, the transfer to the Federal Pathological-Anatomical Museum probably offered an alternative depository—committed to ethical principles. Donors/persons in charge and institutions that handed over their collections are listed below (see ). 
Infobox 2 Acquired Collections and donors (in parentheses = year of accession) Leopold Arzt and Wilhelm Kerl, Allgemeines Krankenhaus, Universitäts-Hautklinik, Vienna (1976; soft tissue impressions, moulages) Gerhart Alth, Krankenhaus Lainz, Radiotherapy, Vienna Hans Asperger, Allgemeines Krankenhaus, Department of Pediatry, Vienna Heinz Flamm, Universität Wien, Institute for Hygiene, Vienna Hugo Husslein, Allgemeines Krankenhaus, Gynecology II, Vienna Rudolf Langer, Landesklinikum Mistelbach, ENT Department, Mistelbach Karl Lebeda, Tierseucheninstitut, Mödling Rudolf Niederhuemer, Technisches Museum, Vienna Otto Novotny, Allgemeines Krankenhaus, ENT Department, Vienna Franz Pötsch, BA für Impfstoffgewinnung, Vienna Josef Söltz-Szöts, Krankenhaus Rudolfstiftung, Dermatology Vienna Peter Wurnig, Mautner-Markhof Kinderspital, Surgery Vienna Karl und Theodor Henning (soft tissue impressions, moulages) Veterinary/zoological private collection Fritz Kincel (transferred to the zoological department of the NHM) Collection of the University of Vienna, Anatomical Institute (Hyrtl-Sammlung), Vienna Collection Krankenhaus Wieden, Vienna (1975) Collection Krankenhaus Rudolfstiftung, Vienna (1977) Collection Krankenhaus Wilhelminenspital, Vienna (1978) Collection Krankenhaus Lainz, Vienna (Kaiser-Jubiläum-Spital; 1974) Collection Landeskrankenhaus Graz, Institute for Pathology, Graz (until 1983) Collection Uni-Klinikum Bonn, Institute for Pathology, Germany (1992) Collection Klinikum der Stadt Wuppertal (preparations originate from hospitalis in Barmen and Ferdinand-Sauerbruch-Klinikum), Germany (1985) Collection Krankenhaus Hamburg-Harburg, Germany (1987 and 1998) Collection Innsbruck Collection Kaiserin-Elisabeth-Spital, Vienna (1994) Collection Kaiser-Franz-Josef-Spital, Vienna (1960) Collection Unfallkrankenhaus Meidling, Vienna Collection der Ignaz-Semmelweis-Frauenklinik, Vienna Collection Haus der Natur, Salzburg Collection Krankenhaus Sozialmedizinisches Zentrum 
Baumgartner Höhe, Vienna Collection Magistrat der Stadt Vienna, MA 60 – Veterinärdienste und Tierschutz, Vienna In the course of a legal amendment created in 1998, all federal museums, scientific institutions under public law, were released between 1999 and 2003 in full legal capacity. Only the Federal Pathological-Anatomical Museum remained a subordinate department of the Ministry of Education due to its small size, which contradicted the transformation to a fully legally competent, own scientific institution. In autumn 2011, the collection was incorporated into the “Scientific Institution Natural History Museum Vienna” ( wissenschaftliche Anstalt Naturhistorisches Museum Wien ) by federal law ( Budgetbegleitgesetz 2012, BGBl. I Nr. 112/2011) and internally associated with the Anthropological Department. Today, the pathological anatomical collection comprises approximately 10,500 maceration specimens, body stones, skeletons, partial skeletons, and skulls (also from archaeological contexts), approximately 36,000 wet specimens, approximately 4500 moulages (wax casts of pathologically altered body parts, which were made by Karl and Theodor Henning, Otto Helm, Maximilian Blaha, Dr. Ziegler, P. E. Habetin, among others, to convey the course of the disease as “directly” readable), approximately 150,000 histological slides, approximately 6500 medical devices and instruments, large archive holdings (e.g., autopsy findings since Biermayer’s time; historical teaching boards and posters, photographs [prints and negatives]). The history of the pathological-anatomical collection in Vienna (PASW-pathologisch-anatomische Sammlung Wien) is intricately linked to that of Viennese anatomy and pathology as well as to the Museum of Human Anatomy; hence, we give a brief excursion on the complex development of these objects and institutions, with a strong personal reference to protagonists and curators (see among others ). 
Until around the middle of the 18th century, anatomy, like other theoretical subjects in medicine, was of only minor importance in medical training . In 1718, the medical faculty decided to build an "anatomical theatre" in the citizens' hospital to demonstrate anatomical operations. Dissection courses for physicians themselves were not integrated into the training until the first chair of anatomy was established in 1735. It was also at this time that, by imperial decree, the bodies of all who died in the civic hospital and other social institutions were to be made available for anatomical teaching. One of the leading protagonists of this period was Gerard van Swieten (1700–1772), personal physician to Maria Theresia and later founder of the older Vienna Medical School, who eventually extended this regulation to all hospitals. For the first time, human anatomical tissue specimens were produced as a source of education ; these specimens also formed the basis for a planned Museum of Anatomy . In the 1780s, this collection was recorded in a catalogue and significantly enlarged, both through the incorporation of the pathological-anatomical specimens of Ferdinand Leber (1727–1808) and through intensified collecting and preparation activity in general. The specimens remained in an "anatomical theatre" within the university precinct, newly built under Josef II. Despite these promising developments, pathological anatomy in Vienna could not (yet) establish itself institutionally, and the efforts to set up an anatomical museum were initially unsuccessful.
This was not to happen until 1795, when, firstly, the Lower Austrian sanitary consultant Josef Pasqual von Ferro (1753–1809) applied for the creation of a pathological anatomical museum and issued an order to keep interesting specimens for demonstration purposes, and, secondly, the German physician Johann Peter Frank (1745–1821), a pioneer of public health and hygiene, was appointed director general of the Vienna General Hospital . Frank's plans for the establishment of a "Pathological Anatomical Institute," together with an officially associated pathological anatomical collection, were realized in 1796, only one year after his arrival in Vienna . Leaving aside Frank's personal contribution, institutionalization was undoubtedly the result of a variety of contemporary circumstances, including the political reforms of the second half of the 18th century and their consequences , as well as cultural and scientific factors, e.g., regulations granting easier access to clinical and anatomical "teaching material," the education of physicians, and "knowledge production" for general practitioners and clinicians (concerning the local conditions in Vienna, see the critical essays by , and the perception of the patient as an "object of research" ).

Infobox 1: Curators of the pathological anatomical collection
1796 Aloys Rudolf Vetter
1812 Lorenz Biermayer
1829 Johann Wagner
1834 Carl v. Rokitansky
1875 Richard Ladislaus Heschl
1882 Hans Kundrat
1893 Anton Weichselbaum
1916 Alexander Kolisko
1920 Heinrich Albrecht
1922 Rudolf Maresch
1936 Hermann Chiari
1946 Karl A. Portele
1993 Beatrix Patzak
2013 (continuing) Eduard Winter

As prosector of the Vienna General Hospital and conservator of the museum, Frank appointed the young, highly motivated anatomist Aloys Rudolph Vetter (1765–1806), who waived personal benefits and a salary.
In 1803, the collection already comprised 400 specimens; about 40 objects of today's collection date to this early phase of acquisition . Both protagonists remained connected to the Viennese institution for only a few years: in 1804, Frank was appointed to the Imperial University in Vilnius and Vetter became professor of anatomy and physiology in Kraków. The period that followed was characterized by an obvious disinterest of the General Hospital's directors in pathology, and apparently also by scientific-political controversies over the standing of pathology as a medical field. A solution emerged in 1811 with the new head of the General Hospital, Valentin von Hildenbrand (1763–1818), who appointed Lorenz Biermayer (1778–1843) as pathological prosector and custodian of the museum (from 1812) . Biermayer wrote the first museum catalogue, begun in 1813 and preserved to this day . At the same time, medical and teaching matters in the monarchy were regulated by the authorities, which also covered the handling of the bodies of the deceased. All those who died in the clinics of the General Hospital were now to be dissected by the pathologists, the findings recorded, and the most interesting specimens collected and documented together with their medical history. Biermayer's first autopsy protocol dates to 1817 ; unfortunately, the specimen has not been preserved. Biermayer's further professional work was judged ambivalently by contemporary witnesses and ended with his dismissal. It remains to his credit, however, that the specimens taken could be integrated into the museum's holdings, catalogued in detail, and used for teaching and research. After Biermayer's retirement, the museum catalogue was continued by his two assistants, Johann Wagner (1800–1832) and Carl von Rokitansky (1804–1878), the latter having been accepted as an initially unpaid trainee in 1827.
Since Johann Wagner died only a few years after taking over the management of the museum in 1832, Rokitansky was entrusted with its duties. In 1843, he not only carried out a first revision of the collection, in which a large part of the specimens was removed , but also shifted its emphasis from macropathology to micropathology (histopathology). During this time, around the middle of the 19th century, clinicians came to recognize the importance of anatomical science and especially of pathology, a development that led to the establishment of a first chair for pathological anatomy in 1844, to which Rokitansky was appointed. From then on, the preparation and preservation of organic (wet) specimens took center stage. Many of them are attributable to Rokitansky, the most prestigious and important pathological anatomist of his time and cofounder of the younger Vienna Medical School; they have been preserved to this day in the pathological anatomical collection, together with numerous original drawings he made of histological tissue slides. These specimens are relics of more than 60,000 autopsies and documents of the paradigmatic turn to micropathology and of his concept of a "disease process," which he was able to reconstruct from the accumulation of different symptoms and disease stages in a "scientifically sound and systematic" way . It is therefore not surprising that he also played a decisive role in the successful establishment of the "Vienna Anthropological Society" in 1870, to which many physicians of different disciplines belonged; as its first president, he helped shape the path for a "science of man" that researched and collected on a scientific basis. Thanks to his international reputation, a new pathological anatomical institute was constructed; it opened in 1862 and remained in use until the institute's relocation to the new General Hospital in 1991.
In this period, around the middle of the 19th century, the anatomist Joseph Hyrtl (1810–1894), known far beyond the borders of Austria-Hungary, had been active in Vienna since 1845 as ordinarius for anatomy. He should briefly be mentioned here because he had gained a reputation not only as a teacher and textbook author, but also as a "preparation artist," as a creator of collections (in 1850, he founded the Museum of Comparative Anatomy in Vienna), and as director of the Museum of Human Anatomy (founded by Gerard van Swieten in 1745). Hyrtl is further noted for his contribution to the "Novara expedition" (1857–1859): as a member of the Academy of Sciences, he was involved in the selection of participants and in the scientific evaluation and publication of human remains collected during this journey and at first kept at the Anatomical Institute (approximately 100 human skulls in his collection were much sought after as "non-European human varieties" and "atypia"). After a ministerial request to hand over the whole collection to the newly founded national research center of anthropology at the Natural History Museum Vienna, he retained some of the skulls. In the 1980s, the museum at the Anatomical Institute was dissolved and its objects were integrated into the Federal Pathological-Anatomical Museum; they were "rediscovered" as part of the Novara and Natural History Museum Vienna collection in 2012 (see ). In 1875, Rokitansky was succeeded in Vienna by his former student Richard Heschl (1824–1881), full professor of anatomy in Graz. Heschl had already founded and directed a pathological anatomical museum in Graz, was experienced as a curator, and in the few years of his activity enlarged the Viennese collection mainly with dry specimens, especially crania and cranial fragments .
This collection of macroscopic objects prospered in the following years through the incorporation of items from, e.g., the Graz Institute of Pathology, which were brought to Vienna by Hanns Kundrat (1845–1893), formerly also an assistant of Rokitansky. Kundrat's research was oriented toward cerebral malformations, but his passion was the pathological anatomical collection, which he continuously expanded; some parts of Josef Hyrtl's considerable collection were also taken over during this time. For some of his successors (Anton Weichselbaum, 1845–1920; Alexander Kolisko, 1857–1918; and Heinrich Albrecht, 1866–1922), the pathological anatomical museum was less important. Anton Weichselbaum, for example, focused on microbiology and histopathology, probably the most innovative and promising field of research around the turn of the 20th century; his experiences with Robert Koch (1843–1910) in Berlin may also have influenced the foundation of a Viennese microbiological laboratory. There, significant bacteriological discoveries were made (among other things, the pathogens of pneumonia and epidemic meningitis were identified), and it is not surprising that Weichselbaum, at the outbreak of the last major plague epidemic in India (1897), initiated a Viennese expedition to Bombay (commissioned by the Austrian Academy of Sciences) to study this disease and its paths of transmission. Numerous treatises on the protagonists, objectives, course, and results of this undertaking, which was not without consequences for Vienna and left traces in the pathological anatomical collection, are available but cannot be discussed in detail here (see, among others, ).
Weichselbaum's successors, Alexander Kolisko and Heinrich Albrecht, are also reported not to have continued the museum catalogue in the period "before and after the First World War," which probably reflects a lack of interest in the collection , but possibly also their short, two-year terms of curatorship (Kolisko 1916–1918; Albrecht 1920–1922). It was not until the 1920s that Rudolf Maresch (1868–1936), an expert in endocrinology who took greater interest in the collection, was appointed director. He improved the institutional structure and prepared additional histological reports on numerous objects of the collection. Hermann Chiari (1897–1969), an assistant at the Pathological-Anatomical Institute from 1926 onward, was full professor during the National Socialist era and afterwards (1936–1969); he focused strongly on pathological morphology and histopathology. His role as a Wehrmacht pathologist between 1938 and 1945 has still not been sufficiently analyzed. After Chiari, Heinrich Holzner (1924–2013) became full professor of the institute, a position he held for more than 20 years (1969–1993). Holzner realized that a renewal of the institution and the museum was necessary: it was in a perilous state and struggled with a major lack of space. The valuable collection of specimens had been supervised since 1946 by Karl Alfons Portele (1912–1993), a pathologist already hired by Chiari as curator. Portele's idea of administratively separating the museum from the institute as a solution to the space problem was supported by Holzner: in 1971, the division was completed with the relocation of the pathologic-anatomical collection to the former Narrenturm in the old General Hospital in Vienna, where 25 renovated rooms (25 "cells") on the first floor were occupied.
In 1974, the Federal Ministry of Science and Research changed the status of the collection by upgrading it to a federal museum (Federal Pathological-Anatomical Museum in Vienna) with complete administrative, personnel, and financial autonomy. "Austria was now the only country to have a state museum for medical preparations, and the collection was secured" . At that time, the museum's inventory amounted to about 14,000 specimens. Portele was known both nationally and internationally for "including every endangered collection." In the years of its independent existence, the collection experienced a significant increase across different object categories (e.g., dry and wet specimens, moulages, medical devices, microbiological and histological specimens, historical wall charts, a photo archive, and anatomical teaching records). From the 1980s onward, when many institutions and other stakeholders were unable, for spatial or financial reasons, to guarantee the appropriate care of collections of pathologically altered human body parts, transfer to the Federal Pathological-Anatomical Museum offered an alternative depository committed to ethical principles. Donors/persons in charge and institutions that handed over their collections are listed below (see ).
Infobox 2: Acquired collections and donors (year of accession in parentheses)
Leopold Arzt and Wilhelm Kerl, Allgemeines Krankenhaus, Universitäts-Hautklinik, Vienna (1976; soft tissue impressions, moulages)
Gerhart Alth, Krankenhaus Lainz, Radiotherapy, Vienna
Hans Asperger, Allgemeines Krankenhaus, Department of Pediatrics, Vienna
Heinz Flamm, Universität Wien, Institute for Hygiene, Vienna
Hugo Husslein, Allgemeines Krankenhaus, Gynecology II, Vienna
Rudolf Langer, Landesklinikum Mistelbach, ENT Department, Mistelbach
Karl Lebeda, Tierseucheninstitut, Mödling
Rudolf Niederhuemer, Technisches Museum, Vienna
Otto Novotny, Allgemeines Krankenhaus, ENT Department, Vienna
Franz Pötsch, BA für Impfstoffgewinnung, Vienna
Josef Söltz-Szöts, Krankenhaus Rudolfstiftung, Dermatology, Vienna
Peter Wurnig, Mautner-Markhof Kinderspital, Surgery, Vienna
Karl and Theodor Henning (soft tissue impressions, moulages)
Veterinary/zoological private collection of Fritz Kincel (transferred to the zoological department of the NHM)
Collection of the University of Vienna, Anatomical Institute (Hyrtl-Sammlung), Vienna
Collection Krankenhaus Wieden, Vienna (1975)
Collection Krankenhaus Rudolfstiftung, Vienna (1977)
Collection Krankenhaus Wilhelminenspital, Vienna (1978)
Collection Krankenhaus Lainz (Kaiser-Jubiläums-Spital), Vienna (1974)
Collection Landeskrankenhaus Graz, Institute for Pathology, Graz (until 1983)
Collection Uni-Klinikum Bonn, Institute for Pathology, Germany (1992)
Collection Klinikum der Stadt Wuppertal (preparations originating from hospitals in Barmen and the Ferdinand-Sauerbruch-Klinikum), Germany (1985)
Collection Krankenhaus Hamburg-Harburg, Germany (1987 and 1998)
Collection Innsbruck
Collection Kaiserin-Elisabeth-Spital, Vienna (1994)
Collection Kaiser-Franz-Josef-Spital, Vienna (1960)
Collection Unfallkrankenhaus Meidling, Vienna
Collection Ignaz-Semmelweis-Frauenklinik, Vienna
Collection Haus der Natur, Salzburg
Collection Krankenhaus Sozialmedizinisches Zentrum Baumgartner Höhe, Vienna
Collection Magistrat der Stadt Wien, MA 60 – Veterinärdienste und Tierschutz, Vienna

In the course of a legal amendment passed in 1998, all federal museums, scientific institutions under public law, were granted full legal capacity between 1999 and 2003. Only the Federal Pathological-Anatomical Museum remained a subordinate department of the Ministry of Education: its small size stood in the way of a transformation into a fully legally competent scientific institution of its own. In autumn 2011, the collection was incorporated into the "Scientific Institution Natural History Museum Vienna" ( wissenschaftliche Anstalt Naturhistorisches Museum Wien ) by federal law ( Budgetbegleitgesetz 2012, BGBl. I Nr. 112/2011) and internally associated with the Anthropological Department. Today, the pathological anatomical collection comprises approximately 10,500 maceration specimens, body stones, skeletons, partial skeletons, and skulls (also from archaeological contexts); approximately 36,000 wet specimens; approximately 4500 moulages (wax casts of pathologically altered body parts, made by Karl and Theodor Henning, Otto Helm, Maximilian Blaha, Dr. Ziegler, P. E. Habetin, among others, to convey the course of a disease as "directly" readable); approximately 150,000 histological slides; approximately 6500 medical devices and instruments; and large archive holdings (e.g., autopsy findings since Biermayer's time, historical teaching boards and posters, and photographs [prints and negatives]).
After Biermayer’s retirement, the museum catalogue was continued by his two assistants Johann Wagner (1800–1832) and Carl von Rokitansky (1804–1878), who had already been accepted as an initially unpaid trainee in 1827. Since Johann Wagner died only a few years after taking over the management of the museum in 1832, Rokitansky was entrusted with the agenda. In 1843, he not only carried out a first revision of the collection, in which a large part of the specimens was removed , but he also shifted the emphasis in the collection from macropathology to micropathology (histopathology). During this time, around the middle of the 19th century, the importance of anatomical science and especially pathology was recognized by clinicians. A development that led to the establishment of a first chair for pathological anatomy in 1844, to which Rokitansky was appointed. From now on, the extraction and preservation of organic (wet) specimens was at the center. Many of them are attributable to Rokitansky, the prestigious and most important pathological anatomist of his time and cofounder of the young Vienna Medical School, and have been preserved to this day at the pathological anatomical collection, together with numerous original drawings he had made of histological tissue slides. These specimens are relics from more than 60,000 autopsies and documents of the paradigmatic turn to micropathology and his concept of a “disease process,” which he was able to reconstruct from the accumulation of different symptoms and disease stages in a “scientifically sound and systematic” way . It is therefore not surprising that he also played a decisive role in the successful establishment of the “Vienna Anthropological Society” in 1870, to which many physicians of different disciplines belonged; as its first president, he also contributed to shaping the path for a “science of man” that researched and collected on a scientific basis. 
Thanks to his international reputation, a new pathological anatomical institute was constructed, which was opened in 1862 and was in use until its relocation to the new General Hospital in 1991. In this period, around the middle of the 19th century, the anatomist Joseph Hyrtl (1810–1894), known far beyond the borders of Austria-Hungary, had been active in Vienna since 1845 as ordinarius for anatomy; he should briefly be mentioned here, because he had gained a reputation not only as a teacher and textbook author, but also as a “preparation artist,” as a collection creator (in 1850, he founded the Museum of Comparative Anatomy in Vienna), and as director of the Museum of Human Anatomy (founded by Gerard van Swieten in 1745). Hyrtl is further noted for his contribution to the “Novara expedition” (1857–1859): as a member of the Academy of Sciences, he was involved in the selection of participants, scientific evaluation, and publication of human relics (approximately 100 human skulls in his collection were much sought after “non-European human varieties” and “atypia”) that were collected during this journey and at first kept at the Anatomical Institute. After a ministerial request to hand over the whole collection to the newly founded national research center of anthropology at the Natural History Museum Vienna, he retained some of them. In the 1980s, the museum at the Anatomical Institute became disclosed and the objects were integrated into the Federal Pathological-Anatomical Museum and “rediscovered” as part of the Novara and Natural History Museum Vienna collection in 2012 (see ). In 1875, Rokitansky’s successor in Vienna was his former student Richard Heschl (1824–1881), full professor of anatomy in Graz. Heschl had already founded and directed a pathological anatomical museum in Graz, was experienced as curator, and increased the Viennese collection in the few years of his activity mainly with dry specimens, especially crania and cranial fragments . 
This collection of macroscopic objects prospered in the following years through the incorporation of items from, e.g., the Graz Institute of Pathology, which were brought to Vienna by Hanns Kundrat (1845–1893), formerly also an assistant to Rokitansky. His research was oriented toward cerebral malformations, but his passion was the pathological anatomical collection, which he continuously expanded; some parts of Josef Hyrtl’s considerable collection were also taken over during this time. For some of his successors, Anton Weichselbaum (1845–1920), Alexander Kolisko (1857–1918), and Heinrich Albrecht (1866–1922), the pathological anatomical museum was less important: Anton Weichselbaum, for example, focused on microbiology and histopathology, probably the most innovative and promising field of research around the turn of the century; his experiences with Robert Koch (1843–1910) in Berlin may also have influenced the foundation of a Viennese microbiological laboratory. Here, significant bacteriological discoveries were made (among other things, the pathogens of pneumonia and of epidemic meningitis were identified), and it is not surprising that, at the outbreak of the last major plague epidemic in India (1897), Weichselbaum initiated a Viennese expedition to Bombay (commissioned by the Austrian Academy of Sciences) to study this disease and its transmission paths. Numerous treatises on the protagonists, objectives, course, and results of this undertaking, which was not without consequences for Vienna and left traces in the pathological anatomical collection, are available, but cannot be discussed in detail here (see, among others, ).
Weichselbaum’s successors, Alexander Kolisko and Heinrich Albrecht, are also reported not to have continued the museum catalogue in the period “before and after the First World War,” which probably reflects a lack of interest in the collection , but possibly also their short terms of curatorship of only about two years each (Kolisko 1916–1918; Albrecht 1920–1922). It was not until the 1920s that Rudolf Maresch (1868–1936), an expert in endocrinology who took more interest in the collection, was appointed as director. He improved the institutional structure and prepared additional histological reports on numerous objects of the collection. Hermann Chiari (1897–1969), assistant at the Pathological-Anatomical Institute from 1926 onward, was full professor during the National Socialist era and afterwards (1936–1969); he focused strongly on pathological morphology and histopathology. His role as a Wehrmacht pathologist between 1938 and 1945 is still insufficiently analyzed. After Chiari, Heinrich Holzner (1924–2013) became full professor of the institute, a position he held for more than 20 years (1969–1993). Holzner realized that a renewal of the institution and the museum was necessary: it was in a perilous state and struggled with a major space problem. The valuable collection of specimens had been supervised since 1946 by Karl Alfons Portele (1912–1993), a pathologist already hired by Chiari as curator. Portele’s idea of an administrative separation of the museum from the institute as a solution to the lack of space was supported by Holzner: in 1971, the division was completed with the relocation of the pathologic-anatomical collection to the former Narrenturm in the old General Hospital in Vienna, where 25 renovated rooms (25 “cells”) on the first floor were occupied.
In 1974, the Federal Ministry of Science and Research changed the status of the collection by upgrading it to a federal museum (Federal Pathological-Anatomical Museum in Vienna) with complete administrative, personnel, and financial autonomy. “Austria was now the only country to have a state museum for medical preparations, and the collection was secured” . At that time, the museum’s inventory amounted to about 14,000 specimens. Portele was known both nationally and internationally for “including every endangered collection.” In the years of its independent existence, the collection experienced a significant increase across different object categories (e.g., dry and wet specimens, moulages, medical devices, microbiological and histological specimens, historical wall charts, a photo archive, and anatomical teaching records). Since the 1980s, when many institutions and other stakeholders were unable to guarantee the appropriate care of collections of pathologically altered human body parts for spatial or financial reasons, transfer to the Federal Pathological-Anatomical Museum offered an alternative depository committed to ethical principles. Donors/persons in charge and institutions that handed over their collections are listed below (see ).
- Leopold Arzt and Wilhelm Kerl, Allgemeines Krankenhaus, Universitäts-Hautklinik, Vienna (1976; soft tissue impressions, moulages)
- Gerhart Alth, Krankenhaus Lainz, Radiotherapy, Vienna
- Hans Asperger, Allgemeines Krankenhaus, Department of Pediatrics, Vienna
- Heinz Flamm, Universität Wien, Institute for Hygiene, Vienna
- Hugo Husslein, Allgemeines Krankenhaus, Gynecology II, Vienna
- Rudolf Langer, Landesklinikum Mistelbach, ENT Department, Mistelbach
- Karl Lebeda, Tierseucheninstitut, Mödling
- Rudolf Niederhuemer, Technisches Museum, Vienna
- Otto Novotny, Allgemeines Krankenhaus, ENT Department, Vienna
- Franz Pötsch, BA für Impfstoffgewinnung, Vienna
- Josef Söltz-Szöts, Krankenhaus Rudolfstiftung, Dermatology, Vienna
- Peter Wurnig, Mautner-Markhof Kinderspital, Surgery, Vienna
- Karl and Theodor Henning (soft tissue impressions, moulages)
- Veterinary/zoological private collection Fritz Kincel (transferred to the zoological department of the NHM)
- Collection of the University of Vienna, Anatomical Institute (Hyrtl-Sammlung), Vienna
- Collection Krankenhaus Wieden, Vienna (1975)
- Collection Krankenhaus Rudolfstiftung, Vienna (1977)
- Collection Krankenhaus Wilhelminenspital, Vienna (1978)
- Collection Krankenhaus Lainz, Vienna (Kaiser-Jubiläum-Spital; 1974)
- Collection Landeskrankenhaus Graz, Institute for Pathology, Graz (until 1983)
- Collection Uni-Klinikum Bonn, Institute for Pathology, Germany (1992)
- Collection Klinikum der Stadt Wuppertal (preparations originating from hospitals in Barmen and the Ferdinand-Sauerbruch-Klinikum), Germany (1985)
- Collection Krankenhaus Hamburg-Harburg, Germany (1987 and 1998)
- Collection Innsbruck
- Collection Kaiserin-Elisabeth-Spital, Vienna (1994)
- Collection Kaiser-Franz-Josef-Spital, Vienna (1960)
- Collection Unfallkrankenhaus Meidling, Vienna
- Collection der Ignaz-Semmelweis-Frauenklinik, Vienna
- Collection Haus der Natur, Salzburg
- Collection Krankenhaus Sozialmedizinisches Zentrum Baumgartner Höhe, Vienna
- Collection Magistrat der Stadt Wien, MA 60 – Veterinärdienste und Tierschutz, Vienna

In the course of a legal amendment of 1998, all federal museums were released into full legal capacity between 1999 and 2003 as scientific institutions under public law. Only the Federal Pathological-Anatomical Museum remained a subordinate department of the Ministry of Education: its small size precluded the transformation into a fully legally competent scientific institution of its own. In autumn 2011, the collection was incorporated into the “Scientific Institution Natural History Museum Vienna” ( wissenschaftliche Anstalt Naturhistorisches Museum Wien ) by federal law ( Budgetbegleitgesetz 2012, BGBl. I Nr. 112/2011) and internally associated with the Anthropological Department. Today, the pathological anatomical collection comprises approximately 10,500 maceration specimens, body stones, skeletons, partial skeletons, and skulls (also from archaeological contexts); approximately 36,000 wet specimens; approximately 4500 moulages (wax casts of pathologically altered body parts, made by Karl and Theodor Henning, Otto Helm, Maximilian Blaha, Dr. Ziegler, P. E. Habetin, among others, to convey the course of a disease as “directly” readable); approximately 150,000 histological slides; approximately 6500 medical devices and instruments; and large archive holdings (e.g., autopsy findings since Biermayer’s time, historical teaching boards and posters, and photographs [prints and negatives]).
Literature search In order to understand how the pathological anatomical collection was used for scientific progress, keywords (including former terms and/or their acronyms) such as “ Pathologisch-anatomisches Museum ,” “FPAM” (“Federal Anatomical-Pathological Museum”), “PaBM” (“ Pathologisch-anatomisches Bundesmuseum ”), “PASiN” (“ Pathologisch-Anatomische Sammlung im Narrenturm ”), “ Narrenturm ” (“Fool’s Tower” in English), or “Pathological Anatomical Collection Vienna” were entered into search engines such as the Web of Science (Thomson Reuters) or Google Scholar as well as into the repository of the MedUni Vienna . PASW ( Pathologisch-anatomische Sammlung Wien ) is now used simply as an acronym. In addition, papers that were provided by the authors to the collection curator were included in our reference review. About 20 scientific papers published as journal articles and two textbooks refer directly to specific specimens curated at the pathological anatomical collection. The textbooks are oriented toward collection history and toward communicating the propaedeutics of pathology from different research perspectives . Object database of PASW collection Initially, the specimens of the pathological anatomical collection Vienna (PASW) were recorded in a handwritten catalogue. More recently, a digital database was created in which all objects are documented in detail. It contains basic information such as the current museum number (including a reference to former collections), the organ, the type of preparation (e.g., wet specimen, dry preparation, moulage), the sex and age of the deceased, and the diagnosis based on the (cross-referenced) autopsy report. The dataset also comprises the donors or institutional provenance of a specimen and its location in the collection. Any use of a specimen in a research project and any available images (photographs, X‑rays, or CT scans) are noted as well.
The database is continuously updated so that all relevant data can be accessed digitally. Classification scheme We decided to arrange the results according to the ICD-10 scheme of the WHO International Statistical Classification of Diseases and Related Health Problems (WHO, ) . Although there is some overlap, especially with regard to lethal causes, the application of a standardized system allows more systematic comparisons.
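To make the record structure of such an object database concrete, the fields described above can be sketched as a simple data model. This is a hypothetical illustration only; the field and function names are our own and do not reproduce the actual PASW database schema.

```python
from dataclasses import dataclass, field
from collections import Counter

@dataclass
class SpecimenRecord:
    """Illustrative record mirroring the fields described for the PASW database."""
    museum_number: str        # current museum number
    organ: str
    preparation_type: str     # e.g. "wet specimen", "dry preparation", "moulage"
    sex: str
    age: int
    diagnosis: str            # based on the cross-referenced autopsy report
    icd10_chapter: str        # WHO ICD-10 chapter, e.g. "I", "II", "XIII"
    provenance: str           # donor or institutional origin
    location: str             # location in the collection
    former_number: str = ""   # cross-reference to a former collection
    research_uses: list = field(default_factory=list)
    images: list = field(default_factory=list)  # photographs, X-rays, CT scans

def counts_by_chapter(records):
    """Tally specimens per ICD-10 chapter, e.g. for an overview figure."""
    return Counter(r.icd10_chapter for r in records)

# Two invented example records
records = [
    SpecimenRecord("MN-0001", "lung", "wet specimen", "m", 34,
                   "pulmonary tuberculosis", "I", "AKH Wien", "cell 12"),
    SpecimenRecord("MN-0002", "spleen", "wet specimen", "f", 51,
                   "Hodgkin lymphoma", "II", "AKH Wien", "cell 7"),
]
```

Grouping records by ICD-10 chapter in this way corresponds to the arrangement of the results described in the classification scheme above.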
Based on the digital database, we first want to reveal when and how often which specimens of the collection have been used for medical or palaeopathological research purposes and which methods were applied in the process. Second, arranged according to the ICD-10 system, we will discuss in more detail the objects and groups of diseases that have been used for scientific purposes to date and address the potential for further pathological research and beyond (Figs. and ). ICD 10-I Infectious diseases Cross-organ infections are illustrated by numerous specimens. Among them are examples of individual organs that were pathologically altered by diverse pathogens. Some of these cases can be linked to historical epidemics, for example tuberculosis. Tuberculosis, caused by Mycobacterium tuberculosis, was known as “ Morbus Viennensis ” (Viennese disease) from the middle of the 18th century because of its almost endemic occurrence in Vienna. In 1811, 758 of 12,374 patients died of tuberculosis in the General Hospital.
In 1815, 2859 of 11,520 deaths were attributed to tuberculosis. Pavement dust and even “waltz dancing” during the Congress of Vienna (1814–1815) were considered causes of tuberculosis; the latter is not entirely illogical, since tuberculosis is transmitted by airborne infection and favored by population density. The pathological anatomical collection includes a large number of wet and dry specimens of tuberculosis. Sedivy describes, for example, the tuberculosis of a kidney, as well as a specimen of a baby who suffered from pulmonary tuberculosis after being infected by the mother during pregnancy. There are also many examples of advanced bone and joint tuberculosis preserved in the collection, e.g., spinal tuberculosis (classical Pott’s disease), ankylosis of the sacroiliac joint, the knee, shoulder, and elbow joints, and ankle and tarsal bones, that were used for education or research . Around the middle of the 19th century, Carl Rokitansky examined typhoid fever (historically “typhus”), caused by the bacterium Salmonella typhi, in a study and thereby created a firm pathological anatomical basis. The disease concerns several organs: the gastrointestinal tract (liver, gallbladder, spleen), kidneys, lungs, and muscles. After it was recognized that the disease was based on the transmission of typhoid germs via food, contact with an infected individual, or, most frequently, water, one of the political measures was the construction of the first Vienna mountain-spring pipeline (1870–1873), which significantly reduced the incidence of the disease. Sedivy selected two specimens of small intestine sections in the collection, archived as Typhus abdominalis, for a histomorphological study; in doing so, he was able to identify different stages of cellular changes in the two sections and to verify the prior diagnosis. At the beginning of the 1830s, a previously unknown epidemic, cholera, emerged in Europe.
It is caused by bacterial infection ( Vibrio cholerae ); symptoms include diarrhea, which may lead to severe dehydration and, in the worst case, death within a very short time. The epidemic reached Vienna in mid-August 1831. It did not seem to subside completely until spring 1832, when a second wave hit Vienna and kept the city in suspense until September. Almost every 50th person came down with cholera and every 100th person died (original handwritten case reports by Rokitansky). The most significant examples of organ changes caused by cholera are incorporated into the redesigned exhibition to illustrate, among other things, infection paths, preventive measures, and the medical, demographic, and socioeconomic consequences, and they have been used to address numerous scientific questions. ICD-10-II Neoplasms All organs can be affected by cancer, defined as uncontrolled growth of cells that partly spread into other organs. Screening the PASW shows that cancerogenic mutations and histological changes can be detected in all assessed organs; neoplasms also represent one of the more intensively studied disease groups (Fig. ). The increasing knowledge about the different causes of cancer also supports research on our early ancestors, for instance when cancer allows conclusions about nutritional status, working conditions, or exposure to chemical toxins . Exhibits served early on as objects for studying the origin and impact of cancer. To take the Erdheim brain tumor in the collection as an example: with his dissertation of 1839, Joseph Engels delivered groundbreaking research when he investigated the pituitary gland and the tumors of its infundibulum, which he related to neurological disorders; in 1904, Jakob Erdheim specified the tumor as a hypophyseal duct tumor, for which the currently valid name is craniopharyngioma .
Using the same specimens for case studies, it could be shown that more sophisticated techniques such as CT scans increase the understanding of the disease . Within the last 10 years, it has also been shown that genetic methods can provide very promising evidence on historical specimens, even organic wet specimens prepared as early as the 19th century; this approach supports molecular pathology and leads to increasingly exact diagnoses, a kind of “retro-specification.” The careful analysis of organs in conjunction with the documentation of cases and diagnoses is important for general understanding and the development of classifications. Using pathologically altered pancreases (often affected by carcinoma), Sedivy could impressively demonstrate that the development of more sophisticated histopathological and immunohistochemical methods helped to establish modern diagnostic systems . Hodgkin disease, also known as Hodgkin lymphoma or lymphogranulomatosis, is a malignant disease of the lymphatic system. The disease usually starts in lymph nodes in the neck region and spreads via the lymph nodes in the chest to the lymph nodes in the abdomen and the spleen. The collection includes a specimen of a spleen that is clearly enlarged and bulging due to Hodgkin disease . Multiple myeloma is an example of a non-Hodgkin lymphoma and is caused by the degeneration of a plasma cell. Clones spread in the bone marrow and can lead to numerous tumor foci. In a publication by Jellinek (S. Jellinek, Virchows Archiv 177, 1904, pp. 96–133), the entire skeleton of a patient with multiple myeloma was described in detail. It was possible to show the effects of the disease on the condition of the bone substance, which is characterized by disseminated scalloped lytic lesions of varying size. The process starts in the bone marrow and ultimately destroys the cortex .
ICD 10—III Diseases of the blood and blood-forming organs and certain disorders involving the immune mechanism Diseases of the blood and blood-forming organs include various anemias, coagulopathies, and other diseases. From the collection, one specimen of a frosted spleen ( Zuckergussmilz ; perisplenitis pseudocartilaginea) was described in more detail by Sedivy . It looks glassy due to hyaline, a protein deposited extracellularly in the connective tissue. ICD-10 IV Endocrine, nutritional, and metabolic diseases Metabolic diseases start at the cellular level, where energy and mass transport are hampered, and manifest in different organs. Frequent diseases are diabetes mellitus, thyroid gland dysfunction, gout, and mucoviscidosis. While diabetes, thyroid gland dysfunction, and gout can be partly linked to nutrition, mucoviscidosis and Smith–Lemli–Opitz syndrome (SLOS) have a genetic basis. Key organs for metabolism are the liver and gallbladder, spleen and pancreas, and kidney, as well as the stomach and intestine. A rare but severe metabolic disease is SLOS, an autosomal recessively inherited congenital disorder with a highly variable phenotypic appearance, in which, owing to low activity of the enzyme 7‑dehydrocholesterol reductase, cholesterol is not synthesized in adequate quantity. As cholesterol is a key component of all cell membranes and enzymes, a cholesterol deficit has severe impacts ranging from microcephaly, polysyndactyly, hypospadias, and intellectual disability up to holoprosencephaly (HPE), an incomplete separation of the two brain hemispheres, which can lead to cyclopic faces. A lower concentration of cholesterol could be demonstrated in the 10% formaldehyde preservation liquid of a fetus of the PASW, which was suspected to have SLOS due to polydactyly . This research aims to contribute to prenatal ultrasound diagnostics of SLOS.
Two specimens of calcified thyroids were used to compare the symptoms with an archaeological finding . ICD 10—VI Diseases of the nervous system In addition to the numerous cerebral tumors archived in the collection, there are also specimens of cerebral infarcts caused by an inadequate blood flow, vascular occlusion, or other causes. The effect may be dystrophy or necrosis of brain tissue. Based on the type of cerebral infarction (stroke), a red form (hemorrhagic infarction = bleeding in the brain) and a white form (ischemic infarction = suddenly reduced blood flow to the brain, often resulting in dead neuronal tissue) are distinguished. One specimen from 1953, mentioned by Sedivy , belongs to this class: encephalomalacia (rubra et alba), cerebral softening. ICD 10—VIII Diseases of the ear and mastoid process One investigated specimen falls into this class, the tumor nervi acustici published by Sedivy following the original description. A review by Pascual revealed that Jakob Erdheim “redefined this lesion as a hypophyseal duct tumour (craniopharyngioma) in his 1904 monograph” , which indicates that this specimen belongs, in the current classification, to ICD class II (D44.4). ICD 10—IX Diseases of the circulatory system There is no deeper research on this type of specimen; they have mainly been used by Sedivy to illustrate and support diagnostic analysis. However, the collection includes a very interesting and rare case of a vertebral column showing erosions and abnormal vascular grooves ventrally due to a long-standing saccular dilatation of the aorta. In this case, an aortic aneurysm, probably of arteriosclerotic nature, is assumed . Because soil-embedded (pre)historical human skeletal remains are frequently affected by a range of taphonomic changes, making diagnosis difficult, well-documented comparative examples are invaluable to evolutionary biologists interested in the history of disease.
ICD 10—X Diseases of the respiratory system Diseases of the respiratory system refer to influenza, pneumonia, bronchitis, and other acute infections. Several specimens of organ tuberculosis are stored at the PASW; one specimen showing pulmonary tuberculosis was screened by Sedivy (for the subcollection of extrapulmonary cases of bone and joint tuberculosis, see section “ICD 10‑I Infectious diseases”). Other examples of diseases manifested in the respiratory system and reported by Sedivy include cases of tracheitis (originating in diphtheria), fibrosis, carcinoma, and chondroma, as well as conditions caused by external impacts such as respirable dust ( Staublungen : silicosis, asbestosis, etc.). ICD 10—XI Diseases of the digestive system The digestive system comprises all organs linked to digestion, starting with the oral cavity and jaws, via the stomach, to the intestines, including organs such as the liver, salivary glands, and gallbladder. Some specimens were screened by Sedivy . One of the older specimens is an esophagus from 1896, an esophageal pouch, which was described as a diverticulum oesophagi by Rokitansky. ICD 10—XII Diseases of the skin and subcutaneous tissue Diseases of the skin are mainly depicted by moulages, models made from wax, elastin, or other material to show the features of the diseases. The oldest object in the moulage collection of the PASW was added to the catalogue in 1843: an impression of a man whose lower jaw was missing due to external violence; this specimen currently belongs to ICD XIX. At present, the collection comprises about 2700 moulages; the majority were made by the Viennese moulage artists Karl Henning and his son Theodor, who signed their work . Sedivy described “Spiegler a,” the dermal cylindroma, a tumor of the skin appendages that belongs to the neoplasms (section “ICD-10-II Neoplasms”).
ICD 10—XIII Diseases of the musculoskeletal system and connective tissue The PASW is well known to the scientific community for its remarkable subcollection of macerated, pathologically altered macrospecimens of human skeletons and isolated bones . Two well-represented diseases of the musculoskeletal system, Paget’s disease of bone and vitamin D deficiency, are presented below as examples. Paget’s disease (synonym: osteitis deformans Paget) is a chronic, slowly progressing metabolic bone disease, characterized macroscopically by an abnormal increase in bone mass associated with a mechanically inferior quality. It may concern one or more skeletal parts, most often the pelvic bone, the vertebral column, long bones, and the skull. The disease begins with an increased activity of the osteoclasts, i.e., an excessive resorption of bone substance, which is then followed by bone deposition that is structurally less organized and weaker than normal bone . The cause of the disease is unknown; genetic, viral, and environmental influences are discussed. In the long term, Paget’s disease may lead to complications such as osteoarthritis, skeletal deformities, and fractures. More than 50 affected bones, isolated long bones, and crania are stored at the PASW, representing progressed or late stages of osteitis deformans Paget, given their deformity and porous structure. Nebot Valenzuela and Pietschmann (2017) selected a few of them to review the epidemiology, etiology, pathology, macrostructure, histology, and quantitative histomorphometry of Paget’s disease and observed, by the histological approach, hyperosteoclastosis and poor definition of the boundary between cortical and medullary bone. This diagnostically important criterion of osteoclast hyperactivity was also identified by Sedivy . Additionally, Pagetic bone is characterized by hypertrophy and alteration of trabecular parameters. In a further study, Nebot Valenzuela et al.
(2019) compared the microstructure of bones with and without Paget’s disease using an X‑ray-based µCT scanner. With this approach, they could not only confirm the higher porosity at the microstructural level, but also show that the femoral heads and tibial condyles were thickened due to increased trabecularization, important findings of relevance for diagnostic purposes. Bone, as a dynamic tissue, not only ensures the mechanical integrity of the body, but is also involved in the homeostasis of minerals ; for further references see . The sustainability of metabolic processes requires a specific mineral salt concentration, which is controlled and regulated by vitamins and hormones. An irregularity of one of these factors, e.g., caused by malnutrition, impaired organ functions, renal and liver diseases, disturbed metabolism, or a combination of these causes, may lead to an impairment of bone tissue and associated organs. In this complex process, vitamin D plays a central role in the mineralization of the organic bone matrix (osteoid tissue), which requires constant calcium and phosphorus levels. If vitamin D is lacking, the calcium transfer to the bone is reduced. Depending on the onset of the disease, the pathological process will result in bone alterations such as rickets in adolescents or osteomalacia in older adults. Rickets is characterized by deformities of weight-bearing long bones, pelvic bones, and the vertebral column; short stature and widening of the joints are consequences of irregular growth plate development. In older osteomalacic individuals, fractures can often be observed . For a convincing diagnosis of these diseases, it was until quite recently obligatory to extract bone biopsies for histological examination of the microarchitectural trabecular structure and its quality.
In 2003, µCT was first applied to evaluate the trabecular architecture of vertebral bodies taken from individuals affected by osteomalacia (the diagnosis was stated in the case history) in order to prove the suitability of this approach. For that purpose, specimens hosted at the pathological anatomical collection were used to identify disturbed mineralization . The comparison of the metric dimensions and indices obtained by µCT inspection with the results obtained by light microscopy of histological undecalcified ground sections not only showed that µCT can be successfully applied to report structural properties of the trabecular network and alterations resulting from disordered mineralization, but also illustrated the potential of medicohistorical collections for current medical and palaeopathological research . ICD 10—XIV Diseases of the genitourinary system A specific asset of the PASW is the collection of body stones. They are important diagnostic indicators for a variety of diseases, often linked to the lifestyle of the affected persons. Body stones are classified on the basis of their chemical composition as well as according to their localization, for instance rather frequently in the bile or bile duct . ICD 10—XVI Certain conditions originating in the perinatal period As birth was and is one of the key processes in human life, scientists investigated its preconditions and risks early on. In an important work, Carl Breuss and Alexander Kolisko described and classified pelvic deformities, referring also to 67 specimens from the PASW . ICD 10—XVII Congenital malformations, deformations, and chromosomal abnormalities Congenital malformations refer to diseases originating in the prenatal period. They can involve many different parts of the body, including the brain, heart, lungs, liver, intestinal tract, and skeletal system.
Congenital malformations can be inherited or caused by environmental factors, and their impact on a child’s health and development can vary from mild to severe. A child with a congenital disorder may experience a disability or health problems throughout life. The PASW preserves several specimens of a variety of developmental defects (e.g., conjoined twins, currently systematically clustered according to their neoaxial orientation ), of the nonformation of organs (aplasia), and of genetic skeletal or bone malformations . Most of the severe genetic malformations were not compatible with survival, but milder forms are also represented, such as those resulting from enzymatic defects. Some of these syndromes can be survived for several years and are therefore of interest for differential diagnosis as well as therapy. For instance, Pumberger explored to what extent magnetic resonance imaging (MRI) can support the diagnosis of liver malformation in fetuses. The archived specimens are in any case well-suited and precious objects for scientifically based studies of the genetic causes and of the course of a disease in the absence of appropriate medication. Here we refer to mucopolysaccharidosis, a type of rare malformation syndrome. Mucopolysaccharidoses (MPS) are a group of inherited diseases (X-chromosomal or autosomal recessive) in which a defective or missing enzyme (lysosomal hydrolases) causes large amounts of complex carbohydrates (acidic mucopolysaccharides or glycosaminoglycans) to accumulate in the lysosomes of cells and tissues, where they cause permanent, progressive cellular damage. As mucopolysaccharides are components of the cartilage matrix, a faulty cartilage structure leads to various functional and morphological defects: besides intellectual disability and organomegaly, multiple skeletal dysplasias are common .
Among others, macrocephaly or scaphocephaly, frontal bossing, and facial dysmorphia with large mandible and wide interorbital breadth are often observed . A varying severity of dysostosis multiplex is the general bony manifestation of MPS, but a special appearance may occur in particular types of the MPS (it includes seven types and several subtypes: IH = Hurler–Pfaundler syndrome, IS = Scheie syndrome, IH/S = Hurler/Scheie syndrome, II = Hunter syndrome, III = Sanfilippo syndrome, IV = Morquio syndrome, V = now: type IS, VI = Maroteaux–Lamy syndrome, VII = Sly syndrome); disproportionate dwarfism with severe osteoarticular deformities, platyspondylia, irregularly shaped metaphysis of the long bones, metacarpals (proximal end tapered), and phalanges (distally tapered, often referred to as bullet-shaped) are regularly observed. Although lysosomal storage diseases are rare individually, the estimated incidence of all types of mucopolysaccharidosis disorders combined is 1 in 20,000 live births. Poorthuis reported an incidence of 4.5 cases per 100,000 live births for all mucopolysaccharidosis disorders in the Netherlands. Many types have a progressive process with a devastating prognosis. Over time, patients develop central nervous system (CNS) degeneration and progression to a vegetative state. Death usually occurs before age 20 years, primarily from cardiopulmonary arrest due to airway obstruction and/or pulmonary infection . ICD10—XIX Injury, poisoning, and certain other consequences of external causes In addition to various diseases such as tumors or infectious diseases, the collection also includes specimens from poisoning and trauma; in principle, such cases would have to be addressed as forensic medical cases. In Vienna, however, forensic medicine was part of the pathological anatomy until 1875—a fact also reflected in the collection. 
Accordingly, these specimens are particularly interesting from a sociohistorical point of view as they can shed light on working conditions, e.g., in the period of industrialization as well as the development of preventive measures and protective equipment. The relation between diseases and work processes that involved handling certain chemicals or other substances now classified as hazardous to health (e.g., asbestos, X‑rays) was not recognized until the end of the 19th century (e.g., asbestosis). A variety of (anamnestically well documented) specimens housed at the pathologic-anatomical collection have proven their worth as comparative objects, for example, in the context of determining causes of death. At the PASW there are a large number of specimens with different bone injuries, including perimortem skull injuries , well-healed fractures, or fractures showing complications such as nonunion bones or bone dislocations (e.g., acetabulum formation ), and trauma during birth (e.g., ossified hematoma ). Such specimens are valued in the field of paleopathology as comparative objects for the identification of injuries or the reconstruction of an injury and healing process in (pre)historical human skeletal remains, which have often been strongly altered by taphonomic processes . Reference for paleopathological research questions Paleopathology is concerned with the analysis of the nature and frequency of disease- and injury-related lesions on human skeletal or mummified body remains from prehistoric and historic times. Although this subfield of bioanthropology has a long history , paleopathology, as a young, dynamic discipline, has been able to make excellent progress over the past 40 years along methodological innovations and continuously improving techniques, establishing itself as an important meaningful branch of research in our discipline. 
Paleopathology, like pathology, seeks to make a diagnostic statement on the basis of individual phenomena (symptoms) or symptom associations (syndromes) and—if possible—to record the course of the disease and healing process, the impairments to the quality of life, as well as therapeutic interventions. Since disease can be understood as a process of complex interaction between, among other things, individual disposition and different natural and sociocultural environments, paleopathology looks for evidence and traces in prehistoric human relicts. It reflects the close interaction people have had with their environments and how this relationship has impacted their health . Furthermore, researching the history of diseases, their cause, and course, is also of immense importance for understanding the biological evolution of our own species. Written records, which could be confirmed and supplemented by new essential research findings, proved, e.g., epidemiological events with dramatic demographic consequences for population development. Today we know, for example, that the medieval plague pandemic in Europe (around 1350) claimed an estimated 25 million lives . In order to verify such dramatic events and to reconstruct the history of diseases in an evidence-based manner, experience in the assessment of tissue changes is required, which is often based on anamnestically documented and classified specimens, as they are available in many pathological anatomical collections worldwide. During the past 40 years, the scientific interest of paleopathologists as well as of physicians in objects of this kind has continuously increased, despite some critical concerns based on the fact of their nature—human bodily remains—and the historical collecting and acquisition strategies, which did not meet all our current ethical views and standards. 
Interestingly, the former director of the PASW, Alfons Portele, stated already in 1982, that the dry osteological specimens kept at the PASW, continuously expanded by him through the integration of “otherwise lost” hospital and private collections, “will gain importance for medical research through the ever-increasing subject of paleopathology”. This interest in such a collection is most likely linked to the concurrent technical innovations and methodological approaches (that include, e.g., a variety of histological, µCT, and SEM [SE- and BSE-mode] techniques, geometric morphometrics, and 3D reconstructions) which opened the window not only for answering current research questions but also for diagnostic purposes and clinical use as well. New methodologies are also developed to understand phenomena already present in the neolithicum, such as hydrocephalus . Especially the progress in aDNA (ancient DNA) analysis should be mentioned here, as it allows the extraction of complete genome data from historical, archaeological findings, from macerated skeletons, from wet specimens, and even from body stones, and, thus, enables reliable diagnosis of metabolic diseases, neoplasms, or infectious diseases. It is now possible to identify not only the species of microbial organisms, such as viral and bacterial pathogens, but also their origin, interactions (e.g., tuberculosis, leprosy), and rate of mutation through ancient time (e.g., ). This research field, which often uses the potential of pathological anatomical collections as a source for the extraction of pathogenic DNA, is significant to uncover and to better understand current epidemiological events and other diseases. Presumably, this is not the endpoint of the promising DNA approach: by taking the rapidly increasing number of papers dealing with microbiome research into account, we expect further insights into human health and disease in the upcoming years. Cross-organ infections are illustrated by numerous specimens. 
Among them are examples of individual organs pathologically altered by diverse pathogens. Some of these cases can be linked to historical epidemics, for example tuberculosis. Tuberculosis, caused by Mycobacterium tuberculosis, was known as “ Morbus Viennensis ” (Viennese disease) from the middle of the 18th century because of its almost endemic occurrence in Vienna. In 1811, 758 of 12,374 patients died of tuberculosis in the General Hospital; in 1815, 2859 of 11,520 deaths were attributed to tuberculosis. Pavement dust and even “waltz dancing” during the Congress of Vienna (1814–1815) were considered causes of tuberculosis—the latter is not at all illogical, considering that tuberculosis is transmitted by airborne infection and favored by population density. The pathological anatomical collection includes a large number of wet and dry specimens of tuberculosis. Sedivy describes, for example, the tuberculosis of a kidney, as well as the specimen of a baby who suffered from pulmonary tuberculosis after being infected by the mother during pregnancy. There are also many examples of advanced bone and joint tuberculosis preserved in the collection, e.g., spinal tuberculosis (classical Pott’s disease), ankylosis of the sacroiliac joint, the knee, shoulder, and elbow joints, and ankle and tarsal bones that were used for education or research . Around the middle of the 19th century, Carl Rokitansky examined typhoid fever (Typhus abdominalis), caused by the bacterium Salmonella typhi, in a study and thereby created a firm pathological-anatomical basis for this disease, which affects several organs: the gastrointestinal tract and associated organs (liver, gallbladder, spleen), kidneys, lungs, and muscles.
After it was recognized that the disease was based on the transmission of typhoid germs by food, contact with an infected individual, or—most frequently—water, one of the political measures was the construction of the first Vienna mountain-spring pipeline (1870–1873), which significantly reduced the incidence of the disease. Sedivy selected two specimens of small intestine sections in the collection archived as Typhus abdominalis for a histomorphological study; in doing so, he was able to identify different stages of cellular changes in the two sections and to verify the prior diagnosis. At the beginning of the 1830s, a previously unknown epidemic—cholera—emerged in Europe. It is caused by bacterial infection (Vibrio cholerae); symptoms include diarrhea, which may lead to severe dehydration and—in the worst case—death within a very short time. The epidemic reached Vienna in mid-August 1831 and seemed to subside only in spring 1832, when a second wave hit Vienna and kept the city in suspense until September. Almost every 50th person came down with cholera and every 100th person died (original handwritten case reports by Rokitansky). The most significant examples of organ changes caused by cholera are presented in the redesigned exhibition to illustrate, among other things, infection paths, preventive measures, and the medical, demographic, and socioeconomic consequences, and they have been used to address numerous scientific questions. ICD 10—II Neoplasms All organs can be affected by cancer, defined as uncontrolled growth of cells, partly spreading into other organs. When screening the PASW, carcinogenic mutations and histological changes can be detected in all assessed organs; neoplasms also represent one of the more intensely studied disease groups (Fig. ).
The increasing knowledge about the different causes of cancer also supports research on our early ancestors, for instance when cancer allows conclusions about nutritional status, working conditions, or exposure to chemical toxins . Exhibits served early on as objects for studying the origin and impact of cancer. To take the Erdheim brain tumor in the collection as an example: with his dissertation of 1839, Joseph Engels delivered groundbreaking research when he investigated the pituitary gland and the tumors of its infundibulum, which he related to neurological disorders; in 1904 Jakob Erdheim specified the tumor as a hypophyseal duct tumor, for which the currently valid name is craniopharyngioma . Using the same specimens for case studies, it could be shown that more sophisticated techniques such as CT scans increased the understanding of the disease . Within the last 10 years, it has also been shown that genetic methods can provide very promising evidence on historical specimens, even organic wet specimens prepared as early as the 19th century; this approach supports molecular pathology and leads to increasingly exact diagnoses —some kind of “retro-specification.” The careful analysis of organs in conjunction with the documentation of cases and diagnoses is important for general understanding and the development of classifications. Using pathologically altered pancreases (often carcinomas), Sedivy could impressively demonstrate that the development of more sophisticated histopathological and immunohistochemical methods helped to establish modern diagnostic systems . Hodgkin disease, also known as Hodgkin lymphoma or lymphogranulomatosis, is a malignant disease of the lymphatic system. The disease usually starts in lymph nodes in the neck region and spreads via the lymph nodes in the chest to the lymph nodes in the abdomen and the spleen. The collection includes the specimen of a spleen that is clearly enlarged and bulging due to Hodgkin disease .
Multiple myeloma is an example of a non-Hodgkin lymphoma and is caused by the malignant degeneration of a plasma cell. Clones spread in the bone marrow and can lead to numerous tumor foci. In a publication by Jellinek (S. Jellinek, Virchows Archiv 177, 1904, pp. 96–133), the entire skeleton of a patient with multiple myeloma was described in detail. It was possible to show the effects of the disease on the condition of the bone substance, which is characterized by disseminated scalloped lytic lesions of varying size. The process starts in the bone marrow and ultimately destroys the cortex . Diseases of the blood and blood-forming organs include various anemias, coagulopathies, and other diseases. In the collection, one specimen of a frosted spleen ( Zuckergussmilz ; perisplenitis pseudocartilaginea) was described in more detail by Sedivy . It looks glassy due to hyaline, a protein-based material deposited extracellularly in the connective tissue. Metabolic diseases start at the cellular level, where energy and mass transport are hampered, and manifest in different organs. Frequent diseases are diabetes mellitus, thyroid gland dysfunction, gout, and mucoviscidosis. While diabetes, thyroid gland dysfunction, and gout can be partly linked to nutrition, mucoviscidosis and Smith–Lemli–Opitz syndrome (SLOS) have a genetic basis. Key organs for metabolism are the liver and gallbladder, spleen and pancreas, and kidney, as well as the stomach and intestine. A rather rare but severe metabolic disease is the abovementioned SLOS, an autosomal recessively inherited congenital disorder associated with a highly variable phenotypic appearance, in which, due to low activity of the enzyme 7‑dehydrocholesterol reductase, cholesterol is not synthesized in adequate quantity.
As cholesterol is a key component of all cell membranes and enzymes, cholesterol deficiency leads to severe impacts ranging from microcephaly, polysyndactyly, hypospadias, and intellectual disability up to holoprosencephaly (HPE), an incomplete separation of the two brain hemispheres, which can lead to cyclopic faces. A lower concentration of cholesterol could be demonstrated in the 10% formaldehyde preservation liquid of a fetus at the PASW that was suspected to have SLOS due to polydactyly . This research aims to contribute to the prenatal ultrasound diagnosis of SLOS. Two specimens of calcified thyroids were used to compare the symptoms with an archaeological finding . In addition to the numerous cerebral tumors archived in the collection, there are also specimens of cerebral infarcts caused by inadequate blood flow, vascular occlusion, or other causes. The effect may be dystrophy or necrosis of brain tissue. Based on the type of cerebral infarction (stroke), a red form (hemorrhagic infarction = bleeding in the brain) and a white form (ischemic infarction = suddenly reduced blood flow to the brain, often resulting in dead neuronal tissue) are distinguished. One specimen from 1953, mentioned by Sedivy , belongs to this class: encephalomalacia (rubra et alba), cerebral softening. One investigated specimen falls into this class, the tumor nervi acustici published by Sedivy following the original description. A review by Pascual revealed that Jakob Erdheim “redefined this lesion as a hypophyseal duct tumour (craniopharyngioma) in his 1904 monograph” , which indicates that this specimen belongs in the current classification to ICD class II (D44.4). There is no deeper research on this type of specimen; such specimens have mainly been used by Sedivy to illustrate and support the diagnostic analysis.
However, the collection includes a very interesting and rare case of a vertebral column showing ventral erosions and abnormal vascular grooves due to a long-standing saccular dilatation of the aorta. In this case an aortic aneurysm, probably of arteriosclerotic nature, is assumed . Because soil-embedded (pre)historical human skeletal remains are frequently affected by a range of taphonomic changes that make diagnosis difficult, well-documented comparative examples are invaluable to evolutionary biologists interested in the history of disease. Diseases of the respiratory system refer to influenza, pneumonia, bronchitis, and other acute infections. There are several specimens of organ tuberculosis stored at the PASW; one specimen showing pulmonary tuberculosis was screened by Sedivy (for the subcollection of extra-pulmonary cases of bone and joint tuberculosis, see section “ICD 10‑I Infectious diseases”). Other examples of diseases manifested in the respiratory system and reported by Sedivy include cases of tracheitis (originating in diphtheria), fibrosis, carcinoma, and chondroma, or cases caused by external impacts such as respirable dust ( Staublungen : silicosis, asbestosis, etc.). The digestive system comprises all organs linked to digestion, starting with the oral cavity and jaws, via the stomach, to the intestines, including organs such as the liver, salivary glands, or gallbladder. Some of these specimens were screened by Sedivy . One of the older specimens is an esophagus from 1896, an esophageal pouch, which was described as a diverticulum oesophagi by Rokitansky. Diseases of the skin are mainly depicted by moulages, models made from wax, elastin, or other materials to show the features of the diseases. The oldest object in the moulage collection of the PASW was added to the catalogue in 1843. It shows an impression of a man whose lower jaw was missing due to external violence; the specimen currently belongs to ICD XIX.
At present, the collection comprises about 2700 moulages; the majority were made by the Viennese moulage artists Karl Henning and his son Theodor, who signed their work . Sedivy described “Spiegler a,” the dermal cylindroma, which is a tumor of the skin appendages and belongs to the neoplasms (section “ICD-10-II Neoplasms”). The PASW is well known in the scientific community for its remarkable subcollection of macerated, pathologically altered macrospecimens of human skeletons and isolated bones . Two well-represented diseases of the musculoskeletal system—Paget’s disease of bone and vitamin D deficiency—are presented in the following as examples. Paget’s disease (synonym: osteitis deformans Paget) is a chronic, slowly progressing metabolic bone disease and is—in macroscopic view—characterized by an abnormal increase in bone mass associated with a mechanically inferior quality. It may concern one or more skeletal parts, most often the pelvic bone, the vertebral column, long bones, and the skull. The disease begins with an increased activity of the osteoclasts, i.e., an excessive resorption of bone substance, which is then followed by a bone deposit that is structurally less organized and weaker than normal bone . The cause of the disease is unknown; genetic, viral, and environmental influences are discussed. In the long term, Paget’s disease may lead to complications such as osteoarthritis, skeletal deformities, and fractures. More than 50 affected bones, isolated long bones, and crania are stored at the PASW, representing progressed or late stages of osteitis deformans Paget, given their deformity and porous structure. Nebot Valenzuela and Pietschmann (2017) selected a few of them to review the epidemiology, etiology, pathology, macrostructure, histology, and quantitative histomorphometry of Paget’s disease and observed hyperosteoclastosis and a poor definition of the boundary between cortical and medullary bone by the histological approach.
This diagnostically important criterion of osteoclast hyperactivity was also identified by Sedivy . Additionally, Pagetic bone is characterized by hypertrophy and alteration of trabecular parameters. In a further study, Nebot Valenzuela et al. (2019) compared the microstructure of bones with and without Paget’s disease using an X‑ray-based µCT scanner. They could not only confirm the higher porosity at the microstructural level by the use of this approach, but also show that the femoral heads and tibial condyles were thickened due to increased trabecularization—important findings of relevance for diagnostic purposes. Bone as a dynamic tissue not only ensures the mechanical integrity of the body, but is also involved in the homeostasis of minerals ; for further references see . The sustainability of metabolic processes requires a specific mineral salt concentration, which is controlled and regulated by vitamins and hormones. An irregularity of one of these factors, e.g., caused by malnutrition, impaired organ functions, renal and liver diseases, disturbed metabolism, or a combination of these causes, may lead to an impairment of bone tissue-associated organs. In this complex process, vitamin D plays a central role in the mineralization of the organic bone matrix (osteoid tissue), which requires constant calcium and phosphorus levels. If vitamin D is lacking, the calcium transfer to the bone is reduced. Depending on the onset of the disease, the pathological process will result in bone alterations such as rickets in adolescents or osteomalacia in older adults. Rickets is characterized by deformities of the weightbearing long bones, pelvic bones, and the vertebral column; short stature and widening of the joints are consequences of irregular growth plate development. In older osteomalacic individuals, fractures can often be observed .
For a convincing diagnosis of these diseases, it was until quite recently obligatory to extract bone biopsies for histological examination of the microarchitectural trabecular structure and its quality. In 2003, µCT was first applied to evaluate the trabecular architecture of vertebral bodies taken from individuals affected by osteomalacia (the diagnosis was stated in the case history) in order to prove the suitability of this approach. For that purpose, specimens hosted at the pathological anatomical collection were used to identify disturbed mineralization . The comparison of the metric dimensions and indices obtained by µCT inspection with the results obtained from light microscopical histological undecalcified ground sections not only showed that µCT can successfully be applied to report structural properties of the trabecular network and alterations resulting from disordered mineralization, but also illustrated the potential of medicohistorical collections in the current medical and palaeopathological research field . ICD 10—XIV Diseases of the genitourinary system A specific asset of the PASW is the collection of body stones. They are important diagnostic indicators for a variety of diseases—often linked to the lifestyle of the affected persons. Body stones are classified on the basis of their chemical composition as well as according to their localization, for instance rather frequently in the gallbladder or bile duct . ICD 10—XVI Certain conditions originating in the perinatal period As birth was and is one of the key processes in human life, scientists investigated the preconditions for low-risk births early on. In an important work, Carl Breuss and Alexander Kolisko described and classified pelvic deformities, referring also to 67 specimens from the PASW . ICD 10—XVII Congenital malformations, deformations, and chromosomal abnormalities Congenital malformations refer to diseases originating in the prenatal period. They can involve many different parts of the body, including the brain, heart, lungs, liver, bones, the intestinal tract, and the skeletal system.
Congenital malformations can be inherited or caused by environmental factors, and their impact on a child’s health and development can vary from mild to severe. A child with a congenital disorder may experience a disability or health problems throughout life. The PASW holds several specimens of a variety of developmental defects, e.g., conjoined twins (currently systematically clustered according to their neoaxial orientation), the nonformation of organs (aplasia), and genetic skeletal or bone malformations . Most of the severe genetic malformations were not compatible with survival, but there are also milder forms represented, such as those resulting from enzymatic defects. Some of these syndromes can be survived for several years and are therefore of interest for differential diagnosis as well as for therapy. For instance, Pumberger explored to what extent magnetic resonance imaging (MRI) can support the diagnosis of liver malformation in fetuses. The archived specimens are in any case well-suited and precious objects for scientifically based causal genetic studies and for documenting the course of a disease in the absence of adequate medication. Here we refer to mucopolysaccharidosis, a rare malformation syndrome. Mucopolysaccharidoses (MPS) are a group of inherited diseases (X-chromosomal or autosomal recessive) in which a defective or missing enzyme (a lysosomal hydrolase) causes large amounts of complex carbohydrates (acidic mucopolysaccharides, or glycosaminoglycans) to accumulate in the lysosomes of cells and tissues, where they cause permanent, progressive cellular damage. As mucopolysaccharides are components of the cartilage matrix, a faulty cartilage structure leads to various functional and morphological defects: besides mental retardation and organomegaly, multiple skeletal dysplasias are common .
Among others, macrocephaly or scaphocephaly, frontal bossing, and facial dysmorphia with a large mandible and wide interorbital breadth are often observed . A varying severity of dysostosis multiplex is the general bony manifestation of MPS, but a special appearance may occur in particular types of MPS (the group includes seven types and several subtypes: IH = Hurler–Pfaundler syndrome, IS = Scheie syndrome, IH/S = Hurler/Scheie syndrome, II = Hunter syndrome, III = Sanfilippo syndrome, IV = Morquio syndrome, V = now type IS, VI = Maroteaux–Lamy syndrome, VII = Sly syndrome); disproportionate dwarfism with severe osteoarticular deformities, platyspondylia, and irregularly shaped metaphyses of the long bones, metacarpals (proximal end tapered), and phalanges (distally tapered, often referred to as bullet-shaped) are regularly observed. Although lysosomal storage diseases are individually rare, the estimated incidence of all types of mucopolysaccharidosis disorders combined is 1 in 20,000 live births. Poorthuis reported an incidence of 4.5 cases per 100,000 live births for all mucopolysaccharidosis disorders in the Netherlands. Many types follow a progressive course with a devastating prognosis. Over time, patients develop central nervous system (CNS) degeneration and progress to a vegetative state. Death usually occurs before age 20 years, primarily from cardiopulmonary arrest due to airway obstruction and/or pulmonary infection . ICD 10—XIX Injury, poisoning, and certain other consequences of external causes In addition to various diseases such as tumors or infectious diseases, the collection also includes specimens from poisoning and trauma; in principle, such cases would have to be addressed as forensic medical cases. In Vienna, however, forensic medicine was part of pathological anatomy until 1875—a fact also reflected in the collection.
Accordingly, these specimens are particularly interesting from a sociohistorical point of view, as they can shed light on working conditions, e.g., in the period of industrialization, as well as on the development of preventive measures and protective equipment. The relation between diseases and work processes that involved handling certain chemicals or other substances now classified as hazardous to health (e.g., asbestos, X‑rays) was not recognized until the end of the 19th century (e.g., asbestosis). A variety of (anamnestically well documented) specimens housed at the pathologic-anatomical collection have proven their worth as comparative objects, for example in the context of determining causes of death. The PASW holds a large number of specimens with different bone injuries, including perimortem skull injuries , well-healed fractures, fractures showing complications such as nonunion or bone dislocations (e.g., acetabulum formation ), and birth trauma (e.g., ossified hematoma ). Such specimens are valued in the field of paleopathology as comparative objects for the identification of injuries or the reconstruction of an injury and healing process in (pre)historical human skeletal remains, which have often been strongly altered by taphonomic processes . Reference for paleopathological research questions Paleopathology is concerned with the analysis of the nature and frequency of disease- and injury-related lesions on human skeletal or mummified body remains from prehistoric and historic times. Although this subfield of bioanthropology has a long history , paleopathology has, as a young, dynamic discipline, made excellent progress over the past 40 years alongside methodological innovations and continuously improving techniques, establishing itself as an important branch of research in our discipline.
Paleopathology, like pathology, seeks to make a diagnostic statement on the basis of individual phenomena (symptoms) or symptom associations (syndromes) and—if possible—to record the course of the disease and healing process, the impairments to the quality of life, as well as therapeutic interventions. Since disease can be understood as a process of complex interaction between, among other things, individual disposition and different natural and sociocultural environments, paleopathology looks for evidence and traces in prehistoric human relics. It reflects the close interaction people have had with their environments and how this relationship has impacted their health . Furthermore, researching the history of diseases, their causes, and their course is also of immense importance for understanding the biological evolution of our own species. Written records, confirmed and supplemented by essential new research findings, attest, e.g., to epidemiological events with dramatic demographic consequences for population development. Today we know, for example, that the medieval plague pandemic in Europe (around 1350) claimed an estimated 25 million lives . In order to verify such dramatic events and to reconstruct the history of diseases in an evidence-based manner, experience in the assessment of tissue changes is required, which is often based on anamnestically documented and classified specimens, as they are available in many pathological anatomical collections worldwide. During the past 40 years, the scientific interest of paleopathologists as well as of physicians in objects of this kind has continuously increased, despite some critical concerns based on their nature—human bodily remains—and on historical collecting and acquisition strategies that did not meet all our current ethical views and standards.
Interestingly, the former director of the PASW, Alfons Portele, stated as early as 1982 that the dry osteological specimens kept at the PASW, continuously expanded by him through the integration of “otherwise lost” hospital and private collections, “will gain importance for medical research through the ever-increasing subject of paleopathology”. This interest in such a collection is most likely linked to concurrent technical innovations and methodological approaches (including, e.g., a variety of histological, µCT, and SEM [SE- and BSE-mode] techniques, geometric morphometrics, and 3D reconstructions), which opened the window not only for answering current research questions but also for diagnostic purposes and clinical use. New methodologies are also being developed to understand phenomena already present in the Neolithic, such as hydrocephalus . Especially the progress in aDNA (ancient DNA) analysis should be mentioned here, as it allows the extraction of complete genome data from historical and archaeological findings, from macerated skeletons, from wet specimens, and even from body stones, and thus enables the reliable diagnosis of metabolic diseases, neoplasms, or infectious diseases. It is now possible to identify not only the species of microbial organisms, such as viral and bacterial pathogens, but also their origin, their interactions (e.g., tuberculosis, leprosy), and their mutation rates through ancient time (e.g., ). This research field, which often uses the potential of pathological anatomical collections as a source for the extraction of pathogenic DNA, is significant for uncovering and better understanding current epidemiological events and other diseases. Presumably, this is not the endpoint of the promising DNA approach: taking the rapidly increasing number of papers dealing with microbiome research into account, we expect further insights into human health and disease in the upcoming years.
In this review, we give an overview of the history of the collection of the PASW, with more than 50,000 specimens, and of research based on these specimens. We discuss methodological aspects of the data basis as well as its added value for medical research. We address ethical issues and develop a perspective for the further growth, accessibility, and usability of the collection.
Methodological remarks
We assume that, for several reasons, our data with regard to publications are not complete. One reason lies in the object database of the PASW itself. While all curators aimed to link all publications on specimens to the objects, some publications came later or were not sent at all by the authors. The other sources are literature databases and, where possible, full-text searches. However, not all authors refer to the collection in a specific manner, and some ignored the object numbers. Cases were published without indicating the corresponding inventory numbers, which makes the statistical analysis of these publications difficult. One example is the set of specimens examined by Pumberger : "Nineteen specimens from a total of 34 fetuses with complex abdominal wall defects, preserved in the embryologic collection of the Federal Museum of Pathologic Anatomy (Vienna), were selected for examination by MRI [magnetic resonance imaging]." We will overcome this problem in the future by establishing PASW as the unequivocal name of the collection and by assigning permanent identifiers (PIDs) to each object using QR codes. Another difficulty is the categorization of the diseases. We decided to use the WHO's ICD-10 (International Statistical Classification of Diseases and Related Health Problems) rather than ICD-11 for Mortality and Morbidity Statistics, as the various papers focus on diseases rather than mortality. Autopsy reports, however, revealed that people may have had more than one disease.
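The ICD-10-based grouping of publications described above can be sketched as a simple tally. The chapter labels follow ICD-10 conventions, but the code selection and the publication records below are purely illustrative assumptions, not entries from the PASW database:

```python
from collections import Counter

# Illustrative mapping from assigned ICD-10 codes to chapter labels (toy subset).
ICD10_CHAPTERS = {
    "A15": "Certain infectious and parasitic diseases",  # respiratory tuberculosis
    "C50": "Neoplasms",                                  # malignant neoplasm of breast
    "Q79": "Congenital malformations",                   # incl. abdominal wall defects
}

def count_papers_by_chapter(papers):
    """Tally publications per ICD-10 chapter; unmapped codes go to 'unclassified'."""
    tally = Counter()
    for paper in papers:
        tally[ICD10_CHAPTERS.get(paper["icd10"], "unclassified")] += 1
    return tally

# Hypothetical publication records (titles invented for illustration).
papers = [
    {"title": "MRI of complex fetal abdominal wall defects", "icd10": "Q79"},
    {"title": "aDNA analysis of historic lung specimens", "icd10": "A15"},
    {"title": "Historic case report without inventory number", "icd10": "R69"},
]
```

Calling `count_papers_by_chapter(papers)` then yields one paper per mapped chapter and one `unclassified` entry, which is the structured overview used to organize the publication corpus.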
However, as with every classification system, there are overlaps, and some reports, especially historical ones, are difficult to match with modern medical terminology. Nevertheless, the classification system helped us to organize the papers in a readable and structured manner.
The relevance of the PASW for historical, current, and future research
The development of medical research is mirrored in the publications written on the basis of the PASW. The collection was established to describe anatomical changes of organs in relation to diseases and to use them as demonstration objects in the training of doctors. The development of new technical facilities such as microscopes allowed more detailed views not only of the macrostructure of organs but also of their microstructure. Rokitansky in particular developed the field further and established micropathology as a new approach, which allowed him to describe diseases in a process-based manner. With historical specimens and modern technologies, such as genetics or micro-computed tomography, new insights can be gained. For instance, tuberculosis played an important role in the mortality of the Viennese population; with lungs now being reinvestigated using modern molecular genetics, it can be revealed whether the bacteria were endemic to Vienna or reimported by travelers. Another big contribution of the PASW was to allow a more systematic approach to clustering and ordering diseases . This helped to structure diagnoses and to support the development of therapies. Accordingly, the use of authentic and historic objects for research is important for understanding the development of modern pathological systematics and how modern methods have enlarged our understanding of visible changes in organs such as the pancreas or pituitary gland , changes which can affect the whole human body and inhibit normal growth and development. However, there are also gaps in the collection, as there was never a systematic collection strategy.
Most of the objects were donations from hospitals, and, depending on the interests of the curators, specific parts of the collection grew. Although in some respects this is an advantage, as not all scientific questions and demands can be foreseen, it is also a shortcoming, as for specific diseases few or no objects are present. Current research has two clear foci: understanding the evolution of diseases and their genetic basis. Evolutionary medicine currently provides a deeper understanding of disease processes and their trade-offs, linked to the genetic basis of diseases, as genetics plays a role in nearly all diseases. Variations in human DNA, and individual differences in how that DNA is expressed depending on lifestyle and environmental factors such as nutrition, impact disease processes . The greater awareness of the genetic basis is also visible in the language, as one author analyses in her review. While the heredity of specific traits was rather easily visible, population genetics and evolution are more difficult to grasp. All the genetic data, linked with functional traits and familial or ethnic histories, provide a valuable resource for individualized medicine . However, as Benton et al. also stress, environmental conditions play a crucial role.
Ethical aspects
A recent guideline published by the German Museum Association highlights two ethical principles in addition to the careful and respectful handling of the specimens: first, there must be some kind of mutual agreement to collect, prepare, research, and exhibit the preparations, i.e., human remains; and second, a utilitarian principle, promising added value for the whole of society, mainly through their value for research . The first aspect refers to the origin of the specimens. The law in Austria differs from that in most other countries: while in Austria autopsies are allowed and the organs can be stored if the patient or the family of the deceased does not object, in other countries a more active approach to obtaining bodies is expected.
While the mutual agreement is currently documented by contracts with the person or his/her family, in former times it was an imperial law, enacted by Emperor Franz II./I. in 1811, which not only allowed but made it more or less mandatory for doctors to collect interesting specimens for teaching and scientific purposes. It also regulated the founding of so-called pathological cabinets to store and exhibit these specimens. This law was the outcome of a process started by Gerard van Swieten in the 18th century to establish pathological-anatomical collections to be used in teaching medical students. Ethics also comprises respectful treatment of the collection . This refers to the physical treatment of the preparations as well as their exhibition to different publics, for instance students or museum visitors. While Rokitansky had to dispose of some specimens due to their bad condition, there has been progress in storage methodologies (e.g., ), and the danger that parts of the collection have to be thrown away is much reduced. Ethics also refers to the origin of the specimens. For each case, provenance research is done, as is visible in claims such as: "The exhibits had been collected between 1840 and 1999; none of the specimens originated from the period of Austrian Fascism (1934 through 1938) or from the years of the Nazi regime in Germany and Austria (1938 through 1945) or in the Senate project of the University of Vienna (Angetter, 1998)."
New specimens for the PASW are rare, but new diseases appear: Ebola, SARS, and, last but not least, COVID-19. A collection strategy should be developed and new specimens acquired in order to fill the gaps and to save material for future analysis. With regard to COVID-19, it became clear very rapidly that it is not just a lung disease but that other organs, such as the heart or kidneys, are also affected. With fast progress in medicine, reference material may be revisited some years later, as is currently being done with specimens from the collections. With personalized medicine as a new trend, care should be taken that the collection is representative with regard to broad characteristics such as gender or blood group. Based on a careful ethical discussion, the specimens should become much more accessible to medical and evolutionary research. The basic information should be made available in a fair way. The Natural History Museum Vienna (NHMW) is currently developing a coherent database for all collections, with specific interfaces for different publics. In addition, specific specimens should be made available digitally . All specimens should receive permanent persistent identifiers, marked at the object (label) with a QR code.
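A minimal sketch of such a PID scheme, assuming a hypothetical resolver URL and registry layout (not the NHMW's actual infrastructure), could look like this; the URL minted here is what a printed QR label would encode:

```python
import uuid

# Hypothetical resolver; a real collection would use its own stable domain.
BASE_URL = "https://example.org/pasw/specimen/"

# Toy in-memory registry standing in for the collection database.
_registry = {}

def mint_pid(inventory_number, description):
    """Assign a permanent identifier to a specimen and record its metadata."""
    pid = str(uuid.uuid4())
    _registry[pid] = {"inventory": inventory_number, "description": description}
    return pid

def label_url(pid):
    """URL to encode in the QR code printed on the object label."""
    return BASE_URL + pid

def resolve(pid):
    """Look up the specimen record behind a scanned identifier."""
    return _registry.get(pid)
```

A QR image for `label_url(pid)` could then be generated with any standard QR library and attached to the physical label, so that a scan resolves to the object's database record.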
The collection has already helped to understand diseases, their origin and manifestation, and to promote better cures. Facing new insights based on evolutionary medicine, it becomes apparent that authentic physical specimens will continue to play a major role in our understanding of disease and health.
Establishment of Twinning Partnership to Improve Pediatric Radiotherapy Outcomes Globally | e1878288-b8dc-4242-89d5-0a1d30ee409a | 10881094 | Internal Medicine[mh]
Childhood cancer is prevalent throughout the world. With modern technologies and therapies, high-income countries (HICs) now report cure rates as high as 80% for children with cancer. However, low- and middle-income countries (LMICs) struggle with a lack of health care resources and infrastructure, resulting in upward of 90% of pediatric oncologic deaths occurring in these countries. As radiotherapy is a critical component of care for children with malignancies, improving quality of and access to pediatric radiotherapy services in LMICs is vital.
CONTEXT
Key Objective
How can institutions establish effective twinning partnerships between low- and middle-income countries (LMICs) and high-income countries (HICs) in pediatric radiotherapy?
Knowledge Generated
An effective twinning partnership requires the prioritization of LMIC goals and capabilities through virtual and in-country discussions, needs assessments, interactive training, and adapted resources. After in-country visits, plans for future virtual and in-country training, research, and mentorship are necessary for long-term success.
Relevance
The presented twinning experience in pediatric radiotherapy between Emory University and Tikur Anbessa Specialized Hospital may serve as a model for other LMIC and HIC institutions interested in establishing similar partnerships.
Ethiopia is a low-income country (LIC) in sub-Saharan Africa with a multitude of distinct ethnic groups, languages, and religions. It is home to an estimated 100 million people and is the second most populous country in Africa. Previous epidemiologic data from Ethiopia suggest that at least 64,000 new cases of cancer occur annually.
Despite the growing population and significant cancer burden, there are numerous challenges to the adequate delivery of radiotherapy in this country, where it is estimated that 70% of patients would benefit from radiation at some point in their disease course. Tikur Anbessa Specialized Hospital (TASH) was established in Addis Ababa, Ethiopia, in 1961 and currently treats over 500,000 outpatients and 21,000 inpatients annually. TASH is currently the largest hospital and referral center in Ethiopia. It is home to one of three functioning linear accelerators in the country, which treats over 1,700 patients annually. The School of Medicine at TASH was established in 1972 under Addis Ababa University (AAU) and educates many medical students and residents, including 36 clinical oncology residents. Here, we present our experience establishing a partnership between TASH (LIC institution) and Emory University (HIC institution). Collaboration between institutions in LMICs and HICs has been shown to be effective in improving oncologic treatment outcomes and is recommended by the WHO. However, it is difficult to create and sustain such affiliations, and literature regarding pediatric radiotherapy twinning partnerships remains scarce.
Residents and faculty from various specialties at Emory University had been traveling to Addis Ababa, Ethiopia, for month-long clinical rotations at TASH since 2012, including one radiation oncology resident in 2018. The previous radiation oncology resident focused his time on training in head and neck contouring for intensity-modulated radiotherapy (IMRT) because of the recent installation of a linear accelerator at TASH. Despite these previous visits, a long-term relationship had not been created with the radiotherapy department at this institution. Notably, before installation of the linear accelerator, few pediatric patients were being treated with radiotherapy at TASH, attributable to unacceptably high toxicities with 2D techniques and a lack of anesthesia capability in the radiotherapy department. Because of the long-standing relationship between Emory and TASH in other medical specialties, one radiation oncology resident and one pediatric radiotherapy faculty member set out to establish a twinning collaboration with TASH in pediatric radiotherapy in 2022-2023. Prioritization of pediatric radiotherapy was supported by the leadership at TASH because of their understanding of the basics of IMRT treatment planning and their desire to meet the goals of the WHO initiative for childhood cancer. Institutional funding for travel and accommodations in Addis Ababa was secured through the Emory Global Health Residency Scholars Program, with additional aid from the American College of Radiation Oncology (ACRO). After securing funding, the attending-resident team developed a partnership, completed a needs assessment, and created resources in preparation for the in-person visit to TASH.
The team then traveled to Ethiopia for a month-long visit to TASH, during which they delivered didactic lectures, conducted interactive training, and adapted resources. Upon return to the HIC, plans for future collaboration were established.
Partnership Development
Emory and TASH radiotherapy faculty and residents initially established correspondence virtually via email and video 5 months before the expected in-country visit. Clinical faculty, physics faculty, dosimetry faculty, and clinical residents were included in the correspondence. TASH faculty and residents were encouraging and supportive of partnership development. Communication challenges included different time zones, scheduling around multiple faculty members, poor Internet connectivity, and language barriers.
Needs Assessment and Gap Analysis
To identify goals of collaboration, a pediatric radiotherapy needs assessment survey was developed by the Emory faculty and the resident (Fig ). Needs assessment questions were completed by radiation oncology clinical faculty members at TASH. Initial questions focused on current general departmental radiotherapy needs, including the number of radiotherapy machines, brachytherapy capability, and staff capacity. Additional questions centered on details of pediatric oncology practices at TASH, including the number of pediatric radiotherapy patients treated, availability of pediatric-trained radiotherapy staff, anesthesia capabilities, and challenges and priorities for pediatric radiotherapy at TASH. Following completion of the questionnaire, team members met virtually to discuss the results. The needs assessment indicated that TASH houses one functioning linear accelerator using 6 megavolt (MV) and 16 MV energies. Treatments have been delivered on this machine since November 2020. Capabilities include 3-dimensional conformal radiotherapy, IMRT, volumetric modulated arc therapy, and electron therapy.
There are also two cobalt-60 machines; however, typically one or both are nonfunctioning. One high-dose-rate brachytherapy machine with a cobalt source is available for gynecological treatments. There are no other radiotherapy services available at TASH. There are seven clinical oncology faculty at TASH who serve as both radiation and medical oncologists. Four medical physicists and eight radiation therapists are employed in the department. Thirty-six clinical oncology residents are currently in training at AAU. One hundred thirty to 150 patients are treated with radiotherapy daily at TASH. Greater than 70% of both adult and pediatric cases are treated with palliative intent, as the majority of patients present with advanced disease. Pediatric patients with curable disease are prioritized for radiotherapy. The pediatric hematology and oncology department provides chemotherapy services at TASH, with additional outpatient chemotherapy services available at a satellite location, Amestengha. Over 900 pediatric patients with cancer (0-18 years old) are seen or treated per month. Pediatric patients are seen or treated in the TASH inpatient ward (50-60 patients per month), the TASH pediatric emergency ward (80-100 patients per month), and at Amestengha (750-850 patients per month). Six thousand to 8,000 patients with pediatric cancer are seen at TASH annually. More than 900 of these patients are new diagnoses, most commonly including Hodgkin lymphoma, medulloblastoma, Wilms tumor, neuroblastoma, rhabdomyosarcoma, and palliative cases. Pediatric patients come to TASH from all geographic regions in the country and sometimes from neighboring countries; however, the majority are from areas near Addis Ababa because of socioeconomic barriers. One clinical oncology faculty member, one medical physicist, and one radiation therapist received pediatric-specific training at the Children Center Hospital, Egypt.
Anesthesia services are available for pediatric patients; however, challenges include coordination and equipment availability. Pediatric multidisciplinary tumor conferences occur weekly with medical oncology, radiation oncology, surgical oncology, radiology, pathology, and residents. Radiotherapy protocols from the Children's Oncology Group and the International Society for Pediatric Oncology (SIOP) are used routinely and are chosen on the basis of the disease presentation and availability of resources. Challenges with pediatric radiotherapy at TASH (Table ) include the lack of radiotherapy machines available for use in the country. Difficulty accessing existing radiotherapy machines and frequent machine downtime result in significant treatment delays and interruptions. There is a shortage of pediatric immobilization devices and inadequate on-board imaging, preventing swift and accurate treatments. Furthermore, there is a shortage of pediatric radiotherapy-trained and dedicated faculty and staff to develop pediatric-specific protocols and guidelines. Infrastructure and supportive resources that are needed include child-friendly waiting and consultation rooms, play therapy, psychosocial support, nutrition support, and anesthesia support. Many patient families travel from remote parts of Ethiopia for treatment and struggle with transportation and lodging. Difficulties with care coordination between disciplines make it challenging to implement combination treatment protocols and to time radiotherapy appropriately. There is poor coordination with palliative care and pediatric oncology for toxicity management, supportive care measures, and disease surveillance. Pediatric radiotherapy goals for TASH included (1) mentorship and training for professionals, (2) strengthening of multidisciplinary teams, (3) creation of clinical care pathways, (4) expansion of resources and infrastructure, and (5) development of organized and specialized pediatric oncology services and processes (Fig ).
Action Planning
After the needs assessment and goal delineation, the Emory and TASH teams created outlines and resources to prepare for the in-country visit. The schedule included an introduction and orientation to the TASH radiotherapy department, didactic lectures, and interactive training (Table ). Didactic lecture topics chosen were commonly seen pediatric radiotherapy cases identified in the needs assessment, including Wilms tumor, medulloblastoma, rhabdomyosarcoma, Hodgkin lymphoma, and palliative radiotherapy. Interactive training focused on the highest-impact pediatric malignancies and available cases.
Didactic Lectures
The Emory team arrived in Addis Ababa in February 2023 and delivered five didactic lectures over the course of 2 weeks to faculty and residents in the radiotherapy department (Data Supplement). Lectures focused on evidence-based guidelines and protocols adapted for the LMIC setting. After each lecture, there was an opportunity for discussion between Emory and TASH team members regarding specific processes and challenges at TASH for each disease. These discussions informed the interactive training with the TASH team and future initiatives.
Interactive Training
The Emory team observed radiotherapy simulation, treatment planning, and radiotherapy delivery at TASH while in-country. Interactive training of residents and radiation therapists in pediatric computed tomography (CT) simulation was completed on the basis of patient cases available during the visit, including Wilms tumor, rhabdomyosarcoma, and Hodgkin lymphoma. Direct observation and feedback were provided regarding patient setup, use of immobilization devices, isocenter placement, and CT imaging processes. Treatment planning and plan evaluation training sessions were led by the Emory team for groups of five to eight AAU residents at a time. Wilms tumor was chosen as the focus of these training sessions. Multiple Wilms cases were prepared by TASH residents.
Contouring, field placement, treatment planning, and plan evaluation of these cases were completed by the Emory team in an interactive fashion. After these sessions, TASH residents independently completed decision making, treatment planning, and plan evaluation for additional Wilms cases and presented them to the Emory team for review.
Adapted Resources
Clinical care pathways and standard operating procedures (SOPs) for Wilms tumor and craniospinal irradiation (CSI) were adapted from Emory resources (Data Supplement). Emory resources were chosen to facilitate collaboration between Emory and TASH team members. The Wilms tumor clinical care pathway focused on the ideal CT simulation setup and treatment planning guidelines. The CSI SOP was created in collaboration with Emory and TASH medical physicists and dosimetrists, with a focus on the details of treatment planning and delivery goals.
Future Partnership
Before departure from Ethiopia, the Emory and TASH teams established goals for continued collaboration abroad and plans for the next in-person visit. Notably, virtual peer-review sessions were established on a monthly basis to review pediatric radiotherapy cases and maintain regular contact. Contact information for Emory experts in pediatric radiotherapy was distributed to the TASH team to facilitate future virtual discussion of complex cases. Furthermore, one faculty member and one resident at TASH presented data pertaining to the Emory-TASH collaboration at an international conference after the visit. TASH faculty and trainees were also invited to Emory for an in-person visit for additional training. Emory team members planned for yearly in-person visits to TASH, as funding permits. Evaluation of the twinning partnership will occur on an annual basis through discussion of progress toward Emory-TASH pediatric radiotherapy goals (Fig ), with an opportunity to modify or add new goals.
Impact Measurement
All clinical oncology faculty members at TASH who deliver radiotherapy (n = 5) received pediatric radiotherapy didactic training from an expert in the field, compared with only a single faculty member having received pediatric-specific training previously. All clinical oncology residents at TASH rotating on radiation oncology (n = 8) during the in-country visit received pediatric radiotherapy-specific didactic and interactive training. Notably, all residents had never or only minimally (0-5 cases) contoured pediatric cases, placed fields for three-dimensional (3D) Wilms cases, or placed fields for 3D CSI cases. After the Emory team visit, all participating clinical oncology residents had completed six new pediatric radiotherapy cases with the guidance of an expert in the field. Two cases included placement of fields for Wilms tumor, and two cases included placement of fields for CSI. Emory and TASH teams have participated in two virtual pediatric peer-review sessions to date. During these sessions, four pediatric cases were reviewed, with minor or major changes made to all cases.
Costs
There was no cost for virtual partnership activities. Costs of the in-country visit included the price of round-trip flights for the two Emory team members, in addition to 1 month of hotel accommodations and food. There were no costs for TASH team members; however, significant time investment was required for participation in didactic and interactive training.
To identify goals of collaboration, a pediatric radiotherapy needs assessment survey was developed by the Emory faculty and the resident (Fig ). Needs assessment questions were completed by radiation oncology clinical faculty members at TASH. Initial questions focused on current general departmental radiotherapy needs, including the number of radiotherapy machines, brachytherapy capability, and staff capacity. Additional questions centered on details of pediatric oncology practices at TASH, including the number of pediatric radiotherapy patients treated, availability of pediatric-trained radiotherapy staff, anesthesia capabilities, and challenges and priorities for pediatric radiotherapy at TASH. Following completion of the questionnaire, team members met virtually to discuss the results. Results of the needs assessment indicated that TASH houses one functioning linear accelerator using 6 megavolt (MV) and 16 MV energies. Treatments have been initiated on this machine since November 2020. Capabilities include 3-dimensional conformal radiotherapy, intensity-modulated radiotherapy (IMRT), volumetric modulated arc therapy, and electron therapy. There are also two cobalt-60 machines; however, typically one or both are nonfunctioning. One high-dose-rate brachytherapy machine with a cobalt source is available for gynecological treatments. There are no other radiotherapy services available at TASH. There are seven clinical oncology faculty members at TASH who serve as both radiation and medical oncologists. Four medical physicists and eight radiation therapists are employed in the department. Thirty-six clinical oncology residents are currently in training at Addis Ababa University (AAU). One hundred thirty to 150 patients are treated with radiotherapy daily at TASH. Greater than 70% of both adult and pediatric cases are treated with palliative intent, as the majority of patients present with advanced disease. Pediatric patients with curable disease are prioritized for radiotherapy.
The pediatric hematology and oncology department provides chemotherapy services at TASH, with additional outpatient chemotherapy services available at a satellite location, Amestengha. Over 900 pediatric patients with cancer (0-18 years old) are seen or treated per month. Pediatric patients are seen or treated in the TASH inpatient ward (50-60 patients per month), the TASH pediatric emergency ward (80-100 patients per month), and at Amestengha (750-850 patients per month). Six thousand to 8,000 pediatric patients with cancer are seen at TASH annually. More than 900 of these patients are new diagnoses, most commonly Hodgkin lymphoma, medulloblastoma, Wilms tumor, neuroblastoma, and rhabdomyosarcoma, in addition to palliative cases. Pediatric patients come to TASH from all geographic regions in the country and sometimes neighboring countries; however, the majority are from areas near Addis Ababa because of socioeconomic barriers. One clinical oncology faculty member, one medical physicist, and one radiation therapist received pediatric-specific training at the Children Center Hospital, Egypt. Anesthesia services are available for pediatric patients; however, challenges include coordination and equipment availability. Pediatric multidisciplinary tumor conferences occur weekly with medical oncology, radiation oncology, surgical oncology, radiology, pathology, and residents. Radiotherapy protocols from the Children's Oncology Group and the International Society for Pediatric Oncology (SIOP) are used routinely and are chosen on the basis of the disease presentation and availability of resources. Challenges with pediatric radiotherapy at TASH (Table ) include the lack of radiotherapy machines available for use in the country. Difficulty accessing existing radiotherapy machines and frequent machine downtime result in significant treatment delays and interruptions.
There is a shortage of pediatric immobilization devices and inadequate on-board imaging, preventing swift and accurate treatments. Furthermore, there is a shortage of pediatric radiotherapy–trained and dedicated faculty and staff to develop pediatric-specific protocols and guidelines. Infrastructure and supportive resources that are needed include child-friendly waiting and consultation rooms, play therapy, psychosocial support, nutrition support, and anesthesia support. Many patient families travel from remote parts of Ethiopia for treatment and struggle with transportation and lodging. Difficulties with care coordination between disciplines make it challenging to implement combination treatment protocols and to appropriately time radiotherapy. There is poor coordination with palliative care and pediatric oncology for toxicity management, supportive care measures, and disease surveillance. Pediatric radiotherapy goals for TASH included (1) mentorship and training for professionals, (2) strengthening of multidisciplinary teams, (3) creation of clinical care pathways, (4) expansion of resources and infrastructure, and (5) development of organized and specialized pediatric oncology services and processes (Fig ). After needs assessment and goal delineation, the Emory and TASH teams created outlines and resources to prepare for the in-country visit. The schedule included introduction and orientation to the TASH radiotherapy department, didactic lectures, and interactive training (Table ). Didactic lecture topics were chosen from the commonly seen pediatric radiotherapy cases identified in the needs assessment, including Wilms tumor, medulloblastoma, rhabdomyosarcoma, Hodgkin lymphoma, and palliative radiotherapy. Interactive training focused on the highest-impact pediatric malignancies and available cases. The Emory team arrived in Addis Ababa in February 2023.
The Emory team delivered five didactic lectures over the course of 2 weeks to faculty and residents in the radiotherapy department (Data Supplement). Lectures focused on evidence-based guidelines and protocols adapted for the LMIC setting. After each lecture, there was an opportunity for discussion between Emory and TASH team members regarding specific processes and challenges at TASH for each disease. Discussion informed interactive training with the TASH team and future initiatives. The Emory team observed radiotherapy simulation, treatment planning, and radiotherapy delivery at TASH while in-country. Interactive training of residents and radiation therapists for pediatric computed tomography (CT) simulation was completed on the basis of patient cases available during the visit, including Wilms tumor, rhabdomyosarcoma, and Hodgkin lymphoma. Direct observation and feedback were provided regarding patient setup, use of immobilization devices, isocenter placement, and CT imaging processes. Treatment planning and plan evaluation training sessions were led by the Emory team for groups of five to eight AAU residents at a time. Wilms tumor was chosen as the focus of these training sessions. Multiple Wilms cases were prepared by TASH residents. Contouring, field placement, treatment planning, and plan evaluation of these cases were completed by the Emory team in an interactive fashion. After these sessions, TASH residents independently completed decision making, treatment planning, and plan evaluation for additional Wilms cases and presented them to the Emory team for review. Clinical care pathways and standard operating procedures (SOP) for Wilms tumor and craniospinal irradiation (CSI) were adapted from Emory resources (Data Supplement). Emory resources were chosen to facilitate collaboration between Emory and TASH team members. Wilms tumor clinical care pathways focused on the ideal CT simulation setup and treatment planning guidelines.
The CSI SOP was created in collaboration with Emory and TASH medical physicists and dosimetrists, with a focus on details of treatment planning and delivery goals. Before departure from Ethiopia, the Emory and TASH teams established goals for continued collaboration and plans for the next in-person visit. Notably, virtual peer-review sessions were established on a monthly basis to review pediatric radiotherapy cases and maintain regular contact. Contact information for Emory experts in pediatric radiotherapy was distributed to the TASH team to facilitate future virtual discussion of complex cases. Furthermore, one faculty member and one resident at TASH presented data pertaining to the Emory-TASH collaboration at an international conference after the visit. TASH faculty and trainees were also invited to Emory for an in-person visit for additional training. Emory team members planned for yearly in-person visits to TASH, as funding permits. Evaluation of the twinning partnership will occur on an annual basis through discussion of progress toward Emory-TASH pediatric radiotherapy goals (Fig ). There will be an opportunity to modify or add new goals. All clinical oncology faculty members at TASH who deliver radiotherapy (n = 5) received pediatric radiotherapy didactic training from an expert in the field, compared with only a single faculty member receiving pediatric-specific training previously. All clinical oncology residents at TASH rotating on radiation oncology (n = 8) during the in-country visit received pediatric radiotherapy–specific didactic and interactive training. Notably, all residents had never or only minimally (0-5 cases) contoured pediatric cases, placed fields for three-dimensional (3D) Wilms cases, or placed fields for 3D CSI cases. After the Emory team visit, all participating clinical oncology residents had completed six new pediatric radiotherapy cases with the guidance of an expert in the field.
Two cases included placement of fields for Wilms tumor and two cases included placement of fields for CSI. The Emory and TASH teams have participated in two virtual pediatric peer-review sessions to date. During these sessions, four pediatric cases were reviewed, with minor or major changes made to all cases. There was no cost to the virtual partnership activities. Costs of the in-country visit included the price of round-trip flights for the two Emory team members, in addition to 1 month of hotel accommodations and food. There were no costs for TASH team members; however, there was a significant time investment required for participation in didactic and interactive training. Progress in pediatric radiation oncology is the result of the efforts of collaborators around the globe. We identified strategies to improve collaboration between radiotherapy institutions in HICs and LMICs to improve childhood cancer outcomes internationally. After comprehensive preparation and needs assessment, we successfully developed a twinning partnership between TASH and Emory University with the goal of sustainable enhancement of pediatric radiotherapy outcomes in Ethiopia. This collaborative relationship may be replicated at other institutions. Previous literature regarding pediatric radiotherapy in LMICs focuses on epidemiology, barriers to radiotherapy delivery, and patterns of care. Data regarding solutions to these challenges are limited; however, institutional partnerships between radiotherapy departments represent one such avenue. Twinning partnerships have been effectively implemented in other medical specialties, including emergency medicine, psychiatry, and infectious diseases. However, to our knowledge, this is the first outline of a formal implementation in pediatric radiation oncology. The International Atomic Energy Agency supports the use of twinning partnerships to improve pediatric radiotherapy delivery in LMICs because of the excessive variation in pediatric cancer outcomes.
Barriers to the creation of successful twinning partnerships exist in both HICs and LMICs. Notable challenges with communication among collaborators include time zone differences, language barriers, cultural practices, and scheduling difficulties. The remote environment has eased some of these obstacles through virtual meetings and telehealth tools. Collaborators in LMICs often struggle with lack of resources, poor health care infrastructure, high patient volume, minimal ancillary support, ethical challenges, political uncertainty, and provider burnout. Many of these factors cannot be resolved immediately. Collaborators in HICs often struggle with difficulty in procurement of funding for global health endeavors, minimal postgraduate training in global health competencies, unfavorable attitudes toward global health among departmental leaders, and difficulty adapting HIC guidelines to LMIC contexts. Improving access to funding, training, and mentorship in global health among HIC providers would facilitate successful twinning partnerships. Global health partnerships between HICs and LMICs are often criticized for lack of sustainable and meaningful change. For example, HIC collaborators may export novel technology without providing education to LMIC providers regarding its proper indications and use. Furthermore, HIC providers may use LMIC relationships to fill their curricula vitae with publications, rather than prioritizing LMIC needs. A limitation of our experience is the lack of long-term follow-up to provide an in-depth assessment of impact. Monthly peer-review sessions enable identification of the number of changes made to pediatric radiotherapy plans as a result of the partnership. Additional areas of interest include measurement of patient-specific outcomes, including survival and toxicities, and measurement of changes made to institutional pediatric oncology practices.
Our twinning experience provides guidelines for prioritizing LMIC goals through an initial needs assessment and virtual discussions with LMIC teams. Time spent in-country should be focused on adaptive training for LMIC providers. LMIC collaborators must take ownership of academic endeavors. Furthermore, routine communication must be continued after the in-country visit to create sustainable improvements. With the support of HIC collaborators, two providers at TASH were able to submit data to international conferences after Emory's visit. Virtual Emory-TASH peer-review sessions encourage continued collaboration and open communication. Collaboration between HICs and LMICs may provide opportunities to improve childhood cancer outcomes globally. Prioritizing LMIC goals and capabilities through discussion, needs assessment, and adapted resources is essential for an effective twinning partnership. Our experience may serve as a model for other centers interested in establishing similar partnerships.
The care situation of systemic lupus erythematosus in Rheinland-Pfalz and Saarland

Systemic lupus erythematosus (SLE) is an autoimmune disease with a clinically heterogeneous course that is associated with great suffering for those affected as well as high socioeconomic costs. Although it is well established that early diagnosis and adequate medical care are essential for a mild disease course, there is neither a sufficient number of rheumatologists across the country nor current figures and data on the care situation of those affected in Germany. The present study collected data on the care situation of SLE in Rheinland-Pfalz and Saarland. In principle, SLE can affect any organ. It is considered "probably the most clinically and serologically diverse of all autoimmune rheumatic diseases". Patients frequently report arthritis (about 90 %), skin manifestations (80 %), or fever (78 %), while so-called lupus nephritis with secondary chronic renal failure, which occurs in about 50 % of cases, is often decisive for the course and prognosis. Fatigue, a form of excessive chronic tiredness, is one of the most frequent and burdensome symptoms, with a prevalence of up to 92 % among lupus patients. The variability of SLE often complicates diagnosis and delays therapy, which in turn leads to a longer and frequently more severe course of suffering for those affected. In an analysis of health insurance data covering 3 million insured persons, Schwarting et al. detected an increasing prevalence, most recently 0.056 % in 2014. According to a review by Albrecht et al., prevalence estimates ranged between 0.037 % and 0.14 %. Women are affected up to nine times as often as men.
The disease is almost always accompanied by a severely reduced quality of life and, in most cases, a reduced life expectancy. Although great progress has been made in the understanding and treatment of SLE in recent decades, the health-related quality of life of SLE patients today is as low as that of patients with coronary heart disease or end-stage COPD. The cause of SLE is not fully understood; its etiology is multifactorial. The antimalarial medication (AMM) hydroxychloroquine (HCQ) has become established as basic therapy and, with few exceptions, is recommended for every SLE patient. Glucocorticoids (GC) continue to play a central role in acute treatment. To spare GC in the long term, immunosuppressants (IS) such as azathioprine (AZA), mycophenolate mofetil (MMF), or methotrexate (MTX) can be used. Two biologics are approved as add-on therapy for SLE (belimumab since 2011 and anifrolumab since 2022). The 10-year survival rate of lupus patients today exceeds 90 %. In the first years of disease, infections resulting from the immunosuppression caused by disease and therapy play the main role, until they are superseded after about five years by cardiovascular complications; the accumulated chronic damage then outweighs disease activity. With regard to the care of SLE patients, Germany suffers from a massive shortage of rheumatologists. According to calculations, 45 % of the required office-based rheumatologists and 17.5 % of the required rheumatology beds are lacking. Despite the high individual and societal burden of SLE, there is a lack of robust health services research data, both in general and specifically for Germany.
Since international research findings often cannot readily be transferred from one country to another, data collected in Germany are of particular value. This study provides an overview of the care situation of lupus patients in Rheinland-Pfalz and Saarland, in particular with respect to disease-relevant items such as the number of patients treated, main symptoms, medication, and remission. Data were collected from August 2020 to April 2021. Questionnaires were sent to rheumatologists, nephrologists, neurologists, dermatologists, and general practitioners in Rheinland-Pfalz and Saarland. The questionnaires were initially sent by fax and via the newsletter of the Association of Statutory Health Insurance Physicians of Rheinland-Pfalz (KV-RLP) to 1546 recipients, who could reply either by e-mail or by fax. Recipients were identified from April to October 2020 through an Internet search, separately by specialty and by practice or hospital. For this purpose, the physician directories of the respective associations of statutory health insurance physicians were used, as well as various physician search websites such as Jameda and Medfuehrer. Because of the low return of only 28 responses (1.8 %) in this first round, the questionnaire was shortened to one page containing the ten highest-priority questions and was subsequently digitized, with completion remunerated. Items queried included the number of SLE patients treated, main symptoms, therapy regimens, remission, comorbidities, and GC doses. A free-text field allowed suggestions for improvement. This digital questionnaire was sent by e-mail in three rounds in the first quarter of 2021 to 1219 practices (office-based specialists) and centers (hospitals with affiliated outpatient clinics). The return was 118 responses (9.7 %), of which 57 came from physicians treating SLE patients (48.3 %). In total, feedback was provided on 635 patients.
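The reported response rates follow directly from the counts given above; a minimal illustrative sketch (not part of the study) recomputing them:

```python
# Illustrative sketch: recompute the response rates reported in the text
# from the underlying counts.
def response_rate(responses: int, contacted: int) -> float:
    """Response rate as a percentage, rounded to one decimal place."""
    return round(100 * responses / contacted, 1)

# First round (fax / KV-RLP newsletter): 28 of 1546 recipients replied.
first_round = response_rate(28, 1546)
# Digital rounds (Q1/2021): 118 of 1219 practices and centers replied.
digital_rounds = response_rate(118, 1219)
# Share of the 118 respondents who treat SLE patients: 57.
share_with_sle = response_rate(57, 118)

print(first_round, digital_rounds, share_with_sle)  # 1.8 9.7 48.3
```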
Data were analyzed descriptively to obtain an overview of the SLE care situation in the two federal states. A total of 163 completed questionnaires out of 1546 sent entered the analysis, distributed as follows: 4 rheumatology centers, 4 rheumatology practices, 7 nephrology practices, 1 dermatology center, 14 dermatology practices, 5 neurology centers, 8 neurology practices, and 120 general practices. Response rates in the final mailing round varied from about 8 % (nephrology) to 12 % (dermatology). Of the 163 respondents, 85 treated a total of 635 patients with SLE in their institutions (see Fig. ), 457 of them in Rheinland-Pfalz and 178 in Saarland. Female patients (84 %) were on average 50.8 years old; male patients (16 %) were on average 53.2 years old, corresponding to a female-to-male ratio of 5:1. Because the questionnaires were anonymous, it cannot be ruled out that some patients were treated by several responding specialists and were therefore documented in more than one patient collective.

Main symptoms
Among the reported main symptoms, averaged across the specialists, arthralgia (64 %) and fatigue (61 %) dominated, followed by myalgia (42 %), skin changes (38 %), and Raynaud's phenomenon (35 %). It was notable that some specialists mainly described the symptoms of their own specialty. The neurologists particularly often reported myalgia and arthralgia in addition to neurological and neuropsychiatric symptoms. The responding general practitioners reported main-symptom frequencies roughly in line with the average of the other specialists.

Drug treatment and therapeutic success
Analysis of all physicians' responses shows that the pharmacological focus was primarily on AMM and GC.
This broadly corresponds to the recommendations in the literature regarding flare and maintenance therapy. The rheumatologists differed markedly from the other specialists in their use of AMM. While the rheumatologists used antimalarials in 81 % of their patients on average, only 35 % of the SLE patients of the other specialist groups received them. In dermatology in particular, AMM were prescribed to a very small proportion of patients (27 %). These figures fall far short of the targeted use of AMM in up to 100 % of patients. On the other hand, 4 % of patients received more than 10 mg/d prednisolone equivalent (PE) long-term and 22 % received between 5 and 10 mg/d, meaning that 26 % of patients chronically exceed the recommended maximum of 5 mg PE per day. Immunosuppressants to spare GC, by contrast, were prescribed only sparingly overall (on average, 19 % received MTX, 14 % AZA, 11 % MMF, and 18 % belimumab). Notably, MTX and belimumab in particular were prescribed mainly by rheumatologists (22 % MTX, 24 % belimumab). Under the reported therapy, 76 % of patients were in remission (defined here as "no symptoms"), among them 5 % free of therapy, 64 % under immunosuppressive therapy, and 8 % under cortisone monotherapy. An overview of main symptoms, drug therapy, and remission is given in Tab. . Differentiation between centers and practices was limited by the fact that only three patients from centers were under non-rheumatological care. The comparison of main symptoms and drug therapy was therefore made only between rheumatologically treated patients from practices versus rheumatology centers. It shows that fatigue in particular is seen considerably more often in centers (see Tab. ).
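The glucocorticoid figures above combine two long-term dose bands; a minimal sketch of that arithmetic (the band labels are ours, not questionnaire items):

```python
# Illustrative arithmetic: shares of all patients in the two reported
# long-term glucocorticoid dose bands (band labels are ours).
gc_dose_bands_percent = {
    "more than 10 mg/d prednisolone equivalent": 4,
    "5 to 10 mg/d prednisolone equivalent": 22,
}

# Patients chronically above the recommended ceiling of 5 mg/d:
above_recommended = sum(gc_dose_bands_percent.values())
print(above_recommended)  # 26
```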
Regarding therapy, all rheumatologically treated patients received AMM comparatively often, in practices even more often than in centers. After AMM, interestingly, belimumab was the most frequently used agent in the centers, ahead of MTX. GC use, by contrast, was markedly lower in centers.

Comorbidities
The most frequently reported comorbidities were fibromyalgia syndrome (FMS, 26 %), depression (24 %), and cardiovascular damage (21 %). The prevalence of osteoporosis, at 10 %, was roughly in line with the average of the German population. The specialist groups differed most with respect to FMS, which 39 % of neurologists, 57 % of nephrologists, and 63 % of rheumatologists ticked as a frequent comorbidity. Strongly correlated with this, and deviating from FMS only among the neurologists (8 %), these specialists also reported increased depression among their SLE patients. Among the dermatologists, only 13 % reported FMS and depression as comorbidities of their SLE patients. The rheumatologists, by contrast, sometimes differed markedly from their colleagues in the comorbidities reported: cardiovascular comorbidities stood out clearly at 75 %, as did anemia, osteoporosis, and obesity at 38 % each.

Assessment of the care situation
When asked to rate the care situation of patients with SLE on a German school-grade scale from one (best) to six (worst), physicians in Rheinland-Pfalz gave an average grade of 3.2, while physicians in Saarland gave an average grade of 2.8. Physicians from practices gave an average grade of 3.1, those from centers 3.0.

Free-text suggestions for improving the care situation
The free-text field for suggestions for improvement was left empty by 43 % of respondents. Of the comments provided, 3 % could be rated as positive. 50 % dealt with the shortage of rheumatologists and the difficulty of obtaining an appointment with a rheumatologist. A total of 17 % wished for more training or education, 12 % for more networking and communication, and 18 % of comments suggested other improvement measures.
The present study is the first to collect targeted information on the care situation of patients with SLE in Rheinland-Pfalz and Saarland. The data collected provide valuable insights into epidemiology, symptoms, therapy, and treatment success from the perspective of practicing physicians. The ratio of female to male SLE patients in the returned questionnaires was 5:1. This roughly corresponds to the ratio of 4:1 determined by Brinks et al. (2014) in their study of German SLE patients but falls considerably below the ratio of 9:1 frequently reported in the literature. Among the main symptoms, arthralgia and fatigue dominated at roughly 60 % each, averaged across specialists; about 38 % of patients showed skin changes. In the literature, musculoskeletal symptoms and fatigue in particular are reported considerably more frequently, at up to about 90 % depending on the study.
The rheumatologists' responses approached these figures, with 69 % arthralgia and 66 % fatigue. Overall, however, no trend toward markedly higher symptom rates among rheumatologically treated patients can be established. Renal involvement was reported for 14 % of the patients in the surveyed collective. This figure is well below the 22 % determined in another surveyed German collective by Fischer-Betz et al. Fever/weakness was reported by 24 % of patients in the surveyed collective, markedly less than in a 200-patient collective of Sloan et al. (2020), in which about 78 % reported prolonged fever. Raynaud's phenomenon was mentioned comparatively often at 35 %, but did not reach the frequency in a cohort of Nyman et al. (2020) (about 52 %). Overall, a focus on each physician's own specialty is apparent. This may cause some symptoms outside that specialty, such as skin changes in patients not under dermatological care, to be overlooked or disregarded. Regarding drug treatment, it should first be noted that the recommendations in the literature largely agree on basic therapy: in principle, every SLE patient should receive antimalarials. Glucocorticoids are highly important during flares, but in the long term they are predominantly harmful and should be spared wherever possible through the use of immunosuppressants or tapering attempts. In the collective surveyed here, the use of AMM was below average compared with the literature. In an analysis of data from the 2018 Kerndokumentation, 67 % of all SLE patients took AMM, a figure exceeded among the respondents only by the rheumatologists, at 81 %. The other specialists prescribed AMM to only 35 % of their patients on average. The use of glucocorticoids, the second major pillar, in 46 % of patients on average also meets expectations.
Dieser unterschritt jedoch den in der Kerndokumentation beschriebenen Wert von 62 % und auch den anderer Kohorten von bis zu 88 % . Es könnte der vermehrte Einsatz von Immunsuppressiva sein, der den Rheumatologen den unterdurchschnittlichen Einsatz sehr hoch dosierter GC (≥ 20 mg/d) von 1 % (verglichen mit 4 % über alle Fachärzte hinweg) ermöglichte. Dieser wiederum war unter den Nephrologen mit 36 % um ein Vielfaches erhöht. Eine mögliche Erklärung hierfür könnte im hohen Anteil renaler Schübe liegen, wobei die in der Literatur empfohlene Therapie aus MMF und/oder niedrig dosiertem Cyclophosphamid besteht und GC möglicherweise nicht erforderlich sind . Ein insgesamt vergleichsweise hoher Einsatz von Immunsuppressiva durch die Nephrologen könnte die Aktivität der SLE-Manifestation unterstreichen. Dieser war in der Dermatologie nicht zu sehen. Der Einsatz von Immunsuppressiva lag in dieser fachärztlichen Gruppe weit unter dem Durchschnitt, während 78 % der Patienten Glukokortikoide verschrieben wurden. Ein möglicher Grund könnte der routinierte Umgang mit GC-haltigen Externa in der Dermatologie sein. Der dauerhafte (länger als sechs Monate erfolgte) Einsatz höher dosierter Glukokortikoide (> 5 mg/d) lag im Durchschnitt aller Fachärzte bei 26 %. Auffällig ist, dass die Rheumatologen als einzige fachärztliche Gruppe bei vielen Patienten dauerhaft Glukokortikoide in Dosierungen unter 5 mg/d einsetzten, wobei die höhere Quote mit Beobachtungen aus der Literatur übereinstimmt . Sie erreichten in dieser Kategorie eine Quote von 33 %, während sonst nur die Allgemeinmediziner (11 %) und die Dermatologen (8 %) überhaupt dauerhaft niedrig dosierte Glukokortikoide verschrieben. Diese nebenwirkungsärmere Langzeittherapie wäre gegenüber höher dosierten GC in deutlich mehr Fällen wünschenswert. 
Beim Blick auf die Remissionsraten der behandelten Patienten – wobei remittiert im Fragebogen als symptomfrei definiert wurde – fällt auf, dass sich 75 % der rheumatologisch behandelten Patienten in einer Form der Remission befanden. Die Unterscheidung nach Therapieform zeigte auf, dass 73 % unter Immunsuppression symptomfrei waren, was auf schwierige Fälle hinweisen könnte, die leitliniengerecht therapiert werden. Die Nephrologen wiesen mit 96 % eine noch höhere Remissionsrate auf, wobei in diesem Fall auffällt, dass 39 % der Patienten symptomfrei unter Kortisonmonotherapie waren, eine Behandlung entgegen den Empfehlungen der Literatur. Auch unter den dermatologisch und allgemeinmedizinisch betreuten Patienten waren einige unter Kortisonmonotherapie symptomfrei (24 % bzw. 17 %). Unter den rheumatologisch behandelten Patienten traf dies nur für 1,1 % zu. Auffällig ist zudem, dass sich unter den neurologisch behandelten Patienten mit 27 % weit unterdurchschnittlich viele in Remission befanden. Ob dies mit dem geringen Einsatz an AMM (27 %) und Immunsuppressiva (13 % AZA, 0 % MTX, 0 % MMF) oder mit der speziellen neurologisch, neuropsychiatrischen Manifestation zusammenhängt, lässt sich bei der geringen Anzahl an neurologischen Rückläufen nur mutmaßen. Weitere Forschungsarbeiten wären hier interessant. Symptomfrei ohne Therapie waren mit insgesamt 5 % nur wenige Patienten. Die Begrifflichkeit der Remission kann durchaus kritisch gesehen werden. Obwohl mit den „DORIS-Kriterien“ (definition of remission in SLE) eine objektive Bewertung möglich ist, können die Interpretation und das Empfinden durchaus divergieren. Im Abgleich mit der Literatur zeigt sich, dass sich aus ärztlicher Sicht der Großteil ihrer SLE-Patienten in Remission befindet. 
Betrachtet man jedoch die Therapie bei remittierten und nicht remittierten Patienten, zeigt sich eine vergleichbare Therapie, da nach ärztlicher Einschätzung die Remission häufig einen Zustand beschreibt, in dem man nichts Weiteres mehr tun muss oder kann . Zusammenfassend ist nicht auszuschließen, dass es Unterschiede zwischen den Facharztgruppen gibt, jedoch ist dies aufgrund des Studiendesigns nicht prüfbar. Der Vergleich zwischen rheumatologischen Praxen und rheumatologischen Zentren zeigt insgesamt vergleichbare Quoten an Symptomen und Medikamenteneinsatz. An Zentren wird Belimumab deutlich häufiger eingesetzt (27 % vs. 4 % in Praxen), was womöglich den deutlich geringeren GC-Einsatz begünstigt. Bezüglich der Komorbiditäten fällt auf, dass die rheumatologischen Fachärzte einige deutlich häufiger nannten als ihre Kollegen. Dies könnte auf schwerer erkrankte Patienten oder aber höhere Aufmerksamkeit gegenüber diesen Komorbiditäten zurückzuführen sein, von denen einige direkt mit dem SLE zusammenhängen. Die Auswertung der Kommentare zeigte, dass sich der weitaus größte Teil mit der Forderung nach mehr niedergelassenen Rheumatologen, nach mehr rheumatologischen Zentren, leichterem Zugang und schnellerer Terminvergabe befasste. Auch die Kommunikation sollte verbessert werden, möglicherweise durch Netzwerke, wofür ebenfalls die Zeit der rheumatologischen Fachärzte und damit die Anzahl derselben erhöht werden müsste. All dies deckt sich mit Beobachtungen in der Literatur, dass in Deutschland eine massive Unterversorgung mit Rheumatologen herrscht. Daten von Zink et al. stellten heraus, dass 2016 in Rheinland-Pfalz und dem Saarland nur 0,8 Rheumatologen anstelle eines errechneten Bedarfs von 2 pro 100.000 Erwachsenen vorhanden waren und deutschlandweit ca. 18 % zu wenig stationäre rheumatologische Betten zur Verfügung standen . 
Nicht überraschend war in dem Sinne die Erkenntnis, dass 45 % der befragten Patienten einer Kohorte von Danoff-Burg und Friedberg 2009 unzufrieden waren mit der Kontinuität der Betreuung und der Menge an Zeit, die sie mit Ärzten verbrachten, während gleichzeitig gezeigt werden konnte, dass mehr klinische Versorgung die Krankheitsaktivität, den Schaden und die Health-Related Quality of Life (HRQoL) positiv beeinflusst . Auch die in den Kommentarfeldern häufig gestellte Forderung nach Fortbildung und Aufklärung deckt sich mit den bekannten Herausforderungen. Eine bereits in der Literatur beschriebene unzureichende Umsetzung der Leitlinien, die sich beispielsweise in hoch dosierten GC-Dauertherapien widerspiegelt, geringe Verschreibung von AMM oder zu geringe Quoten einiger zum Teil unspezifischer oder fachfremder Hauptsymptome lassen auf einen Fortbildungsbedarf zum Thema SLE schließen . Insbesondere hinsichtlich der Therapie zeigte sich eine deutlich leitliniengerechtere Behandlung seitens der rheumatologischen Fachkollegen. In diesem Zusammenhang ist die aktuelle Studie von Aringer et al. (2021) zu erwähnen, die die schwache Repräsentanz der Rheumatologie in der medizinischen Lehre an deutschen Universitäten bemängelt . Die Autoren empfehlen verbindliche Lernziele, die in mindestens 6 Doppelstunden Pflichtvorlesung in internistischer Rheumatologie vermittelt werden sollen. Diese können, verknüpft mit einer Integration der Lerninhalte in relevante universitäre Prüfungen, „das Wissen um rheumatische Erkrankungen und damit die Versorgung der Menschen verbessern, die unter ihnen leiden“ . Auch eine, ebenfalls in vielen Kommentaren geforderte, weitere Vernetzung und intensivierte interdisziplinäre Kommunikation wird in der Literatur angestrebt, unter anderem um schwerwiegende Komorbiditäten zu erkennen oder auszuschließen . 
Die Ergebnisse der Symptomabfrage zeigten beispielsweise einen sehr engen Blick auf das jeweilige Fachgebiet mit zum Teil deutlich niedrigeren Quoten bei fachfremden Hauptsymptome. Eine interdisziplinäre Vernetzung und Behandlung könnte solche Unzulänglichkeiten ausgleichen. Die landesweite Datenerhebung in Rheinland-Pfalz und dem Saarland ermöglicht interessante Einblicke in die Versorgungsrealität von SLE-Patienten jenseits von Theorie und Leitlinien. Erstmals konnten spezifische Informationen zu Epidemiologie, Hauptsymptomen, medikamentösen Therapien und Komorbiditäten in den beiden Flächenländern gewonnen werden. Trotz aller Einschränkungen einer fragebogenbasierten Erhebung liefert die Studie wichtige Ansätze für dringende Optimierungen. Für eine Verbesserung der Versorgungssituation von SLE-Patienten werden basierend auf den Ergebnissen der besprochenen Studie folgende Vorschläge gemacht: Es wird eine Änderung der Bedarfsplanung für den Ausbau der rheumatologischen Sitze und Anreize zur rheumatologischen Niederlassung empfohlen. Zudem wäre ein Ausbau des rheumatologischen Lehrangebotes an medizinischen Fakultäten sinnvoll. Mehr Aufklärung und Fortbildung aller Fachärzte zu rheumatologischen Krankheitsbildern sowie die Förderung der interdisziplinären Kommunikation und Vernetzung werden als zielführend erachtet. Abschließend empfehlen wir die Erstellung vereinfachter Leitfäden zur Diagnostik und Therapie des systemischen Lupus Eerythematodes (SLE). |
Body Surface Potential Mapping: A Perspective on High‐Density Cutaneous Electrophysiology | 868671a5-fe63-4bd0-87fd-014112ee2246 | 11775574 | Physiology[mh] | Main Complex multicellular organisms have developed two main methods of communication to achieve cohesion and synchronicity: chemical, by means of molecular messengers known as hormones; and electrical, by transmitting electrical activity across the membrane of electrically excitable cells (EECs). These are a special group of cells capable of generating electric fields across their bilipid membranes due to controlled imbalances in ionic concentrations inside and outside the cell medium. The combined electrical activity of EECs propagates outwardly across tissue and can be captured by surface electrodes placed over the skin, yielding a set of time‐varying voltage signals known as body surface potentials (BSPs). The recording of electrophysiological activity from cutaneous electrodes for clinical purposes has been performed for more than a century and is an essential part of medical diagnosis. Each of these BSPs is obtained at different locations in the body, typically measured relative to a reference electrode away from the source of the investigated signal. As depicted in Figure , each BSP pertains to different structures and has different amplitudes, temporal variations, and spectral characteristics. As Figure depicts, there exist multiple types of BSPs employed to diagnose a wide range of conditions. A non‐exhaustive list includes muscle activity from electromyography (EMG), cardiac potentials from electrocardiography (ECG), brain activity from electroencephalography (EEG), gut activity from electrogastroenterography (EGEG), fetal movements from electrohysterography (EHG), eye movements from electrooculography (EOG) and electroretinography (ERG), and hearing activity from auditory brainstem responses (ABRs) and cortical auditory evoked potentials (CAEPs). 
As Figure depicts, each BSP pertains to different structures and has distinct amplitudes, temporal variations, and spectral characteristics. The clinical analysis of BSPs is typically conducted through direct or computer‐aided visual inspection of potentials from individual electrode channels. Software packages such as LabChart (ADInstruments), MATLAB (MathWorks), LabVIEW (National Instruments), or Python libraries such as "OpenEphys" or "NeuroKit" are commonly employed to perform the necessary signal processing on BSP signals and display them in a visually intuitive manner. Clinicians learn the expected characteristics of these signals and can identify specific deviations that relate to different pathological conditions (see Figure ). Moreover, a time‐frequency analysis, such as the spectrogram, is often performed to investigate the spectral features of the recorded BSPs. The diagnosis of conditions with BSPs is simple, reproducible, robust, and well‐established across the medical community. However, research in several fields has demonstrated that some of these methods lack sufficient spatial resolution to reliably capture clinically relevant information. For example, conventional 12‐lead ECG is not able to detect pre‐excitation syndromes, late potentials, acute myocardial ischemia, and various electrically isolated wall potential propagation conditions. Likewise, bipolar surface EMG is not sufficient to identify individual motor unit activity, fasciculation, and fibrillation potentials, and EEG cannot make a fine distinction between different populations of neurons within a particular brain region. Advances in flexible electrode array fabrication techniques and electrode coating materials have enabled the production of conformable, compact, and high‐density arrays for cutaneous electrophysiology, with recording quality exceeding that of state‐of‐the‐art Ag/AgCl electrodes.
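The time‐frequency analysis mentioned above can be reproduced in a few lines. The following Python sketch (using SciPy; the trace is synthetic, standing in for a recorded BSP channel, and all parameter values are illustrative) computes a spectrogram and tracks the dominant frequency over time:

```python
import numpy as np
from scipy import signal

# Synthetic BSP-like trace: a 10 Hz oscillation that shifts to 25 Hz halfway,
# plus additive noise -- a stand-in for a real recorded channel.
fs = 500                      # sampling rate in Hz
t = np.arange(0, 4, 1 / fs)   # 4 s of data
x = np.where(t < 2, np.sin(2 * np.pi * 10 * t), np.sin(2 * np.pi * 25 * t))
x += 0.1 * np.random.default_rng(0).normal(size=t.size)

# Time-frequency decomposition: power in overlapping ~0.5 s windows.
f, tt, Sxx = signal.spectrogram(x, fs=fs, nperseg=256, noverlap=128)

# The dominant frequency in each window tracks the 10 Hz -> 25 Hz transition.
dominant = f[np.argmax(Sxx, axis=0)]
print(dominant[0], dominant[-1])
```

The window length (`nperseg`) trades temporal against spectral resolution, which is why analysis tools typically expose it as a user parameter.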
High signal quality is achieved by minimizing electrode impedance, which directly improves the signal‐to‐noise ratio (SNR) and allows for smaller electrode sizes. Impedance can be reduced by increasing the ionic and electronic conductivity of individual electrodes. Additionally, enhancing the mechanical interface between electrode coatings and the skin reduces impedance and improves signal quality by minimizing motion artifacts. Moreover, these novel fabrication techniques enable robust electromechanical bonding between flexible arrays and rigid external electronics, further enhancing signal quality by minimizing data loss or corruption prior to digitization. The BSPs obtained from these arrays can be employed to capture not only temporal and frequency characteristics but also spatial information in the form of detailed BSP heat maps (see Figure ). The analysis of these maps offers a wide range of additional features, such as propagation speeds, channel correlations, or trajectories, which improve diagnostic range and reliability. The independent development of this approach in the ECG, EMG, and EEG fields—showcased through cardiac potential maps, monopolar surface EMG heatmaps, and EEG scalp maps, respectively—reflects the increasing interest within the clinical community to augment the traditional diagnostic capabilities of BSP recordings via body surface potential maps (BSPMs). The design of electrode arrays for BSP mapping involves the selection of specific parameters, as outlined in Figure . These parameters comprise 1) electrode shape, 2) electrode area, 3) inter‐electrode distance (IED), 4) electrode layout (i.e., maintaining consistent density, area, or shape across an array), and 5) array outline (i.e., determining the body area the electrode array should cover).
Electrode array design for BSP mapping currently lacks a standardized framework; instead, it is customized for each application and study, where the array parameters are chosen through an experimental iterative process tailored to each specific condition and even each patient. This approach requires an optimization process for every study and yields designs that are useful only for the research objectives of the study they were developed for, limiting their comparability, reproducibility, and transferability. A set of electrode array design guidelines for BSP mapping is required to allow for the fast prototyping of devices with optimal geometrical parameters. This general protocol should be agnostic to variability between subjects, body areas, signal range, magnitude, and bandwidth. This required level of abstraction can only be achieved by defining design rules that are solely based on the physical characteristics of the signals to be recorded. The expected direction of propagation, speed of propagation, and bandwidth of a BSP can be employed to determine the optimal geometrical parameters required for a given array. Guidelines implemented under this premise do not constitute ready‐made instructions for designing arrays for every subject, study, or field. Instead, they offer a theoretical framework derived from the underlying properties of the signals under study, providing a preliminary understanding of how to achieve optimal recording of specific electrophysiological signals under given clinical conditions. The objective of this framework is to eliminate the need to empirically engineer arrays for each case study. The proposed guidelines systematically address each of the five geometrical design parameters in electrode arrays (refer to Figure ). Electrode shape (1) influences the effective area of individual electrodes, thereby impacting signal attenuation, SNR, and spatial resolution.
Additionally, it enables the implementation of electrodes with irregular structures or varying axes of symmetry, potentially enhancing overall device conformability and stretchability or reducing electrode insertion complexity (particularly advantageous in implantable applications). While an infinite number of potential geometries can be generated, standard shapes such as circles or rectangles are commonly employed. The choice of electrode shape is heavily dependent on the direction of propagation of the signal of interest. As depicted in Figure , signal attenuation in space due to signal averaging within an electrode's covered surface is solely influenced by electrode geometry. The electrode's shape dictates the covered area, thus determining which BSPs are spatially averaged. In scenarios where the direction of signal propagation is known, such as surface EMG in the forearm, shapes can be designed to minimize signal averaging in the direction of propagation (which also minimizes averaging in the time domain) while maximizing electrode area (see Figure ). This approach will decrease electrode impedance, improve SNR, and maximize spatial resolution. Conversely, when the signal propagation direction is variable or unknown, a radially symmetric electrode shape (i.e., circular) is preferable, ensuring uniform signal attenuation across all directions (isotropic electrode shape). The attenuation effect caused by any given electrode geometry on a BSP can be defined quantitatively. First, it is necessary to compute the total area of the electrode, which determines the weight of the averaging operation.
The attenuating effect of the weight is then restricted to all locations within the boundary of the electrode ($S_E$) by defining a piecewise function ($h_E$), which represents the impulse response of the electrode and encompasses its behaviour against isolated electrical events,

(1) $h_E(x,y) = \begin{cases} 1 \big/ \iint S(x,y)\,\mathrm{d}x\,\mathrm{d}y, & (x,y) \in S_E \\ 0, & \text{otherwise} \end{cases}$

where $S$ represents electrode coverage as a function of position. Equation (1) assumes that the potential distribution under the electrode area is integrated by the electrode, which is valid as a first approximation of the effect of an electrode with physical dimensions and corresponds to a low‐pass filter in the spatial frequency domain. The exact description of electrodes with physical dimensions would imply the solution of a mixed boundary condition problem, but the approximation of pure averaging is sufficient for practical considerations. In order to quantify the frequency dependency of the electrode attenuation on the recorded signal, the 2D Fourier transform can be applied to the impulse response of the electrode to yield its transfer function, $H_E(f_x, f_y) = \mathcal{F}\{h_E(x,y)\}$, where $f_x$ and $f_y$ are the spatial Fourier frequencies. Examples of transfer functions for circular electrodes of different radii are presented at the bottom of Figure . The case of specific electrode derivations based on linear combinations of signals recorded at different electrodes (e.g., bipolar or Laplacian derivations) can also be treated in the spatial frequency domain $(f_x, f_y)$.
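The low‐pass behaviour described by Equation (1) can be checked numerically. The following sketch (assuming a uniform circular electrode discretized on a grid; radii and grid spacing are illustrative) computes the magnitude of the electrode transfer function via the 2D FFT of its impulse response:

```python
import numpy as np

def electrode_transfer(radius_mm, grid_mm=100.0, step_mm=0.5):
    """Numeric |H_E(f_x, f_y)| for a uniform circular electrode: the impulse
    response (Equation 1) is constant inside the disk and 0 outside."""
    x = np.arange(-grid_mm / 2, grid_mm / 2, step_mm)
    X, Y = np.meshgrid(x, x)
    h = (X**2 + Y**2 <= radius_mm**2).astype(float)
    h /= h.sum()                      # normalize so the filter has unit DC gain
    H = np.abs(np.fft.fftshift(np.fft.fft2(h)))
    f = np.fft.fftshift(np.fft.fftfreq(x.size, d=step_mm))  # cycles/mm
    return f, H

# Larger electrodes average over more skin and attenuate high spatial
# frequencies more strongly (a more selective low-pass).
f, H_small = electrode_transfer(radius_mm=2.0)
_, H_large = electrode_transfer(radius_mm=8.0)
mid = f.size // 2                     # index of f = 0
print(H_small[mid, mid], H_large[mid, mid])   # both ~1 at DC
```

Consistent with the circular‐electrode transfer functions described above, both filters have unit gain at DC, while the larger electrode attenuates any given spatial frequency more strongly.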
If the linear summation of signals detected by different point electrodes is considered, the expression for a spatial filter can be obtained, whose transfer function $H_{sf}(f_x, f_y)$ is given by:

(2) $H_{sf}(f_x, f_y) = \sum_{i=-l}^{q-1} \sum_{u=-g}^{h-1} a_{iu}\, e^{-j 2\pi f_x i d_x}\, e^{-j 2\pi f_y u d_y}$

with $l, q, h, g$ positive integers ($l+q$ is the number of electrodes in the x direction and $h+g$ the number of electrodes in the y direction), $a_{iu}$ the weights given to the electrodes, and $d_x$ and $d_y$ the inter‐electrode distances in the two directions (here assumed constant, but the expression can be generalized to the case where $d_x$ and $d_y$ vary along the grid in order to describe any distribution of electrodes, as shown in Figure ). Combining Equations (1) and (2), the effect of electrode shape and spatial filtering can be described by the following transfer function:

(3) $H_{ele}(f_x, f_y) = H_{size}(f_x, f_y)\, H_{sf}(f_x, f_y)$

Thus, when a specific derivation (e.g., Laplacian) is applied to specific electrode shapes, the BSP Fourier transform $\Phi(f_x, f_y)$ is observed as its spatially filtered version:

(4) $\Phi_{obs}(f_x, f_y) = \Phi(f_x, f_y)\, H_{ele}(f_x, f_y) = \Phi(f_x, f_y)\, H_{size}(f_x, f_y)\, H_{sf}(f_x, f_y)$

Similarly to electrode shape, electrode area (2) significantly impacts both the achievable SNR and spatial resolution. When keeping the shape constant, increasing the electrode area lowers impedance, which enhances SNR. However, this improvement compromises spatial resolution, since a larger surface covers more space and reduces the spatial (and therefore temporal) bandwidth: the equivalent low‐pass filter in Equation (1) becomes more selective for larger areas. As Figure shows, when a cardiac potential is recorded by electrodes of increasing size, the SNR of the signal improves. However, when the electrode area covers sufficient space to capture multiple relevant signal changes simultaneously, these variations are lost through averaging.
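Equation (2) can likewise be evaluated directly for standard derivations. The sketch below is a 1‐D specialization along the direction of propagation (weights and spacing are illustrative): the bipolar derivation with weights (1, −1) and the longitudinal double differential with weights (1, −2, 1).

```python
import numpy as np

def spatial_filter_tf(weights, d_mm, f_cpermm):
    """|H_sf| along one spatial direction (1-D form of Equation 2) for a set
    of electrode weights a_i spaced d_mm apart; f in cycles/mm."""
    return np.abs(sum(a * np.exp(-2j * np.pi * f_cpermm * k * d_mm)
                      for k, a in enumerate(weights)))

f = np.linspace(0, 0.1, 201)           # spatial frequencies, cycles/mm
bipolar = spatial_filter_tf([1, -1], d_mm=10, f_cpermm=f)
double_diff = spatial_filter_tf([1, -2, 1], d_mm=10, f_cpermm=f)

# Both derivations reject the common-mode (DC) component...
print(bipolar[0], double_diff[0])      # 0 at f = 0
# ...and the bipolar filter peaks where the IED is half a wavelength, f = 1/(2d).
print(f[np.argmax(bipolar)])           # ~0.05 cycles/mm for d = 10 mm
```

This makes the familiar trade‐off visible: difference derivations suppress far‐field common‐mode activity but impose spectral dips and peaks that depend on the IED.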
To retain essential signal features, the electrode area should remain below a critical threshold. The cut‐off frequency of the low‐pass filter corresponding to the averaging operation should be greater than the spatial bandwidth of the signal in each spatial direction. This corresponds to finding an optimal compromise: selecting the largest electrode size that prevents any attenuation of signal features below a −3 dB point within the signal's bandwidth (see Figure ). Selecting an area meeting this criterion ensures the highest achievable SNR under the given recording conditions. The distance between electrodes (3) defines the sensitivity of the array to voltage changes in space, as depicted in Figure , and corresponds to sampling in the spatial domain. Decreasing the IED increases the number of recording points within a given area, enhancing the resolution of local changes in smaller sections. When transitioning from BSPs to BSPMs, sampling rate becomes relevant not only in time—measured by the number of data samples obtained within a time interval by the recording electronics—but also in space. In theory, reducing the IED will invariably enhance resolution; in practice, however, it will also introduce complexities in electrode wiring, device weight, cost, etc. The optimal solution to this trade‐off is to find the maximum IED (minimum electrode density) that ensures full recoverability of spatial features in the recorded signals. The Nyquist‐Shannon theorem, extensively utilized for determining the minimum temporal sampling rate required for perfect signal reconstruction, dictates that signals sampled at more than twice the highest frequency produced by the signal of interest are entirely recoverable. The frequency bandwidth in the time domain of most physiological signals has been characterized, so the maximum frequency of each BSP can be determined.
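The −3 dB sizing criterion stated above can be made concrete with a short numerical search. In this sketch (illustrative numbers; a rectangular electrode is assumed, so averaging along the propagation direction is a sinc‐shaped low‐pass), the largest electrode dimension is found that keeps the signal's highest spatial frequency above the −3 dB point:

```python
import numpy as np

def largest_electrode_length_mm(f_max_hz, cv_m_per_s):
    """Largest electrode dimension along the propagation direction whose
    averaging (a sinc-shaped low-pass) attenuates the highest spatial
    frequency of the signal by no more than -3 dB."""
    f_spatial = f_max_hz / cv_m_per_s / 1000.0    # cycles/mm on the skin
    gain = lambda L: abs(np.sinc(f_spatial * L))  # np.sinc(x) = sin(pi x)/(pi x)
    lo, hi = 0.0, 1.0 / f_spatial                 # gain(hi) = 0 < 1/sqrt(2)
    for _ in range(60):                           # bisection on the first lobe
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if gain(mid) > 2 ** -0.5 else (lo, mid)
    return lo

# 500 Hz bandwidth propagating at 4 m/s -> roughly a 3.5 mm electrode.
print(round(largest_electrode_length_mm(500.0, 4.0), 2))
```

Lower signal bandwidths (or faster propagation) permit proportionally larger electrodes, which is the SNR side of the compromise described above.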
The question that arises is whether a relationship between temporal and spatial sampling rates can be established to ascertain the most suitable IED for BSP recording. Under the assumption of constant BSP propagation speeds, it is possible to relate time and space by means of the conduction velocity (CV) of the propagating potential. The CV is the speed at which signals travel through tissue. For BSPs, CV varies between ≈3 and 5 m s −1 for most signals. Slower BSP propagation speeds compress temporal variations into shorter spatial wavelengths, necessitating a greater number of electrodes to detect them. Assuming the lowest physiologically feasible CV is recommended to prevent aliasing effects at lower CV values. Establishing a relationship between the temporal sampling frequency ( f ) and the IED through the CV can be achieved by means of a simple proportion, IED = CV/ f . This association is valid along the spatial direction of propagation of the BSP but not in the orthogonal direction. The threshold IED value obtained from this calculation does not account for practical scenarios where the IED may not be maintained perfectly across all electrodes in the array. Deviations from the threshold can result in distances slightly exceeding or falling short of the minimum required for optimal interpolation. When the distance is below the threshold, interpolation will be accurate; however, when the distance exceeds the threshold, interpolation will be imperfect, leading to reconstruction errors proportional to the deviation in the electrode position. If the expected deviation in electrode positioning is known, it is advisable to reduce the IED by the deviation value to avoid interpolation errors. The optimization of electrode layout (4) facilitates the creation of devices featuring distinct sections, each tailored to capture signals exhibiting varying physical properties.
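The proportion IED = CV/f introduced above can be combined with the Nyquist criterion into a worked example (a sketch; the 500 Hz bandwidth and the 3 m s−1 worst‐case CV are illustrative values within the ranges quoted above):

```python
def max_ied_mm(cv_m_per_s, f_max_hz, placement_error_mm=0.0):
    """Largest inter-electrode distance that still satisfies spatial Nyquist.
    Temporal Nyquist requires sampling at f = 2 * f_max; the proportion
    IED = CV / f then converts that rate into a spatial spacing. A known
    electrode-placement error is subtracted, as recommended above."""
    f = 2.0 * f_max_hz                       # minimum temporal sampling rate
    ied_m = cv_m_per_s / f                   # IED = CV / f
    return ied_m * 1000.0 - placement_error_mm

# Surface EMG-like case: 500 Hz bandwidth, worst-case CV of 3 m/s.
print(max_ied_mm(3.0, 500.0))                          # 3.0 mm
# Allowing for 0.5 mm of expected placement deviation:
print(max_ied_mm(3.0, 500.0, placement_error_mm=0.5))  # 2.5 mm
```

Note how assuming the slowest feasible CV yields the most conservative (smallest) spacing, which is exactly the anti‐aliasing recommendation made above.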
The choice of this parameter is primarily dependent on the behaviour exhibited by the signal of interest across the recording area. Adjusting the electrode layout can potentially impact electrode shape, area, and IED. The previously outlined guidelines for these parameters can be employed as a framework to evaluate whether a more or less uniform electrode layout is necessary. When considering this parameter, it is recommended to start by examining the entire area of interest, investigating whether the signal properties—direction of propagation, bandwidth, and CV—can be assumed to remain consistent or vary predictably. If only the direction of propagation varies, it is relevant to assess whether the variability is unknown or consistent in the form of a known trajectory. In the former case, a uniform array with circular‐shaped electrodes might be preferable; in the latter, a non‐uniform variation in electrode shape might be required. When two or more properties exhibit well‐characterized variations, a proposed methodology involves segmenting the total area of interest into smaller sections. Each section undergoes a similar analytical process as described above, leading to either a defined set of design values (if the three properties can be assumed to remain constant) or a further breakdown of the section. This iterative process continues until design values are established for all sections within the array; see a graphical representation in Figure . The final design choice, array outline (5), determines the body area the electrode array should cover. This differs from the outline of the entire device, which must be adaptable to accommodate diverse individuals and clinical setups. Square lattices are commonly used in the literature because they simplify the process of importing the data into software, as each row and column in a matrix can directly represent a specific location in space.
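The iterative segmentation procedure described above can be sketched as a short recursive routine (region names and property values are hypothetical, purely for illustration):

```python
def assign_design(region, designs):
    """Sketch of the iterative layout procedure: if the direction of
    propagation, bandwidth, and CV are all constant over a region, its design
    values can be fixed; otherwise the region is split and the process
    repeats on each sub-section."""
    props = (region.get("direction"), region.get("bandwidth_hz"),
             region.get("cv_m_per_s"))
    if all(p is not None for p in props):
        designs[region["name"]] = props          # homogeneous -> one design
    else:
        for child in region["children"]:         # mixed -> subdivide further
            assign_design(child, designs)

# Hypothetical forearm array: constant properties distally, mixed proximally
# (None marks a property that varies within the region).
array = {
    "name": "forearm", "direction": None, "bandwidth_hz": None,
    "cv_m_per_s": None, "children": [
        {"name": "distal", "direction": "along-fiber", "bandwidth_hz": 450.0,
         "cv_m_per_s": 4.0, "children": []},
        {"name": "proximal", "direction": None, "bandwidth_hz": None,
         "cv_m_per_s": None, "children": [
             {"name": "proximal-lateral", "direction": "along-fiber",
              "bandwidth_hz": 400.0, "cv_m_per_s": 4.0, "children": []},
             {"name": "proximal-medial", "direction": "oblique",
              "bandwidth_hz": 350.0, "cv_m_per_s": 3.5, "children": []},
         ]},
    ],
}

designs = {}
assign_design(array, designs)
print(sorted(designs))   # ['distal', 'proximal-lateral', 'proximal-medial']
```

Each leaf of the recursion then feeds the shape, area, and IED rules discussed earlier with its own local property values.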
However, it is advisable to employ hexagonal or triangular lattices as they maintain a consistent IED between neighbouring electrodes (see Figure for an example of a square lattice, where the diagonal distance is greater than the lateral one). The selection of an array outline is ultimately dependent on the proximity of interferent anatomical structures and the spatial sampling rate of the array. Limiting the coverage of the array to the area of interest alone leads to distortion or aliasing effects (since the spatial bandwidth becomes infinite in theory) at the edges of recorded BSPMs, a phenomenon known as truncation. This reconstruction effect occurs due to a lack of information about the area surrounding boundary electrodes, which causes estimations of the potential values around the edges to be inaccurate. Conversely, excessively broad coverage of the surrounding area will capture unwanted signals from external sources, potentially impacting diagnosis or subsequent processing. Figure illustrates examples of distortions caused by both extremes. In order to mitigate both truncation and interference, it is recommended to extend the spatial sampling of the array to include at least one additional point around the edge of the area of interest. The aforementioned guidelines provide a set of preliminary design choices based exclusively on the physical properties of electrophysiological signals and mathematical principles governing signal processing. Their aim is to simplify and generalize the optimization process for electrode array design to allow for more effective and translatable clinical research. The optimization of electrode array parameters remains necessary since the underlying assumptions on which they are based require experimental validation. As highlighted earlier, there is a clear tendency in the clinical community to employ BSPM over traditional BSPs since augmented spatial information improves diagnostic range and precision. 
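The advantage of the hexagonal (triangular) lattice noted above is easy to verify: on such a lattice every nearest neighbour sits exactly one IED away, whereas a square lattice's diagonal spacing exceeds its lateral one. A sketch (array size and spacing are illustrative):

```python
import numpy as np

def hex_lattice(n_rows, n_cols, ied_mm):
    """Electrode centres on a hexagonal (triangular) lattice: every electrode
    sits exactly `ied_mm` from each of its nearest neighbours, unlike a square
    lattice, whose diagonal spacing exceeds the lateral one by sqrt(2)."""
    row_height = ied_mm * np.sqrt(3) / 2        # vertical pitch between rows
    pts = [(c * ied_mm + (r % 2) * ied_mm / 2,  # shift every other row by IED/2
            r * row_height)
           for r in range(n_rows) for c in range(n_cols)]
    return np.array(pts)

grid = hex_lattice(4, 5, ied_mm=8.0)
# Pairwise distances: the nearest neighbour of every electrode is one IED away.
d = np.linalg.norm(grid[:, None, :] - grid[None, :, :], axis=-1)
d[d == 0] = np.inf                              # ignore self-distances
print(round(float(d.min()), 6))                 # 8.0
```

The same coordinate list can be extended with the extra ring of boundary electrodes recommended above to mitigate truncation at the edges of the area of interest.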
However, this transition comes with challenges, including the management of increased cable wiring density and connectivity. Wireless data transmission has been widely explored in the literature as a solution to these connectivity issues, enhancing the wearability and comfort of non‐invasive devices. A wide range of communication protocols, hardware, firmware, and software tools are available for implementing wireless systems. Additionally, flexible electronic fabrication techniques have been adapted to enable integrated on‐device wireless systems. However, wireless data transmission faces data rate limitations as the number of simultaneous recording channels increases. While the low‐frequency band of cutaneous electrophysiological signals allows for a relatively low sampling rate that can support tens of channels, it may not suffice for arrays with hundreds or thousands of channels, restricting its use in several BSP mapping applications. The spatial nature of BSPMs allows intuitive observation of activity recorded by hundreds of channels, since areas with different activity display varying intensity levels. Nevertheless, the interpretation of these potential values for diagnostic purposes proves challenging. On the one hand, there is no precedent for what a healthy BSPM reference should resemble: the potentials measured at the surface of the skin do not directly represent individual EECs but are a combination (linear or non‐linear, depending on the type of potentials) of the activity of multiple sources propagated through tissue. On the other hand, there is no prior knowledge of what biomarkers or BSPM features are relevant to identifying different pathological conditions. It becomes the task of specialized clinical field experts to identify this information by means of multiple case studies and spread it to the wider scientific community. This process is slow and renders the devices impractical in the short term.
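The channel‐count limitation mentioned above can be quantified with simple arithmetic (a sketch; channel counts, sampling rate, and bit depth are illustrative, not taken from any specific system):

```python
def raw_data_rate_mbps(n_channels, fs_hz, bits_per_sample):
    """Uncompressed throughput an array streams before any processing."""
    return n_channels * fs_hz * bits_per_sample / 1e6

# Tens of channels at cutaneous bandwidths are modest...
print(raw_data_rate_mbps(32, 2000, 16))     # 1.024 Mbit/s
# ...but dense BSPM arrays grow linearly with channel count.
print(raw_data_rate_mbps(1024, 2000, 16))   # 32.768 Mbit/s
```

The thousand‐channel figure is well beyond what typical low‐power radios sustain, which is why on‐device compression or channel selection is often considered alongside wireless transmission.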
As Figure illustrates, two approaches can be employed to overcome the interpretability barrier in BSPM. The first is observational and focuses on modifying how BSPM data is visualized to enable a more intuitive understanding of potential values. This can be achieved through transformations or reconstructions that clarify the independent sources generating the data. For example, surface EMG monopolar array data can be transformed to yield independent motor unit activity, and cardiac BSPMs can be reconstructed using ECG imaging to visualize epicardial potentials. This approach allows for direct human inspection and the discovery of unknown conditions; however, it requires the development of transformation and reconstruction algorithms to solve the inverse problem of electrophysiology. This problem is mathematically ill‐posed, making it challenging to find biophysically plausible and accurate solutions. The second method is diagnostic, leveraging machine learning (ML) algorithms that use BSPM data directly or extracted features to autonomously classify known conditions or determine their progression. Training paradigms for classical ML classifiers fall into three categories: supervised learning, unsupervised learning, and reinforcement learning. Supervised learning uses labelled input data to define the relationship between an input and its corresponding class, with algorithms including linear and logistic regression, support vector machines, decision trees, random forests, artificial neural networks, naïve Bayes classifiers, AdaBoost, and ensemble methods. Unsupervised learning employs statistical methods, such as clustering or dimensionality reduction, to define distinct classes from unlabelled data, with examples including K‐means, principal component analysis (PCA), and singular value decomposition (SVD). 
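As a minimal illustration of why the ill‐posed inverse problem calls for regularization, the sketch below reconstructs source amplitudes from noisy surface measurements using zeroth‐order Tikhonov regularization (the forward matrix A is synthetic, not a physiological lead field):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic, badly conditioned forward operator: surface potentials y are a
# smoothed (hence information-losing) projection of the source amplitudes x.
n = 40
A = np.array([[np.exp(-((i - j) / 3.0) ** 2) for j in range(n)]
              for i in range(n)])
x_true = np.zeros(n)
x_true[12], x_true[28] = 1.0, -0.7            # two focal sources
y = A @ x_true + 0.01 * rng.normal(size=n)    # noisy surface measurements

def tikhonov(A, y, lam):
    """Closed-form solution of min ||A x - y||^2 + lam * ||x||^2."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)

x_naive = np.linalg.solve(A, y)               # unregularized: noise explodes
x_reg = tikhonov(A, y, lam=1e-2)              # regularized: stable estimate
print(np.linalg.norm(x_naive - x_true), np.linalg.norm(x_reg - x_true))
```

The penalty weight lam encodes the prior that biophysically plausible solutions have bounded energy; choosing it is itself a model‐selection problem in practical inverse electrophysiology.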
Reinforcement learning involves an agent exploring an unknown parameter space to determine the optimal policy that maximizes reward, continuously improving through experience. Examples include Markov decision processes, Q‐learning, and Monte Carlo methods.

Classical machine learning methods aim to establish relationships between known correlated inputs and outputs, relying on predefined features or biomarkers relevant to the condition being classified. As previously mentioned, in traditional BSP analysis, clinicians have empirically identified these features, allowing for visual diagnosis. However, in the context of BSPM, there is often little prior knowledge about which features are crucial for diagnosis, making many classical ML methods unsuitable for direct application. While non‐deep learning approaches can be useful where domain‐specific knowledge allows for effective feature engineering, their applicability is limited in the absence of clear biomarkers. In contrast, deep learning methods excel in such scenarios, as they can automatically extract complex features from raw BSPM data without prior knowledge. With their multi‐layered architecture, these networks identify patterns and establish relationships between the data and diagnostic outcomes in ways that traditional methods cannot match.

The implementation of deep learning methods in clinical environments presents significant challenges. One major issue is the requirement for large, labelled datasets and substantial computational resources, both of which are often limited in healthcare settings where data collection can be fragmented or inconsistent. Additionally, working with large‐scale data raises critical concerns about patient privacy. Various approaches have been developed to address these concerns.
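The dependence of classical ML on hand-crafted features can be illustrated with a short sketch. Everything here is synthetic and hypothetical: two "conditions" that differ only in signal power, a single engineered feature (root-mean-square amplitude), and a nearest-centroid rule standing in for the classical classifiers listed above. When a clinician can name such a biomarker, even this trivial pipeline classifies well; without that domain knowledge, no comparable feature would be available.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical synthetic "recordings": 2 s windows from two conditions
# that differ in signal power (the kind of biomarker a clinician might name).
fs, n_trials = 1000, 60
t = np.arange(2 * fs) / fs

def make_trial(amplitude):
    # burst of 20 Hz activity plus broadband noise
    return amplitude * np.sin(2 * np.pi * 20 * t) + 0.5 * rng.standard_normal(t.size)

healthy = np.stack([make_trial(0.5) for _ in range(n_trials)])
affected = np.stack([make_trial(1.5) for _ in range(n_trials)])

# Hand-crafted feature: root-mean-square amplitude of each window.
def rms(x):
    return np.sqrt(np.mean(x ** 2, axis=-1))

X = np.concatenate([rms(healthy), rms(affected)])
y = np.concatenate([np.zeros(n_trials), np.ones(n_trials)])

# Nearest-centroid classifier trained on half of the trials.
train = np.r_[0:30, 60:90]
test = np.r_[30:60, 90:120]
c0 = X[train][y[train] == 0].mean()
c1 = X[train][y[train] == 1].mean()
pred = (np.abs(X[test] - c1) < np.abs(X[test] - c0)).astype(float)
accuracy = np.mean(pred == y[test])
print(f"accuracy: {accuracy:.2f}")
```

A deep network applied to the raw 2000-sample windows would have to discover an equivalent of the RMS feature on its own, which is precisely its appeal when no biomarker is known in advance.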
Data anonymization, where personally identifiable information (PII) is removed or encrypted, is commonly used; however, anonymized data can sometimes be re‐identified through sophisticated inference techniques. To mitigate this risk, differential privacy can be employed, adding controlled noise to the data to protect individual identities while maintaining the accuracy of aggregate trends. Another promising technique is federated learning, which allows models to be trained locally on patient devices or within hospital systems, ensuring that sensitive data never leaves its source by sharing only model updates with a central server. Homomorphic encryption also enables secure computations on encrypted data without revealing raw information. These privacy‐preserving methods are crucial to ensuring that large‐scale data can be used for research and diagnostics without compromising patient confidentiality.

Beyond data requirements, another key challenge for deep learning is its “black‐box” nature, which limits interpretability. While these models can achieve high diagnostic accuracy, clinicians often struggle to understand which features in the data drive predictions, leading to a lack of trust in AI‐based decisions. This challenge has spurred growing interest in explainable AI (XAI), which aims to make deep learning models more transparent. Techniques such as saliency maps, feature attribution, and attention mechanisms have been proposed to help clinicians identify relevant features in complex clinical data. Additionally, integrating deep learning into clinical workflows requires addressing practical challenges, including real‐time data processing, creating user‐friendly interfaces, and validating models across diverse patient populations. The most promising direction for the future may lie in developing hybrid models that combine the strengths of deep learning and traditional methods.
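The differential-privacy idea of "controlled noise" can be shown in a few lines. This is a minimal sketch of the classic Laplace mechanism, with entirely hypothetical data (simulated heart-rate records); the function name and the clipping range are illustrative choices, not from the text.

```python
import numpy as np

rng = np.random.default_rng(2)

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Release true_value with Laplace noise calibrated to epsilon-DP."""
    scale = sensitivity / epsilon
    return true_value + rng.laplace(0.0, scale)

# Hypothetical aggregate: mean resting heart rate of 500 patients, each
# value clipped to [40, 120] bpm so one record's influence is bounded.
records = np.clip(rng.normal(72, 10, size=500), 40, 120)
true_mean = records.mean()

# Changing one record moves the mean by at most (120 - 40) / n.
sensitivity = (120 - 40) / records.size
noisy_mean = laplace_mechanism(true_mean, sensitivity, epsilon=1.0, rng=rng)

print(f"true mean:  {true_mean:.2f}")
print(f"noisy mean: {noisy_mean:.2f}")
```

Because the noise scale depends on how much any single patient can shift the statistic, the released mean stays useful in aggregate while individual contributions are masked, which is the trade-off described above.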
These hybrid approaches could harness deep learning's capacity to extract complex features from BSPM data while integrating more interpretable techniques, such as classical machine learning or rule‐based systems, to enhance transparency and clinical trust. By striking a balance between accuracy and interpretability, hybrid models can bridge the gap between cutting‐edge computational power and practical clinical application.

As has been described, both observational and diagnostic methods in BSPM present distinct advantages and drawbacks that impact their clinical utility. Observational models enhance diagnostic insight by offering greater transparency and enabling clinicians to visualize electrophysiological data through transformations and reconstructions. This approach is particularly beneficial for diagnosing complex conditions, as it allows for the identification of nuanced patterns that might be overlooked in raw data. However, these models face challenges due to the mathematical complexity of solving inverse problems, which can lead to inaccurate reconstructions. Therefore, future developments must enhance the accuracy and stability of these transformations while integrating intuitive visualization tools into clinical workflows, allowing for seamless interpretation without requiring extensive computational expertise.

Conversely, the diagnostic approach, driven by machine learning, can significantly improve efficiency and accuracy by rapidly processing large volumes of BSPM data to identify patterns that may not be immediately apparent to human observers. This capability is especially crucial for conditions with subtle or early‐stage manifestations, where early detection can enhance patient outcomes. As BSPM technology advances and data volume increases due to high‐density arrays, the robustness of diagnostic algorithms becomes even more valuable.
Future efforts will need to ensure high diagnostic accuracy while also addressing interpretability concerns to foster clinician trust in automated systems. Additionally, optimizing the integration of ML models into clinical workflows will be essential for ensuring these tools complement traditional diagnostic methods and enhance overall clinical practice. Ultimately, the ongoing development of both approaches will create unique opportunities to improve clinical diagnosis, each addressing specific implications and challenges in the evolving landscape of BSPM technology.

With future advancements in data acquisition and sensor technology, both the observational and diagnostic approaches in BSPM are set to improve significantly. As BSPM arrays become more compact, flexible, and capable of capturing higher‐density signals, the volume and quality of data will improve for both methods. This enhancement will boost the diagnostic accuracy of ML models and provide clinicians with more detailed visualizations through the observational approach. While the observational method facilitates discovering unknown conditions and generating hypotheses, the diagnostic approach excels in automating decision‐making for known conditions. The future of BSPM in clinical diagnosis will likely see a synergistic integration of both strategies, where visualization techniques can reveal novel patterns in the data that ML models can analyze and classify. Conversely, automated diagnostic systems could identify potential anomalies, prompting clinicians to investigate further using observational tools. By merging improved spatial information with advanced computational methods, BSPM will enable earlier detection of pathologies, more accurate staging of disease progression, and personalized treatment planning. The continued development of both observational and diagnostic approaches will be essential to realizing this potential, solidifying BSPM as a cornerstone of future clinical diagnostics.
Conclusion

The recording of BSPs from cutaneous electrodes stands as an essential part of medical diagnosis. Despite its widespread use and success in treating numerous diseases, traditional BSPs have been shown to lack sufficient spatial resolution to capture several conditions. This limitation has driven the simultaneous, independent development of electrode array recording techniques across multiple clinical fields, where spatial information is acquired in the form of BSPMs. Presently, the design of electrode arrays for BSP mapping lacks a standardized framework, resulting in customizations for each clinical study and thereby limiting comparability, reproducibility, and transferability. In this study, a set of preliminary design guidelines, derived from the existing literature, has been proposed. These rules are based exclusively on the physical properties of electrophysiological signals and the mathematical principles of signal processing. Their purpose is to simplify and generalize the optimization process for electrode array design, enabling more effective and translatable clinical research.

The increased spatial information obtained through BSPMs introduces challenges in interpretation. To address this, two strategies have been outlined: observational transformations that reconstruct signal sources for intuitive comprehension, and machine learning‐driven diagnostics for condition discernment. Each strategy presents distinct advantages and drawbacks; the choice between them should be determined by the specific clinical objectives. BSP mapping presents significant advantages in cutaneous electrophysiology and is anticipated to expand into broader clinical domains in the forthcoming decades.

The authors declare no conflict of interest.
Implementation of a programmatic assessment model in radiation oncology medical physics training

INTRODUCTION

The Australasian College of Physical Scientists and Engineers in Medicine (ACPSEM) was formed in 1977 after many years associated with the UK Hospital Physicists Association. By 1985, the ACPSEM Council determined a need for a formal qualification for practicing medical physicists and introduced an accreditation scheme. The Accreditation in Radiation therapy Equipment Commissioning and Quality Assurance (ARECQA) was established in 1987 and included written, practical, and oral examinations specific to Radiation Oncology Medical Physics (ROMP). Suitably experienced members would undertake this accreditation as a demonstration of their competence in this discipline. Accreditation did not include a formal syllabus or training program and was not mandated for clinical medical physicists.

It was not until a federal inquiry into Australian Radiation Oncology services in 2002 that a formal national ROMP training scheme was recommended. This recommendation led to the development of the ACPSEM ROMP “Training, Education and Accreditation Program” (TEAP) for Australia and New Zealand. This acronym has subsequently been modified to “Training, Education and Assessment Program.” With the introduction of TEAP, a certification process was formalized to employ residents (or “registrars”) specifically into training positions. These positions were often funded by state or federal governments and were contractual for the length of the training. There are numerous publications highlighting the training of medical physicists in different regions.
These publications concentrate on the differences in the intake of candidates into the program, the length and structure of the program, and the curriculum covered. The ACPSEM ROMP TEAP was designed to produce competent, safe‐to‐practice Radiation Oncology Medical Physicists who have the skills and knowledge required to work independently in a radiation oncology department. The clinical training component of ROMP TEAP is 3 years, in addition to the time required to complete any required post‐graduate university study (minimum MSc in Medical Physics). Entry into the ACPSEM ROMP TEAP is based on fixed eligibility criteria and selection tools, with clinical training occurring at an ACPSEM‐accredited clinical training site under the management of an ACPSEM‐approved supervisor. All registrars across Australasia would be enrolled in the same program and complete the same competencies, regardless of which clinical department they were based at.

The original training content for the ACPSEM ROMP TEAP was developed in 2003 and was originally centered around the educational requirements of a medical physicist defined in International Commission on Radiological Protection publication 44 and American Association of Physicists in Medicine (AAPM) Report 36, under the guidance of consultant educational experts. The ACPSEM ROMP TEAP subsequently formed the core of the initial International Atomic Energy Agency (IAEA) ROMP training course. Since that time, changes in technology and treatment techniques have required updates to some scientific content within the program.

Earlier iterations of the ACPSEM TEAP (Versions 1 to 3.6) for ROMPs structured the program around major units of work (modules) and then covered a variety of learning outcomes on topics within each module. Each learning outcome had prescribed assessment criteria, which were assessed by a local clinical supervisor to evaluate candidate competence.
The program suggested a range of assessment methods that could be used, but the type of assessment used was at the discretion of the supervisor. It was noted on the 10th anniversary of the introduction of the ACPSEM TEAP that the program was based on workplace competency assessment performed by local clinical supervisors during the training period, with formal external assessment by ACPSEM‐appointed examiners to achieve certification in the final exams. The formal assessment included a written examination after completion of a sub‐set of the curriculum, as well as both practical and oral examinations at the end of the training period. Examinations were conducted by experienced clinical medical physicists. The written examination was independently blind‐marked by two examiners per subject area, and the practical and oral examinations were conducted on‐site in the candidate's department by two independent examiners, requiring a consensus decision.

The biggest hurdle to robust and standardized clinical assessment throughout the training process was the experience and training of the local clinical supervisors. Some supervisors lacked experience as educators and had not been adequately trained to provide appropriate supervision. Supervisors with limited teaching or training skills would often rely on methods that personally suited them and would use the same methods for all learning outcomes. Because of the wide range of suitable assessment types listed for a particular learning outcome, there was often a large disparity in the level of assessment used between registrars from different clinical departments. The range in competency evidence submissions from the cohort of registrars across Australia and New Zealand indicated a lack of a cohesive assessment standard being applied.

Programmatic assessment spreads the measurement of performance across a range and variety of assessment methods during the training process.
The design and effectiveness of the assessment program as a whole are emphasized, rather than focusing on the adequacy of individual assessments of performance. This is because a program of assessment recognizes that assessing complex competencies requires a range of measures over time and cannot be adequately learned and assessed through a single, point‐in‐time assessment. Although programmatic assessment approaches have become highly regarded in health profession education, this approach contrasts significantly with traditional summative, mastery‐based approaches to assessment and learning. The major shift required to embed a programmatic assessment approach in a training program means that implementation is often challenging. For example, the traditional formative/summative dichotomy is replaced with a continuum of stakes, from low‐ to high‐stakes, with a wide variety of assessment tools. This allows the learner to demonstrate growing depth and breadth of knowledge in a discipline. Each individual assessment datapoint then contributes to the evidence base for determining clinical competence.

In 2019, the ACPSEM initiated a project to update the existing training program with the requirement that it reflect the current needs in training for radiation oncology medical physics. This included a dynamic curriculum able to adapt to changing technologies. In addition, the curriculum needed to be deliverable anywhere within Australasia, allowing for limited equipment access in smaller remote and regional departments. Finally, the clinical training component needed to be completed within a 3‐year time frame, as mandated by the federal funding bodies providing financial support for the program. Over‐arching this was a requirement that it also comply with Australian Medical Council (AMC) and Australian Health Practitioner Regulation Agency (AHPRA) standards.
METHOD

The renewal of the ACPSEM ROMP TEAP officially commenced in early 2020 and proceeded through several key phases to reach implementation. Phase 1 included a desktop review of the program by an expert medical training consultant, incorporating an analysis of trends and identifying gaps in terms of AMC standards. It also included key stakeholder consultation with targeted online and phone‐based questionnaires. In phase 2, the primary project committee, consisting of clinically certified and highly experienced medical physicists, medical physics training specialists, and medical education consultants, developed a series of program outcome statements that would be used to ensure alignment of all content to graduate attributes. In phase 3, working groups were formed by clinically certified ROMP experts with experience in past training programs, who reviewed and updated the scientific content of the training program. Finally, in phase 4, an assessment committee of clinically certified and highly experienced medical physicists, academic medical physicists, and education specialists was formed to create a model of programmatic assessment for the ACPSEM ROMP TEAP. The committee in this phase was led by assessment specialists from the Australian Council for Educational Research (ACER).

After determining the scope and layout of the new curriculum in phases 2 and 3, a series of suitable assessment methods were discussed that aligned with the program outcomes, curriculum content, and teaching and learning strategies of the program. The educational concepts used to underpin the curriculum development, and to inform the assessment methods, were based on the revised Bloom's Taxonomy. Under this educational model, domains of learning are defined and used as methods of determining breadth and depth of knowledge and skill in content areas.
The application used here was divided into cognitive and behavioral domains to describe both the understanding and recall of knowledge and the clinical application of that knowledge. An assessment method was considered potentially suitable if it were fit‐for‐purpose for the particular learning outcome. Assessment committee members reviewed all assessment methods and voted on their preferred assessment method for each learning outcome in the new training curriculum framework.

RESULTS

The key themes to emerge from the consultation in phase 1 included requirements for more standardized methods of assessment and a reduction in the duplication of learning outcomes. From these themes, several key recommendations were generated:

‐ Clearly identified program outcomes
‐ A review and update of program content (curriculum)
‐ Development of a model of programmatic assessment

In phase 2, a series of program outcome statements were created. These statements were based on the Canadian Medical Education Directives for Specialists (CanMEDS) framework for medical education and practice and reflected the attributes of graduates of the ROMP TEAP, and their development throughout their professional careers. These traits have been defined under the following categories:

Safety: Works safely within the clinical environment of radiation oncology through the application of evidence‐based practice and risk management in compliance with regulations.
Knowledge: Communicates scientific knowledge effectively and demonstrates skills for the core areas of radiation oncology.
Critical thinking/problem solving: Provides sound radiation oncology medical physics guidance while exercising critical and innovative thinking, problem solving and judgment in a clinical or academic setting.
Communication and teamwork: Communicates and collaborates effectively within a multidisciplinary team ensuring the patient and quality of care is of primary focus.
Patient focused: Practices patient‐centered radiation oncology medical physics with compassion and respect, using ethical and professional values.
Educator: Provides education, training, and supervision to facilitate the functions of the profession.
Continuous Professional Development (CPD): Demonstrates commitment to ongoing life‐long professional development and learning.

In phase 3, scientific content in the ACPSEM clinical training guide (v3.6) was assessed by groups of craft ROMP experts for currency. New, emerging techniques and technology were incorporated, and areas of duplication were removed from the curriculum framework. Each learning outcome in the new curriculum was linked to at least one of the program outcome statements, to ensure it was fit for purpose. In the new curriculum, 10 key areas were identified for ROMP clinical training (see Table ), including one key area assigned to new and emerging technologies. Note that, at the time of writing, there are no proton therapy centers in Australia, and MRI Linacs are extremely limited. As such, they are not considered core curriculum content but have been included as part of the emerging technologies in the Australasian context.

In addition to the curriculum to be studied, registrars are required to complete the following:

(i) Three clinical and scientific reports (CaSRs), which are designed to provide evidence of increasing depth of understanding and clinical involvement. Each report is assessed by an external, independent expert reviewer, and the final report is also assessed via an oral examination.
(ii) Online written examinations covering core medical physics content.
(iii) Presentation at a recognized national or international conference on clinical or research work conducted during the period of clinical training.
(iv) Regular performance reviews throughout the program on training and milestone progress, conducted by an external, independent expert assessor.
(v) Oral and practical examinations for final certification.
All of the assessment methods (i) to (v) were part of the previous curriculum design, with slight modifications to timing or delivery methods.

In designing the ACPSEM ROMP TEAP programmatic assessment model in phase 4, many assessment methods were considered. Some of the resulting core assessment methods can also be considered Structured Learning Activities (SLAs), because feedback from the assessment provides a learning opportunity for the registrar. The SLAs considered, and their definitions, were as follows:

Entrustment Activity – Registrars are given increasing levels of responsibility/trust in an ongoing routine task, coupled with decreasing levels of supervision.
Written Task/Report – A short report outlining the work conducted on a task or understanding of a specific topic.
Oral Assessment – A structured oral assessment interview on a topic.
Multiple Choice Question (MCQ) Activity – An app‐based series of questions (from a question bank) that cover the required fundamental content of a topic.
Practical Assessment – Observation of a practical skill that is not part of routine quality assurance but forms part of the normal skill set for a clinical physicist.

In deciding the SLA that would provide the final evidence for assessment, each member of the assessment committee was asked to vote for the SLA they felt would be most appropriate for each specific learning outcome. For some learning outcomes the group was unanimous in its recommendation of the assessment type. For other learning outcomes there was a spread of responses, indicating different assessment preferences among committee members. This often occurred because some learning outcomes lend themselves to multiple ways of assessment due to the nature of their content.
Final decisions on individual learning outcome assessment methods were discussed with expert educational consultants to determine the fitness‐for‐purpose and alignment of the assessment method to learning outcome, training opportunities, curriculum, and graduate outcomes. Through these sessions, in cases where there was originally no clear consensus, a majority decision could be found that was satisfactory to all committee members. On completion of voting and discussion, the final list of SLAs was mandated as the learning activity (with associated assessment type) to be used for each learning outcome.

The final ACPSEM curriculum framework contained 79 different learning outcomes, which were further broken into clarifying elements where required. For 73 learning outcomes, the breakdown of SLAs agreed by the committee was 23 written tasks/reports, 21 oral assessments, 12 MCQ activities, 12 entrustment activities, and five practical assessments. There were also six additional learning outcomes tied to online learning modules with automated online assessment. Supplement provides the full list of learning outcomes, elements, and their associated SLAs in the final curriculum.

The aim of the SLAs was to ensure that all registrars are assessed in the same way and receive the same learning opportunities. To aid successful implementation of the programmatic assessment model and the standardization of assessment, assessment rubrics were created as a tool for assessors for each assessment type in the model. Using rubrics as a marking aid assists in providing feedback to registrars on their learning across all domains, as well as providing registrars with clear descriptions of the expected standard. All rubrics consisted of a three‐point scale (Falls Short / Meets / Exceeds), describing what a learner needs to demonstrate in order to meet the required minimum expectations for the learning outcome.
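The three‐point rubric decision rule described above can be sketched as a small data structure. This is an illustrative sketch only: the dictionary layout and function names are assumptions, not part of the ACPSEM templates, though the category names and the "Meets Expectations in every category" rule follow the text.

```python
# Minimal sketch of the three-point rubric decision rule (illustrative only).
GRADES = ("Falls Short", "Meets", "Exceeds")

CATEGORIES = (
    "Ability to perform practical tasks",
    "Clinical medical physics judgment and responsibility",
    "Demonstrates critical and thorough scientific thinking",
    "Application of relevant theory to clinical situations",
    "Communication",
)

def is_competent(assessment: dict) -> bool:
    """Competent only if every assessed category is at least 'Meets'."""
    return all(grade in ("Meets", "Exceeds") for grade in assessment.values())

def ready_for_higher_entrustment(assessment: dict) -> bool:
    """For entrustment activities, all-'Exceeds' suggests readiness to step up."""
    return all(grade == "Exceeds" for grade in assessment.values())

# A practical activity may omit Communication from the grading scheme,
# as in the example rubric described in the text.
practical = {c: "Meets" for c in CATEGORIES if c != "Communication"}
print(is_competent(practical))
practical["Ability to perform practical tasks"] = "Falls Short"
print(is_competent(practical))
```

Encoding the rule this way makes the "all categories must meet the standard" logic explicit: a single Falls Short in any category is enough to withhold the competent outcome.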
The rubrics have been designed to be useful for a range of assessment types, to unify the language used in assessment, and to assess a range of skill sets so that they are usable throughout the whole program. All work should be marked against a set of criteria covering both cognitive and behavioral domains of learning, selected from the following categories:

‐ Ability to perform practical tasks
‐ Clinical medical physics judgment and responsibility
‐ Demonstrates critical and thorough scientific thinking
‐ Application of relevant theory to clinical situations
‐ Communication

Table shows an example assessment rubric for a practical activity. In this instance, communication is not used as part of the grading scheme. Registrars are required to achieve the standard, Meets Expectations, in each category to be considered competent. For entrustment‐based activities, which are graded over several levels of trust/responsibility, a registrar falling in the Exceeds Expectations category may be considered ready to move to a higher level of entrustment. Templates for assessment incorporating the assessment rubrics were created, and supervisors were encouraged to use these templates to ensure that they are assessing with the methods prescribed in the programmatic assessment model.

DISCUSSION

The project to update the training program for Australasian radiation oncology medical physicists was planned in four phases. These phases covered the overall intended outcome of the program, the content of the curriculum, meaningful assessment, and implementation. The key aim of the project was the inclusion of programmatic assessment, with structured learning to provide evidence for assessment. In addition, the training was broken into three clear stages, Stage A (Foundation), Stage B (Core), and Stage C (Consolidation), with each stage anticipated to take 12 months. Progression between stages can occur at other times, depending on different factors.
Within each stage, there are:

‐ Hurdle requirements, which must be completed before the registrar is eligible for progression.
‐ Training and assessment evidence requirements, which must be collated in each stage.
‐ Ad hoc learning opportunities, which are not mandatory. Examples of ad hoc learning opportunities include (but are not limited to): tutorials (in‐house, online, and via workshops), patient case studies, departmental projects (e.g., commissioning) and non‐routine quality assurance, informal discussions (with supervisors, trainers, registrars, other multidisciplinary staff, or patients), and presentations to physicists, registrars, other multidisciplinary staff, or patients.
‐ Structured Learning Activities (SLAs), which are mandatory. These are specifically mapped to learning outcomes, and satisfactory completion of SLAs (along with any ad hoc learning opportunities) allows the registrar to attain the skills stated in a learning outcome.

Figure illustrates the diagrammatic summary of the process from enrollment (prior to Stage A) through to final certification (at Stage C completion). Hurdle requirements are denoted with an asterisk. Note that the proportion of SLAs to be completed in each stage is spread across the key areas, ensuring that registrars gain knowledge in all disciplines.

Progression from Stage A to B, Stage B to C, and Stage C to completion (Certification) is a high‐stakes decision made by a progression committee. The committee comprises an ACPSEM training coordinator and representatives from the Radiation Oncology Certification Panel who are experienced in training and have extensive knowledge of the ACPSEM TEAP requirements. They review all submitted evidence of training and assessment to make an informed decision on registrar progress in the program. Registrars have flexibility in the attainment of learning outcomes, especially in the order in which they are undertaken. This recognizes the variation in training center programs and contexts.
Management of the curriculum and registrar learning is via an online Learning Management System (LMS). The system allows registrars to upload training evidence, find resources, and keep records of their progress. In addition, supervisors and other medical physicists who perform assessments can provide records of assessment (including assessment templates) and feedback given to the registrar. An important element of education is the key requirement of providing meaningful feedback and encouraging communication between supervisors and registrars. This allows registrars to enhance their learning and demonstrate growth from that feedback.

In each stage of the ACPSEM TEAP, registrars must complete a range of activities and milestones, including independent external periodic progress reviews (PPRs). These reviews are conducted by trained assessors who monitor progress via an interview with the registrar and their supervisor. The registrar is asked a range of questions on the learning outcomes they have completed, to assess their understanding and competence. Supervisors are also given the opportunity to discuss their work with the registrar and any difficulty or success they are having. The outcome of the PPR is graded against the same behavioral and cognitive assessment rubrics, and the interval to the next review is based on overall performance. Registrars who perform below expectations for their time in the program are reviewed at either 3‐ or 6‐month intervals. Similarly, registrars who are performing at or above expectations for their time in the program are reviewed at 9‐to‐12‐month intervals.

The new program was released for enrolment in July 2022. At that time, registrars who had been in the previous program (v3.6) for less than 12 months were given the option to transfer to the new program (ROMP2022).
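The performance-dependent review scheduling described above is a simple decision rule, sketched below for illustration. The function name, the grade labels, and the representation of each interval as a (min, max) range are assumptions; the text does not specify how the choice within each range (e.g., 3 versus 6 months) is made, so that is left open here.

```python
# Illustrative sketch of the PPR scheduling rule: below-expectation
# performance shortens the interval to the next review.
def next_ppr_interval_months(performance: str) -> tuple:
    """Return the (min, max) months until the next progress review."""
    if performance == "below expectations":
        return (3, 6)     # reviewed at either 3- or 6-month intervals
    if performance in ("meets expectations", "exceeds expectations"):
        return (9, 12)    # reviewed at 9-to-12-month intervals
    raise ValueError(f"unknown performance grade: {performance!r}")

print(next_ppr_interval_months("below expectations"))    # (3, 6)
print(next_ppr_interval_months("meets expectations"))    # (9, 12)
```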
Their progress against the new curriculum was determined, and they were assigned recognition of prior learning (RPL) for completed learning outcomes. Across Australia and New Zealand, as at July 2022, there were approximately 80 registrars in the ROMP TEAP, of whom 65% were enrolled in ROMP2022. During the current transition phase, all registrars remaining in the v3.6 program will be supported through to completion. All registrars, regardless of program, will use the same rubric marking templates for the external assessments and examinations.

The program has now been in place for 18 months, and initial feedback indicates that registrars are responding well to the change in assessment methods and the increased feedback, and are progressing through the program in a timely manner. Outreach and training on programmatic assessment has centered on the clinical medical physicists who supervise registrars in their departments. The ACPSEM has provided significant training in the form of webinars, presentations at conferences, and feedback through PPRs to assist supervisors with the transition. The largest concern noted has been the increased workload for supervisors, who are now required to provide more extensive feedback to registrars than previously. Likewise, using rubrics effectively is a skill that many are still adapting to. The ACPSEM is committed to providing ongoing training to its members in giving high-quality feedback to registrars and in the skills required for conducting assessment (e.g., oral assessment), and to ensuring that equitable training opportunities and standards are maintained. Future work will assess any further training gaps that need to be addressed, as well as the success of the project overall. The new curriculum appears to be meeting the current needs of the profession, and the introduction of key area 10 (Advanced Technologies) has given the curriculum a degree of dynamic flexibility.
There are plans to recognize the increasing use of artificial intelligence in radiation oncology by providing a dedicated space for it in key area 10 in the near future.

CONCLUSION
Previously, assessment of TEAP learning outcomes, as well as the final assessment, left the way in which work was assessed and marked to the discretion of the supervisor or examiner. Now, templates and clear rubrics provide a well-defined indication of the level expected, while also giving the learner feedback on where they may have strengths or weaknesses in the work. Registrars had previously borne the burden of proof of their competency in topic areas, having to assemble portfolios of work that may or may not have displayed competence. Now that burden is shared with supervisors, who provide rich feedback on the work, areas for improvement, and notes taken during observations or practical skills, as well as records of oral assessments and other collaboratively assessed work.

All listed authors have contributed to the intellectual content, design of the work undertaken, and interpretation of data, as well as approval of the published version. The authors have no conflicts of interest to declare.
Can a low-threshold check-up motivate older adults to schedule a dental visit? Study protocol for a randomized controlled trial

Note: the numbers in curly brackets in this protocol refer to SPIRIT checklist item numbers. The order of the items has been modified to group similar items (see http://www.equator-network.org/reporting-guidelines/spirit-2013-statement-defining-standard-protocol-items-for-clinical-trials/).

Background and rationale {6a}
Globally, the population is aging. In 2022, almost 10% of the global population was aged 65 or older. Europe and Northern America accounted for the largest proportion of older individuals in 2022, and projections indicate a rise to 22.0% in 2030 and 26.9% in 2050 in these regions. Belgium ranks among the top European countries with the highest proportion of individuals aged 85 and older within its population. Oral disorders are among the main drivers of disability in people aged 70 and above. Given the cumulative nature of oral conditions, older adults experience higher levels of tooth loss than their younger counterparts. Furthermore, they present with high levels of untreated oral disease. This not only contributes to poor overall health but also negatively impacts their quality of life and general well-being. Regular dental attendance plays a pivotal role in the early diagnosis and effective treatment of oral diseases. Nevertheless, regular dental attendance is lower in older than in younger age groups. Moreover, the frequency of visits to a family doctor is negatively associated with dental attendance. The main reported reasons among older adults for not seeking dental care are lack of awareness of its importance, edentulousness, perceived costs, logistical challenges in accessing dental services, dental anxiety, and negative prior experiences.
To improve dental attendance in older adults, health services research to reduce these barriers is needed; this aligns with the expressed needs of the target group themselves. To our knowledge, the impact of a dental screening in a setting belonging to older adults' familiar environment on future contact with a dental professional has not yet been examined.

Objectives {7}
The objective of this study is to examine the effect of a low-threshold dental check-up in a non-dental setting among community-dwelling older adults (≥ 65 years of age) on contacting a dentist. The intervention will include an oral examination with tailored information on oral health issues. Participants will be informed about the importance of regular dental visits and will be given referral letters for the dental professional and the family physician. Participants will also receive informational flyers about oral hygiene and, in case they do not have a regular dentist, a list of nearby dentists. This will be compared to a control group, which will only receive the informational flyers and the list of nearby dentists.

Trial design {8}
A randomized, controlled, single-blinded, superiority trial with two groups will be conducted using a 1:1 allocation ratio. To avoid imbalance between groups, blocked randomization with blocks of 8 to 12 people will be used. The protocol was written following the SPIRIT guidelines.
Study setting {9}
The study will be conducted in a non-dental setting, at a location familiar to older adults, within two primary care regions (ELZ RITS and ELZ Scheldekracht) in Flanders, Belgium.

Eligibility criteria {10}
Interested individuals will be included if they (a) are 65 years of age or older, (b) are community-dwelling within the two selected primary care zones in Flanders (Belgium), (c) are Dutch-speaking, (d) did not have a dental check-up in the last 12 months, and (e) have sufficient cognitive ability to answer the questionnaires. Older adults whose partner is already enrolled in the study will be excluded. Community-dwelling refers to anyone who does not reside 24/7 in a residential care facility.

Who will take informed consent? {26a}
The informed consent forms {32} were approved by the Medical Ethics Committee affiliated with Ghent University Hospital. The participants will be encouraged to read the informed consent forms thoroughly and discuss them with the researcher. After all questions have been answered and upon agreement, the participants will be asked to sign the forms.

Additional consent provisions for collection and use of participant data and biological specimens {26b}
The informed consent form requests the participant's permission to use his/her pseudonymized data for future scientific research in the same research domain.
Explanation for the choice of comparators {6b}
The intervention will be compared to non-specific oral health information. Upon completion of the questionnaire, which is identical to that of the intervention group (as discussed later in Sect. 18a), the control group will be provided with basic guidance on oral hygiene via flyers and, in case they do not have a regular dentist, a list of local dentists. This comparator was chosen because this information, which is readily available online, can be provided to older adults without the involvement of a dental professional.

Intervention description {11a}
The participants allocated to the intervention group will receive an oral examination performed by the project-affiliated dental researchers. This will involve an assessment of oral hygiene, including the presence of plaque on teeth, tongue, and dentures, as well as the presence of food debris in the oral cavity. Participants will also be inspected for mucosal lesions.
For dentate participants, the number of teeth and the presence of caries, fillings, or crowns will be recorded. The severity of caries will be evaluated using the PUFA score. Periodontal status will be assessed by examining the mobility of natural teeth and by the Dutch Periodontal Screening Index (DPSI). Additionally, the presence of removable and fixed dentures and the number of occlusal contacts (with dentures present) will be noted. Examiners will use a head lamp, a mouth mirror (Henry Schein 900,748 and 9,009,470), and a periodontal probe (CyberTech C900-3456) for the oral examination. No X-rays will be taken, because this would not be feasible if the intervention were to be upscaled. Next, verbal information about any identified oral problem will be given. Finally, participants in the intervention group will receive a referral letter for a dental professional and a report for their family doctor (Appendix 1). All participants will receive a flyer with oral hygiene instructions adapted to their needs (natural teeth and/or dentures). These flyers are evidence-based brochures compiled by the Flemish Institute of Oral Health ("Gezonde Mond") on performing good oral hygiene (Appendix 2). Participants without a regular dentist will receive a list with contact information of dentists in the area.

Criteria for discontinuing or modifying allocated interventions {11b}
If a participant is randomized into the intervention group but refuses the oral examination, this will be noted. Participants allocated to the control group who request an oral examination will be advised to contact a dental professional; this will also be noted.

Strategies to improve adherence to interventions {11c}
Participants are not required to perform any actions independently; all procedures will be conducted in collaboration with the researcher. This approach ensures that participants do not need to initiate actions on their own.
Consequently, no strategies to improve adherence are pre-established.

Relevant concomitant care permitted or prohibited during the trial {11d}
All concomitant care is permitted during the trial.

Provisions for post-trial care {30}
N/A; no disadvantages are expected for the participants. However, they are informed of the possibility to contact the researchers if ancillary or post-trial care is needed.

Outcomes {12}
The primary outcome is whether or not the participant contacts a dental professional within four months after the intervention (yes/no). In Flanders (Belgium), there is an increasing shortage of dental professionals; as a consequence, many dental practices do not accept new patients or have long waiting lists. The outcome is therefore contact with a dental professional rather than an actual dental appointment. Differences in proportions between the intervention and control group at timepoint 1 will be reported. The secondary outcomes are self-reported brushing frequency in comparison to the norm and changes in self-reported use of brushing materials.

Self-reported brushing frequency in comparison to the norm
The number of brushing episodes achieved by participants will be analyzed relative to the expected norm, i.e., the recommended weekly brushing frequency outlined in the flyer provided to all participants. For individuals with natural teeth and no removable dentures, the norm is twice daily (14 times per week). For those without natural teeth but with removable dentures, the norm is once daily (7 times per week). For participants with both natural teeth and removable dentures, the combined norm is 21 brushing episodes per week. Brushing frequency will be calculated separately for natural teeth and dentures, based on questionnaire responses categorized as follows:
(1) Once per week or less: 1 brushing episode per week
(2) Less than once per day: 3.5 brushing episodes per week (midpoint centering)
(3) Once per day: 7 brushing episodes per week
(4) Twice per day: 14 brushing episodes per week
For example, a participant with both natural teeth and removable dentures who brushes their natural teeth once daily (7/14) and their dentures once daily (7/7) achieves a brushing ratio of 0.67 (14/21). If a participant exceeds the expected brushing norm, the maximum score of 1.0 will be assigned.

Changes in self-reported use of brushing materials
Among participants with removable dentures, the self-reported use of a denture brush will be evaluated. This variable will be categorized as follows: "no change" for participants whose self-reported denture brush usage remained consistent between T0 and T1; "improvement" for participants who did not report using a denture brush at T0 but reported its use at T1; and "deterioration" for participants who reported using a denture brush at T0 but not at T1. Similarly, the self-reported use of hand soap or denture cleanser will be evaluated among participants with removable dentures, with the same categorization of "no change," "improvement," and "deterioration" between T0 and T1.

Participant timeline {13}
The participant timeline is shown in the figure.

Sample size {14}
G*Power (version 3.1.9.2) was used to calculate the sample size. To the best of our knowledge, this type of intervention has not yet been conducted within the target population. Therefore, expert opinion was used to estimate that the intervention would activate 30% of participants in the intervention group. In the control group, a maximum of 10% is expected to contact a dentist.
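The brushing-ratio computation described above under "Self-reported brushing frequency in comparison to the norm" can be sketched in code. This is an illustrative helper under our own naming (`brushing_ratio` and `CATEGORY_TO_EPISODES` are hypothetical, not part of the protocol's analysis plan); it assumes questionnaire responses have already been mapped to the weekly episode counts of categories (1)-(4) above.

```python
# Questionnaire categories mapped to weekly brushing episodes
# ("less than once per day" uses midpoint centering).
CATEGORY_TO_EPISODES = {
    "once per week or less": 1.0,
    "less than once per day": 3.5,
    "once per day": 7.0,
    "twice per day": 14.0,
}

def brushing_ratio(teeth_category=None, denture_category=None):
    """Ratio of reported weekly brushing episodes to the recommended norm
    (14/week for natural teeth, 7/week for dentures, 21/week for both),
    capped at 1.0 when a participant exceeds the norm."""
    norm = 0.0
    achieved = 0.0
    if teeth_category is not None:    # participant has natural teeth
        norm += 14.0
        achieved += CATEGORY_TO_EPISODES[teeth_category]
    if denture_category is not None:  # participant has removable dentures
        norm += 7.0
        achieved += CATEGORY_TO_EPISODES[denture_category]
    if norm == 0.0:
        return None  # neither teeth nor dentures: no norm applies
    return min(achieved / norm, 1.0)

# Worked example from the text: teeth once daily (7/14), dentures once daily (7/7)
ratio = brushing_ratio("once per day", "once per day")  # 14/21 ≈ 0.67
```

For the worked example in the text, this yields 14/21 ≈ 0.67; a participant brushing both teeth and dentures twice daily (28/21) would be capped at 1.0.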
Using logistic regression, with α = 0.05 and 1 − β = 80%, to detect a difference of 20% with equal allocation to both groups, a sample size of 129 persons is required. To allow for 33% drop-out (due to the advanced age and potential frailty of the participants, or due to an incorrect phone number or unanswered calls), 194 persons will be recruited.

Recruitment {15}
All service centers and Social Welfare Offices within the selected primary care regions will be contacted to participate in the study. Together with the interested centers, dates will be selected to perform the study in their facilities. Organizations within these primary care regions and other local initiatives focusing on social activities or care for older people will also be contacted to spread the call for participation in the study. Furthermore, residents of assisted living facilities will be contacted. Home care service organizations in the region have agreed to assist with participant recruitment.
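As a rough cross-check of the sample size above, one can use a simple two-proportion normal approximation. Note that this is not the G*Power logistic-regression method the protocol used, so the result is close to, but not identical to, the protocol's 129; the helper name `n_per_group` is our own.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Per-group sample size for comparing two proportions
    (two-sided test, normal approximation, unpooled variances)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # ≈ 1.96
    z_beta = z.inv_cdf(power)           # ≈ 0.84
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

n = n_per_group(0.30, 0.10)       # 59 per group, 118 in total
# Inflating the protocol's total of 129 for 33% drop-out:
recruit = ceil(129 / (1 - 0.33))  # 193, which the protocol rounds up to 194
```

The normal-approximation check lands in the same range (118 total before drop-out versus the protocol's 129), and inflating 129 for 33% drop-out gives about 193, consistent with the 194 recruited.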
Sequence generation {16a}
A randomization list (computer-generated by RC) will be processed in REDCap by a HIRUZ staff member. HIRUZ is the Clinical Research Centre of Ghent University Hospital and Ghent University. Randomization will be stratified by frailty status, based on the outcome of the Groningen Frailty Indicator included in the initial questionnaire. Random permuted blocks of variable size will be created using SAS v9.4 so that the treatment allocation cannot be predicted.

Concealment mechanism {16b}
Participants will be randomly assigned to either the control or the intervention group with a 1:1 allocation by the REDCap program. Screeners will not have access to this list. Allocation concealment will be ensured, as the REDCap program will not release the randomization code until the questionnaire is completed.

Implementation {16c}
A randomization list (prepared by RC) will be processed in REDCap by a HIRUZ staff member. Screeners will enroll participants, but they will not have access to this list.
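The permuted-block scheme described in {16a}, with the 8-to-12-person blocks mentioned under the trial design, can be sketched as follows. This is a Python illustration under our own assumptions (the actual lists were generated in SAS v9.4 and served through REDCap); with 1:1 allocation, only even block sizes (8, 10, 12) keep each block balanced.

```python
import random

def permuted_block_list(n_min, block_sizes=(8, 10, 12), seed=None):
    """1:1 allocation list built from randomly chosen even-sized blocks,
    so the next assignment cannot be predicted from earlier ones.
    Full blocks are kept, so the list may slightly exceed n_min."""
    rng = random.Random(seed)
    allocations = []
    while len(allocations) < n_min:
        size = rng.choice(block_sizes)
        block = ["intervention"] * (size // 2) + ["control"] * (size // 2)
        rng.shuffle(block)
        allocations.extend(block)
    return allocations

# One list per frailty stratum (GFI >= 4 vs. GFI < 4), e.g. 97 planned per stratum:
lists = {stratum: permuted_block_list(97, seed=i)
         for i, stratum in enumerate(["frail", "non-frail"])}
```

Because only complete, balanced blocks are emitted, each stratum's list always contains equal numbers of intervention and control slots.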
Who will be blinded {17a}
This study is a single-blinded trial. At timepoint 0, after completion of the questionnaire, the allocation will be revealed to the researcher. No information will be given to the participants in the control group about the oral examinations in the intervention group. Participants in the intervention group will be told that, because there is time to spare, they will also receive an oral examination. At timepoint 1, the participants will be contacted again by a different researcher, blinded to the actual allocation of the participants.

Procedure for unblinding if needed {17b}
N/A; no procedure for unblinding is needed, as participants remain blinded to their allocation.
At timepoint 1 the participants will be contacted again by a different researcher, blinded to the actual allocation of the participants. N/A, there is no procedure for unblinding needed, participants remain blinded to their allocation. Plans for assessment and collection of outcomes {18a} Following the provision of informed consent, all participants will receive a questionnaire. This questionnaire will be administered through a structured interview. The questionnaire consists of five sections. The first section addresses general participant information, including date of birth, gender, education, income, place of residence, living arrangements (alone or with others), and whether the participant receives assistance from a home care nurse. Part 2 assesses the participant's frailty utilizing the Groningen Frailty Indicator, a validated and multidimensional screening tool . The tool consists of 15 questions addressing physical, cognitive, social, and psychological domains. The score, ranging from 0 to 15, reflects increasing limitations, with a score of ≥ 4 serving as the threshold for identifying frailty. The third section inquires about the participant’s family physician and whether the participant has a regular dentist. The fourth section focuses on the dental history, oral status, current issues related to their mouth, teeth or dentures, and oral hygiene practices including the frequency of care and the tools utilized (toothpaste, electric toothbrush, tongue scraper, etc.). They will be asked whether they currently perceive a need for dental treatment and what the reasons behind this perception is. A xerostomia questionnaire is administered in the final Sect. (25). Next, participants will be allocated to either an intervention or control group. The intervention will be executed as described in Sect. 11a. Four months after T0, the participants will be contacted by phone. 
Information on any communication about oral health with a dental professional since the intervention, and the reasons for this interaction, will be gathered. Furthermore, it will be determined whether the participant has discussed this study with their family physician. In addition, participants will be asked once more about self-care practices concerning teeth or dentures, the frequency of care, and the brushing materials used, as well as whether they have reviewed the oral hygiene flyers at home.

Study data will be collected and managed using REDCap electronic data capture tools hosted at Ghent University. REDCap (Research Electronic Data Capture) is a secure, web-based software platform designed to support data capture for research studies, providing (1) an intuitive interface for validated data capture; (2) audit trails for tracking data manipulation and export procedures; (3) automated export procedures for seamless data downloads to common statistical packages; and (4) procedures for data integration and interoperability with external sources. Several validated questionnaires and screening tools will be used. The Groningen Frailty Indicator will be used to determine the level of frailty; the feasibility, reliability, and validity of this tool have been confirmed in previous research. The Dutch version of the Summated Xerostomia Inventory will be used; this is a valid tool for measuring xerostomia symptoms in clinical and epidemiological research. Dental plaque will be evaluated with the Quigley-Hein plaque index, denture plaque will be evaluated according to the method of Augsburger and Elahi, and tongue plaque will be assessed with the Winkel tongue coating index. These are widely used methods for measuring plaque in clinical research and dental practice. To evaluate the severity of untreated dental caries, the PUFA index will be used; the reliability of this index has been proven.
The periodontal condition will be screened using the Miller index for tooth mobility and the validated Dutch Periodontal Screening Index. The study was piloted to examine the duration and feasibility of the intervention and the usability of the software, and to iron out mistakes. Following brief training and calibration sessions provided by ADV, the intraclass correlation coefficient was calculated for all screeners for the oral examination. These coefficients ranged from 0.833 to 0.967, indicating a strong to almost perfect level of agreement. Researchers will be guided by an integrated script in REDCap: questions will be displayed, and reminders will pop up in case of missing data. This will be further reinforced by a mandatory confirmation per section that all questions have been completed. REDCap will automatically generate a calendar for the telephone questionnaire at timepoint 1.

Plans to promote participant retention and complete follow-up {18b}

At the end of the intervention, participants will be reminded that they will receive a phone call (or e-mail, if requested by the participant) four months later. Contact information for a partner or caregiver will also be registered in case the participant does not answer the telephone call. Four attempts will be made on two different days to contact the participant or caregiver: two calls in the morning and two calls in the afternoon. If the participant prefers to be contacted by e-mail, one reminder will be sent one week after the first e-mail. During the piloting phase, it was observed that recruiting participants during social activities and game afternoons is not advisable: this approach leads to early drop-out, as individuals prefer to take part in the ongoing activities rather than commit to the study. Hence, it was decided to minimize the recruitment of participants during this type of event. All participants will receive a pen with the logo of Gerodent PLUS (i.e., the name of the study project) as a gift.
This pen might serve as an additional reminder of the upcoming phone call by the researchers.

Data management {19}

The data will be entered exclusively electronically within the REDCap system. Data quality is ensured within the REDCap platform through an integrated script and multiple measures to guarantee data completeness. The actions that each user can undertake are restricted by the rights associated with their respective accounts. Data collection will end two weeks after the last planned telephone questionnaire. Upon completion of data collection, data will be exported from the REDCap platform for subsequent analysis. These files will be securely stored on servers maintained by Ghent University, accessible only to the members of the research team. For data transfer, a Secure File Transfer platform, Belnet Filesender, will be used.

Confidentiality {27}

Each participant will receive a unique identification number to pseudonymize the data. The corresponding key will be kept on a secured server of Ghent University, exclusively accessible by the members of the research team. The hard-copy informed consent forms will be scanned for electronic storage separately from the study participants' records. Both the collected data and the informed consent forms will be kept for 10 years, as stipulated in the informed consent form. Following the publication of the research findings, raw pseudonymized data can be made available upon request.

Plans for collection, laboratory evaluation, and storage of biological specimens for genetic or molecular analysis in this trial/future use {33}

N/A, no biological specimens will be collected.

Statistical methods for primary and secondary outcomes {20a}

Primary outcome

Our primary estimand is the difference between the two conditions in the proportion of participants who had dental contact in the period from baseline to month 4, regardless of whether they refused the oral examination (i.e., a treatment-policy strategy). A logistic regression analysis will be performed with self-reported dental contact as the outcome. Group allocation (two levels: intervention group and control group) and frailty (two levels: frail and non-frail) will be added as predictors to the model. The exponentiated regression coefficient for group allocation will be interpreted as the intervention effect, expressed as the odds ratio for dental contact, conditional on frailty. To improve interpretability, predicted probabilities will be calculated to estimate the risk difference.

Secondary outcomes

A linear regression analysis will be performed with the percentage of self-reported brushing frequency at T1 relative to the norm as the outcome.
Group allocation (two levels: intervention group and control group), frailty (two levels: frail and non-frail), and the percentage of self-reported brushing episodes relative to the norm at T0 will be added as predictors to the model. A multinomial logistic regression analysis will be performed with change in self-reported brushing materials (i.e., denture brush usage and use of hand soap/denture cleanser) as the outcome, with "no change" as the reference group. Group allocation (two levels: intervention group and control group) and frailty (two levels: frail and non-frail) will be added as predictors to the model.

Interim analyses {21b}

N/A, no interim analyses will be performed.

Methods for additional analyses (e.g., subgroup analyses) {20b}

Exploratory subgroup analyses will be conducted to examine the interaction between group allocation and the following baseline characteristics of interest: age, gender (male vs. female), education (low vs. high), living arrangements (alone vs. with others), receiving assistance from a home care nurse (yes vs. no), frailty status (frail vs. not frail), time since last dental visit, having a regular dentist (yes vs. no), perceived need for a dental visit, and dental status (dentate vs. edentulous), in order to ascertain the beneficiaries of the intervention. For each baseline characteristic of interest, a logistic regression model with the predictor, group allocation, and their interaction will be applied. To improve interpretability, predicted probabilities will be calculated to estimate the risk differences.

Methods in analysis to handle protocol non-adherence and any statistical methods to handle missing data {20c}

If the participant refuses the oral examination, this will be noted, and the participant will be analyzed in the intervention group following the treatment-policy strategy (under the intention-to-treat principle). All participants will be contacted at the stipulated timepoint 1. If direct participant contact cannot be established, the recorded proxy (partner, child, or caregiver) will be approached in an attempt to communicate with the participant. Should direct communication with the participant be infeasible (e.g., due to hospitalization), the questionnaire will be submitted to the proxy. Participants who die during the trial period will be analyzed according to the "while alive" strategy. If a participant remains unreachable, missing data will be addressed by applying multiple imputation per randomization arm. In addition, the imputation model will be improved by including variables related to the missingness and variables correlated with the variables of interest. Predictors in the multiple imputation model will be group allocation, age, gender, education, living arrangements (alone or with others), whether the participant receives assistance from a home care nurse, frailty status, time since last dental visit, whether the participant has a regular dentist, perceived need for a dental visit, and dentate or edentulous status. See Fig. for an overview of our process for handling missing data.

Plans to give access to the full protocol, participant-level data, and statistical code {31c}

Raw pseudonymized data and statistical code will be shared after the publication of the research data.
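To illustrate how the primary analysis described above turns the fitted logistic model into an odds ratio and a risk difference, the sketch below evaluates the model at hypothetical coefficient values; the intercept and slopes are invented for illustration only and are not trial results.

```python
import math

def predicted_probability(intercept, b_group, b_frail, group, frail):
    """Predicted probability of dental contact from the logistic model
    logit(p) = intercept + b_group * group + b_frail * frail."""
    logit = intercept + b_group * group + b_frail * frail
    return 1.0 / (1.0 + math.exp(-logit))

# Hypothetical coefficients (not trial results)
intercept, b_group, b_frail = -1.2, 0.8, -0.3

# Exponentiated group coefficient = odds ratio for dental contact,
# conditional on frailty
odds_ratio = math.exp(b_group)

# Risk difference for non-frail participants:
# p(intervention, non-frail) - p(control, non-frail)
risk_diff = (predicted_probability(intercept, b_group, b_frail, 1, 0)
             - predicted_probability(intercept, b_group, b_frail, 0, 0))
```

Because the model is non-linear, the risk difference depends on the covariate values at which the probabilities are evaluated, which is why the protocol computes predicted probabilities to aid interpretation.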
Composition of the coordinating center and trial steering committee {5d}

Conceptualization, methodology, investigation, formal analysis of the data, writing of the initial draft, review and editing, and the decision to submit for publication will be performed by the research team (ADV, LP, BJ, and RC). Two dental students and two dentists will collect data. The steering committee, consisting of the members of the Gerodent PLUS team, will meet monthly to provide oversight. The REDCap team of Ghent University handles data management. Daily coordination of the trial will be performed by ADV.
Stakeholders in the Gerodent PLUS research project meet twice a year to provide advice and support in the implementation of the project and the dissemination of the results. Members of this group are:

- Vlaams Instituut Mondgezondheid (Gezonde Mond)
- Logo Limburg, on behalf of all Flemish Logo's
- Expertisecentrum Dementie Paradox
- Vlaams Instituut Gezond Leven
- VZW Zorg-Saam ZKJ
- Woonzorggroep GVO
- Vivel
- ELZ Scheldekracht
- ELZ RITS
- Logo Midden West-Vlaanderen
- Zorgband Leie en Schelde

Composition of the data monitoring committee, its role and reporting structure {21a}

Since this study has a low risk of harm, no data monitoring committee was composed.

Adverse event reporting and harms {22}

The risk of adverse events is low. However, participants will be informed and encouraged to contact the research team in case of an adverse event. Contact information is included in the informed consent form. Unexpected harms will be documented based on participants' spontaneous reports. These harms will not be classified or codified using standardized terminology. Should any harms occur, they will be reported in the publication of the study results.

Frequency and plans for auditing trial conduct {23}

N/A, there will be no auditing.

Plans for communicating important protocol amendments to relevant parties (e.g., trial participants, ethical committees) {25}

If modifications to the study protocol are needed, an amendment to the original application will be submitted for approval to the Medical Ethics Committee of the University Hospital Ghent. Approved modifications will also be made public on ClinicalTrials.gov (ID: NCT06341959), and corrections will be sent to this journal. If necessary, a modified informed consent form will be drafted. Decisions to amend will be thoroughly discussed within the study steering committee.
Dissemination plans {31a}

Trial results will be made public to the scientific community and healthcare professionals via conferences, scientific publications, and the university research platform https://research.ugent.be (search term "Gerodent PLUS"). The findings will also be communicated to the general public and policy makers through the Flemish Government, social media, and the different organizations in the stakeholder group (including the Flemish agency for Oral Health, which includes the different health professional associations).

Discussion

Regular dental attendance among older adults is low; therefore, interventions to reactivate this target group into primary care are necessary. This study aims to address this issue by investigating the efficacy of a low-threshold check-up to motivate older adults to schedule a dental visit. In practice, this involves conducting the check-up at a familiar location, without the use of dental chairs or X-rays. The results of this trial have the potential to contribute significantly to knowledge about how to promote dental attendance among community-dwelling older people. If a low-threshold check-up motivates them to schedule a dental visit, it could be an effective tool to reactivate older adults into primary care, potentially resulting in improved oral health.
Existing literature highlights barriers to older adults' access to oral care, several of which will be addressed by the intervention. Firstly, oral health literacy will be enhanced by providing information about participants' oral health status and the importance of regular dental attendance. Secondly, low subjective treatment needs will be addressed by highlighting any existing issues in the oral cavity after the oral screening. Thirdly, an attempt will be made to overcome possible dental anxiety by performing the screening in a non-dental setting, without a dental chair or white coats. Finally, the lack of awareness of oral health among healthcare professionals will be addressed by involving the family physician, as a trusted healthcare professional, to provide additional motivation for older adults to visit the dentist.

This study has a number of limitations that should be acknowledged. First, our intervention will not tackle the barrier of logistical challenges to dental attendance. Second, administering the questionnaire by telephone at timepoint 1 might be challenging for participants; therefore, we will deliberately keep these questions short and straightforward. Third, we will provide referral letters to participants for their general practitioners and dentists; however, we will have no means to verify whether participants have actually delivered these letters to the intended recipients.

A pilot study was conducted, after which the wording of some questions was simplified. It was also observed that approaching individuals during social events, such as game afternoons, yielded minimal engagement: older adults prefer participating in the event itself and lack the time and interest for the study. However, engaging older adults in their assisted living residences proved effective; they feel comfortable in their own homes and take the time to participate in the study.
The next step will involve organizing and conducting the screening followed by administering the subsequent telephone questionnaire, scheduled from April 2024 to March 2025. To date, a total of 147 participants have been included in the study. Additionally, 60 participants have been contacted four months after enrollment; of these, nine individuals could not be reached. In conclusion, the results of this study could help in designing and implementing an evidence-based intervention for community-dwelling older adults, who are currently often neglected in oral health promotion programs. This intervention might consist of simply distributing informational brochures, or it might involve a more personalized approach with one-on-one conversations and oral examinations. It is also possible that the results of this study will indicate that these approaches are insufficient and that it will be necessary to focus on the pre-intentional phase and place greater emphasis on determinants beyond knowledge, namely self-efficacy and outcome expectations. Trial status Recruitment began in April 2024 and will continue until the required sample size is achieved; this is estimated to be in March 2025. Version 1.3, 22/12/2024. Additional file 1. Additional file 2. Letter for dentist. Letter for family physician. Flyers with oral hygiene instructions (future use of the materials of Gezonde Mond by others is permitted, provided that the logo of Gezonde Mond is included).
Efficacy and safety of anti-obesity herbal medicine focused on pattern identification: A systematic review and meta-analysis

Obesity is defined as excessive fat accumulation that presents a risk to health. Worldwide, 39% of adults were estimated to be overweight in 2016, and the number has risen steadily in recent years. Obesity-related health consequences follow this growth in prevalence, because obesity is a major risk factor for non-communicable diseases such as cardiovascular disease, diabetes, musculoskeletal disorders, and even some cancers. In Traditional Chinese Medicine (TCM), obesity is attributed to dampness, phlegm, blood stasis, heat accumulation in the stomach and spleen, qi deficiency, spleen deficiency, and yang deficiency. Hence, treatments are recommended according to the patient's unique physical and environmental factors. Herbal medicine is a common intervention for obesity in TCM and is prescribed according to individual characteristics using pattern identification (PI). PI is a distinctive feature of diagnosis and treatment in TCM: even within the same disease, the pathogenic mechanism can differ, which means that the treatment should also differ. Various symptoms are identified and categorized to determine the pathogenesis and mechanism of the disease; this process is called pattern identification. Applying PI to treatment enables more precise therapy. In Korea, the 6 PI types developed by the Korea Institute of Oriental Medicine are widely used in treating obesity: phlegm, food retention, blood stasis, liver-qi stagnation, deficiency of yang, and deficiency of spleen. Previous reports have suggested that herbal medicine based on PI can be an effective and safe approach to weight reduction, and many attempts have been made to determine differences in the characteristics of obese patients according to PI.
Furthermore, it has been reported that herbal medicine with an unmatched PI can lead to more frequent adverse events (AEs), independent of its weight-loss effect. In recent years, interest in precision medicine has grown across many fields of study. PI can be an attractive option for individualized medicine because it categorizes patients and guides treatment selection according to their symptoms, pathogenesis, and treatment responses. The association between PI and precision medicine has been reported, and the potential of PI as a form of precision medicine for diseases other than obesity is being actively studied. The efficacy and safety of acupuncture using PI for sleep disorders have been reported, and treatments for the common cold and COVID-19 using PI have been studied. Many studies have suggested the potential of PI as a route to individualization in modern medicine, where it could contribute to both clinical and pharmacological research. PI can aid clinical trial design in that researchers can anticipate responsive or non-responsive cases, so that the patients most appropriate for the intervention can be chosen. Furthermore, PI allows the results of applying a new drug to patients with a specific PI type to be anticipated by comparing the results of similar older drugs in patients with the same PI type. At present, few studies focused on PI have been published since the early 2010s. Most studies that reviewed the effectiveness of herbal medicine for obesity did not focus on PI or its efficacy; those articles reported only the mechanisms or specific targets relevant to obesity treatment, such as appetite or adipose tissue growth. The effectiveness and safety of several single herbs have been reported, but these reviews were limited to a few selected medicinal plants or to studies whose subjects were not human.
It was also reported that mahuang and ephedrine at appropriate doses were effective in reducing weight safely compared to control groups. On the other hand, those studies reviewed only single herbs and did not cover common herbal medicines consisting of various herbs. A few reports have studied herbal medicines composed of various herbs based on TCM theory, but those studies also had limitations. One systematic review and meta-analysis reported that bangpoongtongsung-san (BTS) and taeeumjowi-tang (TEJWT) showed positive results for body weight (BW), body mass index (BMI), and waist circumference (WC) without severe AEs, but it focused on only one selected herbal formula. A review of herbal medicines containing several herbs was also reported, but it did not conduct a meta-analysis, so the effect size of herbal medicine could not be estimated. It is difficult to clarify whether herbal medicine based on PI is more effective and safer because of the small number of studies designed according to PI. However, it is meaningful to focus on PI and evaluate the efficacy and safety of herbal medicine, considering that PI is the most basic and widely used method for determining the direction of treatment in obesity. Therefore, this study reviewed randomized controlled trials (RCTs) of herbal medicine based on PI and attempted to identify the characteristics of PI and the results of herbal medicine with PI. This study evaluated the effectiveness and safety of herbal medicine in obesity with a focus on PI. This review was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The protocol of this study was registered on PROSPERO (CRD42021271425; available from: https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=271425 ). 2.1.
Electronic searches and search strategy The search was performed in August 2021 in the following eight electronic databases: PubMed (MEDLINE), Cochrane Central Register of Controlled Trials, EMBASE, China National Knowledge Infrastructure (CNKI), CiNii, KoreaMed, Science-on, and Oriental Medicine Advanced Searching Integrated System (OASIS). The following terms were searched: obesity (including a MeSH search using "obes*" OR "weight gain*" OR "weight loss" OR "body mass ind*" OR "adipos*" OR "overweight" OR "weight reduc*" OR "weight maint*" OR "weight decreas*" OR "weight control*") and herbal medicine (including a MeSH search using "herbal medicine" OR "plants, medicinal" OR "medicine, traditional" OR "drugs, Chinese herbal" OR "medicine, Korean traditional" OR "medicine, Kampo" OR "traditional Chinese medicine" OR "plant extracts"). Supplement 1 presents the specific search terms for each database. The searches were conducted in each electronic database's supported language, and there was no language restriction in the search strategy. After the literature search, all duplicate articles were excluded. The titles and abstracts of all studies were examined, and irrelevant articles were excluded. Finally, the full-text articles were reviewed for relevant RCTs. To increase sensitivity, "pattern identification" was not included in the search terms; instead, all studies using PI were included after the first screening by the researcher. 2.2. Eligibility criteria and study selection The titles and abstracts of the articles were examined, and only potentially relevant studies were selected. Only studies that recruited participants according to a selected PI in advance, before the intervention was applied, were included. 2.2.1. Types of studies. RCTs with parallel-group designs were included. Non-RCTs, including mechanism studies, non-controlled studies, case reports, feasibility studies, and reviews, were excluded. 2.2.2. Types of participants.
Obese participants with a BMI over 25 kg/m² were included. Participants under 18 years were excluded, as were participants with complications or secondary obesity. 2.2.3. Types of intervention. RCTs that examined the effects of herbal medicine based on PI were included. Only herbal medicine prescribed based on the theory of TCM was included; herbal medications without a proper prescription, or mere mixtures of several herbs without TCM theory, were excluded. There were no limits on the form of the herbal medicine, such as decoction, capsule, tablet, pill, powder, or extract. Studies involving herbal medicine combined with other therapies as the experimental intervention were excluded. Herbal medicine with lifestyle changes, including dietary or exercise modification, was included if the modifications were applied to both groups. The control interventions included placebo, usual care, other medication, and management of dietary or exercise habits. Other medication referred to Western oral medication, such as orlistat or liraglutide, and did not include operative methods. Usual care meant lifestyle management, including dietary modification or increased physical activity. 2.2.4. Types of outcome measurements. The primary outcomes were BW and BMI. Additional outcomes were WC, hip circumference (HC), waist-hip ratio (WHR), and AEs. 2.3. Data extraction The following data were extracted by 2 reviewers (SHP, DHK): study design, sample size, characteristics of participants, intervention, comparators, treatment duration, outcome measurements, AEs, and information for an assessment of study quality. Missing data or queries were followed up with the original authors via email, if needed. 2.4. Assessment of risk of bias The risk of bias assessment was performed using the "risk of bias" tool from the Cochrane Collaboration.
The tool consisted of 7 domains: sequence generation, allocation concealment, blinding of participants and personnel, blinding of outcome assessors, incomplete outcome data, selective outcome reporting, and other bias. The risk of bias for each domain was rated as "low risk," "high risk," or "unclear risk." Two reviewers (SHP and DHK) assessed the risk of bias independently. When there were any disagreements or discrepancies, a third reviewer (HK) made the final decision. 2.5. Summary measures and synthesis of results The Review Manager software for Windows (RevMan ver. 5.3; Copenhagen: The Nordic Cochrane Centre, The Cochrane Collaboration, 2014) was used for data synthesis. The meta-analysis and the evaluation of the risk ratio (RR) or standardized mean difference (SMD) were performed. A random-effects model with 95% confidence intervals was used to calculate the pooled effect size estimates. The I² statistic was used to evaluate heterogeneity, which was interpreted using a rough guide as follows: an I² value from 0% to 40% might not be important; 30% to 60% may represent moderate heterogeneity; 50% to 90% may indicate substantial heterogeneity; and 75% to 100% indicates considerable heterogeneity. Considering the heterogeneity, subgroup analysis was conducted according to the different comparisons. Furthermore, funnel plots were used to assess publication bias when more than 10 studies were identified in the meta-analysis. 2.6. Assessment of quality of evidence for each outcome The GRADEpro Guideline Development Tool ( https://community.cochrane.org/help/tools-and-software/gradepro-gdt , version 3.6) was used to assess the quality of evidence for each outcome across the studies. A "Summary of findings" table was generated using GRADEpro GDT software (available at https://www.gradepro.org ) and imported into the review.
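The random-effects pooling and I² heterogeneity assessment described in section 2.5 can be illustrated with a short sketch. This is a hypothetical DerSimonian-Laird implementation for mean differences, shown only to make the calculation concrete; RevMan 5.3 was used for the actual synthesis.

```python
import math

def random_effects_meta(effects, variances):
    """DerSimonian-Laird random-effects pooling with Cochran's Q and I^2.

    effects: per-study mean differences; variances: their squared standard errors.
    Returns (pooled estimate, 95% CI, I^2 in percent).
    """
    # Fixed-effect (inverse-variance) weights and estimate
    w_fixed = [1.0 / v for v in variances]
    sum_w = sum(w_fixed)
    fixed = sum(w * e for w, e in zip(w_fixed, effects)) / sum_w
    # Cochran's Q: weighted squared deviations from the fixed-effect estimate
    q = sum(w * (e - fixed) ** 2 for w, e in zip(w_fixed, effects))
    df = len(effects) - 1
    # Between-study variance (tau^2), truncated at zero
    c = sum_w - sum(w ** 2 for w in w_fixed) / sum_w
    tau2 = max(0.0, (q - df) / c)
    # I^2: proportion of total variation attributable to heterogeneity
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    # Re-weight with tau^2 added and pool
    w_rand = [1.0 / (v + tau2) for v in variances]
    pooled = sum(w * e for w, e in zip(w_rand, effects)) / sum(w_rand)
    se = math.sqrt(1.0 / sum(w_rand))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2
```

An I² of 0% from identical study estimates would fall in the "might not be important" band of the guide above; larger spread between studies raises Q, tau², and I² accordingly.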
The quality of evidence was described as "high," "moderate," "low," or "very low" using the GRADE framework, applied to all primary and additional outcomes. The risk of bias, indirectness, inconsistency, imprecision, and publication bias were estimated for all studies. The quality of evidence started as high because only randomized controlled studies were included, and it was downgraded according to these domains: by 1 point when a domain was estimated to be serious and by 2 points when it was assessed to be very serious. For risk of bias, "not serious" meant that more than 80% of the included studies had no risk of bias, or that the studies had large sample sizes; a high risk of bias in blinding was considered "serious"; and if most studies had a high risk of bias, the domain was evaluated as "very serious." For inconsistency, "not serious" meant that the I² value was less than 50%, or that the study results pointed in the same direction even if the I² value exceeded 50%. If the I² value ranged from 50% to 75%, or the results pointed in the same direction even with an I² value over 75%, the domain was estimated as "serious"; an I² value over 75% otherwise meant "very serious." For indirectness, "serious" or "very serious" was determined based on the outcome measurement: if a study presented only outcomes unrelated to obesity, such as mineral components, it was estimated as "very serious"; if it reported only an effective rate for the anti-obesity result, it was assessed as "serious." For imprecision, a total sample size of more than 400 was not estimated to be serious; a total sample size of less than 400, or a non-significant result with or without a sample size over 400, was assessed as "serious." Among other considerations, publication bias was considered serious.
Redundant publication (2 or more articles derived from a single study) and conflict of interest with a sponsor were also estimated to be serious. This study rated the importance of outcomes as follows: BW and BMI were critical with 9 points; WC, HC, and WHR were critical with 8 points; and AEs were critical with 7 points.
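The downgrading rules above can be summarized as a small scoring function. This is a hypothetical sketch for illustration only; the actual assessments were made by the reviewers with GRADEpro.

```python
# Evidence from RCTs starts at "high" and drops 1 level per "serious"
# domain and 2 levels per "very serious" domain, floored at "very low".
LEVELS = ["very low", "low", "moderate", "high"]

def grade_quality(domain_ratings):
    """domain_ratings: dict mapping each GRADE domain (risk of bias,
    inconsistency, indirectness, imprecision, publication bias) to
    'not serious', 'serious', or 'very serious'."""
    penalty = {"not serious": 0, "serious": 1, "very serious": 2}
    score = len(LEVELS) - 1  # start at "high" for RCT evidence
    for rating in domain_ratings.values():
        score -= penalty[rating]
    return LEVELS[max(0, score)]
```

For example, a serious risk of bias (high risk in blinding) with all other domains not serious would yield "moderate", matching how BW and BMI were rated in this review.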
3.1. Study inclusion A total of 2932 citations were identified.
Studies that did not use PI, or that had inappropriate subjects, comparators, or data, were excluded. Finally, 16 RCTs (1052 patients) were included. By comparison type, 2 RCTs (128 patients) compared herbal medicine to a placebo; 2 RCTs (161 patients) compared it to Western medication; and 12 RCTs (763 patients) compared it to usual care, including modulation of diet or exercise (Fig. ). 3.2. Study characteristics Table lists the characteristics of the 16 included RCTs. All included studies were performed in China. The number of patients ranged from 25 to 40 in the treatment or control groups. In total, 14 kinds of herbal formulas were involved. The forms of the intervention included decoction (12 RCTs), capsule or tablet (4 RCTs), and powder (1 RCT). Two RCTs used a placebo as the comparator, and 2 RCTs used orlistat. Twelve RCTs used diet and exercise control as a usual-care comparator; most included reasonable diets or calorie restriction together with aerobic exercise. The treatment duration ranged from 28 days to 3 months; 12 weeks (3 months) was used most often (8 RCTs), followed by 8 weeks (2 months or 60 days) (6 RCTs). Modulation of diet and exercise was used in 15 RCTs. Only 2 studies used both BMI and BW as outcome measures. Nine, 6, and 7 RCTs used WC, HC, and WHR, respectively, as an outcome measure. Eight RCTs reported AE results, and most were not significant in either group. 3.3. Pattern identification Table lists the classification of PI and the diagnostic criteria used for PI. The studies were sorted into 3 major types according to pathology: the phlegm-dampness type, the heat accumulation type, and the liver-qi stagnation type. The phlegm-dampness type was counted in 8 RCTs and included phlegm-dampness only (n = 2) and phlegm-dampness with spleen deficiency (n = 6). The heat accumulation type was counted in 6 RCTs and was commonly related to the spleen and stomach.
This type included stagnation of heat (or damp-heat) in the spleen and stomach, stomach heat and dampness stagnation, and damp-heat accumulation. The liver-qi stagnation type was collected in 2 RCTs and was called spleen deficiency and liver-qi stagnation, or spleen deficiency and stagnation of liver-qi and heat. "Diagnosis and efficacy evaluation criteria of simple obesity" was the most frequently used diagnostic criterion for PI, appearing in seven RCTs. "Internal Chinese medicine," "Guiding principles for clinical research on the new drug of traditional Chinese medicine," and "Guidelines for diagnosis and treatment of common diseases in internal Chinese medicine" were each used in 3 RCTs. "Endocrinology specialty diseases and rheumatism-Clinical diagnosis and treatment of Chinese medicine, 2nd Ed." was used in 2 RCTs. Three RCTs used 2 kinds of diagnostic criteria: "Guidelines for diagnosis and treatment of common diseases in internal Chinese medicine" and "Internal Chinese medicine" were used together in 2 RCTs, and "Guiding principles for clinical research on the new drug of traditional Chinese medicine" and "Diagnosis and efficacy evaluation criteria of simple obesity" were used together in 1 RCT. 3.4. Risk of bias Fig. presents the risk of bias. Regarding selection bias, random sequence generation was at low risk in 12 RCTs and unclear in 4; allocation concealment was at low risk in 3 RCTs and unclear in 13. Blinding of participants and personnel was at high risk of bias in most studies, except for the 2 RCTs that compared herbal medicine with a placebo. The related domain of blinding of outcome assessors (detection bias) presented a low risk in only 1 RCT; the other RCTs did not describe sufficient information to assess this risk. The risk of bias for incomplete outcome data was low in 11 RCTs and unclear in the others. The risk of selective reporting was assessed as low in 5 RCTs; in the others it was unclear, as there was insufficient explanation to assess the risk of bias.
For other bias, most studies did not describe the information needed for assessment, so this domain was also rated as unclear. Overall, only the 2 placebo-controlled RCTs were assessed as having a low risk of bias; the others had an unclear or high risk of bias. 3.5. Outcomes 3.5.1. BW and BMI. Herbal medicine based on PI led to a significant reduction in both BW and BMI (BW: mean difference [MD] = –4.10, 95% confidence interval [CI]: –5.14 to –3.06, P < .0001, I² = 2%; BMI: MD = –1.53, 95% CI: –1.88 to –1.19, P < .0001, I² = 25%). When herbal medicine was compared with the individual comparators, the changes in BMI were statistically significant in all subgroup analyses. In the subgroup analysis comparing herbal medicine to placebo, however, the decrease in BW was not significant (BW: MD = –4.00, 95% CI: –10.52 to 2.52, P = .23, I² = not applicable) (Fig. ). 3.5.2. WC, HC, and WHR. The meta-analysis found that herbal medicine based on PI induced a significant decrease in WC, HC, and WHR (WC: MD = –2.48, 95% CI: –2.95 to –2.02, P < .00001, I² = 0%; HC: MD = –1.75, 95% CI: –3.21 to –0.29, P = .02, I² = 65%; WHR: MD = –0.03, 95% CI: –0.05 to –0.01, P = .0003, I² = 80%). In subgroup analyses according to comparator (placebo, orlistat, or usual care), herbal medicine induced a significant decrease only when compared with usual care; it did not lead to a significant improvement in WC or HC compared with placebo (Fig. ). 3.6. Adverse events Eight RCTs did not report AEs, whereas the other 8 reported on the safety of herbal medicine. Only 2 studies reported AEs in the experimental group, and these were mild (4 cases, such as diarrhea) without severe outcomes. 3.7. Publication bias Fig. presents the publication bias according to the funnel plot.
The funnel plot was considered visually asymmetric; hence, publication bias may exist. 3.8. Assessment of evidence The quality of evidence for the primary outcomes, BW and BMI, was assessed as moderate owing to the high risk of bias in blinding. The evidence for WC was also of moderate quality. On the other hand, the quality of evidence for HC and WHR was downgraded to "very low" and "low," respectively, because of serious inconsistency (high heterogeneity) and imprecision (small sample sizes).
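Visual inspection of funnel-plot asymmetry, as in section 3.7, is often supplemented by Egger's regression test, which regresses each study's standardized effect on its precision; an intercept far from zero suggests asymmetry. This sketch of the intercept calculation is illustrative only and was not part of the review's analysis (the test also normally requires around 10 or more studies to be informative).

```python
def egger_intercept(effects, ses):
    """Egger's regression: standardized effect (e/se) on precision (1/se).

    effects: per-study effect estimates; ses: their standard errors.
    Returns the ordinary least squares intercept; a value far from zero
    suggests funnel-plot asymmetry (small-study effects).
    """
    y = [e / s for e, s in zip(effects, ses)]   # standardized effects
    x = [1.0 / s for s in ses]                  # precisions
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    return my - slope * mx
```

With a perfectly symmetric funnel (the same underlying effect at every precision), the intercept is exactly zero; studies where small trials show systematically larger effects pull it away from zero.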
Eight RCTs reported AE results, and most found no significant difference between groups. Table lists the classification of PI and the diagnostic criteria used for PI. The studies were classified into 3 major types according to the pathology: the phlegm-dampness type, the heat accumulation type, and the liver-qi stagnation type. The phlegm-dampness type was identified in 8 RCTs and included phlegm-dampness alone (n = 2) and phlegm-dampness with spleen deficiency (n = 6). The heat accumulation type was identified in 6 RCTs and was commonly related to the spleen and stomach; it included stagnation of heat (or damp-heat) in the spleen and stomach, stomach heat with dampness stagnation, and damp-heat accumulation. The liver-qi stagnation type was identified in 2 RCTs and was termed spleen deficiency with liver-qi stagnation or spleen deficiency with stagnation of liver-qi and heat. "Diagnosis and efficacy evaluation criteria of simple obesity" was the most frequently used diagnostic criterion for PI, appearing in 7 RCTs. "Internal Chinese medicine," "Guiding principles for clinical research on the new drug of traditional Chinese medicine," and "Guidelines for diagnosis and treatment of common diseases in internal Chinese medicine" were each used in 3 RCTs. "Endocrinology specialty diseases and rheumatism: Clinical diagnosis and treatment of Chinese medicine, 2nd Ed." was used in 2 RCTs. Three RCTs used 2 kinds of diagnostic criteria: "Guidelines for diagnosis and treatment of common diseases in internal Chinese medicine" and "Internal Chinese medicine" were combined in 2 RCTs, and "Guiding principles for clinical research on the new drug of traditional Chinese medicine" and "Diagnosis and efficacy evaluation criteria of simple obesity" in 1 RCT. Fig. shows the risk of bias. Regarding selection bias, the risk from random sequence generation was low in 12 RCTs and unclear in 4; the risk from allocation concealment was low in 3 RCTs and unclear in 13.
Blinding of participants and personnel carried a high risk of bias in most studies, except in the 2 RCTs that compared herbal medicine with a placebo. The other blinding-related domain, detection bias, presented a low risk in only 1 RCT; the remaining RCTs did not describe sufficient information to assess this risk. The risk of bias from incomplete outcome data was low in 11 RCTs and unclear in the others. The risk of selective reporting was assessed as low in 5 RCTs; in the others it was unclear, as the reports lacked the information needed for assessment.
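The pooled mean differences and I² values reported in the outcomes above follow the standard inverse-variance approach. As an illustrative sketch only, using hypothetical per-study values rather than the review's actual data, a fixed-effect pooled MD with its 95% CI and a Cochran's-Q-based I² can be computed as:

```python
import math

def pool_fixed_effect(mds, ses):
    """Inverse-variance fixed-effect pooled mean difference with 95% CI,
    plus Cochran's Q and the I^2 heterogeneity statistic (in percent)."""
    w = [1 / se**2 for se in ses]                      # inverse-variance weights
    md_pool = sum(wi * mi for wi, mi in zip(w, mds)) / sum(w)
    se_pool = math.sqrt(1 / sum(w))
    ci = (md_pool - 1.96 * se_pool, md_pool + 1.96 * se_pool)
    q = sum(wi * (mi - md_pool)**2 for wi, mi in zip(w, mds))  # Cochran's Q
    df = len(mds) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return md_pool, ci, i2

# Hypothetical per-study mean differences in body weight (kg) and their
# standard errors -- NOT the studies included in this review.
mds = [-4.5, -3.8, -4.2]
ses = [0.9, 1.1, 1.0]
md, (lo, hi), i2 = pool_fixed_effect(mds, ses)
```

When the upper CI limit stays below zero, the pooled reduction is significant at the 5% level, which is how the "significant decrease" statements above can be read directly off the reported intervals.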
4. Discussion

Obesity is an important disease because it involves both weight gain and increased health risk. In Korea, the adult obesity-related mortality rate is greater than 30%, and this increasing percentage is a cause of concern worldwide. In TCM, there have been many attempts to find an effective treatment for obesity. Among these, the most frequently used treatment is herbal medicine prescribed on the basis of TCM theory. Although herbal medicine prescribed according to PI is the most common treatment, few reports have examined its effect and safety in obesity. There are some previous reviews on the weight-loss effects of herbal medicine; however, there has been no systematic review and meta-analysis focusing on PI. Most studies examined only a single-herb intervention or the mechanisms of herbal medicine. Some reported the efficacy of herbal medicine but were limited to 1 or a few specific herbal formulas. Some systematic reviews analyzed studies of herbal medicine without conducting a meta-analysis, which limited estimation of the effect size. Another study addressed herbal medicine prescriptions, but it aimed to determine the current status of herbal medicine sales and to investigate new anti-obesity medicines. In TCM, it is commonly understood that the prescription should differ according to the pathogenesis even if the diagnosis is the same. Therefore, determining the correct PI type and prescribing matched herbal medicine is important for effective treatment. There are 6 PI types for obesity: phlegm, food retention, blood stasis, liver-qi stagnation, deficiency of yang, and deficiency of spleen.
However, according to the results of this study, these could be broadly divided into 3 types. The phlegm-dampness type and the heat accumulation type were the most common, followed by liver-qi stagnation with spleen deficiency. PI is commonly diagnosed through a subjective examination of symptoms. The phlegm-dampness type is characterized by feelings of heaviness, tiredness, headache, dizziness, bloating, or loose stool, resulting from a dysfunction of the spleen. Deficiency of the spleen can induce a lack of circulating energy, which leads to reduced energy consumption and then obesity. In contrast, the heat accumulation type is characterized by eating habits such as frequent overeating or eating even after feeling full. Furthermore, constipation, stomach burning, thirst, or a preference for cold water are common symptoms of this type, which is similar to the food retention type. Overeating causes heat by stagnating food in the stomach and induces dysfunction of the energy mechanism. The liver-qi stagnation type is related to stress; chest tightness, chest fullness, upset, irritability, and irregular menstruation are its common symptoms. Stress is known to cause dysfunction of digestive or systemic metabolism, such as spleen deficiency, endocrine disorders, and appetite abnormalities. The above 3 major types are classified according to symptom characteristics, and it is difficult to regard them as one. From the perspective of the pathogenesis of obesity, it is valid that these 3 PI types emerged as the major types in the present results. In TCM, obesity is caused by several pathogenic problems, such as dampness, phlegm, and heat accumulation. Overeating of high-calorie and high-fat diets can result in dysfunction of the stomach and spleen.
Dysfunction of these organs can induce pathogenic dampness, phlegm, and heat, which are explained as pathogenic results of hyper- or hypo-production. If these are obstructed in the body, they transform into dampness with phlegm or manifest as heat accumulation, which finally causes obesity. In previous studies conducted in Korea, the liver-qi stagnation type, phlegm type, food retention type, and spleen deficiency type were the most commonly reported types of obesity. The results of the present study are similar to, but not the same as, those of previous studies. This is probably because of differences in the settings in which the studies were conducted. Most studies included in the present review were conducted in China; therefore, there could be differences in the characteristics of the subjects. In a previous report analyzing studies of Chinese people, the spleen deficiency type with or without phlegm was reported most often, followed by the heat accumulation type and the liver-qi stagnation type. Furthermore, there are differences in diagnostic methods and PI terms. To increase reliability in Korea, a point-based questionnaire with 6 unified types, developed by the Korea Institute of Oriental Medicine, is widely used as a diagnostic method. In China, however, there is no widely used unified diagnostic system or terminology. The present study showed that the main symptoms were similar, but the references for the PI diagnostic criteria differed, even for the same PI type according to the pathogenic mechanism. The same applied to PI terms: various terms indicated the same PI type. This can cause excessive subdivision of the PI system, make the ratio of PI types appear different, and lead to misunderstanding of PI.
Detailed analysis and classification are also important, but utilization and reliability would likely increase if PI were divided into major categories and unified methods and terms were used. In this study, the primary and additional outcomes were effectively improved. In all subgroup analyses, herbal medicine led to significant improvement in BMI, a major measurement in the diagnosis of obesity. However, the other measurements, including BW, WC, HC, and WHR, did not decrease significantly in the herbal medicine group compared to the placebo group. Only 1 study was included in this subgroup analysis, and it compared the anti-obesity effectiveness of herbal medicine and placebo after only 4 weeks. Considering that the study periods of the included articles ranged from 4 to 12 weeks and that most studies lasted 12 weeks, this period might be too short to induce a proper reduction in anthropometric indices. Regarding safety, half of the included studies reported AEs. After evaluation of liver and kidney function and examination of other AEs, there were only 4 mild AEs in the experimental group, and there was no significant difference between the experimental and control groups. Considering that these 2 studies compared herbal medicine with usual care not involving herbal medicine, there was no significant difference in safety. More AEs can occur when herbal medicines unsuitable for a patient's PI are used, because of the different pathomechanisms. Owing to differences in the ratio of PI types and the ethical concern that herbal medicines unsuited to a patient's PI carry a high risk of AEs, no study has directly compared the anti-obesity effects of herbal medicine according to whether it was applied based on PI. Most studies were designed to recruit suitable subjects using PI first and then apply herbal medicine or a control intervention to participants diagnosed with the selected PI type.
One study recruited subjects without PI and diagnosed them with several PI types after recruitment; it then divided the subjects into 2 groups, with or without herbal medicine, to determine the effectiveness of herbal medicine. A post hoc analysis was conducted to examine differences according to whether the herbal medicine was suitable for the PI, but the sample size was too small for reliable analysis. The effectiveness of specific herbal medicines without using PI, and their effect sizes, were reported in a previous study. For the common herbal medicines BTS and TEJWT, both led to decreases in BW and BMI, but the changes were not statistically significant (BTS BW: MD = –0.32, 95% CI: –3.01 to 2.28, P = .82, I² = 0% and BMI: MD = –0.67, 95% CI: –2.88 to 1.36, P = .48, I² = 0%; TEJWT BW: MD = –2.38, 95% CI: –6.45 to 1.69, P = .25, I² = 0% and BMI: MD = –0.46, 95% CI: –1.56 to 0.64, P = .41, I² = 0%). In the present study, by contrast, BW and BMI were reduced significantly in the herbal medicine group compared to the control group, and the effect size of herbal medicine for BW and BMI was larger than in the previous study conducted without PI. Another study reported the effectiveness of BTS and boiogito-tang (BGT) and the tendency of AEs according to PI. BTS is a representative herbal medicine for the heat accumulation type, and care should be taken when applying it to the deficiency type; BGT is a common herbal medicine for the deficiency type. Consistent with these indications, BTS was found to be more effective for the liver-qi stagnation type, whereas applying BTS to the yang deficiency type produced more AEs, including dyspepsia, epigastric pain, diarrhea, and headache. Furthermore, more AEs were counted in the liver-qi stagnation type in the BGT group. Owing to a lack of studies, the anti-obesity effect and safety of herbal medicine cannot be compared directly using PI.
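The non-significance of results such as those for BTS can be checked directly from a reported MD and its 95% CI. A small sketch, using the BTS body-weight figures quoted above (the reported P = .82 is reproduced only up to rounding of the published MD and CI):

```python
import math

def p_from_md_ci(md, lo, hi):
    """Recover the standard error from a reported 95% CI and return the
    two-sided z-test p-value for the mean difference."""
    se = (hi - lo) / (2 * 1.96)            # CI full width spans 2 * 1.96 * SE
    z = md / se
    phi = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))  # standard normal CDF
    return se, z, 2 * (1 - phi)

# BTS body weight as reported above: MD = -0.32, 95% CI -3.01 to 2.28
se, z, p = p_from_md_ci(-0.32, -3.01, 2.28)
```

Because the CI crosses zero, |z| is well below 1.96 and the p-value lands near the published .82, confirming the non-significant change.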
Although not conducted in obesity, some studies have examined whether herbal medicine prescribed according to PI is more effective. Studies on stroke reported that functional improvement in acute stroke patients depended on the correspondence between herbal medicine and PI: functional recovery scores tended to improve more in the correspondence group than in the non-correspondence group. The association between specific herbal medicines and PI has also been studied. Gamichuongsangboha-tang, which is used to treat asthma, is reported to have a longer-lasting therapeutic effect in patients with the deficiency type than in those with the excess type. Another study reported that the effectiveness of Biyeom-go, an herbal ointment for rhinitis, is associated with the cold-heat pattern and that the ointment can be more effective for patients with the heat type than the cold type. Cheonwangbosim-dan, which has been prescribed for insomnia, improved sleep quality in insomnia patients with the heart-yin deficiency type compared to the non-heart-yin deficiency type. Although it is hard to conclude that herbal medicine using PI is more effective for obesity, the above studies support the possibility that PI can further improve obesity treatment. Thus, this paper proposes that herbal medicine using PI may be more effective than herbal medicine without PI, considering that the anti-obesity mechanisms of herbal medicine are not limited to reducing appetite and lipid absorption. These mechanisms include suppressing appetite, reducing the absorption of lipids and carbohydrates, inhibiting adipogenesis, regulating lipid and energy metabolism, and improving obesity-related inflammation. Herbal medicine can therefore be an excellent option not only for reducing body weight but also for addressing the health risks linked with obesity. PI reflects the holistic view of TCM and represents the patients' symptoms.
The TCM scores, which indicate the severity of general symptoms related to the PI types, decreased significantly along with weight reduction in most of the included studies. From these improvements, it is reasonable to infer that herbal medicine regulates both weight changes and systemic mechanisms. A notable point of this study is the quality of evidence. The changes in BW, BMI, and WC were assessed as "moderate" quality of evidence, whereas the quality of evidence for HC and WHR was downgraded to "very low" and "low." This resulted from a high risk of bias in blinding, high heterogeneity, and a sample size smaller than the optimal information size. The primary outcomes nevertheless retained moderate quality: despite the high risk of performance bias caused by the study design, the results were reliable, with a sufficiently large sample size, low heterogeneity, and no serious publication bias or indirectness. Recently, there has been increasing interest in precision medicine, including in obesity research. PI can be an attractive option for individualized medicine, considering the development procedures of precision medicine. Developing precision medicine requires deep phenotyping of patients, including medical history, lifestyle, physical examination, basic laboratory tests, imaging, functional diagnostics, and omics. After preprocessing of these large data, data mining is used to establish diagnostic and prognostic models that lead to prediction of the treatment response, and these results are fed back to the deep phenotyping stage. Because PI results from the historical accumulation of experience, the deep phenotyping that is essential to precision medicine is already well prepared. The next steps, including the construction of diagnostic and prognostic models and the prediction of treatment responses, have already been undertaken using PI. The remaining work is a feedback process for elaboration, which will require further studies using PI.
PI has been suggested as an attractive topic for many researchers because of its association with precision medicine. Using PI can contribute to increasing treatment efficiency and to assessing patients' state and treatment progress. Acute cerebral infarction patients with the phlegm-dampness type showed higher functional recovery scores than patients with the yin-deficiency type. The treatment efficiency for tinnitus was higher in the spleen-stomach weakness, stomach heat, and phlegm-fire types than in other types, regardless of the characteristics of the tinnitus. Although PI has potential for individualized medicine, there are some issues in applying PI to research. Because PI diagnostic criteria are often based on subjective symptoms, limited standards for PI and questions of model validity hinder the expansion of studies using PI. To solve these problems, there have been many attempts, including proposing suitable methodological solutions and identifying biomarkers related to PI. Evaluation of PI questionnaires using data mining or machine learning has been conducted to improve validity. Biomarkers associated with PI have been studied in several diseases, including coronary heart disease, rheumatoid arthritis, and gastric carcinoma. These efforts enable well-designed further studies and increase the utilization of PI. This study had some limitations. First, "pattern identification" was not included in the search terms. Because it is not a controlled vocabulary term, including it reduced the search results too much, so it was excluded to increase sensitivity; instead, all articles related to PI were identified after the researchers screened the full texts. Nevertheless, some studies may not have been included. Second, the included studies had a high risk of performance bias related to blinding, and all were performed in the same country; hence, the results may change with further well-designed studies.
Third, there was no direct comparison between herbal medicine with and without PI. A few studies reported the anti-obesity effect of herbal medicine according to different PI groups; however, they had too few patients in each group, and the groups were not comparable. Fourth, all the included studies were conducted within 3 months, making it difficult to evaluate the long-term effects of herbal medicine. Furthermore, unaccounted-for heterogeneity according to treatment period could be higher because the study designs focused on short-term effects; hence, a follow-up study will be needed. Lastly, the absence of unified PI diagnostic criteria and of guidance on herbal medicine according to PI was a problem. Despite these limitations, this study is the first review to evaluate the efficacy and safety of herbal medicine for obesity with a focus on PI. Although it is insufficient to clarify whether herbal medicine with PI is more effective and safer than without PI, this study presents a new perspective for considering PI in the treatment of obesity. Further well-designed studies with large sample sizes will allow the results and the quality of evidence to be upgraded. In addition, this paper proposes PI as a possible form of precision medicine, which can be a novel approach to obesity treatment. Nevertheless, more well-designed studies will be required to yield high-quality evidence and clarify the effectiveness of herbal medicine based on PI. In conclusion, 16 RCTs (1052 patients) focusing on herbal medicine with PI were reviewed, and the characteristics of the PI used in the included studies were analyzed. The 3 major types of PI were the phlegm-dampness type, the heat accumulation type, and the liver-qi stagnation type. Five kinds of references were used as diagnostic criteria for PI. Regarding anti-obesity effectiveness, all outcome measurements were reduced significantly in the herbal medicine group compared to the control groups.
The grade of evidence for the primary outcomes was moderate, with a high risk of bias. Through this analysis and quality assessment, herbal medicine is suggested to be an effective and safe treatment for obesity.

Author contributions

Conceptualization: Dongho Keum, Hojun Kim. Data curation: Seohyun Park. Formal analysis: Seohyun Park. Funding acquisition: Hojun Kim. Methodology: Seohyun Park. Project administration: Hojun Kim. Supervision: Dongho Keum, Hojun Kim. Visualization: Seohyun Park. Writing – original draft: Seohyun Park. Writing – review & editing: Hojun Kim.
Implementing teleophthalmology services to improve cost-effectiveness of the national eye care system

The shortage of specialized healthcare providers is a worldwide public health challenge threatening to become a crisis . The ageing population, the alarming rise in the prevalence of degenerative disease, and rapid technological innovation are among the factors that increasingly raise the need for healthcare specialists . Ophthalmology is one of the medical specialties with the highest expected future rise in demand for healthcare services, with age-related macular degeneration (AMD), cataracts, glaucoma, and diabetic retinopathy among the most often referred eye diseases . Although the global ophthalmological workforce is growing, the distribution and capacity of the eye care delivery system are universally challenged . In most countries, there is a fast-growing need to increase the number of training posts in ophthalmology and the ongoing education and training of existing ophthalmologists. As the demand for eye care services continues to grow, it is also essential to explore other innovative solutions to increase capacity and to ensure future patients' access to timely and high-quality eye care . An optometrist-assisted and teleophthalmology-enabled referral pathway (OTRP) for community optometry referrals has the potential to improve the capacity and efficiency of eye care delivery systems through risk stratification and by limiting the number of unnecessary referrals . OTRP can be defined as a collaboration between community optometrists and ophthalmologists working in either the primary sector (gatekeeper function) or the secondary sector (hospitals), in which the community-based optometrist obtains images (e.g., OCT, slit-lamp, or retinal imaging) and transmits them via an electronic system to the ophthalmologist, who decides on the case management .
One of the primary benefits of OTRP is its potential to increase the capacity of the eye care delivery system by enabling optometrists to play a more significant role in providing comprehensive eye care services. Optometrists are often the first point of contact for patients with eye problems, and they are trained to perform a range of eye exams and diagnose common eye conditions . By collaborating with ophthalmologists, optometrists can provide more comprehensive eye care services, potentially reducing the burden on ophthalmologists and increasing access to eye care for patients. OTRP also has the potential to improve the efficiency of the eye care delivery system by reducing the need for face-to-face consultations between patients and ophthalmologists . This can save patients’ time and money and reduce ophthalmologists’ workloads, allowing them to focus on the most complex cases . From a global perspective, the role of optometrists in national healthcare systems varies between countries, and future OTRP systems will likely differ accordingly . In the United Kingdom (UK), community optometrists conduct nearly all primary eye care consultations, with over 70% funded by the National Health Service . A recent study has demonstrated that more than a third of optometric referrals within the National Health Service did not require specialist consultancy and that OTRP offers the potential for cost reductions and increasing effectiveness . In Denmark, optometrists are not part of the public healthcare system, although they are recognized as healthcare providers . OTRP could potentially play a larger role in the delivery of eye care services in Denmark because optometry stores are widespread across the country, easily accessible to most people, and increasingly integrating automated equipment and diagnostic devices to enhance the accuracy and speed of eye examinations . 
Despite optometrists being an underutilized resource in eye care in most healthcare systems, no health economic evaluation of OTRP has yet been conducted in either an international or a Danish setting . Therefore, our study aims to investigate the expected future costs and benefits of implementing OTRP under various possible organizational set-ups relevant to a Danish context. This study is designed to inform decision-makers about the possible role of optometrists and teleophthalmology in the national eye care system.

Danish eye care system

The Danish healthcare system is universal and based on principles of free and equal access to healthcare for all citizens . General ophthalmologists maintain a gatekeeper function to the secondary sector (general eye departments or university eye clinics). Danish citizens have the right to schedule an appointment with general ophthalmologists independently, with or without a referral from a general practitioner or optometrist . There are currently 430 ophthalmologists in Denmark, of whom 180 are general ophthalmologists and 250 are employed in the hospital sector . With approximately 5.9 million inhabitants in Denmark, this corresponds to 0.7 ophthalmologists per 10,000 inhabitants, slightly below the European average of 0.8 per 10,000 inhabitants . General ophthalmologists provide care for approximately 3800 unique patients annually , a number that has grown over the last 15 years, especially in rural areas, where waiting times are highest . According to the Danish Health Agency, the number of ophthalmologists must be increased by 40–60% over the next 20 years to maintain current service levels . The density of optometry stores in Denmark is among the highest in Europe, at approximately three per 10,000 inhabitants .
Organization of a future OTRP system

Two organizational models are particularly relevant for integrating OTRP services in the Danish public healthcare system: a reimbursement (R-OTRP) model and a public procurement (P-OTRP) model. A reimbursement model is a common way of integrating general ophthalmologists and other private healthcare specialists in the Danish primary care sector. It could be extended to include both optometrists and teleophthalmologists . It is the model currently used for optometrists in many UK National Health Service trusts and for reimbursing private providers under Medicare or Medicaid in the USA. In Denmark, medical specialists and other healthcare professionals can apply for authorization and permission to work under the Danish Health Insurance Act . These professionals can purchase a provider license that gives them the right to practice within a specific geographic domain and up to a certain capacity (or annual cost level) determined by the regional health authority. After receiving the license, the regional health authority is required to compensate for the services provided to patients in accordance with the nationally agreed contractual terms, which include a fee-for-service schedule. The nationally agreed terms of the contract are determined through negotiations every two years between the relevant specialist organization and the public payers. The provider license is typically open-ended with periodic reviews. An advantage of this model is the life-long relationship between payer and provider that enables monitoring and learning. This health economic evaluation assumes that the R-OTRP model is extended to optometrists and teleophthalmologists. We assume that Danish optometrists under an R-OTRP model can achieve the same level of efficiency as UK optometrists through continuous learning and control . We also assume that both optometrists and teleophthalmologists will receive a tariff for their referrals.
A public procurement or tender model is an alternative model used by Danish health authorities. This model is used regularly by Danish health authorities to buy additional capacity for cataract surgery among private ophthalmologists with or without provider licenses . It is also used to procure ambulance services in each of the five regions through competitive bidding between invited private and public service providers for four-year contracts and it is used to create analog competition for hospital pharmaceuticals . The main advantage of a public procurement model is the possibility of price reductions and financial savings on public healthcare budgets through market competition and the flexibility to adjust healthcare capacity to meet temporary fluctuations in demand . In Denmark, the procurement model can be used at the national or regional level following the Danish Public Procurement Law and Procurement Directives from the EU Commission . The duration of the procurement contracts is typically a fixed period, such as one to four years, and winners may be paid for services in different ways according to specific contractual terms. In this economic evaluation, we assume that competitive tenders could be attractive for various partnerships between optometrists and ophthalmologists e.g., optometrists in stores working together with private ophthalmologists (with or without reimbursement contracts with Danish regions), optometrists working with ophthalmologists in hospitals, and general ophthalmologists who employ optometrists. We assume that the P-OTRP model is likely to be cheaper than the R-OTRP model due to price competition, but that the quality of the eye examinations in stores may be higher in the R-OTRP model because of the continuous working relationship between healthcare providers and the optometrist. 
For simplicity, we further assume that there is only a single fee paid per referred patient under the P-OTRP model covering services performed by a teleophthalmologist and an optometrist.
Decision-analytic model
A decision-analytic model (a decision tree) with a one-year time horizon was constructed to portray alternative future patient referral pathways for people examined in optometry stores for suspected ocular posterior segment eye disease. The model starts with people having a comprehensive eye examination in an optometry store and ends with the start of treatment or the end of the referral pathway. The model compares three alternative patient referral pathways (Fig. ): (1) the usual general ophthalmologist referral pathway (GO-RP), where optometrists are not reimbursed by the regional health authorities for the eye examination but refer all patients, without any involvement of a teleophthalmologist, to a general ophthalmologist; (2) an R-OTRP model; and (3) a P-OTRP model, as described in section 2.2. The economic evaluation was conducted from a Danish public health sector perspective, with the main outputs being total healthcare costs per patient, average waiting time from eye examination in store until the start of treatment or end of referral pathway, and quality-adjusted life-years (QALY) gained. The QALYs were calculated as the gain in health-related quality of life (HRQoL) from initiation of treatment minus any disutility from potential anxiety during waiting time. As a sensitivity analysis, we included a societal perspective to explore the consequences for patients in terms of transportation and productivity costs. The model was constructed using TreeAge Pro Healthcare (version 2022, R2.0) following international guidelines for health economic evaluation .
Model inputs
The model was parameterized using the best available evidence relevant to the model (Table ).
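The roll-back logic of such a decision tree, in which each branch's cost, QALY, and waiting-time outcome is weighted by its probability, can be sketched as follows. The branch probabilities and payoffs are invented placeholders for illustration, not the study's model inputs.

```python
# Expected-value roll-back of a simplified referral decision tree.
# All numbers below are illustrative placeholders, NOT the study's inputs.

def expected_values(branches):
    """Each branch: (probability, cost_gbp, qaly, wait_weeks)."""
    assert abs(sum(b[0] for b in branches) - 1.0) < 1e-9
    cost = sum(p * c for p, c, q, w in branches)
    qaly = sum(p * q for p, c, q, w in branches)
    wait = sum(p * w for p, c, q, w in branches)
    return cost, qaly, wait

# GO-RP: every examined patient is referred to a general ophthalmologist.
go_rp = [
    (0.3, 150.0, 0.18, 30),   # disease confirmed, treated after a long wait
    (0.7, 100.0, -0.02, 20),  # false positive, discharged after consultation
]
# OTRP: teleophthalmologist triage filters out most unnecessary referrals.
otrp = [
    (0.3, 120.0, 0.19, 6),    # disease confirmed, treated quickly
    (0.7, 40.0, 0.00, 1),     # filtered out at the triage step
]

for name, tree in (("GO-RP", go_rp), ("OTRP", otrp)):
    c, q, w = expected_values(tree)
    print(f"{name}: cost £{c:.0f}, QALYs {q:.3f}, wait {w:.1f} weeks")
```

The same roll-back applies regardless of how many branches each pathway has, which is why a decision-tree package such as TreeAge can evaluate all three pathways from one table of inputs.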
Central model assumptions were validated using an independent expert panel, comprising three general ophthalmologists, two optometrists, and one associate professor of health economics. The assumptions about cohort disease prevalence were taken from Muttuvelu et al. . We assume the same share of patients with eye disease and the same share of patients referred to treatment at a hospital eye department and general ophthalmologist for all three alternatives (GO-RP, R-OTRP, and P-OTRP), i.e., the clinical quality is assumed not to be affected by the introduction of a teleophthalmologist or the choice of organizational form. In the base-case, we assume that all patients in GO-RP see a general ophthalmologist if an optometrist gives the patient a diagnosis after a comprehensive eye examination, but in the sensitivity analyses, this assumption is relaxed down to 50%. In the base-case analysis for P-OTRP, we assume that the teleophthalmologist can reduce the number of referrals by up to 80.5% , which is varied in the sensitivity analysis from 50% to 90%. In the base-case of R-OTRP, we assume that optometrists can reduce the number of referrals to the teleophthalmologist by 10% compared to P-OTRP; this reduction is increased in the sensitivity analysis up to 20%. All monetary outcomes were estimated in Danish Krone (DKK), adjusted to the year 2022 using the Consumer Price Index, and subsequently converted to 2022 British Pound Sterling (£) using the conversion rate on December 12, 2022, of DKK 100 = £11.57. Healthcare costs were obtained from published sources, including the Danish diagnosis-related groups tariff system and tariffs from the Danish ophthalmologists' collective agreement . The costs/tariffs of teleophthalmologists and optometrists were estimated in the base-case to be £46 (DKK 400) and £20 (DKK 175), respectively.
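As a quick check, the stated exchange rate reproduces the rounded base-case tariffs:

```python
# Check the DKK -> GBP conversion used for the base-case tariffs.
RATE = 11.57 / 100  # £ per DKK (rate of December 12, 2022)

tele_gbp = 400 * RATE  # teleophthalmologist tariff, DKK 400
opto_gbp = 175 * RATE  # optometrist tariff, DKK 175

print(f"teleophthalmologist: £{tele_gbp:.2f}, reported as £{round(tele_gbp)}")
print(f"optometrist: £{opto_gbp:.2f}, reported as £{round(opto_gbp)}")
```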
The model only includes marginal costs of services from providers, and no attempt has been made to include the administrative costs of establishing and running an OTRP system, such as the costs of tendering, quality assurance, or reimbursement. Nor have any potential changes in the costs of implementation been included. Data on current waiting times in the Danish eye care system were incorporated as average weeks of waiting time for general ophthalmologists and hospital eye departments according to available Danish statistics and validated with the expert panel . QALY gain was included within the one-year horizon as the gain from initiation of treatment of eye disease, assuming an increase in HRQoL of 0.2 measured on an EQ-5D scale . The disutility from potential anxiety in the waiting time between the eye examination and optometrist's diagnosis and the start of treatment (for people with a confirmed diagnosis) or the ophthalmologist's diagnosis (false positives) was included, assuming a difference in HRQoL of the average referred patient of 0.02 measured on an EQ-5D scale . Furthermore, the main results are shown graphically in a cost-effectiveness plane constructed from a probabilistic sensitivity analysis with 10,000 2nd-order Monte Carlo simulations, using beta distributions for probabilities and QALYs and gamma distributions for costs and waiting times . In the sensitivity analysis, patients' transportation costs were included, assuming an average transport cost per consultation at the general ophthalmologist and hospital eye department of £11.75. We further included productivity costs due to patients' absence from work because of eye consultations, assuming an average cost per consultation at the general ophthalmologist and hospital eye department of £20.83 . The Danish healthcare system is universal and based on principles of free and equal access to healthcare for all citizens .
General ophthalmologists maintain a gatekeeper function to the secondary sector (general eye departments or university eye clinics). Danish citizens have the right to schedule an appointment with general ophthalmologists independently, with or without a referral from a general practitioner or optometrist . There are currently 430 ophthalmologists in Denmark, of whom 180 are general ophthalmologists and 250 are employed in the hospital sector . With approximately 5.9 million inhabitants in Denmark, this corresponds to 0.7 ophthalmologists per 10,000 inhabitants, which is a little below the European average of 0.8 per 10,000 inhabitants . General ophthalmologists provide care for approximately 3800 unique patients annually , a number that has grown over the last 15 years, especially in rural areas, where waiting times are highest . According to the Danish Health Agency, the number of ophthalmologists must be increased by 40–60% over the next 20 years to maintain current service levels . The density of optometry stores in Denmark is among the highest in Europe and it is approximately three per 10,000 inhabitants .
In the base-case analysis, the cost per individual with suspected ocular posterior segment eye disease was £115 for GO-RP and £75 and £94 for P-OTRP and R-OTRP, respectively (Table ). The average waiting time for diagnosis or end of referral pathway was 25 weeks for GO-RP and 5.8 and 5.7 weeks for P-OTRP and R-OTRP, respectively. Both P-OTRP and R-OTRP were associated with a potential QALY gain of approximately 0.15, compared to 0.06 for GO-RP. The cost-effectiveness scatterplot indicated a high probability of OTRP being both less expensive and more effective than GO-RP (Fig. ). The probabilistic sensitivity analysis showed that P-OTRP was cheaper than GO-RP and R-OTRP in more than 95% of the simulations. The deterministic analysis demonstrated that the results were sensitive to the assumption about the share of the cohort that consults general ophthalmologists after a referral from an optometrist (GO-RP, base-case = 90%) (Table ).
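A probabilistic sensitivity analysis of this kind, with second-order Monte Carlo draws from beta distributions for probabilities and QALY gains and gamma distributions for costs, can be sketched as follows. The distribution parameters are illustrative stand-ins whose means roughly match the reported base-case figures; they are not the study's calibrated inputs.

```python
# Sketch of a 2nd-order Monte Carlo probabilistic sensitivity analysis:
# beta draws for QALY gains, gamma draws for costs. Parameters are
# illustrative only, chosen so the means resemble the base-case results.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

cost_go_rp = rng.gamma(shape=100.0, scale=115 / 100.0, size=n)   # mean ~£115
cost_p_otrp = rng.gamma(shape=100.0, scale=75 / 100.0, size=n)   # mean ~£75
qaly_go_rp = rng.beta(6, 94, size=n)     # mean ~0.06 QALYs
qaly_p_otrp = rng.beta(15, 85, size=n)   # mean ~0.15 QALYs

inc_cost = cost_p_otrp - cost_go_rp   # incremental cost (negative = saving)
inc_qaly = qaly_p_otrp - qaly_go_rp   # incremental QALYs

# Share of simulations in which P-OTRP dominates (cheaper AND more effective);
# plotting inc_qaly against inc_cost gives the cost-effectiveness plane.
frac_dominant = ((inc_cost < 0) & (inc_qaly > 0)).mean()
print(f"P-OTRP cheaper and more effective in {frac_dominant:.1%} of simulations")
```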
Furthermore, the result was sensitive to the size of the teleophthalmologist tariff. On the other hand, a potential reduction in the cost of the first visit to the general ophthalmologist did not significantly impact the result; the main reason is that a change in this cost will affect all arms. The sensitivity analyses showed that the results were also influenced by the effectiveness of P-OTRP and R-OTRP in reducing the number of unnecessary referrals, but GO-RP would not surpass the OTRPs. The result was not sensitive to changes in the assumption of zero false positives from the teleophthalmologist to the general ophthalmologist; however, assuming more than 30% false positives led to R-OTRP being cheaper than P-OTRP. When patients' transportation and productivity costs were included, OTRP appeared even more cost-effective, as OTRP reduces patients' travel and productivity costs compared to GO-RP. This study is, to our knowledge, the first health economic evaluation of optometrist-assisted teleophthalmology. Based on the best available evidence, the results strongly indicate that the role of OTRP in future eye care delivery systems should be planned for. OTRP has the potential to reduce healthcare costs and waiting time, increase patients' HRQoL, and decrease patients' transportation and productivity costs. The main reason for these benefits is the ability of OTRP to alleviate the burden on general ophthalmologists. The results are sensitive to assumptions about the size of the tariffs for teleophthalmology services and the number of unnecessary referrals in the future eye care system. Furthermore, the conclusion about cost-effectiveness will also depend upon the size of the administrative costs of establishing and running a national OTRP system.
These administrative costs could be seen as an investment in a more effective national eye care system, which is paid for by a reduction in marginal costs for everyone who receives a comprehensive eye examination in the OTRP setup. Thus, OTRP is more likely to be cost-effective in a large-scale implementation than in a small-scale intervention. Scalability will, therefore, be an important issue in future OTRP systems. In Denmark, more than 690,000 patients are currently being treated in general ophthalmology practices . Assuming, for example, that 15% of these patients could be seen in a future OTRP system with a similar cost saving of approximately £20–40 per patient, annual marginal cost savings of £2.1 m to £4.1 m (DKK 18.2 m–35.4 m) could be realized. This study has several limitations. These include uncertainties in the input data for probabilities of referrals for OTRP, costs, and QALYs, and the lack of consideration of individuals' preferences for patient pathways, which should have been included in a full benefits assessment . The potential risk of false negatives due to optometrists' and teleophthalmologists' referral quality and competencies not being as high as those of general ophthalmologists was not considered. In this study, we assume a high accuracy of remote diagnoses . Although there is a possibility of poor-quality images, advancements in camera technology have proven their efficiency when compared to face-to-face examination and consultation; however, more research on this topic is needed . Additionally, the effects of teleophthalmology on workforce dynamics were not addressed in our calculations. In the future, AI-powered OTRP is expected to outperform other OTRPs, particularly in terms of accessibility, convenience, and scalability . These aspects were not incorporated in the calculations but would probably have increased the possibilities of future savings from OTRP.
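The scaling arithmetic behind this savings estimate can be checked directly:

```python
# Reproduce the back-of-the-envelope national savings estimate:
# 15% of ~690,000 general-ophthalmology patients, saving £20-40 each.
patients = 690_000
share = 0.15
saving_per_patient = (20, 40)  # £, lower and upper bound

low = patients * share * saving_per_patient[0]
high = patients * share * saving_per_patient[1]
print(f"£{low / 1e6:.2f}m to £{high / 1e6:.2f}m per year")
```

The result, roughly £2.07m to £4.14m, matches the rounded £2.1m-£4.1m range quoted above.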
The P-OTRP model will have an advantage in terms of scalability because it builds on market competition and standardized products and services rather than education levels and competencies in optometry stores. The generalizability of results from health economic evaluations is usually limited due to the differences among countries with regard to the organization of healthcare, clinical practices, unit costs, etc. . Currently, OTRPs are being tested in clinical research at university hospitals in the UK . For research and quality assurance purposes, both centralized private teleophthalmology units and university hospitals involved in OTRP have an important role in data collection, research, and continuous quality improvement. The P-OTRP model can involve many types of providers, including ophthalmologists working in public as well as private organizations. Market competition secures the economic advantages of this particular model. The use of OTRP will require a secure digital communication system between the optometrist and the ophthalmologist. In Denmark, such systems are already in place and comply with the Danish Data Protection Act and the European General Data Protection Regulation . Therefore, implementation of OTRP in Denmark will involve only a marginal cost on top of the already established systems. However, this may not be generalizable to other countries with other prerequisites for establishing secure communication systems. Optometrist-assisted teleophthalmology is effective in reducing unnecessary referrals and waiting times, increasing patients' HRQoL, and decreasing the healthcare and societal costs of diagnosing individuals with suspected ocular posterior segment eye disease. Further empirical research is needed to investigate the potential for improvements in national eye care through optometrist-assisted teleophthalmology.
What was known before: Teleophthalmology represents an effective means for triaging patients; however, the cost-effectiveness of such services remains unexplored in the scientific literature.
What this study adds: This research represents the first health economic evaluation of a nationwide teleophthalmology service, aiming to quantify potential economic savings, gains in Quality-Adjusted Life-Years (QALY), and reductions in waiting times.
Assessment of ChatGPT-4 in Family Medicine Board Examinations Using Advanced AI Learning and Analytical Methods: Observational Study
Background
Family physicians in the United States are required to complete the American Board of Family Medicine (ABFM) Certification Examination following residency and every 10 years after to maintain board-certified status. This examination consists of 300 questions with a scaled scoring system ranging from 200 to 800; this corresponds to percent correct scores of 57.7%-61.0% . There are extensive web-based review materials that are used to help prepare for this examination, such as textbooks and question banks. Several studies have examined the performance of advanced artificial intelligence (AI) language models (eg, ChatGPT) in attempting and failing similar board examinations . Many of these studies used ChatGPT version 3.5; however, a study examining the newer and more powerful ChatGPT-4 found that it significantly outperformed its predecessor and medical residents on a University of Toronto family medicine examination . ChatGPT-4 can now analyze documents in several file formats such as PDF. This would allow a user to simulate the process of learning and studying by providing learning material for the AI to consult in advance of being tested. With this approach, the AI can be given material targeted to a specific region's regulations and ensure that it has access to the most up-to-date clinical guidelines. Users engage with ChatGPT through the use of text inputs called "prompts." The contents of the prompt dictate the output. Prompt engineering is the purposeful structural construction of the input and significantly impacts the output. The 4 core elements of the prompt include the instruction, context, input data, and output indicator .
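A minimal sketch of how these four elements combine into a single prompt is shown below; the wording is our own illustration, not a prompt used in the study.

```python
# Compose a prompt from the 4 core elements: instruction, context,
# input data, and output indicator. Example wording is illustrative.

def build_prompt(instruction: str, context: str, input_data: str,
                 output_indicator: str) -> str:
    return "\n\n".join((instruction, context, input_data, output_indicator))

prompt = build_prompt(
    instruction="Answer the following board-style multiple-choice question.",
    context="You are preparing for the ABFM Certification Examination.",
    input_data=("What is the recommended first-line treatment for the "
                "initial stages of hypertension? Options (A)-(D) follow."),
    output_indicator="Reply with one letter and a one-sentence rationale.",
)
print(prompt)
```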
This means that, for the best result, the user must assign a task, provide context and background knowledge, ask a specific question, and specify the type of output desired. Both humans and AI can make errors when answering questions. The classification of these errors can be made into 3 categories: logical, informational, or explicit fallacy . This allows for an understanding of why the AI struggles to ascertain the correct answer and could allow for comparison to humans if that data were to be collected. This method of qualifying error types has previously been used in the context of AI answering medical examination questions ; the error types are defined as follows:
Logical fallacy: This type of error occurs when the response demonstrates a stepwise process but ultimately fails to correctly answer the question. Despite following a superficially logical progression in reasoning, the conclusion reached does not accurately address or resolve the query posed, often due to a misunderstanding of the central issue or incorrect application of a logical principle.
Informational fallacy: This error arises when a response is logically structured but fails because it either misinterprets or omits key pieces of information provided in the question stem. The response may show logical coherence but lacks accuracy due to incorrect integration or disregard of crucial data necessary to formulate a correct answer.
Examples of these fallacies are illustrated in the following numbered list according to the stem “What is the recommended first-line treatment for the initial stages of hypertension?” Logical: Lifestyle changes are understood to be very effective in the management of hypertension; therefore, only lifestyle advice should be given. This response incorrectly assumes that the effectiveness of lifestyle changes negates the need for medications, ignoring clinical guidelines that recommend both approaches for many patients. Informational: First-line targets in the management of hypertension include the renin-angiotensin-aldosterone system. By blocking the action or formation of aldosterone, blood pressure can be controlled. Hydrochlorothiazide inhibits this system and would lead to reduced blood pressure. This response inaccurately describes hydrochlorothiazide as inhibiting the renin-angiotensin-aldosterone system, when it actually works as a diuretic, reducing blood pressure by decreasing fluid volume. Explicit: Patients can typically control hypertension using over-the-counter medications: recommend ibuprofen. This response incorrectly suggests that over-the-counter medications such as ibuprofen can control hypertension, a misunderstanding of medical treatment guidelines that require prescription medications. International shortages of family physicians, especially in rural areas , underscore the importance and urgency of maximizing the efficiency of family doctors. AI has the potential to be an extremely useful and efficient tool for integration into the profession . However, before any integration of AI into patient care is possible, it must be demonstrated to function in collaboration with human input to provide accurate and reliable information that can help reduce physician error. 
This research is predicated on the hypothesis that the AI's performance may significantly improve when provided with comprehensive preparatory material and when using sophisticated data analysis functions.
Research Questions
Our research questions were as follows: (1) Can ChatGPT-4, when provided with comprehensive preparatory materials, perform at or above the passing threshold for the Family Medicine Board Examinations? (2) Does the quality of prompts affect the percent correct scores of ChatGPT-4 on complex medical examination questions? (3) What are the limitations of ChatGPT-4's data analysis functions when applied to the medical knowledge assessment, and how can these be mitigated?
Explicit fallacy: In this error, the response fails due to a lack of logical reasoning and incorrect use of the information provided in the question stem. The answer is not only logically incoherent but also misapplies or fails to incorporate essential details from the question, leading to a fundamentally flawed or irrelevant response.

Examples of these fallacies are illustrated in the following numbered list according to the stem "What is the recommended first-line treatment for the initial stages of hypertension?"

1. Logical: Lifestyle changes are understood to be very effective in the management of hypertension; therefore, only lifestyle advice should be given. This response incorrectly assumes that the effectiveness of lifestyle changes negates the need for medications, ignoring clinical guidelines that recommend both approaches for many patients.

2. Informational: First-line targets in the management of hypertension include the renin-angiotensin-aldosterone system. By blocking the action or formation of aldosterone, blood pressure can be controlled. Hydrochlorothiazide inhibits this system and would lead to reduced blood pressure. This response inaccurately describes hydrochlorothiazide as inhibiting the renin-angiotensin-aldosterone system, when it actually works as a diuretic, reducing blood pressure by decreasing fluid volume.

3. Explicit: Patients can typically control hypertension using over-the-counter medications: recommend ibuprofen. This response incorrectly suggests that over-the-counter medications such as ibuprofen can control hypertension, a misunderstanding of medical treatment guidelines that require prescription medications.

International shortages of family physicians, especially in rural areas , underscore the importance and urgency of maximizing the efficiency of family doctors. AI has the potential to be an extremely useful and efficient tool for integration into the profession .
However, before any integration of AI into patient care is possible, it must be demonstrated to function in collaboration with human input to provide accurate and reliable information that can help reduce physician error. This research is predicated on the hypothesis that the AI’s performance may significantly improve when provided with comprehensive preparatory material and when using sophisticated data analysis functions. Our research questions were as follows: Can ChatGPT-4, when provided with comprehensive preparatory materials, perform at or above the passing threshold for the Family Medicine Board Examinations? Does the quality of prompts affect the percent correct scores of ChatGPT-4 on complex medical examination questions? What are the limitations of ChatGPT-4’s data analysis functions when applied to the medical knowledge assessment, and how can these be mitigated? Creation and Programming of AI Family Medicine Board Examination Taker The specialized AI named “AI Family Medicine Exam Expert” , a version of ChatGPT, was customized specifically to take the ABFM Certification Examination. It was programmed with the following instructions and capabilities. The AI model, ChatGPT-4: “AI Family Medicine Exam Expert,” was programmed to operate under a specific set of instructions designed to guide its behavior toward producing outputs relevant to the ABFM Certification Examination. See the programmer–large language model interaction in the following paragraphs: Programmer: Please read the attached files in your configuration entirely and let me know if you have any trouble reading it or have any questions regarding its content. The goal is to completely memorize and understand the files’ contents. Please let me know when you have completed this task. Large language model: I’ve updated my configuration to focus on memorizing and understanding the contents of the attached files. Let me take a moment to review it thoroughly. 
I’ll let you know as soon as I’ve completed this task. Just a moment while I process the file’s contents. I’ve reviewed the contents of the attached files. How can I assist you further with it? Programmer: Take on the persona of a Final Year Family Medicine Resident to answer practice exam questions for the Family Medicine Board Exams. Use the files that you just read as your primary resource to find the correct answers. If the answer is not directly available in these files, use your browsing capabilities to search the internet for the most accurate and relevant information from scholarly sources such as Up-to-date or peer reviewed articles. Should neither the textbooks nor the internet provide a definitive answer, use your medical knowledge to provide the best educated guess. Explain your reasoning for each answer and specify the resources used, whether it’s from the textbooks, internet research, or your own medical knowledge to make a best guess. Please ask me any questions before you start. Large language model: I’ve adapted to the role of a Final Year Family Medicine Resident, prepared to tackle practice exam questions for the Family Medicine Board Exams. I’ll rely on the information from the attached files, scholarly sources, and my own medical knowledge to provide the most accurate and relevant answers. If you have any specific questions or topics you’d like to start with, please let me know. Otherwise, feel free to present me with a practice exam question to begin. These instructions mandated the model to prioritize information from 3 key textbooks: Textbook of Family Medicine (Ninth Edition) by Rakel , The Family Medicine Board Review Book by Baldor , and Family Medicine: A Practical Approach (Second Edition) by Al-Gelban et al , as seen in . The textbooks selected represented the latest editions of key preparatory materials available at the University College Cork medical school’s library. 
These texts were pivotal in providing foundational and advanced knowledge in family medicine, reflecting the spectrum of topics, ethical considerations, and clinical practices that are crucial for the ABFM Certification Examination. Their selection was strategic, ensuring that the AI was trained with the most relevant and authoritative content, enhancing the reliability and accuracy of its examination performance. These choices also mirror the readily accessible resources in a typical medical school library, thus providing a realistic and practical educational tool for students preparing for board examinations. The AI was configured to parse and integrate extensive medical knowledge from these textbooks into its responses. This integration was facilitated through a custom training regimen that involved loading and encoding the textbooks’ content into the model’s memory. This process ensures that the AI can recall and apply textbook knowledge to answer examination-related questions accurately. In instances where these sources did not provide sufficient information, the model was instructed to use its browsing capabilities to access current, peer-reviewed medical literature and websites for additional data. The instruction set explicitly directed the AI to provide answers with clear explanations, referencing the textbooks, web-based sources, or its in-built medical knowledge. In cases where neither the textbook nor the web provided a definitive answer, the AI was directed to apply its medical knowledge to give the best possible educated guess. Input data consisted of a diverse set of questions from American Academy of Family Physicians’ (AAFP’s) “Family Medicine Board Review Questions,” modeled after past Family Medicine Board Examinations . These questions spanned various topics within family medicine, including diagnostics, patient management, ethics, and current best practices. 
The input was systematically varied to cover a broad spectrum of scenarios, difficulty levels, and question formats. Each question was presented to the AI model as a stand-alone task, ensuring that responses were generated independently, without influence from previous queries . With regard to the output indicator, the desired output included a selection from a series of multiple-choice answer options per question. Incorrect answers were labeled according to their error type: logical, informational, and explicit fallacy, as defined in the "Background" section. Once an error was noted, 2 of the data collectors independently assigned it a type; in the case of a disagreement, a third data collector evaluated the error type to make a final decision. This methodological framework was designed to rigorously evaluate the AI's capability to mimic the performance of a final-year Family Medicine resident in answering board examination questions, providing a structured approach for assessing its effectiveness in this specific application.

Operational Procedure

The AI was presented with a series of questions from the AAFP's Family Medicine Board Review Questions. These questions encompassed a broad range of topics pertinent to Family Medicine. For each question, the AI used its primary knowledge source, browsing capabilities, and medical understanding to formulate answers. The responses were then recorded in a Microsoft Excel sheet for analysis. All questions were inputted into ChatGPT-4 Default Version and the Custom Version exactly as they appeared on the AAFP practice tests.

Data Analysis

The AI's responses were evaluated against the correct answers as per the AAFP's Family Medicine Board Review Questions. The minimum passing threshold for the 2009 certification examination was a scaled score of 390, corresponding to 57.7%-61.0% .
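The error-labeling protocol described above — two data collectors assign a type independently, with a third adjudicating any disagreement — can be sketched as a small helper. This is an illustrative reconstruction under stated assumptions; the function and label names are our own, not the study's code.

```python
from typing import Optional

ERROR_TYPES = {"logical", "informational", "explicit"}

def final_error_type(rater_a: str, rater_b: str, tiebreaker: Optional[str] = None) -> str:
    """Consensus label from two independent raters; fall back to a third on disagreement."""
    # Validate every label that was actually supplied (filter drops the None tiebreaker).
    for label in filter(None, (rater_a, rater_b, tiebreaker)):
        if label not in ERROR_TYPES:
            raise ValueError(f"unknown error type: {label}")
    if rater_a == rater_b:
        return rater_a
    if tiebreaker is None:
        raise ValueError("disagreement requires a third rater's label")
    return tiebreaker

print(final_error_type("logical", "logical"))                         # → logical
print(final_error_type("logical", "informational", "informational"))  # → informational
```

The tiebreaker is only consulted when the first two raters disagree, mirroring the sequential adjudication the authors describe.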
Ethical Considerations

As an observational study involving an AI system, there were no human or animal subjects, thus minimizing ethical concerns. Ethical approval was not required for this study.

Statistical Analysis

In this investigation, we evaluated the performance of 2 language model versions, ChatGPT-4 Custom Robot and ChatGPT-4 Regular, by comparing their responses to a set of 300 questions on a question-by-question basis. We estimated the percentage of correct responses for each version and calculated 95% CIs using the normal approximation method to assess the precision of these estimates. Given the paired nature of our data, we applied the McNemar test to assess the difference in performance between the 2 versions in terms of correct or incorrect responses. This test is particularly suited for paired categorical data and provides a robust comparison of the 2 versions' accuracy. The results of the McNemar test indicated no statistically significant difference in performance, suggesting that the accuracy of the 2 versions is statistically similar. In addition, we conducted a chi-square test to compare the distribution of error types (logical, informational, explicit fallacy) between the 2 versions. This test aimed to identify significant variations in error patterns. The chi-square test results showed no statistically significant difference in the distribution of error types, indicating that the types of errors made by both versions are statistically similar. All statistical analyses were conducted using Python (version 3.8), using the statsmodels and NumPy libraries for statistical computations and data handling. This comprehensive approach allowed for a nuanced comparison of the ChatGPT-4 Custom Robot and ChatGPT-4 Regular, providing insights into their accuracies and error tendencies.

Accuracy Assessment

As shown in , the ChatGPT-4 Custom Robot version correctly answered 88.67% of the questions (95% CI 85.08%-92.25%), while the Regular version achieved a correct response rate of 87.33% (95% CI 83.57%-91.10%).

Error Type Analysis

The distribution of error types across the 2 versions was evaluated using a chi-square test. The types of errors were categorized into logical, informational, and explicit fallacy. The test resulted in a P value of .32.

Statistical Significance

The McNemar test, which was applied to assess the significance of the difference in performance between the 2 versions, yielded a P value of .45.
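The reported confidence intervals can be re-derived from the text alone. The sketch below uses only the standard library (rather than the statsmodels/NumPy stack the authors cite) to reproduce the normal-approximation (Wald) CIs. Two assumptions to note: the raw counts 266/300 and 262/300 are inferred from the reported percentages, and the per-question discordant counts needed to re-derive the McNemar P value of .45 are not reported, so the McNemar helper is shown only generically.

```python
import math

def wald_ci_pct(correct, total, z=1.96):
    """Normal-approximation (Wald) 95% CI for a proportion, returned in percent."""
    p = correct / total
    half = z * math.sqrt(p * (1 - p) / total)
    return 100 * (p - half), 100 * (p + half)

def mcnemar_chi2(b, c):
    """Continuity-corrected McNemar statistic for discordant pair counts (b, c),
    with its chi-square(1) P value via the complementary error function."""
    stat = (abs(b - c) - 1) ** 2 / (b + c)
    return stat, math.erfc(math.sqrt(stat / 2))

# Correct-answer counts inferred from the reported 88.67% and 87.33% of 300 questions
for name, correct in (("Custom", 266), ("Regular", 262)):
    lo, hi = wald_ci_pct(correct, 300)
    print(f"{name}: {correct / 300:.2%} (95% CI {lo:.2f}%-{hi:.2f}%)")
# → Custom: 88.67% (95% CI 85.08%-92.25%)
# → Regular: 87.33% (95% CI 83.57%-91.10%)

# Both versions clear the 57.7%-61.0% passing band, i.e. roughly
# ceil(0.577 * 300) = 174 to ceil(0.610 * 300) = 183 questions correct.
assert 266 >= 183 and 262 >= 183
```

The printed intervals match the CIs reported in the Results, which supports the inferred counts; the McNemar P value cannot be checked the same way without the per-question agreement table.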
Principal Results

Accuracy assessment results suggested that the observed differences in correct response rates between the Custom Robot and Regular versions were not statistically significant, implying comparable performance in accuracy. Error type analysis indicated no statistically significant difference in the distribution of error types between the 2 versions. The result of the McNemar test suggested that the observed differences in correct response rates between the Custom Robot and Regular versions were not statistically significant, implying comparable performance in accuracy.

Evaluation Outcomes

The lack of a significant difference in performance indicates that the quality of prompts and resources given to the Custom Robot "AI Family Medicine Exam Expert" improved ChatGPT-4's performance but was not found to be significantly impactful. However, their accuracy rates are indicative of a passing level of proficiency in understanding and responding to the complex medical scenarios presented in the examination questions . This observation aligns with previous research showing that large language models such as ChatGPT can perform at or near the passing thresholds in medical examinations without specialized training or reinforcement, as demonstrated in the study on the United States Medical Licensing Examination . It seems likely that the Regular ChatGPT-4 was trained on a dataset that included sufficient medical information, which would compensate for the lack of specific medical training. Since both the Regular and Custom models already excel at understanding language and context, they can effectively reason through questions regardless of whether they were specifically provided with medical textbooks, which is why the 2 versions yielded similar results.

Implications for AI Performance

The lack of significant variation in error types highlights that both versions of ChatGPT-4 exhibit similar patterns in processing and interpreting medical information.
This finding is crucial, as it underscores the AI's consistent performance across different configurations, regardless of the resources and prompts it is given.

Limitations

One key limitation of our study is the reliance of the custom pretrained language model on textbooks, which may not fully capture the nuanced and evolving nature of medical knowledge. Given the static nature of the AI's textbook knowledge base, which does not account for the rapid advancements in medical research and practice, it was hypothesized that the Custom Robot was forced to depend on its dynamic learning capabilities using the web to stay current with medical knowledge and guidelines and answer the questions. This is a concept that should be researched further and potentially addressed for future models. Previous research has had this limitation as well ; some studies have discussed the difficulty of applying data from differing subsets in a single algorithm and others have mentioned that their models require continuous updates in knowledge bases in order to function properly . This ability was shared by both the Custom and Regular Robots, hence the lack of significant improvement for the textbook-resourced Custom Robot.

Comparison With Prior Work

Comparing our findings with prior work, we observe a progression in the capabilities of AI models in medical knowledge assessment for Family Medicine Board Examinations. Earlier studies of ChatGPT demonstrated insufficient accuracy to pass Family Medicine Board Examinations . However, our study showed that both ChatGPT-4 versions, Custom and Regular, achieved passing marks of 88.67% and 87.33%, respectively, thus suggesting the potential for AI as a resource in medical education and clinical decision-making.

Conclusions

Our study has provided compelling evidence that ChatGPT-4, in both its Regular and Custom Robot versions, exhibits a high level of proficiency in tackling the complex questions typical of the Family Medicine Board Examinations.
The performance of these AI models, with correct response rates of 88.67% and 87.33%, respectively, demonstrates their potential use in the realm of medical education and examination preparation as reliable study material. Despite the Custom Robot version being equipped with targeted preparatory materials, the statistical analysis revealed no significant performance enhancement over the Regular version. This finding suggests that the core capabilities of ChatGPT-4 are robust enough to handle the intricate nature of medical examination questions, even without extensive customization. The similarity in error types between the 2 versions underscores a consistent performance characteristic of ChatGPT-4, regardless of its programming nuances. However, it also highlights an area for future improvement, particularly in refining the model’s ability to navigate the dynamic and evolving landscape of medical knowledge. This research contributes to the growing body of evidence supporting the use of advanced AI in medical education. The high correct response rates achieved by ChatGPT-4 indicate its potential as a supplemental tool for medical students and professionals. Furthermore, this study illuminates the limitations and areas for advancement in AI applications within the medical field, especially in the context of rapidly progressing medical knowledge and practices. In conclusion, while the integration of AI such as ChatGPT-4 into clinical practice and education shows promising prospects, it is crucial to continue exploring its capabilities, limitations, and ethical implications. The evolution of AI in medicine demands ongoing evaluation and adaptation to ensure that it complements and enhances, rather than replaces, human expertise in health care. Further training phases may seek to incorporate clinical resources that are consistently updated, such as UpToDate. 
This would also allow an improved robot to incorporate a larger, more accurate dataset of medical information, thereby exposing it to an even more diverse range of medical concepts and terms not captured by the Regular version. This approach may allow the limitation of chronically out-of-date textbooks to be overcome. Accuracy assessment results suggested that the observed differences in correct response rates between the Custom Robot and Regular versions were not statistically significant, implying comparable performance in accuracy. Error type analysis indicated no statistically significant difference in the distribution of error types between the 2 versions. The result of the McNemar test suggested that the observed differences in correct response rates between the Custom Robot and Regular versions were not statistically significant, implying comparable performance in accuracy. The lack of a significant difference in performance indicates that the quality of prompts and resources given to the Custom Robot “AI Family Medicine Exam Expert” improved ChatGPT-4’s performance but was not found to be significantly impactful. However, their accuracy rates are indicative of a passing level of proficiency in understanding and responding to the complex medical scenarios presented in the examination questions . This observation aligns with previous research showing that large language models such as ChatGPT can perform at or near the passing thresholds in medical examinations without specialized training or reinforcement, as demonstrated in the study on the United States Medical Licensing Examination . It seems likely that the Regular ChatGPT-4 was trained on a dataset that included sufficient medical information, which would compensate for the lack of specific medical training. 
Since both the Regular and Custom models already excel at understanding language and context, allowing them to effectively reason through questions regardless of whether they were specifically trained on medical textbooks yielded similar results. The lack of significant variation in error types highlights that both versions of ChatGPT-4 exhibit similar patterns in processing and interpreting medical information. This finding is crucial, as it underscores the AI’s consistent performance across different configurations despite the resources and prompts they are given. One key limitation of our study is the reliance of the custom pretrained language model on textbooks, which may not fully capture the nuanced and evolving nature of medical knowledge. Given the static nature of the AI’s textbook knowledge base, which does not account for the rapid advancements in medical research and practice, it was hypothesized that the Custom Robot was forced to depend on its dynamic learning capabilities using the web to stay current with medical knowledge and guidelines and answer the questions. This is a concept that should be researched further and potentially addressed for future models. Previous research has had this limitation as well ; some studies have discussed the difficulty of applying data from differing subsets in a single algorithm and others have mentioned that their models require continuous updates in knowledge bases in order to function properly . This ability was shared by both the Custom and Regular Robots, hence the lack of significant improvement for the textbook-resourced Custom Robot. Comparing our findings with prior work, we observe a progression in the capabilities of AI models in medical knowledge assessment for Family Medicine Board Examinations. Earlier studies of ChatGPT demonstrated insufficient accuracy to pass Family Medicine Board Examinations . 
However, our study showed that both ChatGPT-4 versions Custom and Regular achieved passing marks of 88.67% and 87.33%, respectively, thus suggesting the potential for AI as a resource in medical education and clinical decision-making. Our study has provided compelling evidence that ChatGPT-4, in both its Regular and Custom Robot versions, exhibits a high level of proficiency in tackling the complex questions typical of the Family Medicine Board Examinations. The performance of these AI models, with correct response rates of 88.67% and 87.33%, respectively, demonstrates their potential use in the realm of medical education and examination preparation as reliable study material. Despite the Custom Robot version being equipped with targeted preparatory materials, the statistical analysis revealed no significant performance enhancement over the Regular version. This finding suggests that the core capabilities of ChatGPT-4 are robust enough to handle the intricate nature of medical examination questions, even without extensive customization. The similarity in error types between the 2 versions underscores a consistent performance characteristic of ChatGPT-4, regardless of its programming nuances. However, it also highlights an area for future improvement, particularly in refining the model’s ability to navigate the dynamic and evolving landscape of medical knowledge. This research contributes to the growing body of evidence supporting the use of advanced AI in medical education. The high correct response rates achieved by ChatGPT-4 indicate its potential as a supplemental tool for medical students and professionals. Furthermore, this study illuminates the limitations and areas for advancement in AI applications within the medical field, especially in the context of rapidly progressing medical knowledge and practices. 
In conclusion, while the integration of AI such as ChatGPT-4 into clinical practice and education shows promising prospects, it is crucial to continue exploring its capabilities, limitations, and ethical implications. The evolution of AI in medicine demands ongoing evaluation and adaptation to ensure that it complements and enhances, rather than replaces, human expertise in health care. Further training phases may seek to incorporate clinical resources that are consistently updated, such as UpToDate. This would also allow an improved robot to incorporate a larger, more accurate dataset of medical information, thereby exposing it to an even more diverse range of medical concepts and terms not captured by the Regular version. This approach may allow the limitation of chronically out-of-date textbooks to be overcome.
Molecular diagnostics of hepatobiliary and pancreatic neoplasias | 2ec71cad-d1d3-4b5e-b651-3c2a456282fb | 10948571 | Pathology[mh] | Neoplasias of the liver, bile ducts, and pancreas belong to the most frequent, clinically most relevant and challenging group of malignancies. In addition, their frequencies are rising, and despite significant improvements in prevention, diagnosis, and treatment, the individual prognosis of most patients is dismal, especially if curative resection cannot be achieved. Precise diagnosis and in recent years prediction of therapeutic response have gained increasing impact in hepatopancreatobiliary cancer due to more and more differentiated therapeutic approaches and particularly rapidly growing systemic treatment options. Molecular pathology is a cornerstone of these diagnostics and contributes in manifold ways to cancer typing (morpho-molecular subtyping, assessment of malignancy in uncertain constellations, and suspicion of genetic cancer predisposition) and predictive testing to guide systemic therapy. Molecular testing in hepatobiliary and pancreatic cancers has to reflect and adapt to several challenges: (a) resection material or biopsies (which may be small and/or contain only few tumor cells, especially in pancreatic and bile duct biopsies) and under more rare and specific conditions also liquid testing (blood, bile, cyst fluid) have to be handled, and testing approaches have to be tailored to these specimens. Also, (b) the indication for testing (typing, testing for approved therapies or molecular tumor boards, clinical trials, or even individualized treatment approaches) substantially matters and may soon be extended by the need to test for molecularly based adjuvant and neoadjuvant treatments. Finally, (c) availability of assays, competence, financing, and clinical environment affect the choice of tests and workflows. 
Liver biopsy represents a peculiarity, as it is frequently the prime or even the only material available to test for metastatic cancer of extrahepatic (including pancreatic) or unknown primary site. Recently, the clinical relevance of molecular testing in hepatobiliary cancer has increased. A number of successful clinical trials have led to approvals for molecularly guided systemic therapies. In addition, the complexity of biomarkers has increased from single-gene testing via multigene panels addressing all clinically actionable specific genetic alterations to complex marker testing (e.g., tumor mutation burden (TMB), homologous recombination deficiency (HRD), microsatellite instability (MSI)) and even whole-exome sequencing in certain constellations. Complexity of testing, specific tissue issues, and turn-around time represent the triangle of technical challenges molecular pathology is facing, especially in hepatopancreatobiliary cancer. It can be foreseen that this development will not stop and that adequate scaling of specific pathological, biomedical, and bioinformatic expertise, resources, and equipment is required, a challenge which in its completeness may only be addressed by specialized centers and networks.

Tumor subtyping

Hepatocellular adenoma

Hepatocellular adenoma (HCA) is a paradigmatic entity for morpho-molecular tumor subtyping. It mainly affects (younger) women without a pre-existing liver disease and is associated with exposure to steroid hormones. In addition, metabolic (e.g., obesity, glycogenosis) and vascular liver diseases may induce HCA formation. HCA subtyping has relevance in terms of potential complications as well as clinical management (Table ). HNF1A-inactivated HCAs (H-HCAs) are characterized by prominent fatty change and are negative for fatty acid binding protein 1 (FABP1) by immunohistochemistry.
While they show no increased risk of malignant transformation in general, a CTNNB1-independent malignant transformation has been described in patients older than 60 years with lesions > 5 cm in diameter. So far, the transformation risk has not been linked to specific HNF1A mutations. Thus, sequencing of the HNF1A gene is currently required neither for diagnosis nor for risk assessment regarding malignant transformation. Of note, liver adenomatosis may be observed in patients with HNF1A germline mutations (who may also develop maturity-onset diabetes of the young type 3). Inflammatory HCA (I-HCA) results from various mutations in genes contributing to activation of IL-6 signaling. It has some peculiar histological features: inflammatory foci, sinusoidal dilatation, and portal tract-like structures harboring ductular proliferations. Positivity for acute phase proteins (e.g., serum amyloid A, C-reactive protein) compared to the surrounding liver tissue can be used as a diagnostic immunomarker. The tumor-associated secretion of acute phase proteins may result in a systemic inflammation, which can be treated by HCA resection. Activating mutations of the CTNNB1 gene characterize a subgroup of HCA, which carries an increased risk of malignant transformation into HCC (so-called ß-catenin-activated HCA, B-HCA). About half of all B-HCA reveal additional features of inflammatory HCA (BI-HCA). Overall, the frequency of CTNNB1 mutation in HCA is 10 to 15%. Most mutations affecting exon 3 result in high activity of WNT signaling, while mutations in exons 7 (K335) and 8 (N387) and the S45 mutation in exon 3 lead to weaker pathway activation. The combination of glutamine synthetase (GS) and CD34 immunohistochemistry is able to discriminate these mutations in most cases, but molecular testing is advisable. HCAs with classical exon 3 mutations show a diffuse GS expression and increased sinusoidal CD34 expression.
Exon 3 S45 mutation is characterized by heterogeneous GS staining associated with a GS-positive but CD34-negative rim, while the central lesion reveals a diffuse capillarization. Exon 7/8 mutations show a similar CD34 staining pattern, but GS positivity is only focal and patchy. Strong activation of WNT signaling resulting from classical exon 3 mutations or S45 allele duplication has been associated with a high risk of malignant transformation. Consequently, molecular testing not only clarifies the precise nature of the CTNNB1 gene mutation but also provides information about the risk of malignant transformation and is thus predictive in terms of therapeutic decisions (resection of all B-HCA with high WNT-pathway activation). A rare HCA subtype reveals activation of sonic hedgehog signaling (SH-HCA) due to focal deletions that fuse the promoter of INHBE with GLI1. These tumors occur more frequently in obese patients and have a higher risk of rupture and life-threatening bleeding. Argininosuccinate synthase 1 has been proposed as a diagnostic immunomarker. The very recently described familial adenomatous polyposis (FAP)-HCA occurs in patients with germline mutations of the APC gene and also shows activation of the WNT signaling pathway, as demonstrated by strong positivity for glutamine synthetase. Thus, this rare subtype shares features with B-HCA, but it does not reveal nuclear beta-catenin accumulation, and an increased risk of malignant transformation has not been established for these HCA. Finally, rare HCAs that do not fit the above-mentioned subtypes are considered unclassified HCA (U-HCA).

Hepatocellular carcinoma

Numerous more or less differentiated attempts to subclassify hepatocellular carcinoma (HCC) using molecular genetic testing, expression profiling (RNA- and protein-based), epigenetics, and combinations thereof have been made.
These analyses have uncovered molecular mechanisms contributing to different modes of HCC development and have thus provided the basis for further research. None of these approaches has made its way into HCC diagnosis or clinical management of HCC patients, as they have several shortcomings: there are many proposals but no consensus regarding classification schemes and methodology. As only earlier (resectable) tumor stages have been included, it is unclear whether the classification schemes represent molecular tumor typing or staging and to what extent they are valid for progressed HCCs. It has become apparent that HCC, besides the majority of typical HCCs showing different growth and cytological patterns (which may be called HCC, not otherwise specified), contains several specific morpho-molecular subtypes, which show peculiar histological, molecular, clinical, and biological characteristics (Table ) and whose phenotypes typically remain stable throughout tumor progression. HCC subtyping has been included in the 5th edition of the World Health Organization (WHO) classification. Molecular analyses may support the diagnosis of these subtypes in questionable cases of, e.g., fibrolamellar, sclerotic, or chromophobe HCCs. Other subtypes, such as the lymphocyte-rich and steatotic subtypes, are still less clearly defined and lack agreed-upon, diagnostically applicable molecular markers. In rare cases, the demonstration of hepatitis B virus-DNA integrations in HCC may establish a causal relation between occupation-related infection and HCC development.

Cholangiocarcinoma

Intrahepatic cholangiocarcinoma (iCCA) is the second most frequent malignant primary liver tumor entity and needs to be separated from carcinomas of the gallbladder (GBCA) and extrahepatic biliary tree (eCCA). As suggested by animal models, iCCA may (under certain conditions also) develop from hepatocytes.
In line with this hypothesis, detection of albumin mRNA expression by albumin in situ hybridization has been proposed to support the diagnosis of small duct (sd)-iCCA. At the histological level, sd-iCCA, composed of non-mucinous, cuboidal cells forming small tubular and ductular structures in a desmoplastic stroma, is separated from a large duct (ld) type, which is biologically similar to eCCA and contains mucin-secreting, columnar cancer cells. sd-iCCA shares the etiological risk profile and primary nodular growth pattern with HCC, while ld-iCCA mirrors the etiology and growth pattern of eCCA. While eCCA and iCCA share some common mutations (e.g., TP53, BRCA1, BRCA2, PIK3CA, KRAS, SMAD4, ARID1A, GNAS), others are especially typical for small duct-iCCA (IDH1, IDH2, and BAP1 mutations as well as translocations involving FGFR2, NRG1, ALK, NTRK1-3, and possibly others; Table ) and may eventually allow for the identification of iCCA in a cancer of unknown primary (CUP) constellation. This is of direct clinical relevance, as specification of a hepatic adeno-CUP may provide the patient access to several specific guideline- and approval-based targeted and non-targeted therapeutic options superior to standard adeno-CUP chemotherapy. There is first evidence that small duct iCCA, similar to HCC, may contain to a significant extent various different, low-frequency, morpho-molecularly defined subtypes; next to the cholangiolocellular and ductal-plate malformation-like subtypes that are already recognized, the solid-tubulocystic (“cholangioblastic”) subtype with its peculiar morphology, inhibin positivity, and diagnostic NIPBL-NACC1 translocation has recently been defined. Further potential morpho-molecular subtypes have been proposed and await confirmation.
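How such alteration patterns might be screened in a hepatic adeno-CUP work-up can be sketched as a simple lookup over the sd-iCCA-typical markers listed above. This is an illustrative simplification, not a validated classifier; the gene sets mirror the alterations named in the text, and any real work-up would weigh them alongside histology and clinical context.

```python
# Alterations described above as especially typical for small duct iCCA;
# an illustrative simplification, not a validated diagnostic classifier.
SD_ICCA_MUTATIONS = {"IDH1", "IDH2", "BAP1"}
SD_ICCA_FUSION_PARTNERS = {"FGFR2", "NRG1", "ALK", "NTRK1", "NTRK2", "NTRK3"}

def sd_icca_suggestive(mutated_genes, fusion_genes):
    """Return the detected alterations that would point toward small duct
    iCCA in a hepatic adeno-CUP constellation (sorted for readability)."""
    hits = (set(mutated_genes) & SD_ICCA_MUTATIONS) | (
        set(fusion_genes) & SD_ICCA_FUSION_PARTNERS
    )
    return sorted(hits)

print(sd_icca_suggestive({"TP53", "IDH1"}, {"FGFR2"}))  # ['FGFR2', 'IDH1']
```

A hit on a highly specific alteration such as an FGFR2 fusion would, as noted later in the text, not only support the typing but also open approved therapeutic options.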
Pancreatic cancer

Several attempts to molecularly subclassify pancreatic ductal adenocarcinoma (PDAC) have been carried out based on microarray data obtained from cell lines and PDAC tissue, RNA sequencing, and in silico subtraction of transcript data obtained from cells comprising the tumor microenvironment (TME). Based on these results and additional studies, there is now some consensus that acknowledges at least two different molecular subtypes with some overlap and inter- and intra-tumor heterogeneity (classic and basal types). Several clinical studies aim at harnessing molecular subtypes as well as other RNA-based signatures to inform efficacy of systemic treatments. Alternatively, assessment of copy number variations (CNVs) and larger chromosomal rearrangements can classify PDAC into four subtypes: “stable,” “locally rearranged,” “scattered,” and “unstable”. These may be exploited clinically in the future, as the unstable subtype is associated with homologous recombination deficiency, and there is evidence that CNV-rich tumors tend to display a cold TME. Currently, molecular testing is not clinically required to define or subtype PDAC.

Other tumor entities

In other liver tumor entities, molecular testing is rarely required for typing. Rare exceptions may be questionable cases of malignant epithelioid hemangioendothelioma. Here, the demonstration of a WWTR1-CAMTA1 fusion gene (up to 90% of the cases) or the rare YAP1-TFE3 translocation may support the diagnosis. In cases of hepatic adeno-CUP, the pattern of molecular alterations may provide valuable information in regard to the entity. For example, detection of an FGFR2 translocation characterizes the tumor as a sd-iCCA with high certainty and provides the patient with several approved systemic therapeutic options.

Definition of malignancy

Tumor typing includes the reliable distinction of benign or premalignant hepatocellular lesions from early and highly differentiated HCC.
One scenario is malignant transformation of B-HCA into HCC; the other is the differential diagnosis between premalignant dysplastic nodules and highly differentiated HCC (Fig. ). Of note, the histological changes in the surrounding liver tissue caused by the underlying chronic liver disease, together with the so-called matrix diagnosis (e.g., older age, male gender, presence of chronic liver disease, patient origin from a high-risk area), may support the diagnosis of HCC vs. HCA or FNH. Since definite histopathological features of malignant transformation (interstitial and vascular invasion) are rarely found in a critical biopsy specimen, next to the demonstration of disturbed trabecular architecture, the diagnosis of well-differentiated HCC may be supported by diffuse capillarization of the sinusoids in HCC as detected by CD34 immunohistology. In addition, an immunohistological marker panel (heat shock protein 70, glypican-3, and GS; Table ) has become the diagnostic standard as a molecular adjunct for the diagnosis of malignancy in highly differentiated hepatocellular tumors. It provides a high sensitivity (~ 70%) and a near-perfect specificity for the detection of malignancy in independent studies. Moreover, detection of mutations in the telomerase reverse transcriptase (hTERT) promoter may be helpful for the identification of malignant transformation in highly differentiated hepatocellular tumors, namely, HCA and dysplastic nodules vs. HCC. It employs the fact that hTERT promoter mutations significantly increase in frequency from HCA (0%) to “borderline” cases (17%) to HCC derived from HCA (56%), and from dysplastic nodules (6–19%) to early and highly differentiated HCC (43–61%). It has to be noted that the detection of hTERT promoter mutations requires specific DNA-PCR-based assays and cannot be achieved by standard panel-based assays or WES.
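The three-marker immunopanel described above is commonly interpreted with a simple threshold rule, with positivity of at least two of the three markers taken as supportive of malignancy. The sketch below encodes that widely used convention for illustration; the 2-of-3 cutoff is an assumption of this sketch (the text itself only reports the panel's overall sensitivity and specificity), and it is no substitute for integrated histopathological assessment.

```python
def panel_supports_malignancy(hsp70: bool, gpc3: bool, gs: bool,
                              threshold: int = 2) -> bool:
    """Apply a >= 2-of-3 positivity rule to the HSP70 / glypican-3 /
    glutamine synthetase immunopanel (illustrative convention only)."""
    return sum((hsp70, gpc3, gs)) >= threshold

print(panel_supports_malignancy(True, True, False))   # two markers positive
print(panel_supports_malignancy(False, False, True))  # one marker: not sufficient
```

The threshold parameter makes explicit that a single positive marker is deliberately not treated as sufficient, which is what drives the panel's near-perfect specificity at moderate sensitivity.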
In about 85–90% of cases, PDAC may develop from different premalignant precursors: pancreatic intraepithelial neoplasia (PanIN) progresses from low- to high-grade lesions accumulating genetic alterations (e.g., mutations in KRAS, SMAD4, TP53). Furthermore, intraductal papillary mucinous neoplasms (IPMNs) may progress from low-grade to high-grade dysplasia to PDAC, and more rarely, mucinous cystic neoplasms (MCNs) may transform malignantly. KRAS mutations can be observed early in neoplastic development (i.e., even in low-grade PanIN and IPMN), with their frequency consequently increasing until more than 90% of PDAC carry activating KRAS mutations as the major driver event. While the diagnosis of malignant transformation in pancreatic carcinogenesis relies solely on histology and does not require any molecular testing, molecular testing of aspiration fluid may help to clarify the nature of cystic pancreatic lesions.

Genetic cancer syndromes and genetic tumor predisposition

Affection of the liver and biliary tree in genetic cancer syndromes is very rare. The vast majority of cases of HCC with a genetic background are due to hereditary metabolic diseases (e.g., genetic iron storage diseases, hereditary tyrosinemia type I (rare), Wilson’s disease (rare)). In these cases, severe hepatic disease manifestation provides the soil for HCC and iCCA development. Consequently, prevention of liver affection abolishes the risk of tumor development. The reason for the relative protection in regard to hepatic involvement in genetic tumor predisposition syndromes is unknown. Thus, there is no indication to test for genetic cancer syndromes in HCC and CCA. Genetic predisposition by respective germline mutations is likely in hepatic adenomatosis (> 10 HCA in a patient) or when histology or immunohistology detects multiple comparable microlesions in the non-tumorous parenchyma of an HCA resection specimen.
Angiomyolipoma (AML) has been linked to the tuberous sclerosis complex, but this correlation is much lower in hepatic AML when compared to renal AML. Rarely, hereditary cases of pancreatic cancer have been observed in association with Peutz-Jeghers syndrome (STK11), hereditary pancreatitis (PRSS1, SPINK1, CFTR), familial melanoma (CDKN2A, CDK4, BAP1), Lynch syndrome (MLH1, MSH2, MSH6, PMS2), hereditary breast and ovarian cancer syndrome (BRCA2, BRCA1, PALB2), Li-Fraumeni syndrome (TP53), FAP (APC), ataxia telangiectasia, and polymerase proofreading-associated polyposis (POLE, POLD1) . Hepatocellular adenoma Hepatocellular adenoma (HCA) is a paradigmatic entity for morpho-molecular tumor subtyping. It mainly affects (younger) women without a pre-existing liver disease and is associated with exposure to steroid hormones . In addition, metabolic (e.g., obesity, glycogenosis) and vascular liver diseases may induce HCA formation. HCA subtyping has relevance in terms of potential complications as well as clinical management (Table ). HNF1A-inactivated HCAs (H-HCAs) are characterized by prominent fatty change and are negative for fatty acid binding protein 1 (FABP1) by immunohistochemistry . While they show no increased risk of malignant transformation in general, a CTNNB1-independent malignant transformation has been described in patients older than 60 years with lesions > 5 cm in diameter . So far, the transformation risk has not been linked to specific HNF1A mutations. Thus, sequencing of the HNF1A gene is currently neither required for diagnosis nor risk assessment regarding malignant transformation. Of note, liver adenomatosis may be observed in patients with HNF1A germline mutations (who may also develop maturity-onset diabetes of the young type 3). Inflammatory HCA (I-HCA) results from various mutations in genes contributing to activation of IL-6 signaling . 
It has some peculiar histological features: inflammatory foci, sinusoidal dilatation, and portal tract-like structures harboring ductular proliferations . Positivity of acute phase proteins (e.g., serum amyloid A, C-reactive protein) compared to the surrounding liver tissue can be used as a diagnostic immunomarker. The tumor-associated secretion of acute phase proteins may result in a systemic inflammation, which can be treated by HCA resection . Activating mutations of the CTNNB gene characterize a subgroup of HCA, which carries an increased risk of malignant transformation into HCC (so-called ß-catenin-activated HCA, B-HCA) . About half of all B-HCA reveal additional features of inflammatory HCA (BI-HCA) . Overall, the frequency of CTNNB1 mutation in HCA is 10 to 15% . Most mutations affecting exon 3 result in high activity of WNT signaling, while mutations in exons 7 (K335) and 8 (N387) and the S45 mutation in exon 3 lead to weaker pathway activation . The combination of glutamine synthetase (GS) and CD34 immunohistochemistry is able to discriminate these mutations in most cases, but molecular testing is advisable. HCA with classical exon 3 mutations show a diffuse GS expression and increased sinusoidal CD34 expression. Exon 3 S45 mutation is characterized by heterogeneous GS staining associated with a GS-positive but CD34-negative rim, while the central lesion reveals a diffuse capillarization. Exon 7/8 mutations show a similar CD34 staining pattern, but GS positivity is only focal and patchy . Strong activation of WNT signaling resulting from classical exon 3 mutations or S45 allele duplication has been associated with a high risk of malignant transformation . Consequently, molecular testing not only clarifies the precise nature of the CTNNB gene mutation but it also provides information about the risk of malignant transformation and is thus predictive in terms of therapeutic decisions (resection of all B-HCA with high WNT-pathway activation). 
A rare HCA subtype reveals activation of sonic hedgehog signaling (SH-HCA) due to focal deletions that fuse the promoter of INHBE with GLI1. These tumors occur more frequently in obese patient and have a higher risk of rupture and life-threatening bleeding . Argininosuccinate synthase 1 has been proposed as a diagnostic immunomarker . The very recently described familial adenomatous polyposis (FAP)-HCA occurs in patients with germline mutations of the APC gene and shows also activation of the WNT signaling pathway as demonstrated by strong positivity for glutamine synthetase. Thus, this rare subtype shares features with B-HCA, but it does not reveal nuclear beta-catenin accumulation and an increased risk of malignant transformation has not been established for these HCA . Finally, rare HCAs that do not fit in the above-mentioned subtypes are considered unclassified HCA (U-HCA). Hepatocellular carcinoma Numerous more or less differentiated attempts to subclassify hepatocellular carcinoma (HCC) using molecular genetic testing, expression profiling (RNA- and protein-based), epigenetics, and combinations thereof have been made. These analyses have uncovered molecular mechanisms contributing to different modes of HCC development and have thus provided the basis for further research. None of these approaches has made its way into HCC diagnosis or clinical management of HCC patients, as they have several shortcomings: there are many proposals but no consensus regarding classification schemes and methodology. As only earlier (resectable) tumor stages have been included, it is unclear whether the classification schemes represent molecular tumor typing or staging and to which extent they are valid for progressed HCCs. 
It has become apparent that HCC, besides the majority of typical HCCs showing different growth and cytological patterns (may be called HCC, not otherwise specified), contains several specific morpho-molecular subtypes, which show peculiar histological, molecular, clinical, and biological characteristics (Table ) and whose phenotypes typically remain stable throughout tumor progression. HCC subtyping has been included into the 5th edition of the World Health Organization (WHO) classification. Molecular analyses may support the diagnosis of these subtypes in questionable cases, of, e.g., fibrolamellar, sclerotic, or chromophobe HCCs. Other subtypes, such as lymphocyte-rich and steatotic subtypes, are still less clearly defined and lack consented diagnostically applicable molecular markers. In rare cases, the demonstration of hepatitis B virus-DNA integrations in HCC may establish a causal relation between profession-based infection and HCC development. Cholangiocarcinoma Intrahepatic cholangiocarcinoma (iCCA) is the second most frequent malignant primary liver tumor entity and needs to be separated from carcinomas of the gallbladder (GBCA) and extrahepatic biliary tree (eCCA). As suggested by animal models, iCCA may (under certain conditions also) develop from hepatocytes . In line with this hypothesis, detection of albumin mRNA expression by albumin in situ hybridization has been proposed to support the diagnosis of small duct (sd)-iCCA . At the histological level, a sd-iCCA composed of non-mucinous, cuboidal cells forming small tubular and ductular structures in a desmoplastic stroma is separated from a large duct (ld) type, which is biologically similar to eCCA and contains mucin-secreting, columnar cancer cells. sd-iCCA shares the etiological risk profile and primary nodular growth pattern with HCC, while ld-iCCA mirrors the etiology and growth pattern of eCCA. 
While eCCA and iCCA share some common mutations (e.g., TP53 , BRCA1 , BRCA2 , PK3CA, KRAS , SMAD4 , ARID1A , GNAS ), others are especially typical for small duct-iCCA ( IDH1 , IDH2, and BAP1 mutations as well as translocations involving FGFR2 , NRG1 , ALK , NTRK1-3, and possibly others; Table ) and may eventually allow for the identification of iCCA in a cancer of unknown primary (CUP) constellation . This is of direct clinical relevance as a specification of a hepatic adeno-CUP may provide the patient access to several specific guideline- and approval-based targeted and non-targeted therapeutic options superior to standard adeno-CUP chemotherapy. There is the first evidence that small duct iCCA, similar to HCC, may contain to a significant extent various different, low-frequency, morpho-molecularly defined subtypes; next to the cholangiolocellular and ductal-plate malformation-like subtypes that are already recognized, the solid-tubulocystic (“cholangioblastic”) subtype with its peculiar morphology, inhibin-positivity, and diagnostic NIPL-NICC1 translocation has recently been defined . Further potential morpho-molecular subtypes have been proposed and await confirmation. Pancreatic cancer Several attempts to molecularly subclassify pancreatic ductal adenocarcinoma (PDAC) have been carried out based on microarray data obtained from cell lines and PDAC tissue, RNA-sequencing in silico subtraction of transcript data obtained from cells comprising the tumor microenviroment (TME). . Based on these results and additional studies , there is now some consensus that acknowledges at least two different molecular subtypes with some overlap and inter- and intra-tumor heterogeneities (classic and basal types). Several clinical studies aim at harnessing molecular subtypes as well as other RNA-based signatures to inform efficacy of systemic treatments. 
Alternatively, assessment of copy number variations (CNVs) and larger chromosomal rearrangements can classify PDAC into four subtypes: “stable,” “locally rearranged,” “scattered,” and “unstable” . These may be exploited clinically in the future, as the unstable subtype is associated with homologous recombination deficiency and there is evidence that CNV-rich tumors tend to display a cold TME. Currently, molecular testing is not required clinically to define or subtype PDAC. Other tumor entities In other liver tumor entities, molecular testing is rarely required for typing. Rare exceptions may be questionable cases of malignant epithelioid hemangioendothelioma. Here, the demonstration of a WWTR1-CAMTA1 fusion gene (up to 90% of the cases) or the rare YAP1-TFE3 translocation may support the diagnosis in questionable cases . In cases of hepatic adeno-CUP, the pattern of molecular alterations may provide valuable information in regard to the entity. For example, detection of an FGFR2 translocation characterizes the tumor as a sd-iCCA with high certainty and provides the patient with several approved systemic therapeutic options. Hepatocellular adenoma (HCA) is a paradigmatic entity for morpho-molecular tumor subtyping. It mainly affects (younger) women without a pre-existing liver disease and is associated with exposure to steroid hormones . In addition, metabolic (e.g., obesity, glycogenosis) and vascular liver diseases may induce HCA formation. HCA subtyping has relevance in terms of potential complications as well as clinical management (Table ). HNF1A-inactivated HCAs (H-HCAs) are characterized by prominent fatty change and are negative for fatty acid binding protein 1 (FABP1) by immunohistochemistry . While they show no increased risk of malignant transformation in general, a CTNNB1-independent malignant transformation has been described in patients older than 60 years with lesions > 5 cm in diameter . 
So far, the transformation risk has not been linked to specific HNF1A mutations. Thus, sequencing of the HNF1A gene is currently neither required for diagnosis nor risk assessment regarding malignant transformation. Of note, liver adenomatosis may be observed in patients with HNF1A germline mutations (who may also develop maturity-onset diabetes of the young type 3). Inflammatory HCA (I-HCA) results from various mutations in genes contributing to activation of IL-6 signaling . It has some peculiar histological features: inflammatory foci, sinusoidal dilatation, and portal tract-like structures harboring ductular proliferations . Positivity of acute phase proteins (e.g., serum amyloid A, C-reactive protein) compared to the surrounding liver tissue can be used as a diagnostic immunomarker. The tumor-associated secretion of acute phase proteins may result in a systemic inflammation, which can be treated by HCA resection . Activating mutations of the CTNNB gene characterize a subgroup of HCA, which carries an increased risk of malignant transformation into HCC (so-called ß-catenin-activated HCA, B-HCA) . About half of all B-HCA reveal additional features of inflammatory HCA (BI-HCA) . Overall, the frequency of CTNNB1 mutation in HCA is 10 to 15% . Most mutations affecting exon 3 result in high activity of WNT signaling, while mutations in exons 7 (K335) and 8 (N387) and the S45 mutation in exon 3 lead to weaker pathway activation . The combination of glutamine synthetase (GS) and CD34 immunohistochemistry is able to discriminate these mutations in most cases, but molecular testing is advisable. HCA with classical exon 3 mutations show a diffuse GS expression and increased sinusoidal CD34 expression. Exon 3 S45 mutation is characterized by heterogeneous GS staining associated with a GS-positive but CD34-negative rim, while the central lesion reveals a diffuse capillarization. 
Exon 7/8 mutations show a similar CD34 staining pattern, but GS positivity is only focal and patchy . Strong activation of WNT signaling resulting from classical exon 3 mutations or S45 allele duplication has been associated with a high risk of malignant transformation . Consequently, molecular testing not only clarifies the precise nature of the CTNNB gene mutation but it also provides information about the risk of malignant transformation and is thus predictive in terms of therapeutic decisions (resection of all B-HCA with high WNT-pathway activation). A rare HCA subtype reveals activation of sonic hedgehog signaling (SH-HCA) due to focal deletions that fuse the promoter of INHBE with GLI1. These tumors occur more frequently in obese patient and have a higher risk of rupture and life-threatening bleeding . Argininosuccinate synthase 1 has been proposed as a diagnostic immunomarker . The very recently described familial adenomatous polyposis (FAP)-HCA occurs in patients with germline mutations of the APC gene and shows also activation of the WNT signaling pathway as demonstrated by strong positivity for glutamine synthetase. Thus, this rare subtype shares features with B-HCA, but it does not reveal nuclear beta-catenin accumulation and an increased risk of malignant transformation has not been established for these HCA . Finally, rare HCAs that do not fit in the above-mentioned subtypes are considered unclassified HCA (U-HCA). Numerous more or less differentiated attempts to subclassify hepatocellular carcinoma (HCC) using molecular genetic testing, expression profiling (RNA- and protein-based), epigenetics, and combinations thereof have been made. These analyses have uncovered molecular mechanisms contributing to different modes of HCC development and have thus provided the basis for further research. 
None of these approaches has made its way into HCC diagnosis or clinical management of HCC patients, as they have several shortcomings: there are many proposals but no consensus regarding classification schemes and methodology. As only earlier (resectable) tumor stages have been included, it is unclear whether the classification schemes represent molecular tumor typing or staging and to which extent they are valid for progressed HCCs. It has become apparent that, besides the majority of typical HCCs showing different growth and cytological patterns (which may be called HCC, not otherwise specified), HCC comprises several specific morpho-molecular subtypes, which show peculiar histological, molecular, clinical, and biological characteristics (Table ) and whose phenotypes typically remain stable throughout tumor progression. HCC subtyping has been included in the 5th edition of the World Health Organization (WHO) classification. Molecular analyses may support the diagnosis of these subtypes in questionable cases of, e.g., fibrolamellar, sclerotic, or chromophobe HCCs. Other subtypes, such as the lymphocyte-rich and steatotic subtypes, are still less clearly defined and lack consented, diagnostically applicable molecular markers. In rare cases, the demonstration of hepatitis B virus-DNA integrations in HCC may establish a causal relation between profession-based infection and HCC development. Intrahepatic cholangiocarcinoma (iCCA) is the second most frequent malignant primary liver tumor entity and needs to be separated from carcinomas of the gallbladder (GBCA) and extrahepatic biliary tree (eCCA). As suggested by animal models, iCCA may, at least under certain conditions, also develop from hepatocytes. In line with this hypothesis, detection of albumin mRNA expression by albumin in situ hybridization has been proposed to support the diagnosis of small duct (sd)-iCCA.
At the histological level, a sd-iCCA composed of non-mucinous, cuboidal cells forming small tubular and ductular structures in a desmoplastic stroma is separated from a large duct (ld) type, which is biologically similar to eCCA and contains mucin-secreting, columnar cancer cells. sd-iCCA shares the etiological risk profile and primary nodular growth pattern with HCC, while ld-iCCA mirrors the etiology and growth pattern of eCCA. While eCCA and iCCA share some common mutations (e.g., TP53, BRCA1, BRCA2, PIK3CA, KRAS, SMAD4, ARID1A, GNAS), others are especially typical for small duct-iCCA (IDH1, IDH2, and BAP1 mutations as well as translocations involving FGFR2, NRG1, ALK, NTRK1-3, and possibly others; Table ) and may eventually allow for the identification of iCCA in a cancer of unknown primary (CUP) constellation. This is of direct clinical relevance, as specification of a hepatic adeno-CUP may provide the patient access to several specific guideline- and approval-based targeted and non-targeted therapeutic options superior to standard adeno-CUP chemotherapy. There is first evidence that small duct iCCA, similar to HCC, may to a significant extent comprise various low-frequency, morpho-molecularly defined subtypes; next to the cholangiolocellular and ductal-plate malformation-like subtypes that are already recognized, the solid-tubulocystic ("cholangioblastic") subtype with its peculiar morphology, inhibin positivity, and diagnostic NIPBL-NACC1 translocation has recently been defined. Further potential morpho-molecular subtypes have been proposed and await confirmation. Several attempts to molecularly subclassify pancreatic ductal adenocarcinoma (PDAC) have been carried out, based on microarray data obtained from cell lines and PDAC tissue, RNA sequencing, and in silico subtraction of transcript data obtained from cells comprising the tumor microenvironment (TME).
Based on these results and additional studies, there is now some consensus that acknowledges at least two different molecular subtypes (classic and basal), with some overlap and inter- and intra-tumor heterogeneity. Several clinical studies aim at harnessing molecular subtypes as well as other RNA-based signatures to inform efficacy of systemic treatments. Alternatively, assessment of copy number variations (CNVs) and larger chromosomal rearrangements can classify PDAC into four subtypes: "stable," "locally rearranged," "scattered," and "unstable". These may be exploited clinically in the future, as the unstable subtype is associated with homologous recombination deficiency and there is evidence that CNV-rich tumors tend to display a cold TME. Currently, molecular testing is not required clinically to define or subtype PDAC. In other liver tumor entities, molecular testing is rarely required for typing. Rare exceptions may be questionable cases of malignant epithelioid hemangioendothelioma. Here, the demonstration of a WWTR1-CAMTA1 fusion gene (up to 90% of the cases) or the rare YAP1-TFE3 translocation may support the diagnosis. In cases of hepatic adeno-CUP, the pattern of molecular alterations may provide valuable information regarding the entity. For example, detection of an FGFR2 translocation characterizes the tumor as a sd-iCCA with high certainty and provides the patient with several approved systemic therapeutic options. Tumor typing includes the reliable distinction of benign or premalignant hepatocellular lesions from early and highly differentiated HCC. One scenario is malignant transformation of B-HCA into HCC, and the other is the differential diagnosis between premalignant dysplastic nodules and highly differentiated HCC (Fig. ).
Of note, the histological changes in the surrounding liver tissue caused by the underlying chronic liver disease, together with the so-called matrix diagnosis (e.g., older age, male gender, presence of chronic liver disease, patient origin from a high-risk area), may support the diagnosis of HCC vs. HCA or FNH. Since definite histopathological features of malignant transformation (interstitial and vascular invasion) are rarely found in a critical biopsy specimen, next to the demonstration of disturbed trabecular architecture, the diagnosis of well-differentiated HCC may be supported by diffuse capillarization of the sinusoids in HCC, as detected by CD34 immunohistology. In addition, an immunohistological marker panel (heat shock protein 70, glypican-3, and GS; Table ) has become the diagnostic standard for the molecular adjunct diagnosis of malignancy in highly differentiated hepatocellular tumors. It provides a high sensitivity (~70%) and a near-perfect specificity for the detection of malignancy in independent studies. Moreover, detection of mutations in the telomerase reverse transcriptase (hTERT) promoter may be helpful for the identification of malignant transformation in highly differentiated hepatocellular tumors, namely HCA and dysplastic nodules vs. HCC. It employs the fact that hTERT promoter mutations significantly increase in frequency from HCA (0%) to "borderline" cases (17%) to HCC derived from HCA (56%), and from dysplastic nodules (6–19%) to early and highly differentiated HCC (43–61%). It has to be noted that hTERT promoter mutations require specific DNA-PCR-based assays for their detection and cannot be detected by standard panel-based assays or whole exome sequencing (WES). In about 85–90% of cases, PDAC may develop from different premalignant precursors: pancreatic intraepithelial neoplasia (PanIN) progresses from low- to high-grade lesions accumulating genetic alterations (e.g., mutations in KRAS, SMAD4, TP53).
Furthermore, intraductal papillary mucinous neoplasms (IPMNs) may progress from low-grade to high-grade dysplasia to PDAC, and more rarely mucinous cystic neoplasms (MCNs) may malignantly transform. KRAS mutations can be observed early in neoplastic development (i.e., even in low-grade PanIN and IPMN), with their frequency increasing such that more than 90% of PDAC carry activating KRAS mutations as the major driver event. While the diagnosis of malignant transformation in pancreatic carcinogenesis relies solely on histology, not requiring any molecular testing, molecular testing of aspiration fluid may help to clarify the nature of cystic pancreatic lesions. Involvement of the liver and biliary tree in genetic cancer syndromes is very rare. The vast majority of cases of HCC with a genetic background are due to hereditary metabolic diseases (e.g., genetic iron storage diseases, hereditary tyrosinemia type I (rare), Wilson's disease (rare)). In these cases, severe hepatic disease manifestation provides the soil for HCC and iCCA development. Consequently, prevention of liver involvement abolishes the risk of tumor development. The reason for the relative protection regarding hepatic involvement in genetic tumor predisposition syndromes is unknown. Thus, there is no indication to test for genetic cancer syndromes in HCC and CCA. Genetic predisposition by respective germline mutations is likely in hepatic adenomatosis (> 10 HCA in a patient) or when histology or immunohistology detects multiple comparable microlesions in the non-tumorous parenchyma of an HCA resection specimen. Angiomyolipoma (AML) has been linked to the tuberous sclerosis complex, but this association is much weaker in hepatic than in renal AML.
Rarely, hereditary cases of pancreatic cancer have been observed in association with Peutz-Jeghers syndrome (STK11), hereditary pancreatitis (PRSS1, SPINK1, CFTR), familial melanoma (CDKN2A, CDK4, BAP1), Lynch syndrome (MLH1, MSH2, MSH6, PMS2), hereditary breast and ovarian cancer syndrome (BRCA2, BRCA1, PALB2), Li-Fraumeni syndrome (TP53), FAP (APC), ataxia telangiectasia (ATM), and polymerase proofreading-associated polyposis (POLE, POLD1).
Hepatocellular carcinoma
Despite the presence of potentially targetable molecular alterations, no entity-specific targeted treatment has reached approval in HCC, so far. Clinical trials addressing MET-overexpressing or RAS-mutated HCCs have failed to show an overall survival benefit, likely due to shortcomings in testing strategy or drug efficacy. Furthermore, the lack of biopsies has severely limited predictive testing and trial-associated molecular analyses in HCC trials, and several trials employing pathway-directed drugs have not relied on predictive testing. Alterations providing entity-independent access to specific systemic therapy, such as NTRK translocations, homologous recombination deficiency, or MMR deficiency, are exceedingly rare in HCC, and their frequency does not justify regular diagnostic testing. Current first- and second-line systemic treatment approaches do not require molecular testing despite the growing evidence that treatment response depends on the molecular characteristics of the HCC. "Immuno-hot" HCCs are far more likely to respond to immune-oncological treatment, while CTNNB1-mutated HCCs are rather "immuno-cold" and seem to be better responders to TKI. Lenvatinib appears to act on FGFR-activated HCC, and resistance appears to involve compensatory activation of the EGFR pathway; furthermore, negative and positive molecular predictors of sorafenib response are likely to exist.
Nevertheless, current predictive molecular testing in HCC is largely restricted to broad testing in molecular tumor boards and individual off-label attempts (Supplementary Material ).
Cholangiocarcinoma
Most CCA patients are diagnosed with advanced disease. Combined cisplatin and gemcitabine treatment improved the median overall survival and became the standard first-line systemic therapy for more than a decade. Data from the TOPAZ-1 trial showed improved overall and progression-free survival in patients with advanced biliary tract cancer when the PD-L1 inhibitor durvalumab was added to this therapy regimen. Typically, in second-line therapy, molecular alterations may affect treatment decisions significantly. Of the common genetic alterations described above, IDH1 and BRAF V600E mutations as well as FGFR2 fusions have gained primary clinical attention. In the ClarIDHy study, the IDH1 inhibitor ivosidenib demonstrated a clinical benefit in previously treated, advanced IDH1-mutant cholangiocarcinoma and has gained approval for second-line treatment. In addition, dual BRAF and MEK inhibition showed promising activity in patients with BRAF V600-mutated biliary tract cancer in a phase 2 study. Another recurrent molecular feature of iCCA is the presence of principally targetable gene fusions. In particular, FGFR2 gene fusions show a high prevalence and have become an attractive target.
Both compounds are ATP-competitive, binding reversibly to the ATP-binding pocket in the FGFR kinase domain, inhibitors universally resulting in acquired resistance mutations . Next-generation inhibitors covalently binding FGFR also led to measurable clinical benefit . Thus, there is a continuously evolving landscape of clinically relevant FGFR inhibitors. Other gene rearrangements that are amendable for efficient drug targeting include fusions involving the NRG1 and NTRK genes . Although inactivating mutations of genes involved in DNA repair (e.g., MLH1, MSH2, MSH6, PMS2, POLE) may be rarely (~ 1% frequency of pathogenic or likely pathogenic variants) detected in all types of cholangiocarcinoma, they represent a valuable target for off-label treatment immune checkpoint blockade . Meanwhile, at least in dedicated centers, molecular pathological analysis is recommended for every patient with advanced iCCA, which should at least cover the whole spectrum of FGFR2 fusions, IDH1 and BRAF mutations, and NTRK fusions and microsatellite instability. However, more than 50% of iCCA contain potentially druggable alterations, and we recently demonstrated that molecular profiling using large DNA and RNA panels can improve patients’ survival in clinical practice . Molecular alterations that were successfully addressed in addition to the targets detailed above included BAP1, BRCA1, IDH2, and PIK3CA mutations, ERBB2 amplification, and MET and NRG1 fusions (Supplementary Material ). Pancreatic cancer While specific approvals for targeted therapy are lacking and entity-agnostic approvals face extremely low frequencies of respective alterations in PDAC, some signs of improvement are appearing. About 90% of pancreatic ductal adenocarcinoma are driven by KRAS-mutations and have escaped targeted therapeutic attempts, so far. 
But the advent of allele-specific (G12C in approximately 1.5% of PDAC, G12D in > 40%) and allele-agnostic small-molecule inhibitors of KRAS may influence the treatment landscape. The remaining approximately 10% of PDACs, which display wild-type KRAS, may carry gene fusions involving various drivers (e.g., NRG1, BRAF, ALK, NTRK1-3), in principle amenable to targeted treatment approaches. Treatment with NTRK inhibitors is categorized as IC according to ESMO-ESCAT. These PDAC cases, which are clinically associated with younger onset (< 50 years), require specific attention: KRAS wild-type PDAC should be analyzed by appropriate assays to interrogate genetic translocations leading to potentially druggable gene fusions (e.g., break-apart fluorescence in situ hybridization, RNA-based targeted NGS). This approach has also been endorsed by ESMO guidelines (Supplementary Material ). Approximately 5–7% of PDAC harbor mutations in HRR (homologous recombination repair) genes. These are mostly germline events, often followed by a second somatic hit, both of which can be identified in the tumor tissue. Such tumors exhibit an HRD (homologous recombination deficiency) phenotype, which renders them sensitive to PARP inhibitors or platinum-based agents. PARP inhibitors were shown to prolong progression-free survival in cases of pathogenic/likely pathogenic variants in BRCA1 and BRCA2 but failed to show overall survival benefits. Nevertheless, current guidelines recommend testing BRCA1/2 status (ESMO-ESCAT category: IA). Very few cases of PDAC (approx. 0.5–1.0%) are associated with deficient mismatch repair (dMMR), and trial data show a moderate to good response to checkpoint inhibitor blockade. Given these data as well as the limited therapeutic options, dMMR testing by immunohistochemistry is recommended (ESMO-ESCAT category: IC). Complementary PCR-based assays and NGS may support dMMR/MSI-H profiling.
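The PDAC testing recommendations summarized above (fusion testing for KRAS wild-type tumors, BRCA1/2 testing, dMMR immunohistochemistry) can be sketched as a simple triage function. This is a minimal illustration, assuming hypothetical boolean inputs from a molecular report; it is not a clinical decision algorithm.

```python
# Minimal sketch of the PDAC molecular testing triage described above.
# Inputs are hypothetical report fields; clinical use requires guideline review.
def pdac_testing_recommendations(kras_mutated: bool,
                                 brca1_2_pathogenic: bool,
                                 dmmr: bool) -> list:
    # Guideline-recommended baseline tests for every PDAC
    recs = ["test BRCA1/2 status (ESMO-ESCAT IA)",
            "dMMR immunohistochemistry (ESMO-ESCAT IC)"]
    if not kras_mutated:
        # ~10% of PDAC: interrogate druggable fusions (e.g., NRG1, BRAF, ALK, NTRK1-3)
        recs.append("fusion testing via break-apart FISH or RNA-based targeted NGS")
    if brca1_2_pathogenic:
        # HRD phenotype: sensitivity to PARP inhibitors or platinum-based agents
        recs.append("consider PARP inhibitor or platinum-based therapy")
    if dmmr:
        # approx. 0.5-1.0% of PDAC: moderate to good checkpoint-inhibitor response
        recs.append("consider checkpoint inhibitor blockade; confirm MSI-H by PCR/NGS")
    return recs
```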
Cancers of unknown primary (CUP)
Adeno-CUP of the liver is a frequent, clinically relevant constellation that requires specific consideration. Even if molecular testing may not narrow in on the responsible entity, comprehensive predictive testing can be of value as it may offer patients specific therapeutic options beyond the standard non-targeted chemotherapy (Supplementary Material ). Recent trial data addressing the clinical utility of molecularly guided therapy versus standard platinum-based chemotherapy in patients with unfavorable non-squamous CUP demonstrated significantly improved hazard ratios and response rates for patients who received a therapy based on predictive molecular biomarkers. Accordingly, the current ESMO guideline strongly recommends molecular pathology-guided testing in the diagnostic work-up of CUP patients.
Numerous different developments can be foreseen, with significant implications for molecular pathology diagnostics. The number of approved drugs that may require predictive testing will increase further. Novel clinical settings that will significantly increase molecular testing include molecularly targeted drug-antibody conjugates, the extension of targeted drugs and the respective molecular testing to the adjuvant (as has already happened in breast and lung cancer) and neoadjuvant settings, as well as the integration of (mutation-based) neo-antigen-targeted immuno-oncological treatment. Morpho-molecular subtyping in HCC is far from being complete, and it has just begun in iCCA; thus, new subtypes requiring respective molecular testing in suitable diagnostic constellations can be expected.
Molecular pathology diagnostics will have to respond to these challenges in an adaptive manner, taking the indication as well as the material, resource, and workflow constellation into account. Considering the complex nature and diversity of entities, indications, and markers, nucleic acid-based testing will increasingly develop towards "one-size-fits-all/many" approaches and "one-stop-shop" workflows to meet time, resource, and material constraints. Importantly, successful implementation of personalized oncology approaches, and thus of advanced molecular testing, critically depends on processing time. This includes the time required for molecular testing, recommendation of a personalized therapy or clinical trial, and access to and financing of potentially suggested off-label therapies. Dedicated clinical infrastructures, like the Centers for Personalized Medicine in the Southwest of Germany, may provide a comprehensive framework (broad molecular testing and molecular tumor boards) for the implementation of precision oncology approaches and may help to reduce dropout rates in molecular testing and treatment in progressed tumor stages.
High litter quality enhances plant energy channeling by soil macro‐detritivores and lowers their trophic position | 9fba5bbc-c469-4920-b88a-f6fd40925ff0 | 11848239 | Microbiology[mh] | Heterotrophic life on Earth crucially depends on energy and nutrients that are provided by primary production (Elton, ). The transfer of these resources to consumers is determined by feeding (trophic) interactions in food webs, and a multitude of ecosystem functions such as nutrient cycling, carbon sequestration, or decomposition depend on the trophic structure of food webs (Barnes et al., ). Across ecosystems, most of the primary production is not eaten by herbivores but instead fuels the food webs via detrital resources, that is, primarily plant leaf litter (Cebrian, ; Potapov et al., ). Detrital resources enter the food web through decomposition processes carried out by decomposer communities including fungi, bacteria, and litter‐feeding decomposer animals (Sterner & Elser, ). At the same time, the accessibility of detrital resources to consumers is likely influenced by the quality of the litter. This may be more pronounced in terrestrial than in aquatic systems due to the greater abundance of litter resources from vascular plants, higher accumulation of detritus, higher content of structural compounds such as lignin in litter, and more unevenly distributed resources in the former (Allan et al., ; Cebrian & Lartigue, ; Sterner & Elser, ; Tiegs et al., ). However, assessing trophic interactions in the detrital pathway in terrestrial systems is notoriously difficult due to the cryptic lifestyle of animals and the complexity of basal resources, restricting a more comprehensive understanding of fundamental ecosystem processes (Digel et al., ). 
This particularly applies to soil animals such as earthworms that feed on a mixture of dead plant material and soil, including microorganisms and microbial residues, making it hard to distinguish which fractions are actually assimilated (but see Larsen, Pollierer, et al., ). As key macro‐detritivores in terrestrial ecosystems, earthworms occupy a central role in the soil food web (Zhou et al., ), altering soil structure, decomposition processes, and thereby nutrient availability to microorganisms and plants (Lavelle & Spain, ; Zhong et al., ). Due to their feeding on plant residues and associated microorganisms, earthworms couple plant, bacterial, and fungal energy channels, but relative proportions likely depend on resource quality. The recalcitrant compounds of low‐quality litter resources typically favor fungal abundance, whereas more accessible compounds in high‐quality litter resources tend to favor bacterial abundance (Rooney et al., ). Consequently, earthworms may incorporate low‐quality litter resources mainly via the fungal energy channel, whereas they likely incorporate high‐quality litter resources mainly via the bacterial and plant energy channels, with plant resources becoming more important with higher litter quality. In addition, microbial communities are vertically structured in soil, with fungi typically dominating the organic or litter layer and bacteria dominating the mineral soil (Fierer et al., ; Lu & Scheu, ). Similarly, earthworms of different ecological groups also inhabit and feed on different soil layers. Therefore, due to the vertical distribution of earthworms and microorganisms, the energy channeling to earthworms presumably depends on their ecological group identity (Briones, ). The litter‐feeding epigeic and anecic earthworms likely assimilate litter resources via plant and fungal energy channels due to the higher fungal abundance in the litter layer (Lavelle & Spain, ). 
In contrast, the soil-feeding endogeic species mostly feed in the upper mineral soil and thereby are more likely to incorporate litter-derived resources via the bacterial energy channel due to the high abundance of bacteria in soil. Due to their feeding strategies and burrowing activities, earthworms of different ecological groups may differentially affect the abundance and community composition of microorganisms (Hättenschwiler et al., ). For instance, increased microbial activity in earthworm casts, especially in those of epigeic and anecic earthworms, may increase the transformation of recalcitrant litter resources into more bioavailable molecules. This likely increases the transfer of detrital resources to higher trophic levels, in turn benefitting earthworm nutrition ("external rumen" hypothesis; Swift et al., ). In addition, anecic species likely also favor the growth of microorganisms by mixing litter and mineral soil (Edwards et al., ). However, there is limited empirical evidence on how different earthworm ecological groups respond to variations in litter quality and how their responses affect microbial community composition in soil. Bulk stable isotope analysis is one of the widely used tools in soil food web analysis, allowing one to estimate trophic positions and the contribution of different resources to the nutrition of consumers (Potapov et al., ). The bulk isotope ratios of carbon (13C/12C) and nitrogen (15N/14N) in the consumer's body serve as biomarkers, with 13C/12C ratios informing about the use of carbon resources, for example, recently fixed versus microbially processed carbon, and 15N/14N ratios informing about trophic positions. However, accurate estimation of basal resources and trophic position can be difficult, in particular in soil food webs, as the basal resources are often mixed, for example, plant and microbial resources, making it difficult to determine the stable isotope composition of the consumers' food.
Compound‐specific isotope analysis (CSIA) of amino acids can overcome some of the limitations of bulk stable isotope analysis and provide more reliable insights into trophic niches of soil animals (Pollierer et al., ). In CSIA, the trophic position of organisms is estimated by the difference in δ 15 N values (as a measure of 15 N/ 14 N ratios) between “trophic” and “source” amino acids. Typically, the δ 15 N values of primary resources are represented in the δ 15 N values of “source” amino acids (e.g., phenylalanine [ Phe ]) while the trophic enrichment is reflected in the δ 15 N values of “trophic” amino acids (e.g., glutamine/glutamic acid [ Glu ]) (Chikaraishi et al., ). The δ 13 C values of amino acids offer a complementary perspective on the trophic niches. While bacteria, fungi, and plants synthesize essential amino acids (eAAs) via unique pathways that each exhibit distinct δ 13 C eAA fingerprints, metazoans lack the metabolic pathways to synthesize eAAs de novo. They therefore take up eAAs from these basal resources without or with minor modification, allowing one to estimate the relative contribution of basal resources to the diet of consumers (Larsen et al., ). In this study, we investigated trophic niche differentiation among earthworm species of different ecological groups (epigeic, anecic, and endogeic) in response to litter of different quality, that is, litter materials forming a gradient of increasing lignin content and C‐to‐N ratio. The trophic niches of earthworms, that is, the use of basal resources and trophic position, were assessed by bulk stable isotope analysis and CSIA of amino acids. Hereafter, we use the term “energy channel” and “energy channeling” to describe the use of basal resources and to facilitate interpretation in light of soil food web theory (Moore & Hunt, ). We note that while eAAs serve as a proxy for energy flow, they do not directly equate to energy transfer. 
Additionally, we tested the effect of earthworm ecological groups and litter quality on the abundance of fungi and bacteria in litter and soil using phospholipid fatty acids (PLFA) analysis. We hypothesized that (1) earthworms feed more on plant‐derived resources and occupy a lower trophic position in high‐quality litter treatments, with this trend being stronger in epigeic and anecic than in endogeic earthworms. (2) Epigeic and anecic earthworms predominantly feed on plant and fungal resources due to the high abundance of fungi in litter, while endogeic earthworms mainly rely on bacterial resources due to the high abundance of bacteria in soil. (3) The fungal and bacterial abundance of litter and soil differs between litter treatments, with higher bacterial abundance in high‐quality litter and higher fungal abundance in low‐quality litter. (4) Earthworms modulate the abundance of fungi and bacteria in litter and soil, with this varying among earthworm ecological groups and with litter quality.
Experimental setup
The microcosm experiment was set up in a full factorial design with two treatments: four litter types (wheat straw, horse manure, legume leaves [mixed leaves of Trifolium pratense and Medicago sativa ], and rape leaves [ Brassica napus ]) and five earthworm species ( Eisenia fetida [epigeic], Lumbricus terrestris [anecic], Aporrectodea rosea [endogeic], Aporrectodea caliginosa [endogeic], and Allolobophora chlorotica [endogeic], Appendix : Figure ). Except for E. fetida , all earthworm species used in this experiment were collected in March 2021 from a meadow close to the University of Göttingen, Germany (51°32′17.52″ N, 9°56′12.12″ E). The meadow was dominated by grasses (mainly Lolium perenne and Arrhenatherum elatius ) and legumes (e.g., Trifolium pratense ), but also herbs (e.g., Taraxacum officinale ). We aimed to use common earthworm species of Central European grasslands belonging to three different ecological groups.
As we only found endogeic and anecic but no epigeic earthworm species, we purchased E. fetida as a native European epigeic earthworm species occurring in rich organic soils (Obert & Vďačný, ). Its feeding strategy likely resembles that of other epigeic species associated with decaying plant materials such as rotting deadwood (e.g., Dendrobaena octaedra ) or leaf litter (e.g., Lumbricus rubellus ) (Edwards et al., ). Because E. fetida is commonly used in commercial vermiculture worldwide (Edwards et al., ), understanding its trophic ecology is of particular interest for agricultural management. Eisenia fetida was purchased from a culture shop (Wir haben Würmer, St. Gallen, Switzerland), where it was cultured on decomposing materials mixed with manure and plant residues. Juvenile and adult earthworms were collected. Adults were identified to species using Sims and Gerard and juveniles were ascribed to species based on pigmentation, arrangement of setae, and shape of prostomium. The microcosms consisted of PVC tubes with an inner diameter of 10 cm and a height of 17 cm, covered with 200‐μm mesh at the bottom and surrounded by a transparent plastic sheet extending 10 cm above the top of the tube to prevent earthworms from escaping. The soil was taken from an agricultural field located in Relliehausen, Lower Saxony, Germany (51°46′42.4″ N, 9°41′42.7″ E), to a depth of 20 cm. The soil is characterized as Luvisol on loess with a loamy texture. The field was planted with wheat (C 3 plant) when sampling and rotated with maize (C 4 plant) and wheat in the years before. The soil was first sieved using a 4‐mm mesh to remove plant residues and then placed at −20°C for 10 days to kill existing earthworms. Each microcosm was filled with a mixture of 633 g of fresh weight sieved soil and 324 g of expanded clay (Bellandris Blähton, SAGAFLOR AG, Kassel, Germany) to improve soil structure. 
The soil moisture was kept at 70% of the maximum water holding capacity throughout the experiment. Initially, 3 g of dry litter corresponding to the respective litter treatment was added on top of the soil. Every 3 weeks, an additional 1 g of the respective litter was added to ensure continuous resource supply in the restricted space of the microcosms. Although earthworms are less restricted in their foraging range in the field and are therefore unlikely to deplete resources completely, we acknowledge that earthworms may face a more limited resource supply under natural conditions. A total of 100 microcosms were established with the four litter types and five earthworm species (four litter treatments × five earthworm species × five replicates). Prior to the experiment, the litter was cut to the size of legume leaf litter to facilitate consumption by earthworms. Based on the litter C‐to‐N ratio, the quality of litter was ranked from high to low as follows: rape leaves, legume leaves, horse manure, and wheat straw (Appendix : Table ). Five juvenile individuals of E. fetida , L. terrestris , A. rosea , A. caliginosa , and A. chlorotica were introduced into the respective treatments of the microcosms. The initial average total fresh biomasses of E. fetida , L. terrestris , A. rosea , A. caliginosa , and A. chlorotica were 0.739 ± 0.003, 2.208 ± 0.009, 0.485 ± 0.004, 1.631 ± 0.005, and 0.858 ± 0.002 g, respectively. Microcosms were placed in darkness in a climate chamber at 20 ± 2°C and 70% humidity, watered four times a week based on gravimetric determination of the water loss, and randomized twice per week. At the end of the experiment, that is, after 18 weeks, microcosms were destructively sampled.
Sampling
The soil was broken up by hand and the earthworms were picked, counted, and weighed. Then, earthworms were kept at −20°C for 1 day. Subsequently, the earthworms were squeezed under a stereomicroscope to empty their gut.
Then, earthworms were washed and surface sterilized by placement in 70% ethanol for 10 min. Thereafter, earthworms were lyophilized and stored in a desiccator until further analysis. Litter materials were collected, lyophilized, weighed, and stored in a desiccator. The soil was sieved through 2‐mm mesh and stored at −20°C until further analyses.
Bulk stable isotope analysis and CSIA of amino acids
Individual earthworms, dried litter material, and soil were subjected to both bulk stable isotope analysis and CSIA of amino acids with dual C and N stable isotope ratio analysis (Appendix : Section ). The isotopic variation of C and N (δ X ) was expressed as δ X (‰) = ( R sample − R standard )/ R standard × 1000, with R representing the ratio between the heavy and light isotopes ( 13 C/ 12 C or 15 N/ 14 N). Amino acids were extracted as described by Larsen, Pollierer, et al. ( ; Appendix : Section ) and then derivatized as described by Corr et al. ( ; Appendix : Section ). Amino acid derivatives were then measured in triplicate using a gas chromatography combustion isotope ratio mass spectrometry system (GC‐C‐IRMS; Appendix : Section ). We report all isotopic data in δ notation (‰). Stable isotope values of nitrogen and carbon in target amino acids were assessed independently. We obtained isotope values of 10 amino acids including alanine ( Ala ), asparagine/aspartic acid ( Asp ), glutamine/glutamic acid ( Glu ), glycine ( Gly ), isoleucine ( Ile )*, leucine ( Leu )*, methionine ( Met )*, phenylalanine ( Phe )*, threonine ( Thr )*, and valine ( Val )*, with the asterisks denoting eAAs.
Phospholipid fatty acids analysis
PLFAs from soil and litter materials were extracted using a modified Bligh and Dyer method (Frostegård et al., ; Appendix : Section ). PLFA absolute abundances were calculated as nanomoles per gram dry weight of soil and litter.
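The δ notation defined in the isotope-analysis paragraph above can be sketched as a small helper. This is an illustrative re-implementation in Python (not the authors' pipeline), and the VPDB 13C/12C reference ratio used in the example is an assumed textbook value, not taken from the text:

```python
# Delta notation for isotope data, as defined above:
#   delta_X (permil) = (R_sample - R_standard) / R_standard * 1000
# R_VPDB_13C is an assumed reference value for the VPDB standard.

R_VPDB_13C = 0.0111802  # assumed 13C/12C ratio of the VPDB standard

def delta_permil(r_sample, r_standard):
    """Convert a heavy-to-light isotope ratio to delta notation (permil)."""
    return (r_sample - r_standard) / r_standard * 1000.0

# A sample whose ratio equals the standard has delta = 0 permil; a sample
# enriched by 0.1% relative to the standard has delta = +1 permil.
print(delta_permil(R_VPDB_13C * 1.001, R_VPDB_13C))
```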
The PLFA 18:2ω6,9 was used as a fungal marker (Joergensen, ), while the saturated fatty acids i15:0, a15:0, i16:0, and i17:0 served as markers for Gram + bacteria, and the fatty acids cy17:0, cy19:0, 16:1ω7, and 18:1ω7 as markers for Gram − bacteria. Bacteria were represented by the sum of Gram + and Gram − bacteria. Total detected PLFAs ( n = 28) were used to calculate PLFA absolute abundance.
Statistical analysis
The variation in bulk δ 13 C and δ 15 N values, earthworm biomass, and litter mass was analyzed using linear models, with earthworm species and litter treatments as explanatory categorical factors. The trophic position of earthworms as indicated by δ 15 N values of amino acids (TP CSIA ) was calculated using the following equation (Chikaraishi et al., ): TP CSIA = [(δ 15 N Glu − δ 15 N Phe − β)/TDF Glu‐Phe ] + 1, with δ 15 N Glu and δ 15 N Phe representing the δ 15 N values of Glu and Phe from earthworms, respectively, β the difference between the δ 15 N values of Glu and Phe of the primary producer (litter) in the food web, and TDF Glu‐Phe (7.6 ± 1.2‰) the trophic discrimination factor per trophic level. We used specific β values from rape leaves (β = −7.5 ± 1.6‰) and legume leaves (β = −8.2 ± 1.4‰) for calculating the trophic position of earthworms. Due to the low amino acid concentrations in wheat straw, we could not obtain the δ 15 N values of Glu and Phe for this litter type. Therefore, for earthworms sampled in the field and for those from the wheat straw and horse manure treatments, we used the β value for C 3 plants (−8.4 ± 1.6‰; Chikaraishi et al., ). The variation in TP CSIA of earthworms was analyzed using linear models, with earthworm species and litter treatments as explanatory variables.
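A minimal numeric sketch of the trophic-position equation above (in Python rather than the authors' R workflow): β and TDF are the values stated in the text, while the earthworm δ 15 N values are invented for illustration.

```python
# TP_CSIA = [(d15N_Glu - d15N_Phe - beta) / TDF_Glu-Phe] + 1
# (Chikaraishi et al.); TDF and beta values as given in the Methods.

TDF_GLU_PHE = 7.6   # permil, trophic discrimination factor per trophic level
BETA_C3 = -8.4      # permil, beta value for C3 plants

def tp_csia(d15n_glu, d15n_phe, beta, tdf=TDF_GLU_PHE):
    """Trophic position from Glu and Phe d15N values (illustrative)."""
    return (d15n_glu - d15n_phe - beta) / tdf + 1.0

# A strict primary consumer of C3 litter, whose Glu-Phe spacing equals
# beta + TDF, lands at trophic position 2:
print(round(tp_csia(d15n_glu=10.0, d15n_phe=10.8, beta=BETA_C3), 6))  # 2.0
```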
To evaluate the effect of litter quality on the TP CSIA of earthworms, planned contrasts were designed to compare the differences in TP CSIA of earthworms between treatments with wheat straw and each of the other litter treatments (Piovia‐Scott et al., ). To predict the biosynthetic origin of eAAs in earthworms, we used the fingerprinting approach as described in Larsen et al. . Briefly, we applied linear discriminant analysis (LDA) with δ 13 C values of the eAAs including Ile , Leu , Phe , Thr , and Val . We excluded Met and Lys because the chromatography of these amino acids was not satisfactory in all samples. We used eAAs δ 13 C values of bacteria, fungi, and plants obtained from Larsen et al. , Larsen, Pollierer, et al. , and Pollierer et al. , as well as those of rape and legume leaves from this experiment as classifier variables to identify the contribution of the basal resources to the diet of earthworms in the LDA. A leave‐one‐out cross‐validation approach was used to ensure the basal resource groups (plant, fungi, and bacteria) were statistically different. Statistical differences were confirmed by a high classification accuracy (98.9%, Appendix : Table , Figure ). We then ran multivariate analyses of variance (MANOVAs) for the LDA classification to inspect the effects of earthworm species and litter treatments on the use of basal resources by earthworms. To estimate the proportion of basal resources used by earthworms, we ran Bayesian mixing models based on eAAs δ 13 C values centered on the mean value of all eAAs. Because the dynamic range of mean‐centered δ 13 C values of Ile and Thr was too small to be informative, we excluded these eAAs and only used the three most informative eAAs ( Leu , Phe , and Val ). 
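The mean-centering step described above (expressing each eAA δ 13 C profile relative to the mean of its own eAAs before the mixing models) can be sketched as follows. This is illustrative Python, not the authors' R code, and the δ 13 C numbers are invented:

```python
# Mean-center a d13C eAA profile: subtracting the profile's own mean removes
# bulk d13C offsets between samples, leaving only the relative eAA pattern.

def mean_center(profile):
    """Center a d13C eAA profile (name -> permil value) on its own mean."""
    mean = sum(profile.values()) / len(profile)
    return {aa: value - mean for aa, value in profile.items()}

# Hypothetical earthworm profile for the three informative eAAs:
earthworm = {"Leu": -30.0, "Phe": -26.0, "Val": -28.0}
centered = mean_center(earthworm)
print(centered)  # {'Leu': -2.0, 'Phe': 2.0, 'Val': 0.0}
```

By construction the centered values sum to zero, so samples can be compared on their eAA patterns regardless of their bulk δ 13 C baseline.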
The Bayesian mixing models, which explicitly included earthworm species and litter treatments as fixed nested factors, were set to run for 300,000 iterations (burn‐in = 200,000) on three parallel Markov chain Monte Carlo chains with a thinning interval of 100 using non‐informative priors. The models were evaluated using the Gelman–Rubin diagnostic ( R̂ values <1.05). The fungal‐to‐bacterial ratio and abundance of fungi and bacteria in soil and litter were analyzed separately using linear models with earthworm species and litter treatments as explanatory variables. Only the mole percentages of PLFA higher than 0.2% of total PLFAs were included in the analyses. All statistical analyses were run in R version 4.3.0 (R Core Team, ). Bayesian mixing models were run using the “MixSIAR” package (Stock et al., ); LDAs were conducted using the “MASS” package (Ripley et al., ). Statistical results of linear models and MANOVAs were obtained using the “vegan” (Oksanen et al., ) and “stats” packages (R Core Team, ). Planned contrasts were performed using the “emmeans” package (Lenth, ). Data were transformed (log [earthworm biomass and litter mass] or mean‐centered [δ 13 C values of the eAAs]) prior to analyses when necessary to improve normality and homogeneity of variance. All figures were drawn using “ggplot2.”
Changes in litter mass and earthworm biomass
The final litter mass varied with litter treatments and depended on earthworm species (Table , Appendix : Figure ). Across litter treatments, litter mass significantly decreased during the experiment, with the decrease being more pronounced in litter of higher quality (−72.7 ± 2.9%, −68.6 ± 3.0%, and −10.0 ± 0.5% for rape leaves, legume leaves, and wheat straw, respectively). The reduction in litter mass was particularly strong in treatments with the epigeic species E. fetida and the anecic species L. terrestris , with the latter also strongly reducing the mass of horse manure (−82.6 ± 2.7%). The endogeic species A.
caliginosa and A. chlorotica increased the mass loss of rape and legume leaves to a similar extent, on average by 62.3 ± 1.1%. Changes in earthworm biomass also varied significantly with litter treatments (Table , Appendix : Figure ). The biomass of E. fetida increased by 78.0 ± 5.2% and 91.5 ± 4.4% in treatments with legume and rape leaves, respectively. Similarly, the biomass of L. terrestris increased in the treatments with legume leaves, rape leaves, and horse manure by 171.8 ± 11.7%, 135.5 ± 13.1%, and 63.2 ± 8.1%, respectively. Further, the biomass of A. caliginosa increased in each of the litter treatments, with the highest increase in the treatment with legume leaves (106.7 ± 10.6%). By contrast, the biomass of A. chlorotica only increased in the treatment with legume leaves (52.6 ± 4.4%), while the biomass of A. rosea remained constant across all litter treatments. Juveniles and cocoons were produced by E. fetida in all litter treatments, with the number of offspring (both juveniles and cocoons) being over 20 times higher in the legume and rape leaves treatments than in the horse manure and wheat straw treatments (Appendix : Table ). Endogeic and anecic earthworm species also reproduced in the legume leaf treatment, with similar numbers of offspring in both groups (endogeic: new juveniles: 0.6 ± 0.4, cocoons: 3.0 ± 0.8; anecic: new juveniles: 1.0 ± 0). Surprisingly, neither juveniles nor cocoons were produced by endogeic or anecic earthworm species in the rape leaf treatment. Conversely, A. rosea produced similar numbers of juveniles and cocoons in the horse manure and wheat straw treatments.
Bulk stable isotope composition of earthworms, litter, and soil
Litter δ 13 C values were highest in wheat straw (−28.7 ± <0.1‰), followed by horse manure (−29.2 ± <0.1‰), legume leaves (−29.9 ± <0.1‰), and rape leaves (−31.4 ± <0.1‰; Appendix : Figure ).
By contrast, the δ 15 N values of litter were highest in rape leaves (4.5 ± 0.1‰), intermediate in horse manure (3.1 ± 0.1‰), and lowest in legume leaves and wheat straw (−0.5 ± 0.1‰ for both; Appendix : Figure ). Soil δ 13 C and δ 15 N values were constant across litter treatments, averaging −27.1 ± <0.1‰ and 5.2 ± 0.1‰, respectively. In general, δ 13 C values of the epigeic earthworm species E. fetida and the anecic species L. terrestris were similar to those of soil, whereas in the endogeic earthworm species A. rosea , A. caliginosa , and A. chlorotica , they were on average 2.2 ± 0.1‰ higher than those of soil (Appendix : Figure ). All earthworm species were enriched in 13 C compared with litter (Appendix : Figure ), but this was more pronounced in the endogeic (+4.8 ± 0.1‰) than in epigeic (+3.6 ± 0.1‰) and anecic species (+2.8 ± 0.2‰). Further, δ 13 C values of earthworms also varied with litter treatments. Compared with the wheat straw treatment, the δ 13 C values of E. fetida were generally lower in the other litter treatments, and this was most pronounced in the rape leaves treatment (−3.3 ± 0.3‰; Appendix : Figure ). Similarly, compared with the wheat straw treatment, the δ 13 C values of L. terrestris were lower in the rape leaves treatment (−1.1 ± 0.3‰). By contrast, δ 13 C values of the endogeic earthworm species did not differ among litter treatments. Similar to δ 13 C values, the δ 15 N values of earthworms varied between earthworm species, but this depended on litter treatments (Table , Appendix : Figure ). Compared with the wheat straw treatment, δ 15 N values of E. fetida were lower in each of the other litter treatments, with the effect being most pronounced in the legume leaves treatment (−3.5 ± 0.5‰). By contrast, δ 15 N values of L. terrestris were 2.6 ± 0.5‰ higher in rape leaves than in the wheat straw treatment.
δ 15 N values of the endogeic species were little affected by litter treatments, except in A. caliginosa ; in this species, δ 15 N values were slightly lower in the horse manure than in the rape leaves treatment. Each of the earthworm species was enriched in 15 N compared with litter (Appendix : Figure ); δ 15 N values of E. fetida exceeded those in litter by 10.1 ± 0.6‰, while in the anecic and endogeic species they were on average only 3.3 ± 0.2‰ higher than those in litter. In addition, δ 15 N values of E. fetida were higher than those in soil, while in the anecic and endogeic earthworm species, they were similar to those in soil, rape leaves, and horse manure (Appendix : Figure ).
Trophic positions of earthworms derived from CSIA of amino acids
Trophic positions of earthworms as indicated by TP CSIA varied in an interactive way with litter treatments and earthworm species (Table , Figure ). Prior to the experiment, the average TP CSIA of E. fetida , L. terrestris , A. rosea , A. caliginosa , and A. chlorotica was 3.4 ± 0.1, 2.0 ± <0.1, 2.9 ± <0.1, 2.4 ± <0.1, and 2.5 ± <0.1, respectively (Figure ). Compared with the wheat straw treatment, the TP CSIA of all earthworm species except A. rosea was lower in the presence of legume leaves, with the decline being most pronounced in E. fetida , in which the TP CSIA decreased by 0.3 ± 0.1. Similarly, in the presence of rape leaves, the TP CSIA was also lower in all earthworm species compared with the wheat straw treatment, but the decline was only significant in E. fetida and A. rosea , in which the TP CSIA decreased by 0.5 ± 0.1 and 0.1 ± 0.1, respectively. Notably, the presence of horse manure only decreased the TP CSIA of E. fetida by 0.2 ± 0.1 compared with the wheat straw treatment.
Basal resources of earthworms inferred from amino acid 13C fingerprinting

The use of basal resources by earthworms significantly differed between earthworm species (MANOVA, Table , Figure ). The epigeic earthworm species E. fetida and the anecic earthworm species L. terrestris mainly relied on plant-derived resources, whereas the endogeic earthworm species A. rosea, A. caliginosa, and A. chlorotica relied more on bacterial-derived resources. However, the use of basal resources by earthworm species also depended on litter treatments (Table ). In the presence of legume and rape leaves, the epigeic species E. fetida and the endogeic species A. caliginosa and A. chlorotica shifted toward the use of plant-derived resources, while they shifted to more bacterial-derived resources in the presence of wheat straw and horse manure. The litter treatment effect on the use of basal resources was most pronounced in E. fetida as indicated by the fingerprinting approach (Figure ).

The mixing models further indicated that all earthworm species studied consumed more plant-derived resources in the legume and rape leaves treatments than in the wheat straw and horse manure treatments (Figure , Appendix : Figure ). In the wheat straw and horse manure treatments, E. fetida relied mainly on bacterial (51.0 ± 3.4%) and less on plant-derived resources (24.6 ± 3.4%), while it shifted to plant-derived resources in the treatments with legume and rape leaves (32.3 ± 2.5%). In line with the fingerprinting results, L. terrestris predominantly relied on plant-derived resources (59.1 ± 4.4%), with this being most pronounced in the treatments with legume and rape leaves. The endogeic earthworm species mainly relied on bacterial resources (66.7 ± 2.0%), but slightly shifted toward plant resources in the treatments with legume and rape leaves.
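The fingerprinting and mixing-model estimates above assign each earthworm fractional contributions of plant, bacterial, and fungal resources based on its essential amino acid (eAA) δ13C profile. As a rough, non-Bayesian stand-in for the mixing models used in the study, the sketch below fits three-source fractions by brute-force least squares; all source and consumer profiles are invented for illustration:

```python
# d13C (per mil) of four essential amino acids (e.g. Ile, Leu, Phe, Val)
# for candidate basal resources -- illustrative values, not the study's data
SOURCES = {
    "plant":    [-32.0, -30.5, -28.0, -31.0],
    "bacteria": [-24.0, -23.0, -25.5, -24.5],
    "fungi":    [-28.0, -26.5, -27.0, -27.5],
}

def mix_fractions(consumer, sources=SOURCES, step=0.01):
    """Brute-force search over the three-source simplex for the fractional
    contributions whose mixed eAA d13C profile is closest (least squares)
    to the consumer's profile -- a crude stand-in for Bayesian mixing models."""
    plant, bact, fungi = sources["plant"], sources["bacteria"], sources["fungi"]
    best, best_sse = None, float("inf")
    steps = int(round(1 / step))
    for i in range(steps + 1):            # plant fraction index
        for j in range(steps - i + 1):    # bacterial fraction index
            fp, fb = i * step, j * step
            ff = 1.0 - fp - fb            # fungal fraction is the remainder
            sse = sum((c - (fp * p + fb * b + ff * f)) ** 2
                      for c, p, b, f in zip(consumer, plant, bact, fungi))
            if sse < best_sse:
                best, best_sse = {"plant": fp, "bacteria": fb, "fungi": ff}, sse
    return best

# hypothetical endogeic earthworm profile, intermediate between
# the bacterial and fungal sources
print(mix_fractions([-25.5, -24.5, -25.8, -25.5]))
```

For this invented profile, the fit attributes the majority of the eAAs to the bacterial source, mirroring the pattern reported for the endogeic species.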
Additionally, the mixing model showed that the relative contribution of fungal-derived resources to the epigeic (24.4 ± 1.2%) and anecic (12.6 ± 1.2%) species was higher than that to the endogeic species (on average 8.0 ± 0.1%).

Microbial community structure in soil and litter

The fungal-to-bacterial ratio was highest in legume and rape leaves, intermediate in wheat straw, and lowest in horse manure. However, the fungal and bacterial abundances in litter varied with litter type and depended on earthworm species (Appendix : Table ). The absolute abundance of fungi in litter generally increased with litter quality, being highest in rape leaves and lowest in wheat straw (Figure ). The bacterial abundance in legume and rape leaves as well as in horse manure was consistently higher than that in wheat straw. In addition, the fungal abundance in litter also depended on earthworm species, in particular in rape leaves, where the presence of E. fetida and L. terrestris increased the fungal abundance by 55.2% compared with endogeic species (Appendix : Figure ). By contrast, the presence of L. terrestris decreased the bacterial abundance in legume leaves by 77.3% compared with the other earthworm species.

In contrast to litter, the fungal-to-bacterial ratio in soil was similar across litter treatments, with bacterial markers being generally more abundant than fungal markers (Figure ). However, the abundance of both bacteria and fungi in soil varied in an interactive way with litter type and earthworm species (Appendix : Table ). The fungal abundance was higher in the treatment with horse manure than in that with wheat straw, especially in the presence of E. fetida and L.
terrestris, where the fungal abundance increased by an average of 39.3% (Figure , Appendix : Figure ). Additionally, the presence of E. fetida and L. terrestris generally resulted in higher soil bacterial abundance than that of endogeic earthworm species, in particular in the legume leaf and horse manure treatments.

The final litter mass varied with litter treatments and depended on earthworm species (Table , Appendix : Figure ). Across litter treatments, litter mass significantly decreased during the experiment, with the decrease being more pronounced in litter of higher quality (−72.7 ± 2.9%, −68.6 ± 3.0%, and −10.0 ± 0.5% for rape leaves, legume leaves, and wheat straw, respectively). The reduction in litter mass was particularly strong in treatments with the epigeic species E. fetida and the anecic species L. terrestris, with the latter also strongly reducing the mass of horse manure (−82.6 ± 2.7%). The endogeic species A. caliginosa and A. chlorotica increased the mass loss of rape and legume leaves to a similar extent, on average by 62.3 ± 1.1%.

Changes in earthworm biomass also varied significantly with litter treatments (Table , Appendix : Figure ). The biomass of E. fetida increased by 78.0 ± 5.2% and 91.5 ± 4.4% in treatments with legume and rape leaves, respectively. Similarly, the biomass of L. terrestris increased in the treatments with legume leaves, rape leaves, and horse manure by 171.8 ± 11.7%, 135.5 ± 13.1%, and 63.2 ± 8.1%, respectively. Further, the biomass of A. caliginosa increased in each of the litter treatments, with the highest increase in the treatment with legume leaves (106.7 ± 10.6%). By contrast, the biomass of A. chlorotica only increased in the treatment with legume leaves (52.6 ± 4.4%), while the biomass of A. rosea remained constant across all litter treatments.

Juveniles and cocoons were produced by E. fetida in all litter treatments, with the number of offspring (both juveniles and cocoons) being over 20 times higher in the legume and rape leaves treatments than in the horse manure and wheat straw treatments (Appendix : Table ).
Similarly, reproduction of endogeic and anecic earthworm species was observed in the legume leaf treatment, with a similar number of offspring in species of both groups (endogeic: new juveniles: 0.6 ± 0.4, cocoons: 3.0 ± 0.8; anecic: new juveniles: 1.0 ± 0). Surprisingly, neither juveniles nor cocoons were produced by endogeic or anecic earthworm species in the rape leaf treatment. Conversely, A. rosea produced similar numbers of juveniles and cocoons in the horse manure and wheat straw treatments.

The majority of energy and nutrients originating from primary production is processed via the decomposition of detritus (Cebrian, ). Earthworms are among the most prominent decomposer animals in terrestrial systems; however, their trophic niche, that is, the proportions of plant, fungal, and bacterial resources they utilize, is masked by their ingestion of large amounts of mixed resources such as soil and leaf litter. Here, we applied stable isotope analyses to uncover the trophic niches of earthworms belonging to different ecological groups and to elucidate how these niches respond to differences in litter quality. All earthworm species incorporated litter resources via microbial energy channels, as indicated by both bulk-tissue and essential amino acid (eAA) 13C values. eAAs of earthworms predominantly originated from bacteria (~60%), whereas fungi contributed little (~10%), corresponding to the dominance of bacteria in the experimental soil. Further, higher litter quality strengthened the plant energy channel and resulted in lower trophic positions of earthworms, indicating a crucial role of resource quality in shaping the trophic niches of soil animals.

Litter quality as a driving factor of earthworm growth and bulk stable isotope composition

Increased litter mass loss in the presence of high-quality litter suggests that high-quality litter decomposed more quickly and was consumed more intensively by earthworms. This is supported by higher earthworm biomass gain in the treatments with rape and legume leaves than in those with wheat straw and horse manure.
The more balanced stoichiometry between high-quality litter resources (C-to-N ratio of ~13) and earthworms (C-to-N ratio of ~4), compared with low-quality litter (C-to-N ratio of ~93), likely enables earthworms to assimilate plant litter resources more efficiently (Sterner & Elser, ), resulting in higher biomass gain. Notably, compared with treatments with endogeic earthworm species, the litter mass loss was more pronounced in the presence of the epigeic E. fetida and the anecic L. terrestris, suggesting a more intense consumption of high-quality litter by these two species. This was corroborated by higher biomass gains of L. terrestris and a higher number of offspring in E. fetida (Appendix : Table ) in the high-quality litter treatments. Intensive consumption of high-quality litter by L. terrestris is further supported by its bulk 15N values, which were enriched by about 3‰ compared with litter, suggesting predominant assimilation of litter resources. By contrast, the high 15N values of E. fetida likely stemmed from its pre-experimental diet with high 15N values; bulk 15N values of E. fetida should therefore be interpreted cautiously.

All earthworm species were enriched in 13C by 2‰–7‰ compared with litter, which exceeds the 0.5‰–1‰ enrichment per trophic level in non-detrital systems. This phenomenon, coined the "detrital shift" (Potapov et al., ), has been attributed to the consumption of microbially processed organic matter or the uptake of 13C-enriched leaf litter compounds (Pollierer et al., ). The shift was more pronounced in the endogeic (>4‰) than in the epigeic and anecic species (2‰–4‰), suggesting that the epigeic and anecic species relied more on litter resources, whereas the endogeic species depended more on microbial-derived resources. Soil 13C values typically increase with increasing soil depth due to the accumulation of old carbon resources heavily processed by microorganisms (Ehleringer et al., ).
Therefore, the stronger enrichment in 13C in endogeic earthworms likely reflects their higher reliance on old carbon resources compared with epigeic and anecic species (Ferlian et al., ). This is supported by the overlapping 15N values of endogeic earthworms and organic matter in soil, and is consistent with earlier findings that endogeic species assimilate more old carbon than epigeic and anecic species (Briones et al., ). Notably, in treatments with wheat straw and horse manure, E. fetida had higher 13C values than in treatments with legume and rape leaves, indicating a greater reliance on 13C-enriched resources, such as microbially processed carbon, in low-quality litter treatments.

Effects of litter quality on trophic niches of earthworms

Supporting our first hypothesis, earthworms in the high-quality litter treatments generally occupied lower trophic positions and relied more on plant-derived resources, suggesting selective feeding on litter resources. This was most pronounced in the epigeic species, as indicated by the reduction in its trophic position. However, the strong shift in trophic position in E. fetida may be partially attributed to its pre-experimental diet, as noted above. Notably, the high trophic positions of L. terrestris, A. caliginosa, and A. chlorotica in low-quality litter treatments resembled their initial trophic positions. This could be due either to little incorporation of new resources in the low-quality litter treatments or to the incorporation of similar resources in the soil they were sampled from and in the low-quality treatments. The fact that the biomass of these three earthworm species in the horse manure treatment, and of A. caliginosa also in the wheat straw treatment, significantly increased during the experiment argues against little turnover/incorporation of tissue carbon, and instead suggests the incorporation of similar resources in the experiment and in the soil they were sampled from. By contrast, the biomass of L.
terrestris and A. chlorotica in the wheat straw treatment did not increase during the experiment, suggesting little tissue turnover and thus necessitating careful interpretation of their stable isotope values. Nevertheless, all studied earthworm species shifted to higher proportions of plant resources in the rape and legume leaves treatments compared with treatments with wheat straw and horse manure. This likely reflects increased consumption and assimilation of plant resources in the high-quality litter treatments, as has also been shown in aquatic animals such as pond snails and crustaceans (Zhang et al., ). By contrast, wheat straw and horse manure were rich in lignin and holocellulose, which restrict the use of plant resources by detritivores. The digestion of these recalcitrant litter compounds requires cellulases and extracellular hydrolytic and oxidative enzymes, which earthworms are unlikely to synthesize themselves. Instead, they depend on microorganisms to produce these enzymes, as suggested by the "external rumen" hypothesis (Swift et al., ). Additionally, gut microbes may also supplement the nutrition of earthworms under nutrient-deprived conditions (Larsen, Ventura, et al., ). However, the short retention time (2–24 h) of food during earthworm gut passage likely precludes a crucial role of gut microbes in the digestion of litter materials (Drake & Horn, ; Zeibich et al., ); the "external rumen" hypothesis is therefore more likely. This is supported by the higher trophic positions of earthworms in the wheat straw and horse manure treatments, indicating that the assimilation of microbial resources elevates the trophic position of decomposer animals (i.e., trophic inflation) (Steffan et al., ).
More intermediate microbial trophic steps in response to lower diet quality could be a universal pattern, as similar results have been reported for a number of soil animals, such as isopods and ants, as well as for aquatic animals (Helms et al., ; van der Lee et al., ; van Straalen, ). The trophic niches of the earthworms in our study aligned with their assigned ecological groups. Both the TP CSIA results and the observed shifts toward greater utilization of plant resources in high-quality litter treatments among epigeic and anecic species confirmed that these species predominantly consume litter-derived resources (Edwards et al., ). The higher consumption of litter resources was accompanied by a higher proportion of fungal eAAs in the epigeic and anecic earthworms, owing to the high fungal abundance in litter. Although the TP CSIA of endogeic species was generally higher than that of epigeic and anecic species, it also varied among endogeic species, indicating trophic niche differentiation (Capowiez et al., ; Zhong, Larsen, et al., ). For instance, high-quality litter significantly lowered the TP CSIA of A. caliginosa and A. chlorotica but not that of A. rosea, indicating a more pronounced shift toward the use of plant-derived resources in the former two species, as also evidenced by the increase in their biomass in the high-quality litter treatments. Despite these intragroup differences, trophic niche variation was less pronounced within the endogeic species than across earthworm ecological groups, implying that the ecological grouping of earthworms is a robust indicator of trophic niche separation.

Basal resource use and energy channeling

The fungal-to-bacterial ratio was higher in litter than in soil, consistent with the fungal energy channel being more important in the epigeic and anecic than in the endogeic species, partially supporting our second hypothesis. However, our findings suggest that fungi only moderately contribute to earthworm nutrition.
Rather, litter-derived resources were predominantly incorporated via the plant (~30%) and bacterial (~60%) energy channels, with lower-quality litter resulting in a higher contribution of the bacterial energy channel. As the bacterial channel is perceived as a "fast energy channel" (Coleman et al., ), the higher reliance on the bacterial channel likely leads to faster transfer and higher loss of energy along food chains under low diet quality. This is consistent with high nutrient leaching and fast mineralization of soil organic matter in intensively managed agricultural systems lacking input of high-quality residues such as mulch material (Corbeels et al., ; de Vries et al., ).

The predominance of bacterial eAAs in earthworms corresponded to the dominance of bacteria in our experimental soil. Although the epigeic earthworm species E. fetida is thought to predominantly feed on and assimilate plant resources (Lavelle & Spain, ), it also contained a substantial amount of bacterial eAAs in its tissue, which, as noted above, may have originated from its pre-experimental diet. However, the anecic and endogeic earthworms also contained high proportions of eAAs from bacteria, indicating that bacteria or their residues in soil serve as important food resources. This was particularly evident in L. terrestris, which pulls litter into its burrows and feeds on microbially colonized litter within the soil matrix (Lavelle & Spain, ). As bacteria are often associated with small soil particles such as clay, earthworms ingest bacteria and bacterial residues when feeding on soil (Hemkemeyer et al., ). The latter form an important component of soil organic matter, and, by breaking up soil aggregates during gut passage, earthworms may be able to access these resources (Angst et al., ).
By contrast, it is unlikely that earthworms effectively digest living bacteria, since bacterial biomass and the number of bacterial cells in soil do not decrease during earthworm gut passage (Scheu, ; Schönholzer et al., ). Based on pulse 13C labelling of plants and CSIA of amino acids, a previous study documented that earthworms, in particular endogeic species, were only little enriched in 13C from recent plant photosynthates, whereas substantial amounts of bacterial-derived eAAs were found in their tissue (Zhong et al., ). This also suggests that earthworms rely on bacterial residues associated with older soil organic matter rather than on living bacterial biomass. Given the dominance of the fungal energy channel in soil food webs (Pausch et al., ), the predominant incorporation of plant and bacterial resources uniquely positions earthworms in soil food webs and may explain why earthworms reach high biomass and dominate energy processing in many soil food webs (Zhou et al., ). The influence of litter quality on dietary preferences and, in turn, on the channeling of energy and nutrients may be highly relevant for agricultural systems and management decisions, as it may feed back to ecosystem functions and services.

Effects of litter quality and earthworm species on the abundance of fungi and bacteria

In part supporting our third hypothesis, the abundance of fungi and bacteria in litter and soil varied with litter treatments. In litter, total microbial abundance and the fungal-to-bacterial ratio were higher in the rape and legume leaves than in the horse manure and wheat straw treatments, likely relating to the lower C-to-N ratio and lignin content of rape and legume leaves. In soil, the abundance of fungi and bacteria generally varied little with litter quality, likely reflecting that leachates from litter of different quality are similar (Hensgens et al., ).
However, the abundance of fungi and bacteria was higher in the soil of the horse manure than in the wheat straw treatments due to the higher microbial abundance in the former, presumably reflecting the more intensive incorporation of manure into the soil by earthworms. Providing evidence for our fourth hypothesis, earthworms differentially modulated the abundance of fungi and bacteria in litter and soil. In litter, the abundance of fungi in rape leaves was higher in the presence of the epigeic E. fetida and the anecic L. terrestris than in the presence of endogeic species. Presumably, the litter‐feeding epigeic and anecic species increased the availability of nitrogen by casting and excreting mucus and urine in litter, thereby facilitating the exploitation of litter resources by fungi. The increased fungal abundance may have contributed to the increased contribution of fungi to the diet of epigeic and anecic earthworms, as indicated by CSIA, suggesting a positive feedback loop. Variations in the abundance of fungi and bacteria with earthworm species were stronger in soil than in litter, presumably reflecting the translocation of litter resources into the soil by earthworms (bioturbation). In particular, E. fetida and L. terrestris increased the abundance of bacteria (and microorganisms in total), and this was likely due to the incorporation of litter materials by these species into the soil, with this being more pronounced in horse manure, legume leaves, and rape leaves treatments. Similar to L. terrestris , A. caliginosa , as endogeic species, also increased the fungal abundance in soil in the horse manure treatment. In fact, A. caliginosa is known to feed on the dung of vertebrate herbivores, reflecting that its grouping as endogeic species is simplistic (Barley, ; Capowiez et al., ). Increased litter mass loss in the presence of high‐quality litter suggests that high‐quality litter decomposed more quickly and was consumed more intensively by earthworms. 
This is supported by higher earthworm biomass gain in the treatments with rape and legume leaves than in those with wheat straw and horse manure. The more balanced stoichiometry between high‐quality litter resources (C‐to‐N ratio of ~13) and earthworms (C‐to‐N ratio of ~4), compared with low‐quality litter (C‐to‐N ratio of ~93), likely enables earthworms to assimilate plant litter resources more efficiently (Sterner & Elser, ), resulting in higher biomass gain. Notably, compared with treatments with endogeic earthworm species, the litter mass loss was more pronounced in the presence of the epigeic E. fetida and the anecic L. terrestris , suggesting a more intense consumption of high‐quality litter by the latter. This was corroborated by higher biomass gains of L. terrestris and a higher number of offspring in E. fetida (Appendix : Table ) in the high‐quality litter treatments. Intensive consumption of high‐quality litter by L. terrestris is further supported by bulk 15 N values, which were about 3‰ enriched compared with litter, suggesting predominant assimilation of litter resources. By contrast, the high 15 N values of E. fetida likely stemmed from its pre‐experimental diet of high 15 N values. Thus, bulk 15 N values of E. fetida should be interpreted cautiously. All earthworm species were enriched in 13 C by 2‰–7‰ compared with litter, which exceeds the 0.5‰–1‰ enrichment per trophic level in non‐detrital systems. This phenomenon, coined “detrital shift” (Potapov et al., ), has been attributed to the consumption of microbially processed organic matter or the uptake of 13 C‐enriched leaf litter compounds (Pollierer et al., ). The shift was more pronounced in the endogeic (>4‰) than in the epigeic and anecic species (2‰–4‰), suggesting that the epigeic and anecic species relied more on litter resources, whereas the endogeic species depended more on microbial‐derived resources. 
Soil 13 C values typically increase with increasing soil depth due to the accumulation of old carbon resources heavily processed by microorganisms (Ehleringer et al., ). Therefore, the stronger enrichment in 13 C in endogeic earthworms likely reflects their higher reliance on old carbon resources compared with epigeic and anecic species (Ferlian et al., ). This is supported by the overlapping 15 N values of endogeic earthworms and organic matter in soil and is consistent with earlier findings that endogeic species assimilate more old carbon than epigeic and anecic species (Briones et al., ). Notably, in treatments with wheat straw and horse manure, E. fetida had higher 13 C values than in treatments with legume and rape leaves, indicating a greater reliance on 13 C‐enriched resources such as microbially processed carbon in low‐quality litter treatments. Supporting our first hypothesis, earthworms in the high‐quality litter treatments generally occupied lower trophic positions and relied more on plant‐derived resources, suggesting selective feeding on litter resources. This was most pronounced in the epigeic species, as indicated by the reduction in its trophic position. However, the strong shift in trophic position in E. fetida may be partially attributed to its pre‐experimental diet as noted above. Notably, the high trophic position of L. terrestris , A. caliginosa , and A. chlorotica in low‐quality litter treatments resembled their initial trophic position. This could be due to either little incorporation of new resources in low‐quality litter treatments or to the incorporation of similar resources in the soil they were sampled from and in the low‐quality treatments. The fact that the biomass of these three earthworm species in the horse manure treatment, and of A. 
caliginosa also in the wheat straw treatment, significantly increased during the experiment argues against little turnover/incorporation of tissue carbon, but instead suggests the incorporation of similar resources in the experiment and the soil they were sampled from. By contrast, the biomass of L. terrestris and A. chlorotica in the wheat straw treatment did not increase during the experiment, suggesting little tissue turnover, thus necessitating careful interpretation of their stable isotope values. Nevertheless, all studied earthworm species shifted to higher proportions of plant resources in the rape and legume leaves treatments compared with treatments with wheat straw and horse manure. This likely reflects increased consumption and assimilation of plant resources in the high‐quality litter treatments, as has also been shown in aquatic animals such as pond snails and crustaceans (Zhang et al., ). By contrast, wheat straw and horse manure were rich in lignin and holocellulose that restrict the use of plant resources by detritivores. The digestion of these recalcitrant litter compounds requires cellulases and extracellular hydrolytic and oxidative enzymes, which earthworms are unlikely to synthesize themselves. Instead, they depend on microorganisms to produce these enzymes, as suggested by the “external rumen” hypothesis (Swift et al., ). Additionally, gut microbes may also supplement the nutrition of earthworms under nutrient‐deprived conditions (Larsen, Ventura, et al., ). However, the short retention time (2–24 h) of food resources during the earthworm gut passage likely does not allow a crucial role of gut microbes in aiding the digestion of litter materials (Drake & Horn, ; Zeibich et al., ); thereby, the “external rumen” hypothesis is more likely. 
This is supported by higher trophic positions of earthworms in wheat straw and horse manure treatments, indicating that the assimilation of microbial resources drives the elevation of the trophic position of decomposer animals (i.e., trophic inflation) (Steffan et al., ). More intermediate microbial trophic steps in response to lower diet quality could be a universal pattern, as similar results have been reported for a number of soil animals such as isopods and ants as well as for aquatic animals (Helms et al., ; van der Lee et al., ; van Straalen, ). The trophic niches of the earthworms in our study aligned with their assigned ecological groups. Both the TP CSIA results and the observed shifts toward greater utilization of plant resources in high‐quality litter treatments among epigeic and anecic species confirmed that these species predominantly consume litter‐derived resources (Edwards et al., ). The higher consumption of litter resources was accompanied by a higher proportion of fungal eAAs in the epigeic and anecic earthworms due to the high fungal abundance in litter. Although the TP CSIA of endogeic species was generally higher than that of epigeic and anecic species, it also varied among endogeic species, indicating trophic niche differentiation (Capowiez et al., ; Zhong, Larsen, et al., ). For instance, high‐quality litter significantly lowered the TP CSIA of A. caliginosa and A. chlorotica but not that of A. rosea , indicating a more pronounced shift toward the use of plant‐derived resources in the former, as also evidenced by the increase in their biomass in the high‐quality litter treatments. Despite these intragroup differences, trophic niche variation was less pronounced within endogeic species than across earthworm ecological groups, implying that the ecological grouping of earthworms is a robust indicator of trophic niche separation among earthworms. 
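The compound-specific trophic positions (TP CSIA) discussed above are typically derived from the δ15N offset between a "trophic" and a "source" amino acid. Below is a minimal sketch of one widely used formulation (a Chikaraishi-type glutamic acid vs. phenylalanine equation); the β and TDF defaults and the input values are illustrative assumptions and may differ from the study's calibration.

```python
def trophic_position_csia(d15n_glu, d15n_phe, beta=-8.4, tdf=7.6):
    """Compound-specific trophic position:
    TP = (d15N_Glu - d15N_Phe - beta) / TDF + 1.
    beta = -8.4 permil is a commonly used value for terrestrial C3 plants and
    TDF = 7.6 permil a commonly used trophic discrimination factor; the
    study's own calibration may differ.
    """
    return (d15n_glu - d15n_phe - beta) / tdf + 1.0

# Hypothetical amino-acid d15N values (permil), for illustration only:
tp = trophic_position_csia(d15n_glu=12.0, d15n_phe=5.0)
print(round(tp, 2))
```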
The fungal‐to‐bacterial ratio was higher in litter than in soil, consistent with the fungal energy channel being more important in the epigeic and anecic species than in the endogeic species, partially supporting our second hypothesis. However, our findings suggest that fungi only moderately contribute to earthworm nutrition. Rather, litter‐derived resources were predominantly incorporated via the plant (~30%) and bacterial (~60%) energy channels, with lower‐quality litter resulting in a higher contribution of the bacterial energy channel. As the bacterial channel is perceived as a “fast energy channel” (Coleman et al., ), the higher reliance on the bacterial channel likely leads to faster transfer and higher loss of energy along food chains in the low‐diet‐quality scenario. This is consistent with high nutrient leaching and fast mineralization of soil organic matter in intensively managed agricultural systems lacking input of high‐quality residues such as mulch material (Corbeels et al., ; de Vries et al., ). The predominance of bacterial eAAs in earthworms corresponded to the dominance of bacteria in our experimental soil. Although the epigeic earthworm species E. fetida is thought to predominantly feed on and assimilate plant resources (Lavelle & Spain, ), it also contained a significant amount of bacterial eAAs in its tissue, which, as noted above, may have originated from the pre‐experimental diet. However, the anecic and endogeic earthworms also contained high proportions of eAAs from bacteria, indicating that bacteria or their residues in soil serve as important food resources. This was particularly evident in L. terrestris , which pulls litter into its burrows and feeds on microbially colonized litter within the soil matrix (Lavelle & Spain, ). As bacteria are often associated with small soil particles such as clay, earthworms ingest bacteria and bacterial residues when feeding on soil (Hemkemeyer et al., ).
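The channel proportions quoted above (plant ~30%, bacterial ~60%) come from amino-acid carbon-isotope fingerprinting. As a generic illustration of the underlying mixing algebra only (not the authors' statistical pipeline, which in fingerprinting studies typically involves classification or Bayesian mixing models), three source proportions can be solved exactly from two tracer dimensions plus the unit-sum constraint. All end-member and consumer values below are hypothetical.

```python
def solve3(A, b):
    """Solve a 3x3 linear system A x = b by Gaussian elimination
    with partial pivoting (pure Python, no dependencies)."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

# Hypothetical d13C end-members (two tracer amino acids per basal resource):
plant, bacteria, fungi = (-28.0, -26.0), (-20.0, -24.0), (-24.0, -18.0)
consumer = (-22.8, -24.0)  # measured consumer values (hypothetical)

A = [[plant[0], bacteria[0], fungi[0]],   # tracer 1 mass balance
     [plant[1], bacteria[1], fungi[1]],   # tracer 2 mass balance
     [1.0, 1.0, 1.0]]                     # proportions sum to 1
b = [consumer[0], consumer[1], 1.0]
p_plant, p_bacteria, p_fungi = solve3(A, b)
```

With these made-up end-members the solver recovers a plant/bacteria/fungi split of 0.3/0.6/0.1, mirroring the magnitudes reported in the text.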
The latter form an important component of soil organic matter and, by breaking up these soil aggregates during the gut passage, earthworms may be able to access these resources (Angst et al., ). By contrast, it is unlikely that earthworms effectively digest living bacteria, since bacterial biomass and the number of bacterial cells in soil do not decrease during the earthworm gut passage (Scheu, ; Schönholzer et al., ). Based on pulse 13 C labelling of plants and CSIA of amino acids, a previous study documented that earthworms, in particular endogeic species, were only slightly enriched in 13 C from recent plant photosynthates, whereas substantial amounts of bacterial‐derived eAAs were found in their tissue (Zhong et al., ). This also suggests that earthworms rely on bacterial residues associated with older soil organic matter rather than on living bacterial biomass. Given the dominance of the fungal energy channel in soil food webs (Pausch et al., ), the predominant incorporation of plant and bacterial resources uniquely positions earthworms in soil food webs and may explain why earthworms reach high biomass and dominate energy processing in many soil food webs (Zhou et al., ). The influence of litter quality on dietary preferences and, in turn, on the channeling of energy and nutrients may be highly relevant for agricultural systems and management decisions, as this may feed back on ecosystem functions and services. In part supporting our third hypothesis, the abundance of fungi and bacteria in litter and soil varied with litter treatments. In litter, total microbial abundance and the fungal‐to‐bacterial ratio were higher in the rape and legume leaves than in the horse manure and wheat straw treatments, likely relating to the lower C‐to‐N ratio and lignin content of rape and legume leaves. In soil, the abundance of fungi and bacteria generally varied little with litter quality, likely reflecting that leachates from litter of different quality are similar (Hensgens et al., ).
However, the abundance of fungi and bacteria was higher in the soil of the horse manure than in the wheat straw treatments, likely due to the high microbial load of the manure itself, presumably combined with its more intensive incorporation into the soil by earthworms. Providing evidence for our fourth hypothesis, earthworms differentially modulated the abundance of fungi and bacteria in litter and soil. In litter, the abundance of fungi in rape leaves was higher in the presence of the epigeic E. fetida and the anecic L. terrestris than in the presence of endogeic species. Presumably, the litter‐feeding epigeic and anecic species increased the availability of nitrogen by casting and excreting mucus and urine in litter, thereby facilitating the exploitation of litter resources by fungi. The increased fungal abundance may, in turn, have contributed to the greater dietary contribution of fungi in epigeic and anecic earthworms, as indicated by CSIA, suggesting a positive feedback loop. Variations in the abundance of fungi and bacteria with earthworm species were stronger in soil than in litter, presumably reflecting the translocation of litter resources into the soil by earthworms (bioturbation). In particular, E. fetida and L. terrestris increased the abundance of bacteria (and microorganisms in total), likely due to the incorporation of litter materials by these species into the soil, with this being more pronounced in the horse manure, legume leaves, and rape leaves treatments. Similar to L. terrestris , A. caliginosa , as an endogeic species, also increased the fungal abundance in soil in the horse manure treatment. In fact, A. caliginosa is known to feed on the dung of vertebrate herbivores, reflecting that its grouping as an endogeic species is simplistic (Barley, ; Capowiez et al., ).
To quantify the effects of detrital quality on the trophic position and energy channeling of earthworms as major decomposer animals, we quantified the contributions of plant, bacterial, and fungal basal resources to their nutrition in response to a gradient of litter quality. Earthworms incorporated litter resources mainly via bacterial (~60%) and plant (~30%) energy channels, with soil‐feeding species being more strongly linked to the bacterial energy channel than litter‐feeding species, presumably due to the predominance of bacteria in soil. Interestingly, within earthworm species, shifts in the abundance of fungi and bacteria in the litter had little effect on energy channeling into earthworms. Rather, high litter quality increased the assimilation of plant litter by earthworms, at least at the given time scale, resulting in lower trophic positions and supporting the view that bottom‐up forces structure decomposer communities. By contrast, lower‐quality litter resources increased the contribution of microorganisms to the nutrition of earthworms, potentially reflecting a general pattern of microorganisms acting as trophic intermediates in response to low diet quality. Overall, our study points to the fundamental role of plant resource quality as a determinant of energy channeling and trophic position in animals, which is likely a universal pattern in detrital food webs. The authors declare no conflicts of interest. |
Adverse social determinants of health elevate uncontrolled hypertension risk: a cardio-oncology prospective cohort study | 1710f1cd-c597-43ed-8f5f-767ac8f171f4 | 11368120 | Internal Medicine[mh] | pkae064_Supplementary_Data |
Clinical Proteomics Reveals Vulnerabilities in Noninvasive Breast Ductal Carcinoma and Drives Personalized Treatment Strategies | f9194ee7-9ede-4dcb-bafd-87101dfd48de | 11755405 | Biochemistry[mh] | Ductal carcinoma in situ (DCIS) is a preinvasive (stage 0) neoplastic lesion that is associated with a ∼10-fold elevated risk of developing invasive breast cancer, e.g., invasive ductal carcinoma (IDC; ref. ). Due to this increased risk, patients diagnosed with DCIS undergo aggressive treatment with breast-conserving surgery or total mastectomy with optional adjuvant therapy, i.e., radiotherapy or endocrine therapy. Studies, however, show that if left untreated, only 20% to 50% of patients with DCIS will progress to IDC. This has led to global concerns about overtreatment of patients with DCIS, the resulting high economic burden for the healthcare system and, most importantly, a high psychologic burden for the patients. Tools and expression signatures to predict invasive progression for better informed clinical decision making are required, and many international trials are currently enrolling patients with DCIS for nonsurgical management by active surveillance, e.g., LORIS, LORD, and LARRIKIN, as described in Morrissey and colleagues. The COMET trial (NCT02926911) in the United States is targeting histologically confirmed low-risk DCIS for a comparison of surgery to monitoring and endocrine therapy. At present, the diagnosis of DCIS is based on calcifications observed during mammography screenings and histologic assessment of tissue biopsies, i.e., formalin-fixed and paraffin-embedded (FFPE) needle core biopsies. Five key morphologic features, high intratumor heterogeneity, poor interobserver agreement, and the lack of validated prognostic markers significantly impact clear diagnosis and risk stratification, as well as patient enrollment and the final results of clinical studies.
There is currently no precision oncology treatment available for patients diagnosed with DCIS. Postoperative (adjuvant) therapy is guided by IHC assays for estrogen and progesterone receptor status, HER2 expression status (by FISH), as well as BRCA1/2 mutation status. Clinical multigene assays, such as Oncotype DX/DCIS, MammaPrint, or PreludeDx DCIS, are sometimes used to clinically predict recurrence risks of patients but are not standard and only guide the use of adjuvant therapy. Generally, DCIS studies are limited by patient number and tissue quality. Recent genomic landscaping studies on individual DCIS lesions identified putative biomarkers associated with progression toward IDC and gave insights into the underlying cancer biology. Multi-omics profiling of DCIS, however, is still challenging because DCIS and IDC lesions are mostly studied in FFPE-preserved samples; “pure” DCIS lesions can be very small in size as they are usually from minimally invasive needle core biopsies, and access to “pure” IDC lesions is limited, as most surgically removed IDC lesions also present in situ components and may follow effective neoadjuvant therapy. In this study, we made use of our recently published FFPE proteomics method that facilitates proteomic profiling on FFPE-preserved tissue cores . In a cohort of carefully curated patients treated for DCIS and IDC at the Segal Cancer Centre of the Jewish General Hospital (JGH) in Montreal ( n = 51), we investigate changes in the protein expression of 29 “pure” DCIS lesions, 18 “pure” IDC lesions, 13 mixed-type lesions (IDC with DCIS components), and 9 cases in which DCIS and IDC are present in different lesions in the same patient, either synchronously or metachronously (see ). (Note: “Metachronously” means that a DCIS case progressed to IDC during clinical follow-up. “Synchronously” means that both DCIS and IDC lesions were collected at the same time, either in the same breast or in the other breast.)
Data from recently published independent gene-expression studies investigating the progression from DCIS to IDC were used to complement the label-free protein expression data. Because FFPE preservation eliminates up to 85% of metabolites , we used Quantitative Systems Metabolism (QSM) technology from Doppelganger Biosystem GmbH, an artificial intelligence–driven metabolic analysis using proteomics data , for a comprehensive profiling of the central metabolism/energy metabolism. Guided by these results, we developed a highly multiplexed parallel reaction monitoring (PRM) assay for precise quantitation of 90 proteins that are associated with cancer metabolism, RNA regulation, and major cancer growth–associated pathways, such as PI3K/AKT/mTOR and EGFR/RAS/RAF. Chemicals and reagents All chemicals and reagents were purchased from Sigma-Aldrich unless otherwise specified. Sequencing-grade trypsin (Promega, P/N V511A) was used for the generation of tryptic peptides. Clinical specimens Clinical specimens were obtained from patients who had provided written informed consent for the tissue biobanking part of the JGH Breast Biobank (protocol 05-006). The study was performed in accordance with the ethical standards laid down in the 1964 Declaration of Helsinki and was approved by the JGH Research Ethics Board. A total of 50 clinical cases of patients diagnosed and treated with DCIS and/or IDC at the JGH were carefully curated by a pathologist with expertise in breast cancer to select lesions meeting the inclusion criteria for mass spectrometry (MS)-based analysis, i.e., at least 30% tumor cellularity and less than 10% necrosis. The patients were of Caucasian ethnicity ranging from 22 to 82 years of age at first diagnosis (median age 52 years). The patients were followed for a period of 1 to 18 years (median 8 years). During the period of follow-up, 43 patients had no evidence of disease, and 1 patient had metastatic disease, whereas 8 patients died of cancer. 
The cohort comprises 29 cases with “pure” DCIS lesions, 18 cases with “pure” IDC lesions, 13 cases with mixed-type lesions (IDC with DCIS components), and 9 cases with synchronous/metachronous DCIS and IDC. Clinical data for the patients are available in Supplementary Table S1. Sample preparation One mm diameter tissue cores (∼0.8 mm 3 tissue volume) were prepared from FFPE blocks enriching for DCIS- or IDC-only tumor cells. Excessive paraffin was trimmed off using a clean scalpel blade. Protein extraction was performed following our developed FFPE proteomics workflow for core needle biopsies. Briefly, paraffin was removed by incubation with hot water (∼80°C). Each deparaffinized core was mechanically disrupted using a micropestle (Sigma-Aldrich, #BAF199230001) in 250 μL of 2% sodium deoxycholate, 50 mmol/L Tris-HCl, and 10 mmol/L tris(2-carboxyethyl)phosphine, pH 8.5, followed by sequential incubation in Eppendorf ThermoMixer C for 20 minutes at 99°C (1,100 rpm) and for 2 hours at 80°C (1,100 rpm). Samples were cooled down on ice for 1 minute before a 15-minute centrifugation at 21,000 × g (4°C) to remove cell debris. The supernatant was collected into a Protein LoBinding tube (Eppendorf), and the total protein concentration was determined using Pierce Reducing Agent Compatible BCA Kit (Thermo Fisher Scientific, P/N 23252) following the manufacturer’s instructions. Free cysteine residues were alkylated with iodoacetamide to a final concentration of 30 mmol/L and incubated for 30 minutes at room temperature, protected from light. For 2 μg of protein lysate, 2 μL of ferromagnetic beads with MagReSyn Hydroxyl functional groups (ReSyn Biosciences, 20 μg/mL) were equilibrated with 100 μL of 70% acetonitrile (ACN), briefly vortexed, and placed on a magnetic rack to remove the supernatant. This step was repeated another two times. 
Next, the protein extracts were added to the beads, and the sample was adjusted to a final concentration of 70% ACN, thoroughly vortexed, and incubated for 10 minutes at room temperature without shaking. The following washing steps were performed on a magnetic rack without disturbing the protein/bead aggregate. The supernatants were discarded, and the beads were washed on the magnetic rack with 1 mL of 95% ACN for 10 seconds, followed by a wash with 1 mL of 70% ACN without disturbing the protein/bead aggregate. The tubes were removed from the magnetic rack, 100 µL of digestion buffer [1:20 (w/w) trypsin:protein in 0.2 mol/L guanidine hydrochloride, 50 mmol/L ammonium bicarbonate, and 2 mmol/L CaCl 2 ] were added, and the samples were incubated at 37°C for 12 hours. After acidification with trifluoroacetic acid to a final concentration of 2%, the tubes were placed on the magnetic rack for 1 minute, followed by removal of the supernatant. To remove residual beads, the samples were centrifuged at 20,000 × g for 10 minutes. Preparation of spiking solutions for the response curve and absolute quantitation In order to promote translation of our findings and to validate label-free quantitation (LFQ) abundances with a more precise targeted MS approach, we developed a multiplexed PRM method to quantify 90 proteins in FFPE specimens, measuring the concentration of a unique signature peptide for each protein. All 90 peptides were measured in a single LC-MS/MS run. Two equimolar synthetic peptide mixtures (100 fmol/μg of each peptide) were prepared in 30% ACN with 0.1% formic acid in water (w/v); one mixture contained unlabeled peptides (light or NAT peptides), and the second mixture contained stable isotope-labeled standard peptides (heavy or SIS peptides). 
The light peptide mixture was used to develop the highly multiplexed PRM assay with optimized peptide-specific parameters, such as collision energy and charge state, whereas the heavy peptide mixture was used for normalization, serving as a spiking solution and internal standard for clinical samples. Quantitation was performed using a seven-point response curve consisting of a variable amount of light peptides, ranging from 0.41 to 250 fmol (three orders of magnitude), and a constant amount of SIS peptides (50 fmol). Digested BSA (0.01 μg) was used as a surrogate matrix of the response curve. To determine the limit of detection (LOD), a double-blank sample was prepared. The blank sample consisted of 0.01 μg BSA digest spiked with 50 fmol of the SIS mixture and analyzed before and/or directly after the highest calibrant level of the response curve. For quantitation of endogenous protein in the patient samples, 50 fmol of SIS peptides were spiked into 1 μg total digested tissue protein, as determined by Pierce Reducing Agent Compatible BCA Kit. Data analysis One μg of digested protein was preconcentrated on EV2001 C18 Evotips and separated on a heated (40°C) EV1137 column (15 cm × 150 μm, 1.5 μm particle size) using Evosep’s “extended method” (15 samples per day). The samples were analyzed by data-dependent acquisition mode on a Q Exactive Plus Orbitrap mass spectrometer operated with a Nanospray Flex ion source (both from Thermo Fisher Scientific) connected to an Evosep One high-performance liquid chromatography device (Evosep Biosystems). Full MS scans were acquired over the mass range from m/z 350 to m/z 1,500 at a resolution of 70,000 with an automatic gain control (AGC) target value of 1 × 10 6 and a maximum injection time of 50 milliseconds. The 15 most intense precursor ions (charge states +2, +3, and +4) were isolated with a window of 1.2 Da and fragmented using a normalized collision energy of 28; the dynamic exclusion was set to 30 seconds. 
MS/MS spectra were acquired at a mass resolution of 17,500 using an AGC target value of 2 × 10 4 and a maximum injection time of 64 milliseconds. Chromatographic separation of all PRM runs was performed with the same equipment and buffers as described above. The Q Exactive Plus was operated in PRM mode at a resolution of 35,000. Target precursor ions were isolated with the quadrupole isolation window set to m/z 1.2. An AGC target of 3 × 10 6 was used, allowing for a maximum injection time of 110 milliseconds. Data were acquired in time-scheduled mode, allowing a 2-minute retention time window for each target. Full MS scans were acquired in parallel at a low resolution (17,500) with an AGC target value of 1 × 10 6 and a maximum injection time of 50 milliseconds to ensure sample quality. MS data files are publicly available through the ProteomeXchange Consortium via the PRIDE partner repository with the following dataset identifier: PXD040782. The synthetic peptides selected for this PRM assay were validated by others; information is available through the NCI’s Clinical Proteomic Tumor Analysis Consortium Assay Portal ( assays.cancer.gov ). Data processing and differential expression analysis MS raw data were processed using Proteome Discoverer 2.5 (Thermo Fisher Scientific). Database searches were performed using SequestHT with multi-peptide search and a human Swiss-Prot database (January 2019; 20,414 target entries). LFQ was performed using the Minora Feature Detector node within Proteome Discoverer, and the Percolator software was used to calculate posterior error probabilities. Database searches were performed using trypsin as an enzyme with a maximum of two missed cleavages. Carbamidomethylation of cysteine (+57.021 Da) was set as a fixed modification, and oxidation of methionine (+15.995 Da) as a variable modification. Mass tolerances were set to 10 ppm for precursor ions and 0.02 Da for product ions.
The data were filtered to a FDR <1% at the peptide and protein levels. Only proteins that were (i) identified with at least one protein unique peptide and (ii) quantified in ≥60% of replicates of at least one of the study groups were considered for the quantitative comparison. Protein LFQ data obtained from Proteome Discoverer were normalized based on summed protein intensities to correct for differences in sample loading. Missing protein intensity values were imputed using 1.5× the minimum observed intensity for this particular sample. The obtained normalized abundances were used for unpaired t tests (two tailed, 95% confidence interval) and differential expression analysis on log 2 -transformed data with multiple hypothesis testing using the Benjamini–Krieger false discovery approach (FDR 1%). Proteins having q values of <0.01 and absolute log 2 fold changes (FC) >1 were considered differential between tested groups. Statistical analysis was performed using GraphPad Prism 9. Raw PRM data were analyzed using Skyline (v22.2.0.351; ref. ). Correct peak integration and visual verification of detected peaks was performed manually for each target, and the three to four highest and most stable transitions were selected for quantitation. A linear regression model with 1/ x 2 weighting using the SIS/NAT ratio of each target peptide was used for the calculation of concentrations. Only calibration levels meeting the following criteria were accepted for response curve generation and regression analysis; precision average <20% coefficient of variation per calibration level and accuracy average between 80% and 120% per calibrant level, quantified in at least three consecutive calibrant levels. The LOD describes the smallest concentration of the target peptide (analyte) that is likely to be reliably distinguished from instrument noise and at which detection is feasible. 
To determine the LOD, we use replicate injections from a double-blank sample, i.e., fixed concentration of the SIS peptides in the surrogate matrix. The average concentration of the double-blank plus 3.3× the SD of the blank replicates is used to calculate the lowest detectable concentration for each peptide. The limit of quantitation describes the lowest concentration at which the analyte can not only be reliably detected, but at which above mentioned precision and accuracy criteria are met. Here the limit of quantitation was defined as the lowest calibration level for each peptide. Proteins/peptides with more than 60% missing values were excluded from the downstream analysis. Functional enrichment analysis Functional enrichment analysis was performed using the “Core Analysis” function within Ingenuity Pathway Analysis (Qiagen, Inc., content version: 81348237, release date: September 15, 2022; ref. ). Ingenuity Knowledge Base was used as reference set, allowing direct and indirect relationships. Only molecules having expression P values <0.05 and absolute log 2 FCs of >1 were considered for the core analysis. All other settings were kept with default parameters. Gene set enrichment analysis A pre-ranked gene set enrichment analysis (GSEA) was performed using GSEA v4.3.2 (Broad Institute, Inc.) software. The gene list was ranked by differential expression using the SIGN function within Excel with calculated log 2 FC and P value from an unpaired t test. A hallmark gene set Molecular Signature Database (MSigDB v2022.1; ref. ) was used as reference gene set. The search allowed 1,000 permutations, with set sizes between 15 and 500 genes. Pathways were collapsed to remove redundancy and to increase selectivity and specificity. Data were visualized using the clusterProfiler package within R. Metabolic analysis Protein expression data from paired DCIS/IDC cases was sent to Doppelganger Biosystems Inc. for metabolic analysis using QSM technology . 
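Three pieces of the quantitation arithmetic described in this section can be sketched compactly: the 1/x²-weighted linear calibration fit, the blank-based LOD, and a sign-based ranking metric for the pre-ranked GSEA. Only the stated formulas (1/x² weighting; LOD = blank mean + 3.3 × SD) are taken from the text; the calibrant levels, blank values, and the exact form of the Excel SIGN-based ranking metric are illustrative assumptions.

```python
from statistics import mean, stdev
from math import log10

def wls_fit_1_over_x2(x, y):
    """Linear calibration y = a + b*x with 1/x**2 weighting, which
    equalizes relative error across a wide calibration range."""
    w = [1.0 / xi ** 2 for xi in x]
    sw = sum(w)
    swx = sum(wi * xi for wi, xi in zip(w, x))
    swy = sum(wi * yi for wi, yi in zip(w, y))
    swxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    swxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    b = (sw * swxy - swx * swy) / (sw * swxx - swx ** 2)
    a = (swy - b * swx) / sw
    return a, b

def lod_from_blanks(blank_conc):
    """LOD = mean of double-blank replicate concentrations + 3.3 * SD,
    as described in the text."""
    return mean(blank_conc) + 3.3 * stdev(blank_conc)

def gsea_rank_metric(log2_fc, p_value):
    """One common sign-based pre-ranking metric: sign(log2FC) * -log10(p).
    The authors' Excel SIGN-based formula may differ."""
    sign = (log2_fc > 0) - (log2_fc < 0)
    return sign * -log10(p_value)

# Hypothetical calibrant amounts (fmol) within the stated 0.41-250 fmol
# range, with peak-area ratios assumed perfectly linear for illustration:
amounts = [0.41, 1.2, 3.5, 10.0, 30.0, 90.0, 250.0]
ratios = [0.02 * a for a in amounts]
intercept, slope = wls_fit_1_over_x2(amounts, ratios)
unknown_fmol = (0.5 - intercept) / slope  # back-calculate a sample ratio of 0.5
```

The 1/x² weighting down-weights the high calibrant levels so that the low end of the curve, where the relative error matters most for LOD/LOQ determination, dominates the fit.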
Data availability MS data files are publicly available through the ProteomeXchange Consortium via the PRIDE partner repository with the following dataset identifier: PXD040782. Clinical data are available in Supplementary Table S1. Hematoxylin and eosin images of clinical specimens used in this study are available via the dataset identifier on the PRIDE repository. All chemicals and reagents were purchased from Sigma-Aldrich unless otherwise specified. Sequencing-grade trypsin (Promega, P/N V511A) was used for the generation of tryptic peptides. Clinical specimens were obtained from patients who had provided written informed consent for the tissue biobanking part of the JGH Breast Biobank (protocol 05-006). The study was performed in accordance with the ethical standards laid down in the 1964 Declaration of Helsinki and was approved by the JGH Research Ethics Board. A total of 50 clinical cases of patients diagnosed and treated with DCIS and/or IDC at the JGH were carefully curated by a pathologist with expertise in breast cancer to select lesions meeting the inclusion criteria for mass spectrometry (MS)-based analysis, i.e., at least 30% tumor cellularity and less than 10% necrosis. The patients were of Caucasian ethnicity ranging from 22 to 82 years of age at first diagnosis (median age 52 years). The patients were followed for a period of 1 to 18 years (median 8 years). During the period of follow-up, 43 patients had no evidence of disease, and 1 patient had metastatic disease, whereas 8 patients died of cancer. The cohort comprises 29 cases with “pure” DCIS lesions, 18 cases with “pure” IDC lesions, 13 cases with mixed-type lesions (IDC with DCIS components), and 9 cases with synchronous/metachronous DCIS and IDC. Clinical data for the patients are available in Supplementary Table S1. One mm diameter tissue cores (∼0.8 mm 3 tissue volume) were prepared from FFPE blocks enriching for DCIS- or IDC-only tumor cells. 
Excessive paraffin was trimmed off using a clean scalpel blade. Protein extraction was performed following our developed FFPE proteomics workflow for core needle biopsies. Briefly, paraffin was removed by incubation with hot water (∼80°C). Each deparaffinized core was mechanically disrupted using a micropestle (Sigma-Aldrich, #BAF199230001) in 250 μL of 2% sodium deoxycholate, 50 mmol/L Tris-HCl, and 10 mmol/L tris(2-carboxyethyl)phosphine, pH 8.5, followed by sequential incubation in Eppendorf ThermoMixer C for 20 minutes at 99°C (1,100 rpm) and for 2 hours at 80°C (1,100 rpm). Samples were cooled down on ice for 1 minute before a 15-minute centrifugation at 21,000 × g (4°C) to remove cell debris. The supernatant was collected into a Protein LoBinding tube (Eppendorf), and the total protein concentration was determined using Pierce Reducing Agent Compatible BCA Kit (Thermo Fisher Scientific, P/N 23252) following the manufacturer’s instructions. Free cysteine residues were alkylated with iodoacetamide to a final concentration of 30 mmol/L and incubated for 30 minutes at room temperature, protected from light. For 2 μg of protein lysate, 2 μL of ferromagnetic beads with MagReSyn Hydroxyl functional groups (ReSyn Biosciences, 20 μg/mL) were equilibrated with 100 μL of 70% acetonitrile (ACN), briefly vortexed, and placed on a magnetic rack to remove the supernatant. This step was repeated another two times. Next, the protein extracts were added to the beads, and the sample was adjusted to a final concentration of 70% ACN, thoroughly vortexed, and incubated for 10 minutes at room temperature without shaking. The following washing steps were performed on a magnetic rack without disturbing the protein/bead aggregate. The supernatants were discarded, and the beads were washed on the magnetic rack with 1 mL of 95% ACN for 10 seconds, followed by a wash with 1 mL of 70% ACN without disturbing the protein/bead aggregate. 
The tubes were removed from the magnetic rack, 100 μL of digestion buffer [1:20 (w/w) trypsin:protein in 0.2 mol/L guanidine hydrochloride, 50 mmol/L ammonium bicarbonate, and 2 mmol/L CaCl₂] were added, and the samples were incubated at 37°C for 12 hours. After acidification with trifluoroacetic acid to a final concentration of 2%, the tubes were placed on the magnetic rack for 1 minute, followed by removal of the supernatant. To remove residual beads, the samples were centrifuged at 20,000 × g for 10 minutes. To promote translation of our findings and to validate label-free quantitation (LFQ) abundances with a more precise targeted MS approach, we developed a multiplexed parallel reaction monitoring (PRM) method to quantify 90 proteins in FFPE specimens, measuring the concentration of a unique signature peptide for each protein. All 90 peptides were measured in a single LC-MS/MS run. Two equimolar synthetic peptide mixtures (100 fmol/μg of each peptide) were prepared in 30% ACN with 0.1% formic acid in water (v/v); one mixture contained unlabeled peptides (light or NAT peptides), and the second contained stable isotope-labeled standard peptides (heavy or SIS peptides). The light peptide mixture was used to develop the highly multiplexed PRM assay with optimized peptide-specific parameters, such as collision energy and charge state, whereas the heavy peptide mixture served as a spiking solution and internal standard for normalization of the clinical samples. Quantitation was performed using a seven-point response curve consisting of a variable amount of light peptides, ranging from 0.41 to 250 fmol (nearly three orders of magnitude), and a constant amount of SIS peptides (50 fmol). Digested BSA (0.01 μg) was used as a surrogate matrix for the response curve. To determine the limit of detection (LOD), a double-blank sample was prepared.
The double-blank sample consisted of 0.01 μg BSA digest spiked with 50 fmol of the SIS mixture and was analyzed before and/or directly after the highest calibrant level of the response curve. For quantitation of endogenous protein in the patient samples, 50 fmol of SIS peptides were spiked into 1 μg of total digested tissue protein, as determined by the Pierce Reducing Agent Compatible BCA Kit. One μg of digested protein was preconcentrated on EV2001 C18 Evotips and separated on a heated (40°C) EV1137 column (15 cm × 150 μm, 1.5 μm particle size) using Evosep’s “extended method” (15 samples per day). The samples were analyzed in data-dependent acquisition mode on a Q Exactive Plus Orbitrap mass spectrometer operated with a Nanospray Flex ion source (both from Thermo Fisher Scientific) connected to an Evosep One high-performance liquid chromatography device (Evosep Biosystems). Full MS scans were acquired over the mass range m/z 350 to 1,500 at a resolution of 70,000 with an automatic gain control (AGC) target value of 1 × 10⁶ and a maximum injection time of 50 milliseconds. The 15 most intense precursor ions (charge states +2, +3, and +4) were isolated with a window of 1.2 Da and fragmented using a normalized collision energy of 28; the dynamic exclusion was set to 30 seconds. MS/MS spectra were acquired at a mass resolution of 17,500 using an AGC target value of 2 × 10⁴ and a maximum injection time of 64 milliseconds. Chromatographic separation for all PRM runs was performed with the same equipment and buffers as described above. The Q Exactive Plus was operated in PRM mode at a resolution of 35,000. Target precursor ions were isolated with the quadrupole isolation window set to m/z 1.2. An AGC target of 3 × 10⁶ was used, allowing for a maximum injection time of 110 milliseconds. Data were acquired in time-scheduled mode, allowing a 2-minute retention time window for each target.
Full MS scans were acquired in parallel at low resolution (17,500) with an AGC target value of 1 × 10⁶ and a maximum injection time of 50 milliseconds to ensure sample quality. The synthetic peptides selected for this PRM assay were validated by others; information is available through the NCI’s Clinical Proteomic Tumor Analysis Consortium Assay Portal (assays.cancer.gov). MS raw data were processed using Proteome Discoverer 2.5 (Thermo Fisher Scientific). Database searches were performed using SequestHT with multi-peptide search and a human Swiss-Prot database (January 2019; 20,414 target entries). LFQ was performed using the Minora Feature Detector node within Proteome Discoverer, and the Percolator software was used to calculate posterior error probabilities. Database searches were performed using trypsin as the enzyme with a maximum of two missed cleavages. Carbamidomethylation of cysteine (+57.021 Da) was set as a fixed modification, and oxidation of methionine (+15.995 Da) as a variable modification. Mass tolerances were set to 10 ppm for precursor ions and 0.02 Da for product ions. The data were filtered to an FDR of <1% at the peptide and protein levels. Only proteins that were (i) identified with at least one protein-unique peptide and (ii) quantified in ≥60% of replicates of at least one of the study groups were considered for the quantitative comparison. Protein LFQ data obtained from Proteome Discoverer were normalized based on summed protein intensities to correct for differences in sample loading. Missing protein intensity values were imputed using 1.5× the minimum observed intensity of that particular sample.
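The loading normalization and imputation rules just described can be sketched in a few lines (a minimal illustration with hypothetical intensity dictionaries; the actual processing was done in Proteome Discoverer):

```python
def normalize_to_summed_intensity(samples):
    """Scale each sample so that its summed protein intensity equals the
    mean summed intensity across samples, correcting for differences in
    sample loading. `samples` maps sample -> {protein: intensity or None}."""
    totals = {s: sum(v for v in prots.values() if v is not None)
              for s, prots in samples.items()}
    target = sum(totals.values()) / len(totals)
    return {s: {p: (None if v is None else v * target / totals[s])
                for p, v in prots.items()}
            for s, prots in samples.items()}

def impute_missing(sample):
    """Replace missing intensities with 1.5x the minimum observed
    intensity of that particular sample, as described above."""
    floor = 1.5 * min(v for v in sample.values() if v is not None)
    return {p: (floor if v is None else v) for p, v in sample.items()}
```

Filtering to proteins quantified in ≥60% of replicates of at least one study group would be applied alongside these steps.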
The obtained normalized abundances were used for unpaired t tests (two-tailed, 95% confidence interval) and differential expression analysis on log2-transformed data with multiple hypothesis testing using the Benjamini–Krieger false discovery approach (FDR 1%). Proteins having q values of <0.01 and absolute log2 fold changes (FC) >1 were considered differential between tested groups. Statistical analysis was performed using GraphPad Prism 9. Raw PRM data were analyzed using Skyline (v22.2.0.351). Correct peak integration and visual verification of detected peaks were performed manually for each target, and the three to four highest and most stable transitions were selected for quantitation. A linear regression model with 1/x² weighting using the SIS/NAT ratio of each target peptide was used for the calculation of concentrations. Only calibration levels meeting the following criteria were accepted for response curve generation and regression analysis: average precision of <20% coefficient of variation and average accuracy between 80% and 120% per calibrant level, quantified in at least three consecutive calibrant levels. The LOD describes the smallest concentration of the target peptide (analyte) that is likely to be reliably distinguished from instrument noise and at which detection is feasible. To determine the LOD, we use replicate injections of a double-blank sample, i.e., a fixed concentration of the SIS peptides in the surrogate matrix. The average concentration of the double-blank plus 3.3× the SD of the blank replicates is used to calculate the lowest detectable concentration for each peptide. The limit of quantitation describes the lowest concentration at which the analyte can not only be reliably detected but also meets the above-mentioned precision and accuracy criteria. Here, the limit of quantitation was defined as the lowest accepted calibration level for each peptide.
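The LOD rule (blank mean plus 3.3× the SD of the blank replicates) and the 1/x²-weighted calibration fit can be written compactly; this is a generic sketch with made-up numbers, not the study's Skyline implementation:

```python
import statistics

def lod(blank_concentrations):
    """Lowest detectable concentration: mean of the double-blank
    replicates plus 3.3x their standard deviation."""
    return (statistics.mean(blank_concentrations)
            + 3.3 * statistics.stdev(blank_concentrations))

def weighted_linear_fit(x, y):
    """Weighted least squares with 1/x^2 weighting, minimizing
    sum(w_i * (y_i - a - b*x_i)^2) for the SIS/NAT response curve."""
    w = [1.0 / (xi ** 2) for xi in x]
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    b = (sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
         / sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x)))
    a = my - b * mx
    return a, b  # intercept, slope
```

The 1/x² weighting gives the low-concentration calibrants, where relative error matters most, the same influence on the fit as the high calibrants.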
Proteins/peptides with more than 60% missing values were excluded from the downstream analysis. Functional enrichment analysis was performed using the “Core Analysis” function within Ingenuity Pathway Analysis (Qiagen, Inc., content version: 81348237, release date: September 15, 2022). The Ingenuity Knowledge Base was used as the reference set, allowing direct and indirect relationships. Only molecules having expression P values of <0.05 and absolute log2 FCs of >1 were considered for the core analysis. All other settings were kept at default parameters. A pre-ranked gene set enrichment analysis (GSEA) was performed using GSEA v4.3.2 (Broad Institute, Inc.) software. The gene list was ranked by differential expression using the SIGN function within Excel with the calculated log2 FC and P value from an unpaired t test. A hallmark gene set from the Molecular Signature Database (MSigDB v2022.1) was used as the reference gene set. The search allowed 1,000 permutations, with set sizes between 15 and 500 genes. Pathways were collapsed to remove redundancy and to increase selectivity and specificity. Data were visualized using the clusterProfiler package within R. Protein expression data from paired DCIS/IDC cases were sent to Doppelganger Biosystems Inc. for metabolic analysis using QSM technology.
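The Excel-based ranking for the pre-ranked GSEA combines the direction of change with significance; a common formulation, and an assumption about the exact combination used here, is sign(log2 FC) × −log10(P):

```python
import math

def gsea_rank_metric(log2_fc: float, p_value: float) -> float:
    """Pre-ranked GSEA score: the sign of the log2 fold change (the
    Excel SIGN function) scaled by -log10 of the t-test P value, so
    strongly significant up-regulated genes rank at the top and
    strongly significant down-regulated genes at the bottom."""
    sign = (log2_fc > 0) - (log2_fc < 0)
    return sign * -math.log10(p_value)
```

Genes would then be sorted by this score to build the ranked list supplied to GSEA.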
DCIS and IDC are highly heterogeneous tumor phenotypes but build two distinct clusters in sparse partial least squares discriminant analysis

Several genomic-centered studies have reported that both DCIS and IDC tumor phenotypes are highly heterogeneous, hampering clinical diagnosis but also limiting statistical power and robust assay development to complement clinical diagnosis. Using a streamlined FFPE proteomics workflow with standard label-free MS-based data analysis, we quantified more than 2,800 proteins at a 1% FDR on the protein and peptide levels. Using less than 1% of the total protein extracted from a single 1-mm FFPE tissue core, we cover six orders of magnitude of the DCIS/IDC proteome. Notably, the proteomes of the two ductal breast cancer disease states appear clearly distinct from each other, as a sparse partial least squares discriminant analysis (sPLS-DA) shows two distinct clusters between the study cohorts. sPLS-DA is a statistical method for extracting and selecting important features from high-dimensional data to discriminate between groups while enforcing sparsity to improve interpretability and reduce overfitting. Based on the available clinical data (non-omics data) and the small sample size, we are not in a position to infer underlying patterns or biological relationships leading to this clustering on the protein level. Nevertheless, the top 10 features driving the proteomic variability between DCIS and IDC seem to reflect high transcriptional activity, extracellular matrix (ECM) remodeling, and inflammation processes.

MS-based proteomics complements and supports independent genomic/transcriptomic studies of DCIS to IDC progression

Studies on the progression of DCIS to IDC have mainly used gene expression analysis or IHC/FISH on the protein level.
The scientific community acknowledges misalignments between single-omics studies. We therefore compared MS-based label-free proteomics data with 49 differentially expressed genes identified by three recent larger-scale independent genomics/transcriptomics studies and found 22 overlapping genes. The proteomics data identified the gene products of Forkhead Box A1 (FOXA1), POSTN, THBS2, carbonic anhydrase 12 (CA12), FN1, and aldehyde dehydrogenase 1 (ALDH1) as differentially expressed proteins (DEPs; unpaired t test, P < 0.05; see Supplementary Table S2). The proteomics data show lower FOXA1 expression in “pure” DCIS compared with “pure” IDC (P < 0.0001) and increased expression in mixed-type DCIS compared with “pure” DCIS (P = 0.03), suggesting a protective function of FOXA1. The loss or silencing of FOXA1 observed in DCIS seems to promote cell migration and invasion. Interestingly, forced expression of FOXA1 in MCF-7 (an IDC cell line) inhibits growth and controls cell plasticity by repressing the basal-like phenotype. Genetic studies associate FOXA1 with heterochromatin remodeling, particularly affecting hormone receptor transcription and, together with BRCA1, regulation of the cell cycle. Evidence of FOXA1 involvement in tumor progression on the (epi)genetic, transcriptomic, and proteomic levels warrants further investigation of FOXA1 as a clinical biomarker and of its clinical utility for DCIS risk stratification. POSTN (periostin), THBS2 (thrombospondin 2), and FN1 (fibronectin) mediate cell–cell and cell–matrix interactions. POSTN, a downstream effector of β-catenin, activates the PI3K/AKT and ERK pathways. In DCIS, these proteins have lower expression levels compared with IDC (P < 0.03, P < 0.04, and P < 0.03, respectively), indicating stromal remodeling in DCIS to IDC progression. CA12 regulates the tumor microenvironment and metabolic pathways, with lower protein levels in “pure” DCIS compared with “pure” IDC (P < 0.0001).
Loss of CA12 activity, which normally regulates pH levels, likely creates a more acidic environment, which can favor malignant cell survival and contribute to the progression from DCIS to IDC. High ALDH1 expression characterizes cancer stem cells associated with tumorigenesis, metastatic behavior, and poor outcomes. Whereas an IHC-based profiling study of DCIS did not associate ALDH1 with breast cancer events, our MS-based analysis of paired DCIS/IDC lesions does show a significantly higher concentration of ALDH1 in DCIS compared with IDC lesions (P = 0.01), supporting findings from stem cell biology that ALDH1 might be a functional and prognostic biomarker of tumorigenesis in DCIS. Having access to “real-world” mixed-type lesions, the most prevalent clinical phenotype of breast ductal carcinoma, we were in the unique position to investigate the proteome of DCIS lesions that are likely active in the transition to IDC, depleted of intertumor heterogeneity. Comparing “pure” DCIS with mixed-type DCIS lesions revealed significantly lower protein levels of KRT5, KRT14, KRT6B, and CEACAM5 in “pure” DCIS lesions (P < 0.05; see Supplementary Table S3), indicating stromal remodeling as a key feature in the progression from precancer to invasive cancer, with prognostic value for DCIS management. High expression of keratins (KRTs) is linked to good prognosis in breast cancer, whereas lower levels are associated with invasive tumor proliferation. CEACAM5 (also known as CEA) expression has a context-dependent impact and a protective function in breast cancer, with potential usefulness in disease monitoring. Similarly, comparing “pure” IDC with mixed-type IDC lesions showed a loss of KRT expression in mixed-type IDC (P < 0.05), suggesting a protective role of KRTs and supporting their use as a marker of progression in DCIS.
HER2 protein overexpression constitutes a major prognostic and predictive marker in invasive breast carcinoma, and some recent studies indicate an association between HER2-positive DCIS and a higher risk of local recurrence. However, there seems to be no substantial clinical impact so far, and our comprehensive proteomic profiling does not identify significant changes in HER2 expression between DCIS and IDC lesions.

Loss of basal membrane stability, inflammatory processes, and epithelial-to-mesenchymal transition identified as key events driving DCIS progression

Having confirmed the results of genomic/transcriptomic studies in this setting using direct MS-based protein measurements, we turned to a global proteomics approach to discover further features of the DCIS–IDC scenario. Differential expression analysis of the more than 2,800 proteins identified in “pure” DCIS compared with IDC revealed ∼388 DEPs using an unpaired t test with the post hoc Benjamini–Krieger FDR method for multiple hypothesis testing (q < 0.01) and at least a 2-fold change in protein expression between DCIS and IDC (Supplementary Table S2). To reduce interpatient variability, we also compared proteomic profiles of DCIS and IDC lesions from the same patients (n = 9). Ten DEPs were identified: ILK, ITGA4, GPRC5A, FNTA, SCPEP1, EPB41L3, and SORBS1 were significantly more highly expressed in DCIS compared with IDC, whereas ACAP1, ATP6V0A1, and KPRP were significantly more highly expressed in IDC compared with DCIS (Supplementary Table S4). ILK, an integrin-linked kinase, regulates integrin signaling and is associated with tumor growth and metastasis. ITGA4 mediates cell–cell adhesion and is linked to cancer progression, inflammatory reactions, and ECM stemness. GPRC5A acts as an oncogene or tumor suppressor in different cancers. Androgen receptor–regulated FNTA enhances KRAS signaling and might be involved in tumorigenesis.
SCPEP1 is associated with cancer development, growth, and metastasis. EPB41L3 is a tumor suppressor involved in apoptosis and cell-cycle regulation. Decreased expression in DCIS was observed for ATP6V0A1, which plays a role in pH homeostasis and tumor cell invasion. ACAP1 is associated with cell proliferation, migration, and immune infiltration in tumors; loss of ACAP1 could indicate an impaired immune response in IDC progression. KPRP, involved in keratinocyte differentiation, might contribute to invasiveness when its expression is lost in DCIS. Overall, proteomic profiling of DCIS identified more than 380 putative biomarkers (protein level) with which to clinically profile DCIS lesions for risk stratification and disease management. The association of the DEPs quantified in this study with hallmarks of cancer, such as remodeling of the tumor microenvironment (e.g., ILK, ITGA4, and SCPEP1), escape from apoptosis (e.g., ILK, GPRC5A, FNTA, and EPB41L3), deregulation of the apical junction and energy metabolism (e.g., ATP6V0A1, KPRP, and ITGA4), and inflammation and immune response processes (e.g., ACAP1 and ITGA4), warrants further investigation. Furthermore, most of the identified DEPs are readily druggable, and repurposing of FDA-approved anti-inflammatory drugs and antibiotics presents interesting treatment options for DCIS.

EIF2 and PI3K/Akt/mTOR signaling pathways potentially drive IDC phenotype development through dysregulation of central energy metabolism in cancer

A deeper look into the molecular relationships of all the DEPs we identified, using functional enrichment analysis and GSEA, confirms the previously reported loss of basal layer integrity and epithelial-to-mesenchymal transition as key events supporting IDC. The enrichment results highlight cancer hallmarks that are predominant for the IDC and DCIS phenotypes, pointing to dysregulation of cell metabolism as a key event in the DCIS phenotype.
Proteomic profiling using MS-based techniques revealed metabolic vulnerabilities in DCIS that can provide insights into tumorigenic metabolic mechanisms missed by genomic/transcriptomic analysis alone. Functional enrichment analysis using Ingenuity Pathway Analysis identifies mitochondrial dysfunction, granzyme A signaling, glucocorticoid receptor signaling, and sirtuin signaling as significantly enriched (P value of overlap <0.01) in our proteomics dataset, suggesting a dysregulation of glucose metabolism through a shift from oxidative phosphorylation (i.e., the tricarboxylic acid cycle) to aerobic glycolysis. Aerobic glycolysis, also known as the Warburg effect, is characterized by high glucose uptake and glycolytic conversion of glucose to lactate to meet the high energy demands of proliferating cells. During glycolysis, glucose is converted to pyruvate. Cytosolic pyruvate can either enter the tricarboxylic acid cycle for oxidative phosphorylation and ATP production or be converted to lactate. Under normoxia, the metabolic fate of cytosolic pyruvate, and thus glucose metabolism, is regulated by the pyruvate dehydrogenase complex (PDH) and lactate dehydrogenase (LDH), with the PDH reaction favored. PI3K/AKT signaling can modulate the metabolic fate of pyruvate as an upstream regulator of PDH and LDH, creating “pseudo-hypoxic” conditions that favor pyruvate conversion to lactate. The pivotal role of PI3K/AKT as an upstream regulator in metabolic reprogramming is comprehensively reviewed by Hoxhaj and colleagues and involves interaction with other proliferative signaling pathways, such as MAPK and mTOR. Our proteomic analysis of DCIS identified several differentially expressed molecules involved in glycolysis, hypoxia-mediated reactions, and PI3K/AKT/mTOR signaling, which warrant further investigation.
Metabolomic profiling of FFPE specimens is challenging because ∼85% of metabolites are washed out during the preservation procedure. To nevertheless gain insight into metabolic changes occurring in the progression toward IDC, we conducted artificial intelligence–based metabolic profiling using QSM technology, which is supported by more than 500 publications. Clear metabolic differences between DCIS/IDC lesions from the same patient (paired DCIS/IDC) were identified, but due to the large variability and small sample size (n = 9), metabolic differences between the groups were hard to assess. A multitude of functional markers with a direct causal relation to ATP production capacity and glucose utilization were nevertheless identified. These findings confirm the dysregulation of energy metabolism in the progression toward IDC and suggest that the energy demand of transforming preinvasive cells (DCIS phenotype) is met mainly by fatty acid metabolism and lactate production. To further evaluate and promote the translation of our findings into the clinic, we developed a highly multiplexed targeted MS assay for absolute quantitation of 90 signature peptides associated with cancer metabolism, central energy metabolism, RNA regulation, and members of the PI3K/AKT/mTOR, EIF2, and EGFR/RAS/RAF signaling pathways. A complete list of the peptides included in this assay is provided in Supplementary Table S5. The results of the PRM assay are depicted as a STRING functional protein association network, in which differential expression is represented by the node color and the absolute FC by the node size. These findings correlate well with the previously discussed observations from label-free proteomics and independent genomics/transcriptomics studies, showing that DCIS tumors tend toward a loss of metabolic functions. Albumin (ALB) is expressed at significantly higher levels in the DCIS phenotype compared with the IDC phenotype (q = 0.03).
Studies have associated low ALB levels with changes in the tumor microenvironment toward conditions more favorable for disease progression and tumor migration, suggesting that serum ALB levels might have prognostic value in cancer. Other studies discuss ALB as a potent marker of inflammation and of the nutritional status of patients, in which low ALB levels correlate with inflammatory processes resulting in higher morbidity and poor prognosis. Our results support these findings and highlight remodeling of the tumor microenvironment, environmental stress (i.e., malnutrition, which inhibits EIF2 signaling), and inflammatory processes as key events in the progression toward IDC. It is, however, important to note that the amount of ALB observed in this study may be influenced by factors such as tissue perfusion, infiltration, or even the biopsy acquisition process itself. This makes it challenging to draw firm conclusions about its role in disease progression or to consider it a reliable biomarker without further investigation of potential confounding variables. In conclusion, clinical research on DCIS has been limited by low sample numbers, high intertumor heterogeneity, and low tissue quality, as most DCIS lesions derive from diagnostic needle core biopsies and are FFPE. Although genetic/transcriptomic studies of DCIS progression provide a cellular blueprint of what might happen, genes cannot be readily targeted for therapy, and posttranslational modifications cannot be assessed by genetic screening alone. Quantitative proteomics can complement and confirm genetic changes and provide a deeper look into the “real-life” tumor phenotype. The readily druggable nature of proteins makes quantitative proteomics studies attractive for clinical research. Additionally, MS-based studies allow both (i) discovery studies for comprehensive tumor profiling and (ii) validation studies in a highly multiplexed manner, with unprecedented accuracy, specificity, and sensitivity.
We established a LFQ proteomics pipeline suitable for needle core biopsy–sized FFPE specimens and performed a comprehensive proteomic phenotyping of DCIS and IDC using less than 1% of the total extracted protein material. We cover six orders of magnitude of the disease proteome and identify more than 380 DEPs that identify classical hallmarks of cancer, reflective for high transcriptional activity, ECM remodeling, and inflammation processes as key events toward IDC progression. We further identify dysregulation of glucose metabolism as a key event in the transition from preinvasive to invasive carcinoma. Guided by these results, we developed a highly multiplexed PRM assay for precise quantitation of 90 proteins that are associated with cancer metabolism, RNA regulation, and major cancer pathways, such as PI3K/AKT/mTOR and EGFR/RAS/RAF. We applied this assay to generate an activation profile of these signature proteins for proliferation and metabolic remodeling in cancer in “real-world” clinical samples and were able to support observations from label-free proteomics data with absolute concentrations in the mmol range, facilitating the translation of our findings into the clinic. Notably, proteomic profiling has revealed that FDA-approved drugs, such as antibiotics and NSAIDs, may be repurposed for DCIS and IDC treatment, as they have been shown to control and target proteins identified as key events toward IDC progression. The concept of repurposing antibiotics and NSAIDs has been a topic of investigation for several years , and our proteomics data on DCIS-to-IDC progression support this concept. It is important to highlight that this study design is applicable to many diseases with limited sample volumes and low tissue quality, as it requires only a fraction of the total sample amount, allowing discovery and validation studies in the same sample cohort. 
A deeper look into the molecular relationships of all the DEPs we have identified by functional enrichment analysis and GSEA confirms the previously reported loss of basal layer integrity and epithelial to mesenchymal transitions as key events supporting IDC. highlights cancer hallmarks that are predominant for the IDC and DCIS phenotypes, highlighting the dysregulation of cell metabolism as a key event in the DCIS phenotype. Proteomic profiling using MS-based techniques revealed metabolic vulnerabilities in DCIS that can provide insights into tumorigenic metabolic mechanisms that were missed by genomic/transcriptomic analysis alone.
Functional enrichment analysis using Ingenuity Pathway Analysis identifies mitochondrial dysfunction, granzyme A signaling, glucocorticoid receptor signaling, and sirtuin signaling as significantly enriched (P value of overlap <0.01) in our proteomics dataset, suggesting a dysregulation of glucose metabolism through a shift from oxidative phosphorylation (i.e., the tricarboxylic acid cycle) to aerobic glycolysis. Aerobic glycolysis is also known as the Warburg effect and is characterized by high glucose uptake and glycolytic conversion of glucose to lactate to meet the high energy demands of proliferating cells. During glycolysis, glucose is converted to pyruvate. Cytosolic pyruvate can either enter the tricarboxylic acid cycle for oxidative phosphorylation and ATP production or be converted to lactate. Under normoxia, the metabolic fate of cytosolic pyruvate, and thus glucose metabolism, is regulated by the pyruvate dehydrogenase complex (PDH) and lactate dehydrogenase, with the PDH reaction favored. PI3K/AKT signaling can modulate the metabolic fate of pyruvate as an upstream regulator of PDH and lactate dehydrogenase, creating "pseudo-hypoxic" conditions that favor pyruvate conversion to lactate. The pivotal role of PI3K/AKT as an upstream regulator in metabolic reprogramming is comprehensively reviewed by Hoxhaj and colleagues and involves interactions with other proliferative signaling pathways, such as MAPK and mTOR. Our proteomic analysis of DCIS identified several differentially expressed molecules involved in glycolysis, hypoxia-mediated reactions, and PI3K/AKT/mTOR signaling, which warrant further investigation. Metabolomic profiling of FFPE specimens is challenging because ∼85% of metabolites are washed out during the preservation procedure.
To nevertheless gain insights into metabolic changes occurring toward IDC progression, we conducted an artificial intelligence–based metabolic profiling using QSM technology, which is supported by more than 500 publications. Clear metabolic differences between DCIS/IDC lesions from the same patient (paired DCIS/IDC) were identified, but due to the large variability and small sample size (n = 9), metabolic differences between the groups were hard to assess. A multitude of functional markers with a direct causal relation to ATP production capacity and utilization of glucose were nevertheless identified. These findings confirm the dysregulation of energy metabolism toward IDC progression and suggest that the energy demand of transforming preinvasive cells (DCIS phenotype) is met mainly by fatty acid metabolism and lactate production. To further evaluate and promote the translation of our findings into the clinic, we developed a highly multiplexed targeted MS assay for absolute quantitation of 90 signature peptides associated with cancer metabolism, central energy metabolism, RNA regulation, and members of the PI3K/AKT/mTOR, EIF2, and EGFR/RAS/RAF signaling pathways. A complete list of peptides included in this assay is provided in Supplementary Table S5. The results of the PRM assay are depicted as a STRING functional protein association network, in which differential expression is represented by the node color and the absolute FC by the node size. These findings correlate well with the previously discussed observations from label-free proteomics and independent genomics/transcriptomics studies, showing that DCIS tumors have a tendency toward loss of metabolic functions. Albumin (ALB) is expressed at significantly higher levels in the DCIS phenotype than in the IDC phenotype (q value = 0.03).
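Differential-expression calls of this kind (a fold change plus a multiple-testing-corrected q value per protein) rest on standard false-discovery-rate arithmetic. As a minimal illustration, here is a Benjamini–Hochberg adjustment, the usual way per-protein p values are converted into q values; the p values below are invented for illustration, not data from this assay:

```python
def benjamini_hochberg(pvals):
    """Benjamini-Hochberg FDR adjustment: per-test p values -> q values."""
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])
    q = [0.0] * n
    running_min = 1.0
    # Walk from the largest p value down, enforcing monotone q values.
    for rank in range(n, 0, -1):
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * n / rank)
        q[i] = running_min
    return q

pvals = [0.001, 0.008, 0.012, 0.04, 0.30]  # invented per-protein p values
print([round(x, 3) for x in benjamini_hochberg(pvals)])
# -> [0.005, 0.02, 0.02, 0.05, 0.3]
```

Any protein whose q value falls below the chosen FDR threshold (eg, 0.05) is then reported as differentially expressed, as for ALB above.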
Studies have associated low ALB levels with remodeling of the tumor microenvironment toward conditions more favorable for disease progression and tumor migration, suggesting that serum ALB levels might have prognostic value in cancer. Other studies discuss ALB as a potent marker for inflammation and the nutritional status of patients, in which low ALB levels correlate with inflammatory processes resulting in higher morbidity and poor prognosis. Our results support these findings and highlight remodeling of the tumor microenvironment, environmental stress (i.e., malnutrition, which inhibits EIF2 signaling), and inflammatory processes as key events toward IDC progression. It is, however, important to note that the amount of ALB observed in this study may be influenced by factors such as tissue perfusion, infiltration, or even the biopsy acquisition process itself. This makes it challenging to draw firm conclusions about its role in disease progression or to consider it a reliable biomarker without further investigation into potential confounding variables.

In conclusion, clinical research on DCIS has been limited by low sample numbers, high intertumor heterogeneity, and low tissue quality, as most DCIS lesions derive from diagnostic needle core biopsies and are FFPE. Although genetic/transcriptomic studies of DCIS progression provide a cellular blueprint of what might happen, genes cannot be readily targeted for therapy, and posttranslational modifications cannot be assessed by genetic screening alone. Quantitative proteomics can complement and confirm genetic changes and provide a deeper look into the "real-life" tumor phenotype. The readily druggable nature of proteins makes quantitative proteomics studies attractive for clinical research. Additionally, MS-based studies allow both (i) discovery studies for comprehensive tumor profiling and (ii) validation studies in a highly multiplexed manner, with unprecedented accuracy, specificity, and sensitivity.
We established an LFQ proteomics pipeline suitable for needle core biopsy–sized FFPE specimens and performed a comprehensive proteomic phenotyping of DCIS and IDC using less than 1% of the total extracted protein material. We cover six orders of magnitude of the disease proteome and identify more than 380 DEPs reflecting classical hallmarks of cancer, including high transcriptional activity, ECM remodeling, and inflammation as key events toward IDC progression. We further identify dysregulation of glucose metabolism as a key event in the transition from preinvasive to invasive carcinoma. Guided by these results, we developed a highly multiplexed PRM assay for precise quantitation of 90 proteins associated with cancer metabolism, RNA regulation, and major cancer pathways, such as PI3K/AKT/mTOR and EGFR/RAS/RAF. We applied this assay to generate an activation profile of these signature proteins for proliferation and metabolic remodeling in cancer in "real-world" clinical samples and were able to support observations from the label-free proteomics data with absolute concentrations in the mmol range, facilitating the translation of our findings into the clinic. Notably, proteomic profiling revealed that FDA-approved drugs, such as antibiotics and NSAIDs, may be repurposed for DCIS and IDC treatment, as they have been shown to control and target proteins identified as key events toward IDC progression. The concept of repurposing antibiotics and NSAIDs has been a topic of investigation for several years, and our proteomics data on DCIS-to-IDC progression support this concept. It is important to highlight that this study design is applicable to many diseases with limited sample volumes and low tissue quality, as it requires only a fraction of the total sample amount, allowing discovery and validation studies in the same sample cohort.
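The absolute quantitation step in a targeted PRM assay is typically done by stable-isotope dilution: a heavy-labeled peptide standard of known amount is spiked into each sample, and the endogenous amount is derived from the light/heavy peak-area ratio. A minimal sketch of that arithmetic; the spike amount and peak areas below are invented for illustration, not values from this study:

```python
# Stable-isotope-dilution arithmetic for absolute peptide quantitation:
# endogenous amount = (light/heavy peak-area ratio) x spiked heavy amount.
heavy_spiked_fmol = 50.0                 # known amount of spiked heavy standard
light_area, heavy_area = 3.2e6, 2.0e6    # invented integrated peak areas

endogenous_fmol = light_area * heavy_spiked_fmol / heavy_area
print(endogenous_fmol)  # -> 80.0
```

Dividing the resulting amount by the analyzed sample quantity then yields a concentration, which is how per-protein concentration profiles across samples are assembled.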
In our opinion, clinical proteomics is a versatile tool for comprehensive tumor phenotyping, able to capture a "real-life" snapshot of tumor phenotypes, representative of posttranslational modifications and epigenetic changes. More than 99% of published clinical biomarkers/genomic assays fail to enter clinical practice, but we show here that complementing genomics and transcriptomics studies with proteomics data, and vice versa, will help create a better understanding of underlying disease mechanisms and will better inform the selection of biomarker candidates and patient enrollment for clinical studies, ultimately improving the quality and final results of clinical trials. This study provides real-world evidence data for DCIS, a disease for which currently no molecular tools or biomarkers exist, and gives an unbiased, comprehensive, and deep proteomic profile, identifying more than 380 actionable targets that can be taken further for functional analyses and biomarker analysis in a larger clinical cohort with more standardized and controlled sample collection, for example in a clinical trial.

Supplementary Data Table T1: Anonymized clinical data
Supplementary Data Table T2: LFQ data, DCIS vs IDC
Supplementary Data Table T3: LFQ data, DCIS mixed vs IDC mixed
Supplementary Data Table T4: LFQ data, DCIS paired vs IDC paired
Supplementary Data Table T5: PRM data, DCIS vs IDC
Staging by imaging in gynecologic cancer and the role of ultrasound: an update of European joint consensus statements | 589b2d0d-75f1-48ed-bed8-9715dc37a842 | 10958454 | Gynaecology[mh] | During the last decade, the use of ultrasound in the diagnosis and locoregional staging of gynecological malignancies has increased. Several ultrasound biomarkers are already established in clinical practice for detection of gynecologic cancer, prediction of the risk of disease or therapeutic outcome, prediction of oncological outcome, and evaluation of treatment response. An example of a diagnostic biomarker is the Ovarian-Adnexal Reporting and Data System (O-RADS) risk stratification and management system based on the International Ovarian Tumor Analysis Assessment of Different NEoplasias in the AdneXa (IOTA ADNEX) risk prediction model, or morphologic descriptors. A prognostic biomarker is the American Joint Committee on Cancer (AJCC) TNM (Tumor, Node, Metastasis) staging system, and an example of a response biomarker is the Response Evaluation in Solid Tumors (RECIST) criteria. Ultrasound provides both morphology-based (eg, tumor echogenicity and size) and functional biomarkers (eg, tumor perfusion based on power Doppler assessment or using contrast agents). With emerging machine-learning approaches, the applicability of precision ultrasound diagnostics will potentially further expand. The introduction of the high-resolution endovaginal probe allows detailed depiction of the pelvic anatomy, comparable to that achieved by pelvic MRI. Transabdominal ultrasound using a convex array probe provides detailed views of the abdominal organs, visceral and retroperitoneal lymph nodes, and peritoneum, guiding the prediction of disease resectability. Lastly, the use of a linear probe allows direct, high-resolution visualization of superficial structures, such as the peripheral lymph nodes . 
This has led to the implementation of ultrasound, alongside pelvic MRI, as the first-line modality in the assessment of locoregional stage in gynecologic cancers, according to the European Society of Gynecological Oncology (ESGO) guidelines. Ultrasound has the advantages of low cost, high availability, no radiation exposure, and minimal discomfort to the patient. Furthermore, ultrasound is an ideal tool for guiding core-needle biopsies to establish the histologic diagnosis prior to expedited start of tailored therapy. Ultrasound may also be useful intra-operatively, for example to guide surgeons during fertility-sparing surgeries to preserve uninvolved ovarian tissue or to delineate the free margins during trachelectomy. Ultrasound-guided drainage of fluid collections (eg, lymphoceles, abscesses) or palliative insertion of permanent catheters can help to avoid unnecessary surgeries. Furthermore, its wide availability makes it a useful bedside test for early detection and monitoring of surgical complications. Ultrasound scanners are relatively inexpensive compared with other modalities such as MRI, CT, and positron emission tomography-CT (PET-CT). Some reservations regarding widespread implementation of ultrasound stem from a lack of training that may lead to ultrasound-related misdiagnosis. The inter-rater agreement for primary staging parameters between less and more experienced sonographers has been assessed for various gynecologic cancers. For example, more experienced sonographers had higher detection rates for cervical stromal invasion in endometrial cancer. Similarly, experienced sonographers had higher inter-observer agreement than less experienced sonographers for diagnosing parametrial invasion in cervical cancer.
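Inter-rater agreement of the kind discussed above is conventionally summarized with Cohen's kappa, where values above 0.80 are labeled "almost perfect" on the Landis–Koch scale. A minimal sketch; the binary parametrial-invasion calls below are invented for illustration, not study data:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    expected = sum(count_a[c] * count_b[c] for c in labels) / n ** 2
    return (observed - expected) / (1 - expected)

# Invented invasion calls for 10 patients (1 = invasion present):
rater_a = [0, 0, 1, 1, 0, 0, 1, 0, 0, 1]
rater_b = [0, 0, 1, 1, 0, 1, 1, 0, 0, 1]
print(round(cohens_kappa(rater_a, rater_b), 2))  # -> 0.8
```

Note that kappa discounts the agreement expected by chance alone, which is why it is preferred over raw percent agreement when most cases fall into one category.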
Data from an ultrasound study in ovarian cancer (Imaging Study on Advanced ovArian Cancer, ISAAC) suggest almost perfect agreement among sonographers staging advanced ovarian cancer when at least 6 months' ultrasound training is provided in a specialized center. The introduction of ultrasound training into the gynecologic oncology curriculum, and the development of trusted certification and accreditation systems by the scientific societies, may increase its widespread use and acceptance. Sonographers should perform detailed scanning using a systematic approach and standardized terminology for the relevant staging parameters. The use of checklists is recommended to guarantee uniformity and reproducibility of the reported staging results. The applications of ultrasound and other imaging methods in gynecologic cancer staging are outlined in .

This review on ultrasound imaging in gynecologic cancers follows this structure: (1) vulvar, (2) vaginal, (3) cervical, (4) endometrial, and (5) tubo-ovarian cancers.

Vulvar Cancer

Introduction

Vulvar cancer accounts for 4% of all gynecological cancers, affecting predominantly elderly women. More than 90% of cases are squamous cell carcinoma and its variants. Some variants (basaloid and warty) are more frequent in younger women and are related to human papillomavirus (HPV) infection. Metastatic involvement of the inguinofemoral lymph nodes at diagnosis is the major prognostic factor and affects the surgical approach and the need for adjuvant therapy. Furthermore, large primary tumor size, stromal invasion, and positive resection margins significantly predict recurrent disease.
Currently, there is limited alignment between the eighth edition of the TNM and the International Federation of Gynecology and Obstetrics (FIGO) 2021 staging systems, and a lack of evidence to base treatment on the 2021 FIGO staging system. A further version of the TNM classification for vulvar cancer (the ninth), aligned with the 2021 FIGO staging system, is expected to be available in 2024. In the meantime, the eighth TNM classification is advised for staging and treatment planning. The 2021 FIGO staging system allows incorporation of findings from all cross-sectional imaging methods into the FIGO stage. Imaging provides information on tumor size, the extent of local involvement of the surrounding structures (vagina, uterus, anus, rectum, urethra, bladder), and inguinofemoral and distant lymphatic and hematogenous spread, most commonly to the liver or lungs. For local staging, MRI is the modality of choice to assess invasion of vulvar cancer into the septa, vagina, urethra, anus, and/or rectum due to its excellent soft tissue resolution. However, in gynecologic oncology centers with available expertise, ultrasound assessment can also be used. For regional nodal staging, expert ultrasound is the method of choice for pre-operative assessment of inguinofemoral lymph nodes, with a sensitivity of 76–90% and a specificity of 60–96%, allowing a detailed evaluation of nodal architecture and perfusion. The methodologic assessment of vulvar lymph nodes by ultrasound was previously reported in the Vulvar International Tumor Analysis (VITA) consensus. The use of ultrasound guidance for fine needle aspiration or core needle biopsy improved the detection of metastases in lymph nodes with altered morphology. Core needle biopsy should be preferred whenever possible to obtain sufficient material for histological analysis, although fine needle aspiration may be appropriate for small suspicious lymph nodes.
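Accuracy figures such as the 76–90% sensitivity and 60–96% specificity quoted above come from simple confusion-matrix counts against a histologic reference standard. A minimal sketch; the counts below are invented for illustration, not study data:

```python
# Invented counts for 100 groins with histologic nodal status as reference:
tp, fn = 17, 3    # metastatic nodes correctly called / missed on ultrasound
tn, fp = 72, 8    # benign nodes correctly called / falsely called positive

sensitivity = tp / (tp + fn)   # fraction of metastatic nodes detected
specificity = tn / (tn + fp)   # fraction of benign nodes correctly cleared
ppv = tp / (tp + fp)           # positive predictive value
npv = tn / (tn + fn)           # negative predictive value
print(sensitivity, specificity, ppv, npv)  # -> 0.85 0.9 0.68 0.96
```

Note that PPV and NPV, unlike sensitivity and specificity, shift with the prevalence of nodal metastases in the examined population, which is why studies usually report the latter pair.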
MRI is currently considered an alternative imaging method for lymph node staging, with variable sensitivity (ranging from 40% to 89%) depending on the diagnostic criteria used. However, novel MRI techniques (eg, DWI (diffusion-weighted imaging), DCE (dynamic contrast-enhanced), and high-resolution T2WI (T2-weighted imaging) series) are promising for improving locoregional staging. A high-quality ultrasound or MRI examination for local (locoregional) staging purposes should be complemented by a structured imaging report to communicate clinically relevant information to the referring physician. Patients who are not candidates for sentinel lymph node biopsy (those with multifocal tumors, unifocal tumors ≥4 cm, and/or suspicious inguinofemoral nodes on pre-operative evaluation) should undergo further imaging in addition to the ultrasound or pelvic MRI assessment to exclude distant metastases. Thoracic and abdominal contrast-enhanced CT (CECT) or whole-body 18F-fluorodeoxyglucose positron emission tomography combined with CT (FDG-PET-CT) should be performed to exclude pelvic lymph node involvement and other distant metastases. New MRI sequences such as T2WI ultrafast spin echo sequences and whole-body DWI may be useful for assessing the upper abdomen and diagnosing distant nodal metastases. The location of the primary tumor and any suspicious regional and distant lymph nodes should be documented in a schematic drawing within a standardized systematic checklist.

Current Guidelines and the Role of Imaging in Vulvar Cancer Staging

Following the updated 2023 ESGO guidelines for the management of patients with vulvar cancer:

- Pre-operative work-up includes a medical history; general assessment of co-morbidities; frailty assessment; clinical examination; biopsy of all suspicious areas followed by pathologic review; and imaging as indicated.
- For pT1a tumors (tumor ≤2 cm confined to the vulva and/or perineum, with stromal invasion ≤1 mm), no further imaging is required.
- In patients considered eligible for a sentinel lymph node biopsy procedure, imaging of inguinofemoral lymph nodes by ultrasound or MRI is recommended.
- Suspicious inguinofemoral nodes on imaging should be assessed by ultrasound-guided fine needle aspiration or core needle biopsy if this would alter primary treatment.
- In all other cases, systemic staging (including pelvic lymph nodes and distant organs) by CECT (chest/abdomen/pelvis) or FDG-PET-CT is recommended.
- If the invasive tumor clinically involves surrounding tissues (≥T2 FIGO staging) or if clinical findings are equivocal, evaluation of extra-vulvar structures (urethra, bladder, vagina, cervix, and anal canal) with MRI is recommended. In specialized centers with an available trained ultrasound examiner, transvaginal/transrectal/perineal ultrasound can be an option in determining local staging.
- Equivocal distant metastasis should be biopsied (if possible) to avoid inappropriate treatment.

Vaginal Cancer

Introduction

Primary vaginal cancer is rare, constituting only 2% of all genital tract malignancies in women. Squamous cell carcinoma is the most common histologic type, with an incidence of 80–95%. Rare tumor types typically occur in young children (mean age at diagnosis 2 years) and include embryonal rhabdomyosarcoma and germ cell tumors, especially yolk sac tumor. In adults, it is estimated that only 10% of all vaginal malignancies originate from the vagina; the majority are metastatic spread from other sites (ie, cervix, endometrium, vulva, rectum). When a vaginal tumor extends to the vulva it should be classified as a vulvar cancer, and when a vaginal tumor extends into the cervical ostium it should be classified as a cervical cancer. Main prognostic factors for vaginal cancer are the location, nodal status, histologic tumor type, and presence of lymphovascular space invasion. Tumor size is known to be a prognostic factor and differentiates between substages FIGO 1A (≤2 cm) and FIGO 1B (>2 cm). Tumors involving the lower third of the vagina or the full vaginal length have a poorer prognosis.
Tumors located in the upper third of the vagina will typically spread through lymphatic pathways to the iliac lymph nodes, whereas tumors in the lower third typically spread to the inguinofemoral lymph nodes. Tumors of the middle third may spread to either or both lymph node regions. Vaginal cancer staging is defined by the 2016 TNM classification and the 2021 FIGO staging system. To determine the stage of the disease, a complete work-up should be performed, including clinical examination with biopsies and imaging. To determine local tumor extent, pelvic MRI is recommended, given its superior soft tissue resolution. Ultrasound performed by an expert sonographer can be used as a complementary imaging method in the primary work-up for locoregional staging. Also, regular ultrasound evaluation is recommended for assessing the response to neoadjuvant treatment of non-squamous rare cancers in childhood and adolescence (ie, germ cell cancer) and for follow-up in cases of complete remission (together with serum α-fetoprotein evaluation). CECT (chest/abdomen/pelvis) or whole-body FDG-PET-CT should be added, especially in node-positive or locally advanced and metastatic disease. FDG-PET-CT is recommended for treatment planning before chemoradiotherapy or exenterative surgery with curative intent, or in the evaluation of recurrent disease. Because of the low sensitivity of any imaging method for detecting lymph node micro-metastases (≤2 mm) or small-volume (<5 mm) metastases, surgical staging of regional lymph nodes may be reasonable. The use of sentinel lymph node biopsy in vaginal cancer, in contrast to cervical cancer, is not yet established. In patients with positive pelvic nodes and negative para-aortic nodes on FDG-PET-CT, laparoscopic para-aortic lymph node surgical staging may be added to guide the external beam radiotherapy field.
Suspicious inguinofemoral nodes on imaging should be sampled by ultrasound-guided fine needle aspiration or core needle biopsy if this would alter the primary treatment. The location of the primary tumor and any metastatic regional and distant lymph nodes should be documented by preoperative imaging using a standardized systematic checklist, including the use of schematic drawings as appropriate.

Current Guidelines and the Role of Imaging in Vaginal Cancer Staging

Following the upcoming 2023 ESGO-ESTRO (European Society for Radiotherapy and Oncology) – SIOPE (European Society of Pediatric Oncology) guidelines for the management of patients with vaginal cancer:

- Pelvic and vaginal examination with histologic confirmation of the disease is the first step in the diagnosis of vaginal cancer. Colposcopy is recommended, especially in stage I disease, for exact mapping of any (pre-)invasive disease.
- Pelvic MRI is the standard imaging method to determine local extent. Expert pelvic ultrasound may be complementary.
- CECT (chest/abdomen/pelvis) is recommended to assess the presence of nodal and distant disease.
- FDG-PET-CT is recommended in node-positive or locally advanced disease before chemoradiotherapy or exenterative surgery with curative intent, or in the evaluation of recurrent disease.

Cervical Cancer

Introduction

Cervical cancer is the fourth most common cancer in women.
Most cases are squamous cell carcinomas (75–90%) and adenocarcinomas (5–25%), with variable distribution across patient populations and countries. HPV plays a crucial role in carcinogenesis, and is responsible for over 90% of all squamous cell carcinomas and 80–85% of adenocarcinomas. Main prognostic factors are described by the TNM classification and the FIGO staging system (maximum tumor size, depth of cervical stromal invasion, the maximum thickness of uninvolved stroma, extracervical extension, nodal involvement and distant metastases, pathological tumor type including HPV status, and presence of lymphovascular space involvement). Cervical cancer staging has undergone several updates in recent years, highlighting the necessity of accurate imaging for adequate treatment planning based on tumor size and location, parametrial involvement, lymph node status, and distant metastases. The 2021 version of the AJCC and the Union for International Cancer Control (UICC) TNM cervical cancer classification was recently aligned with the latest 2018 FIGO staging system of cervical cancer. All of them now emphasize and incorporate imaging findings in stage allocation and prognostication. To assess local spread, clinical staging should be complemented by radiological staging, as it may identify important prognostic factors that could guide the choice of treatment (online supplemental video S3, available here). The imaging method of choice to determine local tumor extent in the pelvis is MRI, due to its high soft tissue resolution. Maximum tumor size at MRI has been shown to be highly reproducible and is a strong predictor of survival. An MRI study of 416 patients with cervical cancer found substantial overall inter-observer agreement (among four readers) for key FIGO staging parameters (ie, tumor size categories (≤2 cm; >2 cm and ≤4 cm; >4 cm), parametrial invasion, vaginal invasion, and enlarged lymph nodes; κ=0.61–0.80).
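The tumor size categories used in that inter-observer study are fixed cut-offs, so the mapping from a measured maximal tumor diameter to a size category reduces to a simple rule. A minimal sketch of that mapping (the function name is ours, for illustration only; this is not clinical software):

```python
def figo_size_category(max_diameter_cm: float) -> str:
    """Map a maximal tumor diameter in cm to the size categories used
    in the cervical cancer MRI staging study: <=2 cm; >2 and <=4 cm; >4 cm."""
    if max_diameter_cm <= 2:
        return "<=2 cm"
    if max_diameter_cm <= 4:
        return ">2 and <=4 cm"
    return ">4 cm"

# A 3.1 cm tumor falls in the middle category.
print(figo_size_category(3.1))
```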
Unfortunately, access to MRI scanners is limited, particularly in low-income countries. MRI also has known contraindications and requires specific radiological expertise. This may explain why the reportedly high MRI staging accuracy in single-unit cervical cancer studies was not reproduced in a multicenter setting. A European multicenter trial in early-stage cervical cancer showed comparable or better accuracy of ultrasound than MRI in local staging assessment. A recent meta-analysis reported similar diagnostic performance of ultrasound/MRI for detecting parametrial invasion in cervical cancer, with pooled sensitivities of 78%/68% and specificities of 96%/91%, respectively (p=0.55). In a multicenter trial by Pálsdóttir et al, the reliability of ultrasound and MRI in defining local tumor extension by readers with different levels of experience was also documented, reporting only moderate inter-observer agreement for transvaginal ultrasound as opposed to moderate-to-substantial agreement for MRI. Interestingly, similar agreement for tumor extension was seen between experienced and less experienced observers, both for ultrasound and MRI, except for parametrial invasion by ultrasound. Importantly, inter-observer agreement is likely to improve with dedicated training. Ultrasound examination allows the assessment of predictive imaging markers such as tumor size, echogenicity, or vascular (Doppler) features. Abundant perfusion in the primary tumor has been linked to an aggressive clinical phenotype and poor treatment response. In studies on locally advanced cervical cancer, tumor ultrasound-derived 3D vascular indices prior to chemoradiotherapy were adversely associated with treatment response. Low vascular indices observed in patients with poor treatment response were probably linked to tumor hypoxia, which is known to induce therapy resistance in various solid tumors.
For assessing lymph node involvement, ultrasound has poor sensitivity (38–43%) but high specificity (96%) for detecting nodal metastases, especially in early-stage cervical cancer. This is partly related to the small size of lymph node metastases in most cases (median maximum size of affected lymph nodes, 14 mm; median size of intranodal metastasis, 3.5 mm). On the other hand, other imaging modalities (ie, MRI, CECT, and FDG-PET-CT) also reportedly have poor sensitivities for detecting small-volume lymph node metastases. A multicenter prospective imaging study (Cervical Cancer Lymph Node Staging, CANNES study; https://clinicaltrials.gov/ct2/show/NCT05573451) is currently ongoing with the aim of comparing the accuracy of expert ultrasound, MRI, and FDG-PET-CT for detecting pelvic and para-aortic lymph node metastases in cervical cancer. Thus, negative imaging findings do not rule out metastatic nodal involvement in cervical cancer, and surgical lymph node staging should be performed in early-stage cervical cancer. The use of sentinel lymph node biopsy is recommended, taking into consideration that about 10% (65/645) of early-stage cervical cancers with negative lymph nodes on pre-operative imaging have micrometastases (<2 mm), detectable by histopathological ultrastaging. In locally advanced cervical cancer, para-aortic lymph node dissection up to the inferior mesenteric artery may be considered for staging and treatment planning even if the nodes appear negative on imaging, to reveal otherwise undetectable small nodal metastases. In node-positive or locally advanced disease, CECT or FDG-PET-CT should be added. As with previously discussed gynecologic cancers, FDG-PET-CT is recommended for treatment planning before chemoradiotherapy or exenterative surgery with curative intent, or in the evaluation of recurrent disease.
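The conclusion that negative imaging cannot rule out nodal involvement follows directly from the reported test characteristics. A quick Bayes calculation makes this concrete; note that the 15% pre-test probability of nodal disease used below is an assumed figure for illustration, not a value from the text:

```python
def post_test_prob_after_negative(sens: float, spec: float, prev: float) -> float:
    """P(nodal metastasis | negative scan) =
    (1 - sens) * prev / ((1 - sens) * prev + spec * (1 - prev))."""
    false_neg = (1 - sens) * prev
    true_neg = spec * (1 - prev)
    return false_neg / (false_neg + true_neg)

# Ultrasound figures from the text: sensitivity ~40% (38-43%), specificity 96%.
# Assumed pre-test probability of nodal involvement: 15% (hypothetical).
p = post_test_prob_after_negative(sens=0.40, spec=0.96, prev=0.15)
print(f"{p:.1%}")  # roughly 10%: a negative scan still leaves ~1 in 10 patients node-positive
```

This residual post-test probability is why surgical lymph node staging remains recommended despite negative imaging.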
Real-life data on gynecologic oncologists’ preferred primary staging modality and its diagnostic performance in early-stage cervical cancer were provided by the prospective, international SENTIX study. Each participating site was instructed to choose their preferred method based on their routine clinical practice. Among 690 prospectively enrolled patients with early-stage cervical cancer, 46.7% and 43.1% of patients underwent MRI and ultrasound, respectively, and 10.1% underwent both modalities. Pelvic MRI and ultrasound yielded similar diagnostic performance for predicting histologically confirmed tumor size, parametrial involvement, and macrometastatic nodal involvement. The structured checklist is reported in and Online Supplemental Video S3 (available here) demonstrates pre-operative ultrasound staging.

Current Guidelines and the Role of Imaging in Cervical Cancer Staging

Following the 2018 and updated 2023 ESGO/ESTRO/ESP (European Society of Pathology) guidelines for the management of patients with cervical cancer:
- Pelvic examination and biopsy±colposcopy are mandatory to diagnose cervical cancer.
- Pelvic MRI is mandatory for initial assessment of the extent of pelvic tumor and to guide treatment options (optional for stage T1a with free margins after conization).
- Endovaginal/transrectal ultrasonography is an option if performed by an adequately trained sonographer.
- In locally advanced cervical cancer (T1b3 and higher, except T2a1, or in early-stage disease with suspicious lymph nodes on imaging), FDG-PET-CT or CECT (chest/abdomen/pelvis) is recommended for the assessment of nodal spread and distant metastases.
- FDG-PET-CT is recommended before chemoradiotherapy with curative intent.
Introduction

Endometrial cancer is the most common gynecological malignancy in Europe, with a rising incidence due to increased age and obesity in the population. The majority of endometrial cancers (80%) are confined to the uterus at the time of diagnosis, since post-menopausal bleeding prompts investigations and early detection. Deep (≥50%) myometrial invasion, cervical stromal extension, non-endometrioid histology, high tumor grade and substantial (in contrast to focal) lymphovascular space invasion are independent risk factors for lymph node metastases and poor prognosis.
The distribution of lymph node involvement is also prognostic, as para-aortic lymph node metastases independently predict poor outcome. Since The Cancer Genome Atlas defined four molecular subgroups of endometrial cancers in 2013 (DNA polymerase ɛ ultramutated, POLEmut; DNA mismatch repair-deficient, MMRd; no specific molecular profile, NSMP; p53-abnormal, p53abn), molecular factors are increasingly being used to define groups at risk and guide adjuvant or systemic treatment. Among the four molecular subgroups, p53abn cancers have the highest risk, while POLEmut cancers have the lowest. The updated 2023 FIGO staging system for endometrial cancer includes histological types and tumor grading, and also molecular subgroups if available, in order to better reflect the underlying biologic behavior of endometrial cancers. If POLEmut or p53abn is detected in early-stage disease (former FIGO 2009 stage I), regardless of lymphovascular space invasion status or histologic type, the 2023 FIGO stage is changed to stage IAm POLEmut or stage IICm p53abn, respectively. In addition, the 2023 FIGO staging system for endometrial cancer differentiates between synchronous ovarian cancers and metastatic ovarian lesions. Disease limited to the endometrium and ovaries in low-grade endometrioid carcinomas (stage IA3) is distinguished from extensive spread of endometrial carcinoma to the ovary (stage IIIA1) by the presence of the following criteria: (1) superficial (<50%) myometrial invasion; (2) absence of extensive/substantial lymphovascular space invasion; (3) no additional metastases; and (4) unilateral ovarian tumor limited to the ovary, without capsule invasion/rupture (equivalent to pT1a ovarian cancer). Low-grade endometrioid cancers involving both the endometrium and the ovary are considered to have a good prognosis, and no adjuvant treatment is recommended, while metastatic ovarian involvement by endometrial carcinoma is associated with poor prognosis.
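The four IA3 criteria listed above form an all-or-nothing rule: if any one fails, the ovarian lesion is regarded as metastatic spread (stage IIIA1). A minimal sketch of that logic (type and function names are our own; illustrative only, not a clinical tool):

```python
from dataclasses import dataclass

@dataclass
class OvarianInvolvementFindings:
    """Findings for low-grade endometrioid carcinoma with ovarian involvement."""
    superficial_myoinvasion: bool    # (1) <50% myometrial invasion
    no_substantial_lvsi: bool        # (2) no extensive/substantial LVSI
    no_additional_metastases: bool   # (3) no additional metastases
    unilateral_ovary_confined: bool  # (4) one ovary only, no capsule invasion/rupture

def figo_2023_ovarian_substage(f: OvarianInvolvementFindings) -> str:
    """Stage IA3 only if all four criteria are met; otherwise IIIA1."""
    if (f.superficial_myoinvasion and f.no_substantial_lvsi
            and f.no_additional_metastases and f.unilateral_ovary_confined):
        return "IA3"
    return "IIIA1"

print(figo_2023_ovarian_substage(OvarianInvolvementFindings(True, True, True, True)))
```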
The changes incorporated in the 2023 FIGO staging system should be consistent with the TNM classification, which should also be updated accordingly. Ultrasound is the first-line imaging technique to evaluate endometrial pathology in cases of abnormal uterine bleeding and helps to triage patients for appropriate diagnostic tests. Transvaginal ultrasound plays a pivotal role in planning the management of women with abnormal uterine bleeding. The International Endometrial Tumor Analysis (IETA) group was established to define standardized terms, definitions, and measurements for the description of sonographic features of the endometrium and uterine cavity. Based on a large amount of prospectively collected data, the IETA group defined easy-to-assess features, such as endometrial thickness <3 mm, a three-layer pattern, and a linear endometrial midline, all indicating low risk of endometrial cancer. Patients presenting with these features can be safely discharged with no further follow-up, even with a history of abnormal uterine bleeding. Similarly, the presence of a single vessel without branching is associated with very low risk (1.5%) of endometrial cancer. For all other findings, further investigations and biopsy are recommended. The method of obtaining a histological sample (pipelle, curettage, or hysteroscopic resection under direct visualization) depends on the available resources and clinical experience. However, hysteroscopic biopsy is recommended (at least for focal lesions), since it yields higher agreement with the final histological diagnosis. In histologically verified endometrial cancer, a trained sonographer can assess the size of the endometrial lesion (its anteroposterior diameter has key prognostic impact), the depth of myometrial invasion, and cervical stromal invasion, as well as screen for other pelvic pathology.
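The IETA-based triage described above can be sketched as a decision rule. This is a simplification with an explicit assumption: the text does not state whether a single reassuring feature suffices or all must be present, and the sketch below treats any one of them as sufficient. It is not a clinical tool:

```python
def ieta_triage(thickness_mm: float, three_layer_pattern: bool,
                linear_midline: bool, single_vessel_no_branching: bool) -> str:
    """Simplified triage after ultrasound for abnormal uterine bleeding,
    loosely following the IETA low-risk features described in the text.
    Assumption: any one reassuring feature is treated as sufficient."""
    if thickness_mm < 3 or three_layer_pattern or linear_midline:
        return "low risk: discharge, no further follow-up"
    if single_vessel_no_branching:
        return "very low risk (~1.5%): endometrial cancer unlikely"
    return "further investigation and biopsy recommended"

print(ieta_triage(6.0, False, False, False))
```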
Additionally, the identification of ultrasound features on gray-scale and power Doppler may be used to predict low-risk and high-risk endometrial cancer phenotypes. Unlike transabdominal ultrasound, transvaginal ultrasound is less limited by patient habitus (obesity) or the position of the uterus. Ultrasound and MRI have similar accuracy in determining the local extent of endometrial cancer, although both methods may be inaccurate in 15–25% of cases. A recent systematic review and meta-analysis from Alcázar et al confirmed very similar diagnostic performance of transvaginal ultrasound/MRI for detecting cervical stromal invasion, with pooled sensitivities of 69%/69% and specificities of 93%/91%, respectively. Another meta-analysis by the same authors found similar diagnostic performance of transvaginal ultrasound/MRI for detecting deep myometrial invasion, with pooled sensitivities of 75%/83% and specificities of 82%/82%, respectively. No statistically significant differences were found between ultrasound and MRI for local staging in either meta-analysis. Diagnostic performance for detecting deep myometrial invasion appears similar between expert and non-expert sonographers, whereas experts perform better in the evaluation of cervical stromal invasion. Thus, the training of sonographers in endometrial cancer staging is critical. For local staging of endometrial cancer, pelvic MRI and transvaginal/transrectal ultrasound yield similar diagnostic performance, and the choice of imaging method is determined by local access to these imaging modalities and operator expertise. At some centers, transvaginal ultrasound is used as the first-line imaging tool, with subsequent selective use of pelvic MRI in cases with suboptimal assessment on ultrasound (eg, reduced acoustic visibility/penetration due to fibroids/bowel gas/obesity/other pathology). In other centers, pelvic MRI is used as the first-line imaging modality for pre-operative local staging.
Whole-body imaging can be considered in addition to pelvic ultrasound or MRI depending on the putative risk profile based on imaging findings, clinical features, and presence of pathologic factors, such as high tumor grade, substantial lymphovascular space invasion, non-endometrioid histology, p53abn molecular subgroup, tumor anteroposterior diameter >20 mm, deep (>50%) myometrial invasion, or cervical stromal infiltration. To predict high-risk cancers using ultrasound, the strategy of combining subjective assessment of cervical stromal invasion and myoinvasion with tumor grade correctly stratified 80% of patients as having high-risk or low-risk cancer with respect to the presence of lymph node metastases. Patients were classified as high risk based on biopsy-confirmed grade 3 endometrioid, gastrointestinal-type mucinous cancer or other non-endometrioid histotype, and/or suspicion of deep myoinvasion or cervical stromal invasion on ultrasound. A similar approach was tested by Fasmer et al using pre-operative biopsy and pelvic MRI in all patients, with selective FDG-PET-CT based on high-risk MRI features (myoinvasion ≥50% and/or cervical stromal invasion and/or suspicious lymph nodes). Based on their findings, performing pre-operative FDG-PET-CT only in cases with high-risk MRI features seems to bring the most benefit in detecting distant spread while avoiding unnecessary FDG-PET-CT-related radiation in low-risk patients. Both CECT and MRI are considered equivalent for the evaluation of nodal metastases, although neither can replace surgicopathologic lymph node assessment. FDG-PET-CT is considered the best imaging method to evaluate lymph node and distant metastases due to its high specificity, although sensitivity is lower. Due to the limited sensitivity of imaging to detect small-volume metastases, surgical lymph node staging by sentinel lymph node biopsy remains important to allow the proper selection of adjuvant treatment and improve patient outcome.
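The selective FDG-PET-CT strategy attributed to Fasmer et al reduces to an any-of rule over the three high-risk MRI features. A sketch of that rule (our own encoding of the description, not the study's software):

```python
def selective_fdg_pet_ct(deep_myoinvasion: bool,
                         cervical_stromal_invasion: bool,
                         suspicious_lymph_nodes: bool) -> bool:
    """Recommend whole-body FDG-PET-CT only when pelvic MRI shows at least
    one high-risk feature: myoinvasion >=50%, cervical stromal invasion,
    or suspicious lymph nodes."""
    return deep_myoinvasion or cervical_stromal_invasion or suspicious_lymph_nodes

# Low-risk MRI on all three features: skip FDG-PET-CT and its radiation.
print(selective_fdg_pet_ct(False, False, False))
```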
Local tumor extent, regional and distant lymph node metastases, and other distant metastases should be documented by preoperative imaging using a standardized systematic checklist, including the use of schematic drawing(s) as appropriate.

Current Guidelines and the Role of Imaging in Endometrial Cancer

Following the 2020 ESGO/ESTRO/ESP guidelines for the management of patients with endometrial cancer:
- Pre-operative work-up includes: obtaining family history and medical history; general assessment; geriatric assessment, if appropriate; and clinical examination including pelvic examination.
- Expert transvaginal or transrectal pelvic ultrasound or pelvic MRI is recommended for local staging.
- Depending on the clinical and pathologic risk, additional imaging modalities should be considered to assess ovarian, nodal, peritoneal, and other sites of metastatic disease.
- For assessing distant lymph node metastases or distant spread, CECT (chest/abdomen/pelvis) or FDG-PET-CT is recommended.
Since The Cancer Genome Atlas defined four molecular subgroups of endometrial cancers in 2013 (DNA polymerase ɛ ultramutated, POLEmut; DNA mismatch repair-deficient, MMRd; no specific molecular profile, NSMP; p53-abnormal, p53abn), molecular factors are increasingly being used to define groups at risk and guide adjuvant or systemic treatment. Among the four molecular subgroups p53abn cancers have the highest risk, while POLEmut have the lowest risk. The updated 2023 FIGO staging system for endometrial cancer includes histological types and tumor grading and also molecular subgroups if available, in order to better reflect the underlying biologic behavior of endometrial cancers. If POLEmut or p53abn is detected in early stage disease (former FIGO 2009 stage I) regardless of lymphovascular space invasion status or histologic type, the 2023 FIGO stage is changed to stage IAm POLEmut or stage IICm p53abn , respectively. In addition, the 2023 FIGO staging system for endometrial cancer differentiates between synchronous ovarian cancers and metastatic ovarian lesions. Disease limited to the endometrium and ovaries in low-grade endometrioid carcinomas (stage IA3) is distinguished from extensive spread of endometrial carcinoma to the ovary (stage IIIA1) by the presence of the following criteria: (1) superficial (<50%) myometrial invasion; (2) absence of extensive/substantial lymphovascular space invasion; (3) no additional metastases; and (4) unilateral ovarian tumor limited to the ovary, without capsule invasion/rupture (equivalent to pT1a ovarian cancer). Low-grade endometrioid cancers involving both the endometrium and the ovary are considered to have good prognosis, and no adjuvant treatment is recommended, while metastatic ovarian involvement by endometrial carcinoma is associated with poor prognosis . The changes incorporated in the 2023 FIGO staging system should be consistent with the TNM classification, which should also be updated accordingly. 
10.1136/ijgc-2023-004609.supp11 Supplementary video Ultrasound is the first-line imaging technique to evaluate endometrial pathology in cases of abnormal uterine bleeding and helps to triage patients for appropriate diagnostic tests. Transvaginal ultrasound plays a pivotal role in planning the management of women with abnormal uterine bleeding. The International Endometrial Tumor Analysis (IETA) group has been established to define the standardized terms, definitions, and measurements for description of sonographic features of the endometrium and uterine cavity . Based on a large amount of prospectively collected data, the IETA group defined easy-to-assess features, such as endometrial thickness <3 mm, three-layer pattern, and linear endometrial midline, all indicating low risk of endometrial cancer. Patients presenting with these features can be safely discharged with no further follow-up even with a history of abnormal uterine bleeding. Similarly, the presence of a single vessel without branching is associated with very low risk (1.5%) of endometrial cancer. For all other findings, further investigations and biopsy are recommended. The method of obtaining a histological sample (pipelle, curettage, or hysteroscopic resection under direct visualization) depends on the available resources and clinical experience. However, hysteroscopic biopsy is recommended (at least for focal lesions), since it yields higher agreement with final histological diagnosis. In histologically verified endometrial cancer, a trained sonographer can assess the size of the endometrial lesion (its anteroposterior diameter has key prognostic impact), depth of myometrial invasion, and cervical stromal invasion as well as screen for other pelvic pathology . Additionally, the identification of ultrasound features on gray-scale and power Doppler may be used to predict low-risk and high-risk endometrial cancer phenotypes . 
Unlike transabdominal ultrasound, transvaginal ultrasound is less limited by patient habitus (obesity) or the position of the uterus. Ultrasound and MRI have similar accuracy in determining the local extent of endometrial cancer, although both methods may be inaccurate in 15–25% of cases. A recent systematic review and meta-analysis by Alcázar et al confirmed very similar diagnostic performance of transvaginal ultrasound/MRI for detecting cervical stromal invasion, with reported pooled sensitivities of 69%/69% and specificities of 93%/91%, respectively. Another meta-analysis by the same authors found similar diagnostic performance of transvaginal ultrasound/MRI for detecting deep myometrial invasion, reporting pooled sensitivities of 75%/83% and specificities of 82%/82%, respectively. No statistical differences were found between ultrasound and MRI for local staging in either meta-analysis. Diagnostic performance for detecting deep myometrial invasion appears similar between expert and non-expert sonographers, whereas experts perform better in the evaluation of cervical stromal invasion. Thus, the training of sonographers in endometrial cancer staging is critical. For local staging of endometrial cancer, pelvic MRI and transvaginal/transrectal ultrasound yield similar diagnostic performance, and the choice of imaging method is determined by local access to these imaging modalities and operator expertise. At some centers, transvaginal ultrasound is used as the first-line imaging tool, with subsequent selective use of pelvic MRI in cases with suboptimal assessment on ultrasound (eg, reduced acoustic visibility/penetration due to fibroids/bowel gas/obesity/other pathology). In other centers, pelvic MRI is used as the first-line imaging modality for pre-operative local staging.
Whole-body imaging can be considered in addition to pelvic ultrasound or MRI, depending on the putative risk profile based on imaging findings, clinical features, and the presence of pathologic factors, such as high tumor grade, substantial lymphovascular space invasion, non-endometrioid histology, p53abn molecular subgroup, tumor anteroposterior diameter >20 mm, deep (>50%) myometrial invasion, or cervical stromal infiltration. To predict high-risk cancers using ultrasound, a strategy combining subjective assessment of cervical stromal invasion and myoinvasion with tumor grade correctly stratified 80% of patients as having high-risk or low-risk cancer with respect to the presence of lymph node metastases. Patients were classified as high risk based on biopsy-confirmed grade 3 endometrioid cancer, gastrointestinal-type mucinous cancer, or other non-endometrioid histotype, and/or suspicion of deep myoinvasion or cervical stromal invasion on ultrasound. A similar approach was tested by Fasmer et al using pre-operative biopsy and pelvic MRI in all patients, with selective FDG-PET-CT based on high-risk MRI features (myoinvasion ≥50% and/or cervical stromal invasion and/or suspicious lymph nodes). Based on their findings, performing pre-operative FDG-PET-CT only in cases with high-risk MRI features seems to bring the most benefit in detecting distant spread while avoiding unnecessary FDG-PET-CT-related radiation in low-risk patients. Both CECT and MRI are considered equivalent for the evaluation of nodal metastases, although neither can replace surgicopathologic lymph node assessment. FDG-PET-CT is considered the best imaging method for evaluating lymph node and distant metastases due to its high specificity, although its sensitivity is lower. Given the limited sensitivity of imaging for detecting small-volume metastases, surgical lymph node staging by sentinel lymph node biopsy remains important to allow proper selection of adjuvant treatment and improve patient outcome.
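The selective FDG-PET-CT strategy described above reduces to a simple triage rule: perform FDG-PET-CT only when pelvic MRI shows at least one high-risk feature. The sketch below, with hypothetical parameter names, illustrates that logic under the stated assumptions; it is not a validated decision aid.

```python
def fdg_pet_ct_indicated(myoinvasion_ge_50: bool,
                         cervical_stromal_invasion: bool,
                         suspicious_lymph_nodes: bool) -> bool:
    """Triage rule in the spirit of Fasmer et al: reserve FDG-PET-CT for
    cases with at least one high-risk pelvic MRI feature, sparing
    low-risk patients the additional radiation."""
    return (myoinvasion_ge_50
            or cervical_stromal_invasion
            or suspicious_lymph_nodes)

print(fdg_pet_ct_indicated(False, False, False))  # False
print(fdg_pet_ct_indicated(True, False, False))   # True
```

A disjunction captures the "and/or" wording of the high-risk MRI feature set: any single feature is sufficient to trigger the additional whole-body examination.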
Local tumor extent, regional and distant lymph node metastases, and other distant metastases should be documented by pre-operative imaging using a standardized systematic checklist, including the use of schematic drawing(s) as appropriate. Following the 2020 ESGO/ESTRO/ESP guidelines for the management of patients with endometrial cancer: Pre-operative work-up includes obtaining family history and medical history; general assessment; geriatric assessment, if appropriate; and clinical examination, including pelvic examination. Expert transvaginal or transrectal pelvic ultrasound or pelvic MRI is recommended for local staging. Depending on the clinical and pathologic risk, additional imaging modalities should be considered to assess ovarian, nodal, peritoneal, and other sites of metastatic disease. For assessing distant lymph node metastases or distant spread, CECT (chest/abdomen/pelvis) or FDG-PET-CT is recommended.

Introduction

Tubo-ovarian cancer is the leading cause of death among all gynecological cancers in developed countries, with more than two-thirds of patients being diagnosed at an advanced stage (FIGO stages III and IV) due to the absence of symptoms in the initial stages of the disease. It is acknowledged that tubo-ovarian cancer is not a homogeneous disease, but rather a group of tumors with different epidemiologies, precursor lesions, morphologies, responses to treatment, and prognoses. More than 90% of malignant tubo-ovarian tumors are of epithelial origin, with the most common and lethal being high-grade serous carcinoma. In 2014, FIGO revised its ovarian cancer staging system to incorporate ovarian, fallopian tube, and peritoneal cancer into the same classification. The eighth edition of the TNM staging system of cancer of the ovary, fallopian tube, and peritoneum mirrors the 2014 FIGO staging classification. Accurate pre-operative diagnosis of tubo-ovarian cancer and timely referral of patients to specialized centers are crucial for prognosis.
For these reasons, four scientific societies, namely ESGO, the International Society of Ultrasound in Obstetrics and Gynecology (ISUOG), the IOTA group, and the European Society for Gynecological Endoscopy (ESGE), have issued an evidence-based consensus statement on the pre-operative diagnosis of ovarian cancer to help differentiate between benign and malignant ovarian tumors. The tumor should be characterized subjectively by expert sonographers (level III), or by less experienced sonographers using the IOTA Simple Rules risk calculation or the IOTA ADNEX model. The IOTA ADNEX model uses simple predictor variables and not only discriminates between benign and malignant tumors but also calculates the risk of four types of malignancy. However, a large proportion of adnexal masses (40%) can be classified as benign using the modified benign descriptors without computer support. Thus, the IOTA group has recently validated a two-step strategy. In the first step, the modified benign descriptors are applied, if applicable. If one of them applies, the mass can be classified as benign with a risk of malignancy <1%; if none applies, the IOTA ADNEX model can then be used to estimate the risk of malignancy. In particular, patients at intermediate (10–50%) or high risk (≥50%) of malignancy should be referred to a specialized center. Knowledge of the standardized ultrasound terms describing ovarian pathology is essential in order to apply IOTA-based models accurately. The standard IOTA terminology covers the main characteristics of adnexal tumors, including definitions of the lesion's solid component, intracystic content, blood flow (ie, color score), acoustic shadows, and others.

Supplementary video: 10.1136/ijgc-2023-004609.supp12

For patients diagnosed with tubo-ovarian cancer, the most important independent prognostic factor is complete surgical tumor resection (no residual tumor at the end of surgery).
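The IOTA two-step strategy described above is itself a small algorithm: apply the modified benign descriptors first, and fall back to the ADNEX risk estimate only when none applies. The sketch below illustrates the triage logic only; `adnex_risk` stands in for the output of the published ADNEX model, which is not implemented here, and the thresholds follow the referral cut-offs cited in the text (intermediate 10–50%, high ≥50%).

```python
from typing import Optional

def iota_two_step(benign_descriptor_applies: bool,
                  adnex_risk: Optional[float] = None) -> str:
    """Two-step triage of an adnexal mass (illustrative sketch only)."""
    # Step 1: if any modified benign descriptor applies, the mass can be
    # classified as benign with a risk of malignancy below 1%.
    if benign_descriptor_applies:
        return "benign (<1% risk of malignancy)"
    # Step 2: otherwise use the ADNEX model's estimated risk of malignancy.
    if adnex_risk is None:
        raise ValueError("ADNEX risk estimate required when no benign descriptor applies")
    if adnex_risk >= 0.50:
        return "high risk - refer to specialized center"
    if adnex_risk >= 0.10:
        return "intermediate risk - refer to specialized center"
    return "low risk"

print(iota_two_step(True))          # benign (<1% risk of malignancy)
print(iota_two_step(False, 0.30))   # intermediate risk - refer to specialized center
```

Separating the descriptor check from the model call mirrors the validated strategy: roughly 40% of masses are resolved in step 1 without any computation.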
Therefore, accurate pre-operative identification of peritoneal and other metastatic spread is critical for prognostication and optimizing patient management. In general, all imaging modalities have high specificity for predicting residual disease (remaining visible cancer tissue at the end of debulking surgery), but low sensitivity for detecting small-volume carcinomatosis, potentially resulting in unnecessary surgical explorations due to non-resectable disease. Laparoscopy, on the other hand, offers direct visualization of the parietal peritoneal, visceral, omental, and mesenteric surfaces, but may miss retroperitoneal spread or tumors behind the gastrosplenic ligament or in the lesser sac, as described in the recent review by Pinto et al. Laparoscopy can be considered in cases of uncertain resectability or to exclude small-volume carcinomatosis that may not be seen on imaging, such as on the bowel serosa or mesentery.

Supplementary video: 10.1136/ijgc-2023-004609.supp13

The role of expert ultrasound in the pre-operative staging of tubo-ovarian cancer has been re-evaluated, with recent evidence demonstrating its high accuracy in the prediction of tumor histotype and in radiological staging. This includes the assessment of pelvic and abdominal peritoneal involvement, retroperitoneal lymph node metastasis, and, ultimately, the prediction of non-resectability. In addition to diagnostic performance, patient satisfaction with ultrasound, CECT, and whole-body diffusion-weighted MRI was evaluated in the prospective international ISAAC study. Ultrasound was the preferred imaging method despite being associated with mild (or occasionally moderate) pain when compared with CECT and whole-body diffusion-weighted MRI. Whole-body diffusion-weighted MRI, involving a long procedural time (~60 min) in the MRI scanner, was the least preferred by patients. As part of the same study, the reproducibility of ultrasound staging was tested.
After a minimum of 6 months of training in a high-volume specialist center, 12 less experienced and 13 more experienced ultrasound operators, assessing the presence of advanced ovarian cancer in 19 anatomic sites based on video clips acquired by one expert sonographer, achieved almost perfect agreement (κ coefficient 0.88). CECT is usually considered the standard imaging modality for pre-operative tubo-ovarian cancer staging, despite reported variable overall pre-operative staging accuracy and poor inter-observer agreement. The novel but time-consuming whole-body diffusion-weighted MRI may be superior to CECT for primary tumor characterization, staging, and prediction of residual disease, and reportedly yields almost perfect inter-observer staging agreement in ovarian cancer. FDG-PET-CT may serve as a problem-solving imaging modality if there is a very high risk of distant spread or if indeterminate distant (eg, thoracic) metastases have been identified by CECT. Several models and imaging scoring systems have been developed for predicting surgical outcome and residual disease, but studies have frequently failed to provide sufficient external validation of their results. For now, a thorough and structured imaging assessment of critical sites for ovarian cancer surgery remains the most useful approach. Using this approach, an interim analysis of the international ISAAC study showed that transvaginal/transabdominal ultrasound was non-inferior to both CECT (p value=0.029) and whole-body diffusion-weighted MRI (p value=0.036) for predicting surgical non-resectability. In an ideal setting, a woman can receive an all-in-one ultrasound-based approach at a single appointment, comprising diagnosis, staging, prediction of non-resectability, and establishment of the histopathologic diagnosis by ultrasound-guided core-needle biopsy if the disease is deemed unresectable (the one-stop ovarian cancer clinic concept).
As with the previously discussed cancers, all imaging modalities face limitations in detecting small-volume lymph node metastases. Therefore, in early-stage ovarian cancer, systematic pelvic and abdominal lymphadenectomy is usually recommended to detect occult small-volume lymph node metastases and tailor adjuvant treatment. In advanced ovarian cancer, it is recommended to remove only clinically affected lymph nodes, since the LION study showed that routine systematic pelvic and para-aortic lymphadenectomy for all patients does not improve progression-free or overall survival. The main role of imaging in these cases is therefore to detect suspicious or enlarged lymph nodes for selective resection. A description of the primary tumor location and any locations of peritoneal spread, infiltrated regional and distant lymph nodes, and other metastatic sites should be documented by pre-operative imaging using a standardized systematic checklist.

Current Guidelines and the Role of Imaging in Tubo-ovarian Cancer

Following the 2021 ESGO/ISUOG/IOTA/ESGE consensus on ovarian tumor diagnosis: Subjective assessment by expert sonographers (level III) has the best diagnostic performance for distinguishing between benign and malignant ovarian tumors. If this is not available, ultrasound-based diagnostic models can assist clinicians in distinguishing between benign and malignant ovarian tumors. Ultrasound-based diagnostic models (the IOTA Simple Rules risk model or the IOTA ADNEX model) are preferable to CA 125 level, HE4 level, or the Risk of Ovarian Malignancy Algorithm (ROMA), as they better distinguish between benign and malignant ovarian tumors. The IOTA ADNEX model and the IOTA Simple Rules risk model are recommended instead of morphological scoring systems, including the Risk of Malignancy Index.
Following the 2019 and updated 2023 ESGO/ESMO (European Society for Medical Oncology) consensus conference recommendations on ovarian cancer: CECT, whole-body diffusion-weighted MRI, and FDG-PET-CT are considered viable options for assessing tumor extent and resectability and for detecting distant metastases in ovarian cancer. Ultrasound by an expert sonographer may also be used to assess tumor extent and resectability in the pelvic and abdominal cavity.
Ultrasound is a reliable imaging modality, which is widely available, non-invasive, low cost, and has no known contraindications or patient risks. Based on recent evidence, expert ultrasound is recommended as an equal alternative to MRI for locoregional staging of vulvar, vaginal, cervical, and endometrial cancer, and as the first-choice imaging modality for the evaluation of abnormal uterine bleeding. In ovarian cancer staging, the choice of imaging modality may depend on the availability of imaging methods and expertise, although ultrasound is recognized as comparable to CECT, MRI, and FDG-PET-CT for abdominal staging. Similarly, for predicting non-resectability in ovarian cancer, ultrasound is recognized as non-inferior to CECT and MRI and as an effective tool for guiding core-needle biopsy in patients deemed unfit for surgery. Moreover, ultrasound is the imaging method of choice for primary ovarian tumor characterization. Its crucial role in gynecological oncology should be acknowledged, and financial and logistic resources need to be allocated to the training of future ultrasound experts in gynecological cancer.

Supplementary video: 10.1136/ijgc-2023-004609.supp9
Implementing Whole Genome Sequencing (WGS) in Clinical Practice: Advantages, Challenges, and Future Perspectives

1.1. History and Evolution of WGS Technologies

Since its inception, genome sequencing has improved dramatically in cost, time, and accuracy, mainly due to the rapid advancement of technology. In just seventy years, we went from learning the structure of DNA to sequencing the entire human genome and using these data for various important purposes. It all began in the 1970s and 1980s, when the first attempts at DNA sequencing were made, primarily through Sanger sequencing. This pioneering method relied on chain termination and electrophoresis, paving the way for the first sequencing of small genomes. However, it was a slow and financially demanding process. The 1980s witnessed the development of automated DNA sequencing, with techniques such as PCR-based sequencing and chain-termination sequencing emerging. This dramatically sped up the sequencing process and significantly reduced costs. A monumental milestone arrived in the year 2000 with the completion of the Human Genome Project. This marked the first complete sequence of the human genome, revolutionizing our understanding of genes and non-coding regions. However, the real explosion of progress occurred in the 2000s with the advent of next-generation sequencing (NGS) technologies. This included pyrosequencing, Illumina sequencing, and SOLiD sequencing, enabling faster and more affordable sequencing of larger genomes, including the human genome. Progress continued through the 2010s, when NGS techniques were refined and new platforms like Oxford Nanopore and PacBio technologies allowed for long-read sequencing and the unraveling of complex genome segments. Today, WGS technologies have become indispensable tools in clinical medicine and scientific research.
They enable more precise diagnoses of genetic diseases, personalized medicine, and a deeper understanding of the genetic factors influencing health. These innovations make DNA sequencing an integral part of our ability to delve deeper into the genome's secrets and apply them in practice. An essential application of WGS is the discovery of genetic variants in the human genome and their association with enigmatic or well-known clinical entities. By identifying such variants early on, preventative measures can be taken to mitigate the impact of disease. WGS provides a valuable tool in the physician's arsenal and produces an unprecedented amount of information that tremendously facilitates the diagnostic process. Third-generation sequencing now stands at the forefront of genome sequencing and promises even more accurate and cost-effective results. WGS can be applied to newborn screening, cancer detection, genetic diseases, and personalized medicine. It has the ability to revolutionize the way certain diseases are diagnosed, avoiding long and expensive traditional diagnostic workups. Although there are advantages to this technique, the disadvantages must also be taken into account. One such disadvantage is our limited understanding of the significance of certain variants that WGS discovers. This presents a problem when trying to interpret WGS findings and determine whether a discovered variant is responsible for the clinical presentation. Interpretation is further complicated by the fact that some diseases are the combined product of multiple variants, not any single one. Widely accessible databases and classification algorithms are excellent tools for genetic interpretation and can provide physicians with supplementary data. Overall, WGS offers a massive benefit to the field of medicine. As technology progresses, the number of diseases that WGS can detect will steadily increase, as will its accuracy.
On the other hand, scientists are continuously working towards a better understanding of the data this technology provides, resulting in increasingly accurate interpretations of results. The aim of this review was to comprehensively and clearly cover the advantages, challenges, and future perspectives of WGS in everyday clinical practice. An accompanying figure depicts the main topics covered in this review.

1.2. Applications of WGS in Biomedical Research

WGS has become an established technology as rapid strides have been made over the past few decades. WGS has revealed a wealth of information, including gene number and density, repeat sequences, non-protein-coding RNA genes, and evolutionarily conserved sequences. WGS can detect single nucleotide polymorphisms (SNPs) in both introns and exons, which is crucial since a wide range of conditions can be attributed to SNPs. In healthcare, disease susceptibility, drug responses, and physical traits can, in certain instances, be attributed to SNPs. WGS is also well suited to sequencing non-coding RNA genes, including, but not limited to, those for transfer RNA, ribosomal RNA, small nuclear RNA, and miRNA. miRNA is a key area of study because of its important regulatory function, and SNPs affecting miRNAs can increase oncogenic risk. Many more SNPs remain to be discovered; the technology is still relatively new, and time is bound to answer the questions scientists are asking today. WGS has the ability to revolutionize the way preventative medicine is conceptualized. Through WGS, physicians will be able to determine individual genetic profiles, allowing the likelihood of future disease manifestation to be predicted with considerable accuracy. WGS is steadily becoming more economically feasible, opening the opportunity for great benefits. For example, it can detect genetic variants that cause rare immunological disorders.
WGS has the potential to dramatically reduce the time spent on the diagnostic odyssey as well as overcome the large costs associated with missed or delayed diagnoses. WGS essentially circumvents this costly process with a one-time, relatively inexpensive test that reveals a vast amount of information that traditional methods cannot. This allows real, actionable steps to be taken to mitigate or altogether prevent certain diseases. Although the technology is efficient, some of the data gathered from WGS are hard to translate into actionable measures. There has been a significant increase in the number of variants of uncertain significance (VUS). Scientists are, however, breaking through this barrier and learning to make the connections between variants and phenotypes. For example, VUS are stored in databases, which allows different laboratories to collaborate and better understand which role these variants play in disease. Consequently, the rate of diagnosis will likely increase steadily in the future as the mysteries of the genome begin to unravel.

1.3. Revolutionizing Rare Disease Diagnosis with WGS

One of the key advantages of WGS compared to whole exome sequencing (WES) is the ability to analyze non-coding regions of the genome. Non-coding DNA contains various components, including repetitive sequences (telomeres, centromeres, satellite DNA), sequences encoding different types of non-coding RNA molecules, and numerous regulatory elements (promoters, enhancers, and silencers). Non-coding RNA molecules and other regulatory elements play a crucial role in the control of gene expression. These genomic loci are particularly important in diagnosing multifactorial genetic diseases. WGS allows for a detailed analysis of non-coding regions, providing the opportunity to identify variants that can affect gene regulation and, consequently, disease development.
Uncommon medical conditions, collectively known as rare diseases, encompass a vast array of over 8000 unique disorders, most of which stem from genetic origins. While each of these conditions is individually infrequent, their combined impact affects a considerable segment of the population, with a prevalence ranging from 6% to 8%. A study conducted as part of the 100,000 Genomes Project revealed that WGS provided diagnoses for 25% of participants with rare disorders. This innovative approach demonstrated its ability to detect conditions that might otherwise have eluded traditional diagnostic methods. Furthermore, a more recent investigation shed light on the potential of tailoring WGS analyses to individual patients, a practice that could significantly augment diagnostic rates for these conditions. The acceleration in diagnosis is particularly valuable for certain rare diseases, such as primary mitochondrial disease phenotypes, a cluster of inherited disorders arising from mutations in either mitochondrial or nuclear DNA. The non-coding regions of the genome, which make up the vast majority of our DNA (98.5%), were long considered "genomic junk" because they do not code for proteins. However, with the completion of the Human Genome Project (HGP) and advancements in NGS technology, research has increasingly suggested that non-coding regions of the genome play a pivotal role in gene regulation and can have a significant impact on disease onset. WGS in particular offers the opportunity to uncover variants in non-coding regions, opening new perspectives on the origins of genetic diseases in children.
Comprehensive genomic analysis can reveal the causes of rare inherited diseases, including mitochondrial disorders, neurological conditions, metabolic disorders, hematological disorders, and bone and soft tissue development disorders, as well as assess the risk of multifactorial diseases like diabetes and childhood obesity . Given the continuous advancement of scientific knowledge, WGS provides the ability to discover new genetic changes that may lead to diseases. Through secondary findings, it may also enable the prevention and timely treatment of health issues that were not the initial reason for testing . Such proactive healthcare, within the context of pediatric preventive medicine, yields better treatment outcomes and ensures disease prevention before advanced disease stages requiring challenging treatments occur . Mutations in regulatory elements within non-coding regions lead to changes in gene expression, which can significantly impact phenotypic manifestations and disease development. For instance, mutations in promoters, enhancers, or silencers can affect the binding of transcription factors and alter the expression levels of specific genes, resulting in disease development . Detecting variants in regulatory regions through WGS provides the opportunity to identify new causes of genetic diseases in children in cases where previous analyses of coding DNA failed to establish a cause. Identifying specific variants in regulatory elements may enhance our understanding of the underlying molecular mechanisms. This can lead to the discovery of new therapeutic targets and the development of novel therapeutic strategies . Additionally, gene variants in non-coding regions may have implications for understanding complex genetic diseases involving the interaction of multiple genes and environmental factors. For example, changes in specific genes may increase susceptibility to certain environmental factors, such as susceptibility to infections like H. pylori , which can result in an increased risk of multifactorial diseases such as gastric ulcers and stomach cancer . Clinical genome analysis can be divided into three phases: primary, secondary, and tertiary analysis. Primary analysis encompasses the technical components of next-generation sequencing, including DNA extraction, library preparation, and preliminary sample quality control. Secondary analysis involves bioinformatic processing of the sequencing data, including aligning the obtained sequence with the reference human genome and additional computational operations to correct potential analysis errors . Finally, tertiary analysis involves variant interpretation, including variant annotation, filtering, clinical classification, result interpretation, and the generation of a medical report for genetic testing. This review will cover primary, secondary, and tertiary analysis, with a specific focus on clinical interpretation and the application of WGS in everyday clinical practice. Since its inception, genome sequencing has improved dramatically in cost, time, and accuracy, mainly due to the rapid advancement of technology. In just seventy years, we went from learning about the structure of DNA to sequencing the entirety of the human genome and using these data for various important purposes . It all began in the 1970s, when the first practical DNA sequencing methods were developed, most notably Sanger sequencing . This pioneering method relied on chain termination and electrophoresis, paving the way for the first sequencing of small genomes. However, it was a slow and financially demanding process. The 1980s witnessed the development of automated DNA sequencing, which, together with techniques such as PCR, dramatically sped up the sequencing process and significantly reduced costs. A monumental milestone arrived in the early 2000s with the completion of the Human Genome Project (a draft sequence was published in 2000, with the finished sequence following in 2003).
This marked the first complete sequence of the human genome, revolutionizing our understanding of genes and non-coding regions. However, the real explosion of progress occurred in the 2000s with the advent of next-generation sequencing (NGS) technologies . These included pyrosequencing, Illumina sequencing, and SOLiD sequencing, enabling faster and more affordable sequencing of larger genomes, including the human genome. Progress continued through the 2010s, when NGS techniques were refined and new platforms like Oxford Nanopore and PacBio allowed for long-read sequencing and the unraveling of complex genome segments. Today, WGS technologies have become indispensable tools in clinical medicine and scientific research. They enable more precise diagnoses of genetic diseases, personalized medicine, and a deeper understanding of the genetic factors influencing health. These innovations make DNA sequencing an integral part of our ability to delve deeper into the secrets of the genome and apply them in practice. An essential application of WGS is the discovery of genetic variants in the human genome and their association with enigmatic or well-known clinical entities . When such variants are identified early, preventative measures can be taken to mitigate the impact of disease. WGS provides a valuable tool in the physician’s arsenal and produces an unprecedented amount of information that tremendously facilitates the diagnostic process. Third-generation sequencing now stands at the forefront of genome sequencing and promises even more accurate and cost-effective results. WGS can be applied to newborn screening, cancer detection, genetic diseases, and personalized medicine . It has the ability to revolutionize the way certain diseases are diagnosed, avoiding long and expensive traditional diagnostic workups. Although there are advantages to this technique, the disadvantages must also be taken into account.
One such disadvantage is our limited understanding of the significance of certain variants that WGS discovers. This presents a problem when trying to interpret WGS findings and determine whether a discovered variant is responsible for the clinical presentation. Interpretation is further complicated by the fact that some diseases are the combined product of multiple variants, not any single one. Widely accessible databases and classification algorithms are excellent tools for genetic interpretation and can provide physicians with supplementary data. Overall, WGS offers a massive benefit to the field of medicine. As technology progresses, the number of diseases that WGS can detect will steadily increase, as will its accuracy. At the same time, scientists are continuously working towards a better understanding of the data this technology provides, resulting in increasingly accurate interpretations of results. The aim of this review was to comprehensively and clearly cover the advantages, challenges, and future perspectives of WGS in everyday clinical practice. The accompanying figure depicts all the main topics covered in this review. WGS has become an emerging technology as rapid strides have been made over the past few decades. WGS has revealed a wealth of information, including gene number and density, repeat sequences, non-protein-coding RNA genes, and evolutionarily conserved sequences . WGS can detect single nucleotide polymorphisms (SNPs) in both introns and exons, which is crucial since SNPs can be attributed to a wide range of conditions . In healthcare, disease susceptibility, drug responses, and physical traits can, in certain instances, be attributed to SNPs. WGS is also well suited to sequencing the genes for non-coding RNAs, which include, but are not limited to, transfer RNA, ribosomal RNA, small nuclear RNA, and miRNA . miRNA is a key area of study because it has an important regulatory function, and SNPs affecting miRNAs or their targets can increase oncogenic risk.
Although there are many more SNPs yet to be discovered, the technology is still relatively new, and time is bound to answer questions that scientists are asking today. WGS has the ability to revolutionize the way preventative medicine is conceptualized. Through WGS, physicians will have the ability to determine individual genetic profiles, allowing for prediction of the likelihood of future disease manifestation with considerable accuracy . WGS is steadily becoming more economically feasible, opening the opportunity for great benefits . For example, it can detect genetic variants that can cause rare immunological disorders.

2.1. Alignment and Mapping of Sequencing Reads

In WGS, alignment and mapping of sequencing reads implies arranging the reads so that values at specific points can be compared . At these points, we expect these values to be equal, as these are homologous points in the reference genome, and interpret any mismatch as a variation in the sequence being tested.
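As a toy illustration of this idea (not a production aligner), assume two sequences that have already been aligned to equal length; any mismatch at a homologous position can then be reported as a candidate single-nucleotide variant:

```python
def call_mismatches(reference: str, sample: str, start_pos: int = 1):
    """Compare two pre-aligned, equal-length sequences and report each
    mismatch at a homologous position as (position, ref_base, alt_base)."""
    assert len(reference) == len(sample), "sequences must be aligned to equal length"
    variants = []
    for offset, (ref_base, alt_base) in enumerate(zip(reference, sample)):
        if ref_base != alt_base:
            variants.append((start_pos + offset, ref_base, alt_base))
    return variants

# A mismatch at the third homologous position is reported as a candidate variant.
print(call_mismatches("ACGTAC", "ACTTAC"))  # [(3, 'G', 'T')]
```

Real pipelines, of course, must first produce the alignment itself and account for sequencing errors and gaps; this sketch only shows the mismatch-as-variant interpretation described above.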
In clinical practice, this is the first step in genetic data analysis and involves aligning the sample genome to a reference genome and observing potential differences, which are later interpreted as genetic variants. Approximating homology between two sequences using similarities in sequencing reads was pioneered by Needleman and Wunsch in the form of optimal pairwise global alignment. This then led to the development of optimal pairwise local alignment by Smith and Waterman, which was designed for the alignment of subsequences. The conceptual solution for effective whole-genome alignment was to divide the genome into subsequences and then apply local alignment algorithms . In clinical genetic sequencing today, analysis of sequencing reads is performed in processes referred to as data analysis pipelines. These can be categorized into upstream pipelines, which carry out the task of read alignment and mapping, and downstream pipelines, designated for genetic variant calling. A study by Betschart RO et al. compared two alignment and mapping approaches in WGS: the most frequently used GATK pipeline utilizing BWA-MEM2 2.2.1, and DRAGEN 3.8.4. While the authors conclude that DRAGEN is superior to GATK, they also highlight the important aspects of comparison when it comes to systems for genome alignment and mapping. Firstly, the comparison of alignment systems was broken down by single nucleotide variants (SNVs) and insertion-deletion variants (Indels). SNVs represent value changes at homologous points in compared reads and will be detected as mismatches. Indels, on the other hand, represent an added or missing value, which causes the rest of the read to shift. It is due to this difference that SNVs and Indels represent different challenges for alignment systems. Furthermore, the comparison was categorized by Indel size, which can imply the gain or loss of multiple values in the sequence, as well as by whether a coding or non-coding region is in question.
Once the algorithms have been stratified in accordance with these differences, parameters such as time to completion and precision of detection can be compared . Genetic mapping has become a great asset in the personalized medical approach across many medical disciplines, but perhaps most evidently in oncology, with ongoing projects that aim to complete the global mapping of the cancer genome. Ganini C. et al. extensively highlight this matter in their comprehensive paper, discussing all aspects of this line of research in modern times. Neurology is another of the many disciplines utilizing genetic mapping, as highlighted in research by Png G. et al., which describes the mapping of the serum proteome to neurological disorders .

2.2. Variant Calling and Genotyping

Variant calling is the process of identifying genetic variants from the received sequencing data . This is the next step in data analysis following alignment and is performed by downstream data analysis pipelines. Variant calling can be categorized into germline and somatic variant calling. Germline variant calling assumes that the interpreted variants largely share the haplotype configuration of the reference genome and, in most cases, respect Mendelian principles. Somatic variant calling, however, allows for the existence of multiple cell lines and for frequent de novo mutations violating Mendelian principles. Somatic variant calling is useful in detecting cell mosaicism within an individual and has an especially important application in genotyping tumor cells. Variant calling algorithms can also be categorized by the types of genetic variants they target. SNVs and smaller Indels, up to 20 base pairs, can be detected directly after alignment and often require only minor local realignment once a candidate site has been detected. Structural variants (SVs) and copy number variations (CNVs), on the other hand, are not as simple to determine precisely.
These algorithms primarily rely on the depth of coverage, as well as on assembly-based sequence reconstruction once a candidate SV or CNV has been identified . A study published by Pei S. et al. systematically evaluated different variant callers on 12 next-generation sequencing datasets for both germline and somatic variants. The germline callers Sentieon, GATK, and DeepVariant all achieved F1 scores above 0.99 at 30× coverage, results which show high sensitivity and accuracy for all three systems in analyzing germline variants. Somatic callers such as Mutect2 and TNscope were tested in calling somatic variants. These systems achieved high F1 scores overall, but more interestingly, a correlation between tumor sample purity and accuracy was noted: both systems called SNVs and Indels more accurately as tumor sample purity increased. Overall, the authors concluded that careful selection of a variant caller, depending on the circumstances, is of great importance for reliable variant detection . Variant genotyping entails a different process than variant calling. Calling merely provides evidence of a genetic variant at a specific gene locus. Variant genotyping is the process of identifying the specific allele that was detected by calling and is therefore the next step in genetic data analysis. Determining the specific change that has occurred has great value, as variants are later classified to determine their clinical significance. As with variant calling, SV genotyping is much more complex than the genotyping of SNVs or Indels, as highlighted in a comprehensive evaluation by Duan X. et al. .

2.3. Structural Variant Detection and Analysis

The term structural variant refers to a larger genetic alteration and encompasses several types of variants, such as deletions, insertions, duplications, translocations, and inversions. These are categorized differently from Indels, as they are at least 50 base pairs in size.
It is no surprise that such variants pose a challenge when it comes to computational data analysis. The general steps of structural variant detection and analysis are the same as with SNVs and Indels, and they involve alignment, calling, and genotyping . However, the algorithms used for SV analysis are specifically designed for this purpose. SV discovery and genotyping are of great importance in clinical genetics, as it has been shown that these variants can play important roles in phenotype diversity, as well as in complex genetic conditions . In the evaluation published by Duan X. et al., five long-read systems for SV genotyping were evaluated: cuteSV, LRcaller, Sniffles, SVJedi, and VaPoR. LRcaller and cuteSV had the best F1 scores for insertions and deletions, while LRcaller gave the best performance with duplications, inversions, and translocations. Firstly, the authors noted that the accuracy of the algorithms is inversely proportional to the size of the SV. This indicates that larger SVs pose a greater challenge to analyze, which is consistent with the difference in both size and analysis complexity between SVs and Indels. Secondly, the authors concluded that algorithm accuracy is greater for insertions and deletions than for duplications, inversions, and translocations. One possible reason is that complex genetic alterations such as translocations and inversions are often accompanied by additional changes, such as deletions or duplications at the site of separation or joining of genetic material. Finally, regarding depth of coverage, the authors concluded that increasing coverage beyond 20× produces diminishing returns in F1 scores . As highlighted in a recent review by Romagnoli S. et al., Oxford Nanopore Technologies developed the first sequencing system that uses nanopores as biosensors to sequence longer DNA molecules.
The authors concluded that this novel system could resolve the problem of sequencing complex SVs. As for clinical applications, the authors discuss prenatal diagnostics, as well as cancer profiling .

2.4. Data Integration and Annotation in WGS

While the analysis process covered so far provides the exact sequence of base-pair values and their potential alterations, it gives little information about the genome's functional regions. Genomic annotation is the process of determining which elements of the DNA sequence hold which function . The most frequent example is a protein-encoding gene, but others include various regulatory DNA regions. Annotation gives meaning to the analyzed sequence and provides the information necessary for the clinical evaluation of sequencing results. The process of annotation has evolved substantially over the last three decades, and the techniques used can be categorized into several major stages. The beginning of genomic annotation was marked by computer algorithms in the 1990s, which were designed to predict protein-encoding regions. From that point on, the focus was primarily on the annotation of species-specific reference genomes constructed by statistical methods. In the last few years, however, as multi-omics became a staple of innovative medicine, annotation of other functional DNA units, such as regulatory elements, has become common, if not standard . The process of data integration entails combining results produced by different sources into a single, uniform view or format. Wen B. et al. designed and proposed an efficient integration algorithm, which the authors called the NGS-Integrator. Their published paper highlights the key aspects of data integration, as the algorithm allows for the integration of multiple datasets generated by the same method, but also of datasets generated by different methods. The result of this process is a single track produced by reformatting multiple genome-wide sequencing results.
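The idea of merging several genome-wide results into one track can be shown with a toy per-position merge. This is a generic sketch under simplified assumptions (averaging whatever values each track reports), not the actual NGS-Integrator algorithm:

```python
def integrate_tracks(tracks):
    """Merge several genome-wide signal tracks (dicts of position -> value)
    into one track by averaging the values reported at each position."""
    merged = {}
    positions = set().union(*(t.keys() for t in tracks))
    for pos in sorted(positions):
        values = [t[pos] for t in tracks if pos in t]
        merged[pos] = sum(values) / len(values)
    return merged

# Three hypothetical tracks from different experiments;
# position 100 is supported by all three.
chip = {100: 9, 200: 2}
atac = {100: 8, 300: 7}
dnase = {100: 7}
print(integrate_tracks([chip, atac, dnase]))
# {100: 8.0, 200: 2.0, 300: 7.0}
```

A merge like this makes downstream steps, such as scanning for candidate regulatory domains, operate on one uniform input instead of several heterogeneous files, which is the practical benefit the authors describe.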
The authors conclude that a time and memory-efficient algorithm can significantly facilitate downstream analysis such as identifying regulatory DNA domains. In genetic research and practice, the process of data integration is essential for the reproducibility of the analytic process, as well as the comparison of experimental results . In WGS, alignment and mapping of sequencing reads implies arranging the reads so that values at specific points can be compared . At these points, we expect these values to be equal, as these are homologous points in the reference genome, and interpret any mismatch as a variation in the sequence being tested. In clinical practice, this is the first step in genetic data analysis and involves aligning the sample genome to a reference genome and observing potential differences, which are later interpreted as genetic variants. Approximating homology between two sequences using similarities in sequencing reads was pioneered by Needleman and Wunsch in the form of optimal pairwise global alignment. This then led to the development of optimal pairwise local alignment by Smith and Waterman, which was designed for the alignment of subsequences. The conceptual solution for effective whole genome alignment was to make a division into subsequences and then apply local alignment algorithms . In clinical genetic sequencing today, analysis of sequencing reads is performed in processes referred to as data analysis pipelines. These can be categorized into upstream pipelines, which carry out the task of read alignment and mapping, and downstream pipelines, designated for genetic variant calling. A study by Betschart RO et al. compared two alignment and mapping approaches in WGS, GATK utilizing BWA-MEM2 2.2.1, which is most frequently used, and DRAGEN 3.8.4. While the authors conclude that DRAGEN is superior to GATK, they also highlight the important aspects of comparison when it comes to systems for genome alignment and mapping. 
Firstly, the comparison of alignment systems was broken down by single nucleotide variants (SNVs) and insertion-deletion variants (Indels). SNVs represent value changes at homologous points in compared reads and will be detected as a mismatch. Indels, on the other hand, represent an added or missing value, which causes the entire read to shift. It is due to this difference that SNVs and Indels represent different challenges for alignment systems. Furthermore, comparison was categorized by Indel size, which can imply the gain or loss of multiple values in the sequence, as well as whether a coding or non-coding region is in question. Once the algorithms have been stratified in accordance with these differences, parameters such as time to completion and precision in detection could be observed . Genetic mapping has become a great asset in the personalized medical approach in many medical disciplines, but perhaps most evidently in oncology, with ongoing projects which aim to complete the global mapping of the cancer genome. Ganini C. et al. extensively highlight this matter in their comprehensive paper, discussing all aspects of this line of research in modern times. Neurology is another of many disciplines utilizing genetic mapping, as is highlighted in research by Png G. et al. which describes the mapping of the serum proteome to neurological disorders . Variant calling is the process of identifying genetic variants from received sequencing data . This is the next step in data analysis following alignment and is performed by downstream data analysis pipelines. Variant calling can be categorized into germline and somatic variant calling. Germline variant calling implies that the interpretated variants are generally in a similar haplotype configuration of the reference genome while respecting the paradigms of mendelian principles in most cases. 
However, somatic variant calling allows for the existence of multiple cell lines and the development of frequent de novo mutations violating Mendelian principles. Somatic variant calling is useful in detecting cell mosaicism within an individual and has an especially important application in genotyping tumor cells. Variant calling algorithms can also be categorized by different types of genetic variants. SNVs and smaller Indels, up to 20 base pairs, can be detected directly after alignment and often require only minor local realignment once a candidate site has been detected. On the other hand, structural variants (SVs) and copy number variations (CNVs) are not as simple to precisely determine. These algorithms primarily rely on the depth of coverage, as well as assembly-based sequence reconstruction after a candidate for SV or CNV has been identified . A study published by Pei S. et al. systematically evaluated different variant callers on 12 next-generation sequencing datasets for both germline and somatic variants. The germline callers Sentieon, GATK, and DeepVariant all had an F1 score of over 0.99 and a 30x coverage, results which show a high sensitivity and accuracy in all three systems in analyzing germline variants. Somatic callers such as Mutect2 and TNscope were tested in calling somatic variants. The systems achieved high F1 scores overall, but more interestingly, a correlation between tumor sample purity and accuracy was noted. Both systems showed better accuracy in calling both SNVs and Indels as the tumor sample purity increased. Overall, the authors concluded that careful selection of variant caller, depending on the circumstances, is of great importance to reliable variant detection . Variant genotyping entails a different process than variant calling. Calling merely provides evidence of a genetic variant in a specific gene locus. 
Variant genotyping is the process of identifying the specific allele that was detected by calling and is therefore the next step in genetic data analysis. Determining the specific change that has occurred has great value, as variants are later classified to determine their clinical significance. Similar to variant calling, SV genotyping is much more complex than the genotyping of SNVs or Indels, as is highlighted in a comprehensive evaluation by Duan X. et al. . The term structural variant refers to a larger genetic alteration and encompasses several types of variants such as deletions, insertions, duplications, translocations, and inversions. These are categorized differently from Indels, as they are at least 50 base pairs in size. It is no surprise that such variants pose a challenge when it comes to computational data analysis. The general steps of structural variant detection and analysis are the same as with SNVs and Indels, and they involve alignment, calling, and genotyping . However, the algorithms used for SV analysis are specifically designed for this purpose. SV discovery and genotyping is of grave importance in clinical genetics, as it has been shown that these variants can have important roles in phenotype diversity, as well as complex genetic conditions . In the evaluation published by Duan X. et al., five long-read systems for SV genotyping were evaluated, including cuteSV, LRcaller, Sniffles, SVJedi, and VaPoR. LRcaller and cuteSV had the best F1 scores for insertions and deletions, while LRcaller gave the best performance with duplications, inversions, and translocations. Firstly, the authors noted that the accuracy of the algorithms is inversely proportional to the size of the SV. This would indicate that larger SVs pose a greater challenge to analyze, which is concurrent with the difference in both size and analysis complexity between SVs and Indels. 
Secondly, the authors concluded that algorithm accuracy is greater for insertions and deletions than for duplications, inversions, and translocations. One possible reason is that complex genetic alterations such as translocations and inversions are often accompanied by additional changes, such as deletions or duplications, at the site of separation or joining of genetic material. Finally, regarding depth of coverage, the authors concluded that sequencing beyond a depth of 20× produces diminishing returns in the F1 scores . As highlighted in a recent review by Romagnoli S. et al., Oxford Nanopore Technologies developed the first sequencing system that uses nanopores as biosensors to sequence longer DNA molecules. The authors concluded that this novel system could resolve the problem of sequencing complex SVs. As for clinical applications, the authors discuss prenatal diagnostics, as well as cancer profiling . While the process of DNA analysis covered so far provides the exact sequence of base pair values and their potential alterations, it gives little information about the sequence’s functional regions. Genomic annotation is the process of determining which elements of the DNA sequence hold which function . The most frequent example is a protein-encoding gene, but others include various regulatory DNA regions. Annotation gives meaning to the analyzed sequence and provides the information necessary for clinical evaluation of sequencing results. The process of annotation has evolved substantially over the last three decades, and the techniques used can be categorized into several major stages. The beginning of genomic annotation was marked by computer algorithms in the 1990s designed to predict protein-encoding regions. From that point on, the focus was primarily on the annotation of species-specific reference genomes constructed by statistical methods.
In the last few years, however, as multi-omics became a staple of innovative medicine, annotation of other functional DNA units, such as regulatory elements, has become common, if not a standard . The process of data integration entails combining results produced by different sources into a single, uniform view or format. Wen B. et al. designed and proposed an efficient integration algorithm, which the authors called the NGS-Integrator. Their published paper highlights the key aspects of data integration, as the algorithm allows for the integration of multiple datasets generated by the same method, but also of datasets generated by different methods. The result of this process is a single track produced by reformatting multiple genome-wide sequencing results. The authors conclude that a time- and memory-efficient algorithm can significantly facilitate downstream analysis, such as identifying regulatory DNA domains. In genetic research and practice, the process of data integration is essential for the reproducibility of the analytic process, as well as for the comparison of experimental results . The American College of Medical Genetics and Genomics (ACMG) classification places variants into five categories: pathogenic (P), likely pathogenic (LP), variant of uncertain significance (VUS), likely benign (LB), and benign (B). The classification criteria will be further described in the following text.

3.1. Functional Annotation and Prioritization of Variants

Functional annotation and prioritization of genetic variants is an essential step when it comes to estimating the significance of a genetic variant concerning a certain clinical phenotype . When using WGS for diagnostics of rare diseases, determining which of the many discovered variants are responsible for the presented disorder can be a great challenge. One aspect of variant prioritization is determining the mutation tolerance of the specific gene locus in question.
As several studies have shown, mutation rates vary across the human genome, meaning some loci are more vulnerable to mutations than others. An example of this was demonstrated in a study by Petrovski S. et al., in which the authors concluded that gene loci responsible for Mendelian genetic diseases are significantly more susceptible to variation occurrence . Another aspect of variant prioritization is determining the mutational architecture of the variant and its correlation with a given phenotype. This is an important aspect, as it is well known that abnormalities in different regions of the same gene can lead to different clinical manifestations. Finally, the process of variant prioritization involves determining the mode of inheritance, zygosity, and origin of a variant. This is essential when observing a patient and their condition relative to their family members, who might also be candidates for WGS. As an example, a heterozygous variant found in a patient, but also in unaffected members of their family, can often be ruled out with high probability as the cause of an autosomal dominant condition with full penetrance. Likewise, if the same pathological patterns repeat within a family across generations, WGS testing of multiple family members can quickly elucidate which variant might be responsible . The modern process of variant prioritization utilizes highly effective prioritization algorithms. One such example can be found in a study published by Schluter A. et al., in which the authors tackled the problem of diagnosing genetic white matter disorders (GWMDs) . The authors derived a seed group of GWMD-related genes from their patients’ Human Phenotype Ontology terms. Following this step, an interactome-prioritization algorithm was applied, based on network expansion of the created seed group. The term interactome refers to all molecular interactions within a particular cell.
The described algorithm observes the molecular interactions between products of genes from the seed group and other molecular products that have their own corresponding genes. These genes then become the next candidates for testing, and observing all interactions of their products grows the network even further. Using this algorithm, the authors were able to discover novel candidate genes for GWMDs and deemed their method more time-efficient than the classical targeted diagnostic approach.

3.2. Variant Databases and Population Frequency Analysis

Genetic variant databases are an important tool in the interpretation of genetic variants, as well as in the discovery of new relationships between genes and diseases. Over the last decade, several projects, pioneered by the 1000 Genomes Project, have undertaken the task of generating and aggregating large collections of human genetic sequencing data . As a result, comprehensive and accurate genome-wide estimations of variant frequencies in the human population have become publicly available. These large-scale variant databases are not without their limitations, the most obvious being extremely difficult quality control. The data curated in these databases are acquired from an immensely large number of different sources, from large-scale population studies to individual reports made by clinicians. Major examples of such databases are gnomAD, OMIM, HGMD, UniProt, dbSNP, PubMed, ExAC, and ClinVar, which are responsible for the curation of a large number of reported variants and their frequency analysis. Additionally, web-based tools like the UCSC Genome Browser and Ensembl facilitate the visualization and analysis of genomic data and contribute to the curation of reported variants. Determining the frequency of a specific genetic variant can be a useful step in its interpretation.
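In an analysis pipeline, this frequency step typically takes the form of a filter against database allele frequencies. A minimal sketch with invented frequencies (the 0.1% cutoff is only illustrative; an appropriate threshold depends on the suspected mode of inheritance and disease prevalence):

```python
def rare_variants(variants, max_af=0.001):
    """Keep variants whose population allele frequency is below the
    cutoff. Variants absent from the reference database (af is None)
    are retained rather than discarded, since a novel variant may
    simply be unobserved so far.
    """
    return [vid for vid, af in variants.items()
            if af is None or af < max_af]

# Hypothetical database allele frequencies for three candidates.
observed = {"var_common": 0.12, "var_rare": 0.0002, "var_novel": None}
kept = rare_variants(observed)
```

Common variants are removed from consideration early, which typically shrinks the candidate list by orders of magnitude before the more expensive interpretation steps.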
While the low occurrence of a variant is not sufficient to declare it pathogenic, there is an undeniable correlation between the rarity and pathogenicity of genetic variants . Apart from its usefulness in individual phenotype assessment, population frequency analysis also plays a vital role in genetic epidemiology. On one hand, it can be used to determine the frequencies of variants for autosomal recessive disorders within a subpopulation or nation. The findings of such studies can prove immensely significant, as they can draw attention to abnormally high occurrences of rare conditions in a specific region and lead to the implementation of new health protocols, such as genetic screening. One example is the study published by Scotet V. et al., in which the authors discuss the epidemiology of cystic fibrosis and genetic-based health policies, one of which is genetic screening . On the other hand, population frequency analysis is also of great significance in cancer epidemiology. An example is a research paper published by Zavala V.A. et al., which offers a comprehensive view of the genetic epidemiology of breast cancer in Latin America. The authors evaluate the available knowledge of breast cancer epidemiology, as well as genome-wide association studies performed in countries in Latin America. They conclude that a population-specific frequency analysis is prudent when constructing a risk prediction model, as a model built on European population data can prove inaccurate in this setting .

3.3. Clinical Significance and Pathogenicity Assessment

Determining the pathogenicity and clinical significance of a genetic variant represents the final step in individual WGS testing. While this is the most important aspect of clinical genetics, it can also be the most challenging due to the complexity of variant classification. Early variant classifications categorized genetic variants into two groups.
Variants with a population frequency higher than 1% were labeled as polymorphisms, while variants with frequencies lower than 1% were called mutations . This, however, often led to confusion, as this classification provided no information on the respective variant’s impact on a clinical phenotype. In 2015, ACMG proposed a new classification system that categorized variants by their likelihood of phenotype impact, or pathogenicity. Pathogenicity, however, is always interpreted in the context of a specific condition, as well as the mode of inheritance . Additionally, models exist which utilize a Bayesian framework, as well as VCEP protocols . As previously stated, genetic variant databases and population frequency analysis play a vital role in the classification of genetic variants. For this reason, each variant classification is accompanied by a category showing its corresponding level of evidence. These levels of evidence are (1) population, (2) computational, (3) functional, and (4) segregation data. A stronger level of evidence for a certain variant classification implies a larger sample in which the variant has been observed. Underreported variants, on the other hand, often fall into the VUS category and are reclassified as the level of evidence increases . Modern technology has given rise to computational, or in silico, prediction of variant pathogenicity. Garcia F.A.O. et al. provide an overview of in silico prediction tools from the early 2000s to today. From the mid-2000s, in silico prediction tools examined the conservation of DNA regions in order to assess the likelihood of a variant having an impact on a clinical presentation. Once large-scale databases of sequencing data emerged, the capabilities of these tools improved, as they had a much larger sample from which to derive data. Machine learning systems (MLSs) are also a noteworthy asset in variant analysis .
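As a rough illustration of how such evidence categories are combined into the five classes, the following sketch implements a small subset of the 2015 ACMG/AMP combining rules. The full guideline defines many more combinations, so this is illustrative only, not a clinically usable implementation:

```python
def classify_acmg(evidence):
    """Toy sketch of a subset of the 2015 ACMG/AMP combining rules.
    `evidence` is a list of evidence codes such as "PVS1", "PS3",
    "PM2", "PP3", "BA1", "BS1", or "BP4".
    """
    pvs = sum(e.startswith("PVS") for e in evidence)  # very strong
    ps = sum(e.startswith("PS") for e in evidence)    # strong
    pm = sum(e.startswith("PM") for e in evidence)    # moderate
    pp = sum(e.startswith("PP") for e in evidence)    # supporting
    ba = sum(e.startswith("BA") for e in evidence)    # stand-alone benign
    bs = sum(e.startswith("BS") for e in evidence)    # strong benign
    bp = sum(e.startswith("BP") for e in evidence)    # supporting benign

    pathogenic = (pvs >= 1 and (ps >= 1 or pm >= 2)) or ps >= 2
    likely_pathogenic = ((pvs >= 1 and pm == 1) or (ps == 1 and pm >= 1)
                         or (ps == 1 and pp >= 2) or pm >= 3)
    benign = ba >= 1 or bs >= 2
    likely_benign = (bs == 1 and bp >= 1) or bp >= 2

    if (pathogenic or likely_pathogenic) and (benign or likely_benign):
        return "VUS"  # conflicting evidence defaults to uncertain
    if pathogenic:
        return "Pathogenic"
    if likely_pathogenic:
        return "Likely pathogenic"
    if benign:
        return "Benign"
    if likely_benign:
        return "Likely benign"
    return "VUS"
```

The rule structure makes the earlier point concrete: a variant with sparse evidence lands in VUS by default and only moves out of that category as more evidence accumulates.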
Supervised MLSs require large databases in order to be “trained” to assess pathogenicity but can utilize a number of biochemical and mathematical parameters which are out of reach for tools focused on conservation. On the other hand, unsupervised MLSs undergo no training process and are therefore considered less reliable but also less biased, as their analytic process does not depend on the sample they are “trained” with. The authors conclude that in silico prediction tools have an important role in providing evidence for variant classification, and their further development will provide better diagnostic accuracy in clinical genetics .

3.4. Interpretation of Non-Coding Variants

Coding genetic regions make up only 1% of the human genome, while the rest pertains to non-coding regions . The drawback of whole exome sequencing (WES) and many classifying algorithms as diagnostic tools is their exclusive focus on coding variants. Generally, variants in non-coding regions include deep intronic variants, promoter or enhancer variants, structural variants, and chromatin configuration variants. Despite not coding for specific proteins, variants in these regions can still affect gene function and are associated with medical conditions. The promise of WGS is a complete overview of the human genome, coding and non-coding regions alike, making it a far more powerful tool for data collection and diagnostics . With the integration of data science and data analytics into modern medicine, WGS will provide a much greater volume of data, likely sufficient for the optimization of machine learning and deep learning models. This might, in turn, facilitate the development of new classification algorithms with a much broader capacity for pathogenicity determination, including non-coding variants. Examples of in silico studies on non-coding variants can already be found in the literature, conducted on large data repositories for non-coding regions such as HaploReg and RegulomeDB .
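Conceptually, interpreting a non-coding variant starts with locating it relative to annotated functional elements. A toy interval lookup (coordinates and labels are invented; real annotation uses genome-wide tracks and indexed interval structures):

```python
# Hypothetical annotation track for a single gene region:
# (start, end, label) in 0-based, half-open coordinates.
ELEMENTS = [
    (800, 1000, "promoter"),
    (1000, 1200, "exon 1"),
    (1200, 2000, "intron 1"),
    (2000, 2300, "exon 2"),
]

def locate(pos):
    """Return the functional element overlapping a position, or
    'intergenic' when the position falls outside every annotated
    element."""
    for start, end, label in ELEMENTS:
        if start <= pos < end:
            return label
    return "intergenic"

# A promoter hit is non-coding yet potentially functional, which is
# exactly the class of variant an exome-only approach would miss.
region = locate(900)
```

Only the two exon intervals would be visible to WES; WGS covers the promoter, intronic, and intergenic positions as well.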
4.1. Mendelian Disorders and Rare Disease Genomics

Rare Mendelian diseases, disorders caused by a single gene, show considerable variation in clinical appearance and severity, conveying the principle that many other factors affect the outcome of the disease. Genetic modifiers are genetic loci that may affect how disease-causing mutations manifest themselves . They play a critical role in regulating the phenotype of Mendelian diseases, as they may either attenuate or aggravate the symptoms associated with the disease. Monogenic disorders, commonly referred to as Mendelian disorders, are a class of hereditary diseases brought on by changes in a single gene . Mendel’s rules of inheritance apply to many conditions which show recognizable inheritance patterns such as autosomal dominant, autosomal recessive, or X-linked inheritance. Huntington’s disease, sickle cell anemia, and cystic fibrosis are a few examples of Mendelian illnesses. It is worth noting that these diseases can have a complex genetic etiology. For example, 16 different genes have been associated with an osteogenesis imperfecta phenotype . These complexities are relevant, as they can lead to multiple potential genetic therapeutic approaches . Genetic modifiers affect the way a disease presents itself through many different mechanisms, including gene expression, protein function, and cellular pathways . Identifying and labeling genetic modifiers in rare Mendelian diseases can be a difficult task. It is difficult to acquire sufficient data for analysis because of the rarity of these diseases and the complexity of the genetic landscape.
However, by employing certain experimental research approaches, such as genome-wide association studies, whole exome sequencing, and functional studies in model organisms, studying said genetic modifiers can be made easier. The clinical ramifications of understanding genetic modifiers are significant. It is now possible to predict illness outcomes more accurately, classify individuals into various risk groups, and create individualized treatment plans by identifying specific modifiers. Furthermore, it is also feasible to design pharmaceuticals that specifically target or manipulate the pathways these modifiers affect. Genetic modifiers play a significant role in the development of the clinical presentation and severity of rare Mendelian diseases. Understanding these modifiers opens up possibilities for better diagnosis, prognosis, and therapeutic approaches, ultimately improving patient care in the setting of rare genetic disorders .

4.2. Genomic Medicine and Precision Healthcare

Recently, there have been many advances in genetics that hold the potential to revolutionize healthcare. Genomic medicine, precision medicine, and personalized medicine are all important, interrelated practices that are prevalent in clinical practice . Genomic medicine refers to the application of a patient’s genomic data, such as DNA sequence variants and other genetic traits, to inform clinical judgment. In order to improve diagnostics, predict illness risk, and create targeted therapeutics, genomic medicine strives to understand the genetic basis of diseases. An excellent example of this approach is the prediction of illness risk in cardiovascular diseases . Precision medicine is an approach that bases treatment choices on an individual’s unique genetic makeup, environmental influences, and lifestyle choices. It entails customizing medical interventions to each patient’s unique traits in an effort to maximize therapeutic results and reduce side effects.
Although it utilizes genomic data, precision medicine also accounts for non-genetic factors. Precision medicine is seen as a type of clinical practice within personalized medicine, which covers factors other than genetics . In addition to genetic and clinical data, it considers the preferences, values, and circumstances specific to each patient. Personalized medicine emphasizes the significance of adapting medical choices to the particular requirements and traits of each patient. The fields of personalized, precision, and genomic medicine are linked and have similar aims. Rather than being mutually exclusive, these terms represent various viewpoints within the developing field of personalized healthcare. However, there are still obstacles in the way of implementing genomic medicine, precision medicine, and personalized medicine . These issues include the necessity for interdisciplinary cooperation, complex genetic data interpretation and communication, integration into current healthcare systems, and ethical concerns. These concepts are fluid and constantly evolving, so developments in technology, data analysis, and knowledge of the genome will continue to shape the field. To fully achieve the potential of genomic medicine, precision medicine, and personalized care, it is essential to continue research, education, and collaboration with other researchers with vested interests.
However, there are still obstacles in the way of implementing genomic medicine, precision medicine, and personalized medicine . These include the need for interdisciplinary cooperation, the interpretation and communication of complex genetic data, integration into current healthcare systems, and ethical concerns. These concepts are fluid and constantly evolving, so developments in technology, data analysis, and knowledge of the genome will continue to shape the field. To fully achieve the potential of genomic medicine, precision medicine, and personalized care, it is essential to continue research, education, and collaboration among stakeholders.

5.1. Prenatal and Neonatal Genetic Testing

Newborn screening (NBS) has become an essential tool for disease prevention and treatment from an early age. It takes a proactive rather than a reactive approach, allowing disorders to be discovered in their earlier stages . With the advent of next-generation sequencing and its application in newborn screening, two advantages present themselves: WGS can predict many more diseases while simultaneously improving the accuracy of results, essentially serving as a preventative measure in neonatal and pediatric care . One benefit is that WGS completely circumvents the arduous and costly process of traditional diagnosis. Furthermore, due to the extensive information that WGS provides, physicians can predict with greater accuracy which diseases a patient may develop and how probable such a development is . With WGS, predicting disorders prior to symptom onset, ten, fifteen, or even twenty years in the future, might be possible. Given this information, immediate steps can be taken for early monitoring and treatment, mitigating the disease’s emotional, physical, and financial impact on both the afflicted and their family members.
WGS data can also be used for genetic counseling for potential future pregnancies . With the wealth of information that WGS provides, clinicians would be able to screen for both metabolic and non-metabolic genetic disorders . Given these advantages, WGS in NBS can greatly expedite the process of diagnosis and treatment and can serve as a vital tool for both physician and patient.

5.2. Cancer Genomics and Precision Oncology

WGS has the ability to detect important somatic mutations in tumor tissue . Through early detection of cancer mutations, each malignant disorder can be characterized in great detail, which facilitates a personalized approach. Several factors, including different input amounts, tumor purity, various library construction protocols, sequencing instruments, and bioinformatics pipelines, can impact somatic mutation detection. WGS generated better data than whole exome sequencing (WES), which showed higher G/C content and more adapter contamination . Furthermore, formalin-fixed paraffin-embedded (FFPE) blocks showed more DNA degradation with WES than with WGS; as a result, WGS is better suited to this method of tissue preservation. Mutation callers such as MuTect2 or Strelka2 can be used . Strelka2 had the best overall reproducibility in WGS runs but the worst in WES runs, while MuTect2 performed consistently well in WES. WGS has much greater reproducibility and consistency than WES and is subject to less variation. The importance of precision oncology is highlighted not only by examples with somatic variants but by germline variants as well. One excellent example of the importance of WGS in cancer treatment can be found in hereditary gynecological cancers, such as ovarian cancer and breast cancer . The genetic etiology of these conditions is most often associated with germline variants in the BRCA1 and BRCA2 genes, as well as BARD1 , PALB2 , ATM , MLH1 , MSH2 , AKT1 , CDH1 , CTNNB1 , MSH6 , NBN , PIK3CA , PMS2 , PRKN , STK11 , TP53 , and others.
Understanding the underlying genetic mechanisms of these cancers has led to the development and application of novel therapeutic agents, such as PARP inhibitors. It has been shown that BRCA1/2 , alongside other genes, take part in the repair of double-strand DNA breaks through homologous recombination. When this mechanism is defective, tumor cells rely heavily on the PARP repair mechanism, unlike healthy cells with functional homologous recombination. For this reason, PARP inhibitors selectively cause DNA damage to accumulate in tumor cells, leading to their apoptosis . A recent publication emphasizes the importance of cancer genetics: using WGS technology, the authors analyzed 13,880 solid tumor genotypes. The results of the study provided great insight into the landscape of cancer genomics, likely facilitating further research in the field of oncology .

5.3. Pharmacogenomics and Personalized Medicine

With the significant decrease in the price of DNA sequencing, a new field known as pharmacogenomics (PGx) is being pioneered. PGx is the study of how genetic factors affect the way drugs are metabolized in an individual organism . Through genome sequencing, PGx can boost therapeutic benefits and reduce negative side effects. It has been theorized that genetic factors can account for up to 95% of an individual’s drug response, and their contribution to the total number of adverse reactions is estimated to be as high as 20% . Genome sequencing reveals an enormous amount of information and enables proper drug and dose selection through PGx . There are several examples of PGx proving very useful in clinical practice. Abacavir is frequently used in combination with other antiretroviral drugs to treat HIV. However, between 5 and 8% of infected individuals can develop a very severe hypersensitivity reaction due to a major histocompatibility complex class I allele (HLA-B*5701) .
Through PGx screening for this allele, hypersensitivity towards abacavir decreased by 60%. Another study found that the presence of the allele is correlated with abacavir sensitivity, illustrating the importance of PGx testing when prescribing medication. Another example is codeine, which has demonstrated variable toxicity depending on CYP2D6 variants . In the same manner, statin efficacy and toxicity have shown variability across different CYP3A4 and SLCO1B1 variants. Up to 10% of patients exhibit muscular symptoms, which might be avoided with a personalized PGx approach . Studies have also shown that clopidogrel has variable efficacy in different CYP2C19 genotypes . Additionally, PGx testing can be beneficial when prescribing warfarin, as well as novel oral anticoagulant therapeutics, as it allows for the identification of clinically relevant polymorphisms . Studies conducted at the University of Chicago and St. Jude Children’s Research Hospital both concluded that PGx was important and feasible . In another study from the Mayo Clinic, the authors reported that between 91 and 99% of the population carried at least one PGx variant that could cause an adverse drug reaction . For example, variation in the CYP2D6 gene, which encodes a key drug-metabolizing enzyme, can have vastly different consequences, from negligible effects to cases of overdose. PGx addresses this issue by sequencing a person’s genome and then recommending whether to take certain medications. Progress towards PGx is steady, as tests are being conducted in approved laboratories and are even starting to become mandatory in certain countries. Baylor College of Medicine includes PGx testing for both warfarin sensitivity and clopidogrel metabolism, enabling patients to take the medication best suited for them. Additionally, PGx has a big role to play in moderating drug administration in psychiatry and has already proven useful in certain clinical cases .
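At its core, putting PGx results into practice resembles a lookup from a genotype call to a prescribing recommendation. The sketch below illustrates that shape in Python; the gene–phenotype pairs echo the examples above, but the entries and wording are simplified placeholders, not curated clinical guidance.

```python
# Illustrative sketch only: a genotype-to-recommendation lookup in the
# spirit of the gene-drug pairs discussed above. Entries are simplified
# placeholders; real prescribing follows curated guidelines (e.g. CPIC).
PGX_RULES = {
    ("HLA-B*5701", "positive"): {"abacavir": "avoid (hypersensitivity risk)"},
    ("CYP2C19", "poor metabolizer"): {"clopidogrel": "consider an alternative antiplatelet"},
    ("CYP2D6", "ultrarapid metabolizer"): {"codeine": "avoid (risk of opioid toxicity)"},
    ("SLCO1B1", "decreased function"): {"simvastatin": "lower dose or alternative statin (myopathy risk)"},
}

def pgx_flags(genotypes, prescribed_drugs):
    """Return drug-specific flags raised by a patient's genotype calls."""
    flags = {}
    for genotype, drug_advice in PGX_RULES.items():
        if genotype in genotypes:  # genotypes is a set of (gene, phenotype) calls
            for drug, advice in drug_advice.items():
                if drug in prescribed_drugs:
                    flags[drug] = advice
    return flags

patient = {("CYP2C19", "poor metabolizer"), ("SLCO1B1", "decreased function")}
flags = pgx_flags(patient, ["clopidogrel", "metformin"])
```

Here only clopidogrel is flagged: the SLCO1B1 call matches a rule, but no statin is on the prescription list. Production systems replace the hand-written table with curated gene–drug knowledge bases.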
PGx has the ability to revolutionize the way healthcare is administered and could predict with a great deal of certainty which treatment option is most appropriate . Drug side effects can be a major treatment obstacle, and PGx tackles this issue by providing solutions specifically tailored to a patient’s genetic code. PGx can maximize the efficacy of drugs and minimize debilitating side effects, ensuring the best healthcare is administered to patients . However, an opposing viewpoint regarding the clinical utility of WGS in pharmacogenomics can also be found in the literature and is therefore worth mentioning. More skeptical authors have concluded that WGS does not yet warrant clinical implementation in this regard due to insufficient knowledge and an absence of clear guidelines. In their view, the expectation of improved clinical outcomes and better-informed clinical decision-making due to PGx is still out of reach and warrants further research .

5.4. Infectious Disease Genomics and Outbreak Investigations

Outbreak investigations are nearly always employed at the start of an outbreak to determine the specific strain, its method of spreading, and ways to prevent it . With this information, scientists can begin to tackle the problem methodically, using WGS of the pathogen to aid their efforts. Antibiotic resistance has become one of the largest public health crises, with even the strongest antibiotics having little to no effect on certain bacterial strains. WGS can be used to predict resistance phenotypes in E. coli and S. aureus , which have become increasingly resistant to antibiotics . Furthermore, mutations in these bacteria can be detected by WGS. Evidence from WGS has shown that pneumococcal bacteria have begun to switch capsular serotypes, helping them evade phagocytosis by the immune system . This information allowed scientists to develop a more effective vaccine better suited to counter pneumococcal bacteria.
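As a toy illustration of how WGS output can be screened for resistance determinants of the kind mentioned above, the sketch below searches an assembled genome for known marker sequences. The 20-bp fragments and names are invented stand-ins, not validated alleles; production pipelines align contigs against curated resistance databases.

```python
# Sketch: flag resistance determinants by substring search against an
# assembly. The "marker" fragments below are invented stand-ins, not
# validated alleles; real tools align against curated databases.
MARKERS = {
    "mecA (methicillin resistance)": "ATGAAAAAGATAAAAATTGT",
    "blaTEM (beta-lactamase)": "ATGAGTATTCAACATTTCCG",
}

def detect_markers(assembly: str) -> list[str]:
    """Return the names of marker sequences found in the assembly."""
    return [name for name, fragment in MARKERS.items() if fragment in assembly]

assembly = "GGGCCC" + "ATGAAAAAGATAAAAATTGT" + "TTTAAA"
hits = detect_markers(assembly)
```

Exact substring search is, of course, only the simplest case; real determinants vary in sequence, so alignment with allowed mismatches is used in practice.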
Furthermore, by examining the entire genome and its subsequent evolution, scientists can determine what allows bacteria to become virulent as well as the cause of their resistance. They can then develop ways to combat the bacteria and create vaccines for future mutations, thus minimizing the effects of the disease. Understanding the cause of pathogen spread is crucial in outbreak investigations by public health officials. For example, during an outbreak of MRSA in China, scientists learned through WGS that the sasX gene was crucial for the successful spread of the pathogen . In addition, WGS can be used to characterize different strains. After the rubella virus was eradicated in the United States, cases still appeared; WGS of these viruses determined that they had been introduced from abroad, as their genetic profiles matched rubella strains circulating in other countries . Similarly, hospitals that persistently suffered from C. difficile outbreaks managed to uncover the underlying cause of the infections using WGS . WGS offers invaluable information to outbreak investigations, helping scientists end current outbreaks and providing preventative measures against future ones. As WGS technology progresses, outbreak investigations can become more efficient, more accurate, and less costly. It offers scientists the opportunity to deepen their understanding of resistance and to create much more effective medicine in the fight against ever-mutating pathogens.
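The outbreak tracing described above often comes down to counting single-nucleotide differences between aligned isolate genomes: isolates separated by only a handful of SNPs likely belong to one transmission chain. A minimal sketch, with invented ten-base sequences and an arbitrary threshold (real analyses use core-genome alignments and context-specific cutoffs):

```python
# Sketch of outbreak tracing by pairwise SNP distance. Sequences and
# the clustering threshold are invented for illustration.
def snp_distance(a: str, b: str) -> int:
    """Count positions at which two aligned sequences differ."""
    assert len(a) == len(b), "sequences must be aligned"
    return sum(x != y for x, y in zip(a, b))

def same_cluster(a: str, b: str, threshold: int = 2) -> bool:
    """Treat isolates within `threshold` SNPs as one transmission cluster."""
    return snp_distance(a, b) <= threshold

outbreak  = "ACGTACGTAC"
isolate_1 = "ACGTACGTAT"   # 1 SNP from the outbreak strain
isolate_2 = "ACTTGCGAAC"   # several SNPs away: likely unrelated
```

With these toy sequences, isolate_1 clusters with the outbreak strain while isolate_2 does not.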
6.1. Clinical Utility and Cost-Effectiveness of WGS

WGS offers a comprehensive analysis of an individual’s entire genetic code, providing invaluable insights into their genetic makeup and potential health risks. One of the key advantages of WGS is its ability to diagnose rare and complex genetic disorders with a high degree of accuracy. This not only improves patient outcomes but also reduces the burden of prolonged and inconclusive diagnostic processes . Moreover, the cost-effectiveness of WGS has improved over the years, making it a viable option for clinical use. The decreasing cost of sequencing and data analysis, coupled with the potential for early disease detection and prevention, positions WGS as a valuable investment in healthcare. In addition to diagnosing rare diseases, WGS plays a crucial role in oncology, pharmacogenomics, and personalized medicine. It allows oncologists to identify specific genetic mutations in cancer patients, guiding the selection of targeted therapies for better treatment outcomes. Another aspect of the cost-effectiveness of WGS is the elimination of the necessity for additional diagnostic procedures. An excellent example of this is the use of whole exome sequencing in the diagnostics of autosomal genetic diseases. While WES has been a diagnostic standard for these conditions for a long time, its results can be inconclusive and appear as a diagnostic “dead-end”. A recently published study observed the utility and benefit of WGS testing in WES-negative patients .
The authors concluded that this was a beneficial approach, as new and useful data were obtained for a number of patients in the cohort. Based on their results, they propose the integration of WGS into the diagnostics of autosomal disorders. WGS offers significant clinical utility and cost-effectiveness by enabling precise diagnoses, personalized treatments, and improved patient outcomes . As technology continues to advance and costs decrease, the integration of WGS into clinical practice is expected to become even more widespread, revolutionizing healthcare delivery and enhancing the quality of patient care.

6.2. Integration of WGS into Electronic Health Records

The integration of WGS into electronic health records (EHRs) represents a significant advancement in healthcare technology . This integration offers numerous benefits, from enhancing patient care to facilitating cutting-edge research. By incorporating WGS data into EHRs, physicians can better understand a patient’s genetic predispositions to various diseases, quickly search through a patient’s genomic data, accelerate the diagnostic process, and tailor treatment plans accordingly. Furthermore, by aggregating de-identified genomic data from EHRs, physicians can conduct large-scale studies to uncover novel insights into the genetic basis of diseases . This data-sharing approach fuels medical research, potentially leading to breakthroughs in the understanding and treatment of various conditions. However, challenges such as data security, privacy, and the need for interoperability standards must be addressed for successful integration . Protecting patient confidentiality and ensuring seamless data exchange between different healthcare systems are paramount concerns. Integrating WGS into electronic health records offers a promising avenue for advancing patient care and medical research.
While challenges remain, the potential benefits in terms of personalized medicine and scientific discovery make this integration a compelling area of development in healthcare technology.

6.3. Genetic Counseling and Patient Education in WGS

Genetic counseling and patient education play pivotal roles in harnessing the power of WGS in healthcare. In an era where genetic information is increasingly accessible, it is essential to guide individuals and families in navigating the complexities of their genomic data . WGS offers numerous advantages, such as early disease detection and personalized medicine. However, it also raises ethical dilemmas, privacy concerns, and psychosocial challenges. Genetic counseling and patient education are instrumental in helping individuals and families navigate this intricate landscape . They equip patients with the knowledge and emotional support needed to make informed choices about genetic testing, treatment options, and family planning. The integration of genetic counseling and patient education is paramount in realizing the full potential of WGS in the healthcare system. These essential components empower individuals to make informed decisions about their genetic information, ultimately leading to improved health outcomes and a more equitable healthcare system.
Short-read sequencing represents the initial generation of NGS technologies that followed Sanger sequencing. The length of each individual read in this method is 75–800 bp, and the reads are massively sequenced in parallel. This is achieved by fragmentation of the DNA strand and subsequent amplification of each short fragment . Amplification is performed either by emulsion PCR or bridge PCR, depending on the sequencing platform . While the technology of short-read sequencing was revolutionary at its dawn, certain shortcomings became more apparent over the years. The process of DNA fragmentation and the resulting analysis led to a loss of information, which made comprehensive analysis more difficult. The introduction of long-read technologies is now transforming genomics research by allowing researchers to explore genomes at remarkable resolution. In 2011, PacBio released their PacBio RS sequencer, which employs single-molecule real-time (SMRT) technology . This machine increased average read lengths by more than ten times. As a result of long-read sequencing methods, previously intractable genome regions could finally be resolved, and complex transcriptomes can be explored in great detail . Some applications of long-read technologies include WGS, RNA sequencing, and the detection of epigenetic modifications. In the context of sequencing reads, hybrid sequencing is a third option that integrates short-read and long-read sequencing. The aim is to eliminate the weaknesses of both approaches by using the strengths of the other.
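The read-length gap between these technologies is usually summarized with the N50 statistic: the read length L such that reads of at least length L together contain half of all sequenced bases. A small sketch with invented read lengths:

```python
# N50: sort reads longest-first and accumulate bases until half the
# total is covered; the read length at that point is the N50.
# The read lengths below are invented toy values for illustration.
def n50(lengths):
    total = sum(lengths)
    running = 0
    for length in sorted(lengths, reverse=True):
        running += length
        if running * 2 >= total:
            return length

short_reads = [150] * 1000                          # uniform short-read run
long_reads = [500, 2_000, 9_000, 15_000, 20_000]    # a few long reads
```

For the uniform short-read run the N50 is simply the read length (150 bp), while the toy long-read set reaches half its bases within its two longest reads, giving an N50 of 15,000 bp.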
Short-read sequencing, due to the fragmentation of DNA, results in a loss of information, which makes certain types of variants difficult, if not impossible, to detect. Long-read sequencing overcomes this issue by removing fragmentation from the process. However, the drawback of long-read sequencing is a higher rate of sequencing errors. A good comparison can be found in a recently published metagenomic study, in which the authors emphasize the advantages and disadvantages of these two approaches. The highlighted literature presents hybrid sequencing as a superior method: it overcomes the shortcomings of both short-read and long-read sequencing by combining the two methods and utilizing the strengths of each. When considering the potential of WGS in clinical practice, current challenges and limitations must be taken into consideration. One of the greatest challenges of clinical genetics is the clinical interpretation of non-coding variants. While great advances have been made in in silico prediction tools for this purpose, this remains a formidable barrier to the full realization of WGS's clinical utility. Building precise models based on large training databases remains a challenge due to issues such as overfitting and overgeneralizing variant effects. This lack of knowledge and understanding leaves room for considerable uncertainty in the clinical diagnostic process. Another issue is variant penetrance. Pathogenic variants of low penetrance will often not lead to a pathological phenotype. In WGS testing, low-penetrance pathogenic variants can be interpreted as a "false positive" result, setting the clinician on an incorrect diagnostic course. While false positive results are arguably better than false negative results, they can still cause the patient unnecessary emotional distress, as well as lead to further medical actions, which in that case are unwarranted.
Difficulties with WGS diagnostics can be found in patients with non-Mendelian genetic disorders. One such example is the paper published by Fang H et al. in 2017. The authors applied an integrated WGS-HPO pedigree analysis to diagnose a patient with Prader–Willi syndrome. They concluded that relying solely on WGS would not have been sufficient to make the correct diagnosis in some cases, due to the complexity of the underlying genetic and epigenetic errors. Approximately 25% of Prader–Willi syndrome cases are associated with uniparental disomy, and WGS can detect uniparental isodisomy. The limitations of WGS testing can be overcome in certain cases by "trio testing", which enables the detection of uniparental heterodisomy. Trio testing, which involves testing the proband's biological parents, can also help in the interpretation of results for de novo variants in deep intronic and other non-coding regions. Finally, when discussing the diagnostic effectiveness of WGS, genetic mosaicism must be taken into account. The issue lies in the fact that WGS analysis is most commonly performed on a peripheral blood sample, or on one of the alternatives if necessary. The precision of WGS in this clinical scenario was analyzed by King DA et al. in a paper published in 2017. The authors examined a large group of patients with undiagnosed developmental disorders. In 73% of mosaic events, there was a difference in results between the peripheral blood and saliva samples, suggesting that a blood sample alone would miss a considerable fraction of chromosomal abnormalities. These clinical examples highlight the distance that still needs to be covered in terms of research before WGS can be fully utilized as a clinical tool. While it produces considerably large amounts of data, it still needs to be approached with caution.
The greatest risks of using this technology incautiously boil down to misinterpreting detected abnormalities or failing to detect others. With the introduction of next-generation sequencing, sequencing yield increased while sequencing cost decreased; however, many of the resulting genomes were assembled only in small pieces. Consequently, the gene annotation in these genomes is either inadequate or nonexistent. As a result, long-read sequencing was introduced, and one of the primary products on the market is nanopore sequencing by Oxford Nanopore Technologies (ONT), which has a very low cost. Nanopore sequencing technology has the potential to make nucleic acid sequencing accessible and feasible for everyone. One obstacle stands in the way: interpreting nanopore sequences requires advanced bioinformatics skills. However, as interpretation technologies advance and biologists expand their bioinformatics knowledge, the potential of nanopore sequencing is sure to keep evolving. Single-cell genomics and spatial transcriptomics are important tools revolutionizing genome sequencing. These tools assist in measuring gene activity, mapping that activity, and monitoring the resulting molecular phenotypes. Single-cell genomics is the study of cellular uniqueness and utilizes omics techniques such as single-cell RNA sequencing (scRNA-seq) and single-cell DNA sequencing (scDNA-seq), which allow for the analysis of genetic variants and gene expression patterns at the single-cell level. Spatial transcriptomics features other techniques, including in situ hybridization, digital optical barcoding, conventional immunofluorescence methods, and next-generation sequencing. Single-cell genomics has the potential to expand the current knowledge of disease pathogenesis, opening the door for improved personalized medicine and targeted therapeutic interventions.
Similarly, spatially resolved transcriptomics has the potential to provide a thorough understanding of the molecular architecture of tissues, offering novel insights into organ growth, function, and disease mechanisms. Multi-omics integration is the practice of integrating and analyzing multiple omics datasets in a coherent manner, addressing the challenge of organizing and managing large amounts of data without error. Omics has opened the door to advanced data analysis, resulting in exciting opportunities, breakthroughs, and challenges for both statisticians and biologists. However, to achieve quality results from multi-omics, experiments must be carefully designed, data must be diligently collected, and findings must be FAIR (findable, accessible, interoperable, and reusable). The goal of multi-omics integration is to incorporate these combined data into precision health: an individualized approach that integrates data from medical history, omics, environment, lifestyle, and other factors. Precision health involves both generating the data and modeling them, and multi-omics integration will provide greater insight, resulting in greater accuracy. The clinical utility of WGS lies in its ability to detect genetic variants in coding and non-coding regions, as well as structurally complex variants and deep intronic variants. In our clinical practice, we have had multiple cases where next-generation sequencing (NGS) has proven to be an essential diagnostic tool. By integrating multi-omics data, including genomics, metabolomics, and proteomics, we have significantly enhanced our diagnostic capabilities. For instance, in one case involving a patient with severe and deteriorating neurological symptoms, the combination of WGS and metabolic profiling allowed us to identify a novel pathogenic variant in a non-coding region of the genome, shedding light on the molecular basis of the condition.
Additionally, in cases of undiagnosed genetic syndromes, the integration of genomics data provided a comprehensive view of the underlying molecular mechanisms, aiding in the accurate diagnosis and subsequent management of these conditions. WGS presently facilitates precise diagnostics of rare diseases in cases such as uniparental isodisomy among children with Prader–Willi and Angelman syndromes, de novo deep intronic variants, and repeat expansions in non-coding regions among individuals affected by diseases such as myotonic dystrophies. Moreover, our experience extends to cases where traditional diagnostic approaches failed to provide conclusive results. The synergy of genomics, metabolomics, and proteomics has been instrumental in uncovering elusive genetic mutations and intricate molecular signatures that would have otherwise gone undetected. In summary, the integration of multi-omics data, facilitated by advanced sequencing technologies like WGS and NGS, has been a transformative approach in our clinical practice. It has enabled us to unravel complex genetic landscapes, leading to more accurate and personalized diagnoses in diverse clinical scenarios. The application of WGS holds significant potential in the field of molecular medicine, shaping the future of genetic disease diagnosis. The rapid advancement in genome sequencing technology has enabled increasingly rapid and high-quality genome analysis, characterized by high precision and diminishing costs. The incorporation of WGS into routine clinical practice presents novel opportunities for personalized medicine and improved patient health outcomes, including proactive measures to prevent the development of multifactorial diseases. Looking ahead, WGS is expected to become a standard diagnostic tool in pediatrics, facilitating precise and personalized care for children with monogenic and multifactorial diseases.
The integration of WGS into clinical practice represents a significant paradigm shift, offering hope for improved outcomes for individuals grappling with rare diseases. This powerful technology not only enhances diagnostic accuracy but also opens new avenues for personalized treatments, ultimately paving the way for a brighter future for patients around the world.
The distribution characteristics of strabismus surgery types in a tertiary hospital in the Central Plains region during the COVID-19 epidemic

Strabismus, a common disease in ophthalmology, not only affects patients' appearance but can also lead to amblyopia, abnormal binocular visual function, and even feelings of psychological inferiority. Epidemiological studies in European countries have shown that esotropia is the most common type of strabismus in Europe. Relevant studies have also been conducted in Asian countries, showing that exotropia is the most common type there. Some regions in China have conducted epidemiological studies on strabismus, with results similar to those in other Asian countries. Our hospital is the largest comprehensive hospital in Henan Province, China, with a large volume of strabismus patients and a dedicated strabismus and pediatric ophthalmology professional group. We analyzed our clinical data to characterize the distribution pattern of the various types of strabismus surgery in the Central Plains region of China, in order to provide a reference for clinical work. We collected the clinical data of strabismus patients operated on by the strabismus and pediatric ophthalmology professional group of the First Affiliated Hospital of Zhengzhou University from January 2020 to December 2022 and carried out statistical analysis. All patient information was extracted from the medical record system, including patient names, gender, age, and diagnosis. All patients underwent refraction, best-corrected visual acuity, anterior segment, fundus, and intraocular pressure examinations, as well as strabismus specialty examinations, including the corneal light reflex test, the prism and alternate cover test, the Krimsky test at 33 cm and 6 m, eye movement assessment, the four-point light test, binocular vision assessment, and the Titmus stereo chart.
All strabismus patients were admitted for surgical treatment after strict screening of surgical indications in the outpatient department. The admission diagnosis was classified according to the Chinese Strabismus Diagnosis Expert Consensus. Statistical analysis: SPSS 27.0 was used for statistical analysis. The chi-square test was used to compare the differences in the proportions of the various types of strabismus across the three years. If P < 0.05 in the comparison among the three groups, further pairwise comparisons were conducted. The total number of strabismus surgeries was 1357 in 2020, 1451 in 2021, and 1131 in 2022 (Fig. ). The surgical volume decreased markedly in February 2020, August 2021, November 2022, and December 2022. Except under special circumstances, July and August were the peak periods for strabismus surgery each year (Table ; Fig. ). When the patients were grouped by age, patients aged 0–6 years accounted for 37% of the total number of strabismus surgeries, those aged 7–12 years for 31%, those aged 13–18 years for 12%, and those over 18 years for 20% (Table ; Fig. ). It was also found that strabismus surgeries for children aged 7–12 were concentrated in June, July, and August of each year (Fig. ). From 2020 to 2022, a total of 3939 strabismus surgeries were performed, of which exotropia surgeries were the most common, accounting for 60% (2361 patients); esotropia surgeries accounted for 29% (1146 patients), and the number of exotropia surgeries was about twice that of esotropia surgeries (Table ). Among exotropia surgeries, intermittent exotropia had the highest proportion, accounting for about 53%, followed by constant exotropia at about 35%. The proportions of intermittent and constant exotropia did not change significantly over the three years (χ² = 2.642, P = 0.267 and χ² = 3.012, P = 0.221, respectively) (Table ).
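The three-year comparison of proportions reported above follows a standard Pearson chi-square test of homogeneity. The sketch below works through the calculation on a hypothetical 3×2 table of intermittent vs. constant exotropia counts per year; the per-year counts are illustrative placeholders, since the text reports only overall totals and percentages.

```python
import math

# Hypothetical per-year counts of intermittent vs. constant exotropia
# (illustrative only: the article reports overall totals and percentages,
# not the underlying year-by-type cross-table).
# Rows: 2020, 2021, 2022; columns: intermittent, constant.
observed = [
    [420, 270],
    [450, 300],
    [380, 240],
]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand_total = sum(row_totals)

# Pearson chi-square statistic: sum over cells of (O - E)^2 / E,
# where E is the expected count under homogeneous proportions.
chi2 = 0.0
for i, row in enumerate(observed):
    for j, obs_count in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand_total
        chi2 += (obs_count - expected) ** 2 / expected

dof = (len(observed) - 1) * (len(observed[0]) - 1)  # 3x2 table -> dof = 2

# For dof = 2 the chi-square survival function is exactly exp(-x/2),
# so the p-value can be computed without a stats library.
p_value = math.exp(-chi2 / 2)

print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p_value:.3f}")
```

With these placeholder counts the proportions barely differ between years, so the test is non-significant (p > 0.05), mirroring the pattern reported in the text; for tables of other dimensions, a library routine such as scipy.stats.chi2_contingency computes the same statistic with a general p-value.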
Among intermittent exotropia cases, the convergence insufficiency type had the highest proportion, accounting for more than 70%, while the divergence excess type had the lowest, at less than 3% (Table ). Among the esotropia classifications, non-accommodative esotropia had the highest proportion, accounting for more than 50% (Table ). Over the past three years, the total number of strabismus surgeries performed at our hospital was 1357, 1451, and 1131, respectively, with no significant fluctuations. The surgery volume decreased markedly in February 2020, August 2021, November 2022, and December 2022. Analysis showed that this was due to the severe impact of the COVID-19 epidemic in Zhengzhou during these months: patients without ophthalmic emergencies did not seek medical attention, resulting in a sharp decline in strabismus surgery during the months with severe outbreaks. Foreign studies have also found that during the COVID-19 epidemic only patients with ophthalmic emergencies sought medical attention. This is because ophthalmic examinations require face-to-face interaction, which increases the risk of virus transmission, and ophthalmologists were also assigned to care for COVID-19-infected patients. During the epidemic, the total number of strabismus surgeries per year decreased compared with the pre-epidemic period. Before the epidemic, our hospital performed approximately 2000 strabismus surgeries per year, so the epidemic indeed had a significant impact on the total number of surgical patients. Except under special circumstances, the peak period for strabismus surgery is usually July and August each year. Statistical analysis of the age distribution of patients showed that strabismus surgery for children aged 7–12 years was concentrated in June, July, and August each year. This reflects the situation in China, where July and August are the summer vacation. The majority of strabismus patients are school-age children.
To avoid affecting their studies, parents choose the longer holidays for their children to undergo strabismus surgery. There were no significant differences in this distribution pattern before and after the epidemic. Studies in China have also shown a marked seasonal variation in the number of strabismus patients seeking medical attention, with peaks during the winter and summer vacations. After grouping the patients by age, it was found that strabismus surgery patients under 18 years old accounted for 80% of the total, indicating that most strabismus patients undergo surgical treatment before adulthood. The purpose of strabismus surgery is not only to improve appearance but also to obtain good binocular visual function. Studies by many scholars in China and abroad have shown that the development of human binocular vision begins in infancy: the sensitive period starts at 3 to 5 months after birth, peaks at 1–3 years old, and development continues until 6–9 years old. Therefore, some scholars suggest that for intermittent or constant exotropia, surgery should be performed before the age of 7 to better restore perceptual function. With parents' increasing awareness of strabismus and of the necessity of strabismus surgery, the window for strabismus surgery has shifted earlier. In the past three years, exotropia surgery patients accounted for the largest proportion, about 60%, followed by esotropia surgery patients at about 29%. The number of exotropia surgery patients was about twice that of esotropia surgery patients. Among exotropia surgery patients, intermittent exotropia had the highest proportion, about 53%, followed by constant exotropia at about 35%. Among patients with intermittent exotropia, the convergence insufficiency type had the largest proportion, accounting for over 70%, while the divergence excess type had the smallest, at less than 3%.
With the development of China's economy and health care system, young children can receive early vision screening, such as routine physical examinations in kindergartens. Strabismus can therefore be detected early and managed conservatively, for example with glasses and vision training. More and more parents supervise their children in persisting with glasses wear and vision training, which enables some children with accommodative esotropia to regain normal ocular alignment through conservative treatment and avoid surgery. As a result, exotropia surgery patients outnumber esotropia patients, and many domestic studies report similar results. A retrospective study involving 5,746 strabismus patients found that exotropia surgery accounted for 63.5% of cases and esotropia surgery for 13.2%, and that intermittent exotropia was the most common subtype within exotropia surgery, accounting for approximately 71.3%. However, the predominant subtype of intermittent exotropia differed from our findings. A study of 4,640 strabismus surgery patients reported that exotropia surgery accounted for 54% of cases and esotropia for 22.1%, and that constant exotropia was the most common type within exotropia, although its prevalence decreased over the years. Intermittent exotropia was the next most common type and showed an increasing trend. A study conducted in a Chinese population in Singapore also indicated a 7:1 ratio of exotropia to esotropia, with the majority of exotropia cases being intermittent (63%). A study of 12,327 strabismus surgery patients over a 10-year period found that constant exotropia was the most common of all subtypes, and that the number of exotropia surgery patients was approximately 5.83 times that of esotropia surgery patients.
However, a study conducted over a year and a half in a tertiary hospital in Spain, involving 153 patients, showed that esotropia accounted for 47.7% of cases and exotropia for 35.9%. These findings indicate significant differences in the distribution of strabismus surgery types between China and Europe. During the past three years, the proportion of intermittent exotropia decreased and the proportion of constant exotropia increased, but the differences were not statistically significant (χ² = 2.642, P = 0.267; χ² = 3.012, P = 0.221). This may be because the COVID-19 epidemic limited access to medical care for patients with intermittent exotropia, allowing the condition to progress gradually to constant exotropia. Some studies have shown that one-third of patients with intermittent exotropia experience a deterioration of their condition over a three-year follow-up. This study statistically analyzed the distribution of strabismus surgery in our hospital during the three-year period of the COVID-19 pandemic and identified the characteristics of the distribution of strabismus surgery during this period. However, this study has certain limitations. It is a retrospective analysis that reflects only the results of a specific period and cannot address the overall incidence of strabismus. In the future, we hope that research with larger sample sizes will explore the generalizability of these distribution characteristics and the prevalence of strabismus in China. During the three-year period of the COVID-19 pandemic, the total number of strabismus surgeries in our hospital did not fluctuate significantly, but the number of surgeries decreased markedly during the months when the epidemic was severe.
Patients under 18 years old accounted for 80% of the strabismus surgeries, and surgeries for patients between 7 and 12 years old were concentrated in July and August each year. Among all strabismus surgery patients, exotropia was the most common type, occurring twice as often as esotropia. Among patients with exotropia, intermittent exotropia had the highest proportion. The combined proportion of intermittent and constant exotropia remained stable during the three-year period of the COVID-19 pandemic, but the proportion of intermittent exotropia decreased while the proportion of constant exotropia increased. This underscores the importance of early screening and regular follow-up for patients with intermittent exotropia, and the need for timely intervention to prevent the progression of intermittent exotropia to constant exotropia.
Motivational interviewing from the paediatricians' perspective: assessments after a 2-day training for physicians caring for adolescents with chronic medical conditions (CMCs)

Children and adolescents with chronic medical conditions (CMCs) have an elevated risk of developing psychological comorbidities, such as anxiety and depression. In addition to concerns about the diagnosis and prognosis, regular long-term treatments affect the daily lives of those affected. Among social disturbances, stigmatisation and rejection by peers are a major challenge that can have a negative impact on self-confidence and self-esteem. The effectiveness of integrated mental health care in paediatric settings has received increased attention. More specifically, validated diagnostic instruments and brief psychological interventions, such as Motivational Interviewing (MI) for behavioural change, have been shown to improve primary clinical outcomes and mental health symptoms. In this context, good co-operation between paediatricians and patients, and a corresponding communicative competence of the paediatricians, is desirable. MI is a client-centered yet directive conversation technique used to explore ambivalence and purposefully develop intrinsic motivation. Building on a patient empowerment perspective, MI has emerged as an effective counselling technique to detect comorbid mental health problems and support health-related lifestyle changes. In MI conversations, various techniques are used, such as open-ended questions, active listening, providing confirmation, summarizing, affirming, and reflecting on behaviour. The aim is to elicit "change talk" and "confidence talk" in order to bring about behaviour change. Change talk includes any statement by the patient that favours movement towards a specific change goal, while confidence talk expresses, in particular, the ability to change.
MI was initially used to treat addictive behaviour and has since been applied to several other behavioural changes (e.g. health behaviour and health service use). Furthermore, it has been shown that MI improves the utilization of psychiatric care services by young patients. Published data suggest that implementing MI techniques in clinical practice is feasible, as even 15-minute counselling sessions applying MI techniques can be effective. Physicians can acquire MI techniques in professional training sessions. A review of ten studies by Söderlund et al. found an average initial training duration of nine hours for general health care practitioners learning MI techniques. Significant improvement in the long-term quality of MI was achieved through regular follow-up sessions. Most training courses are offered in the format of one- to three-day workshops, emphasizing the importance of continuous follow-up training, e.g. in the form of supervision. To date, few studies have addressed and systematically analyzed experiences with MI from the physicians' perspective. This study aimed to fill this knowledge gap and to provide recommendations for the integration of MI into the clinical routine in the care of adolescents. Therefore, we investigated (1) paediatricians' experiences with a 2-day basic MI training; (2) paediatricians' experiences using MI as part of the single-center cluster-randomized controlled COACH-MI trial to improve the uptake of mental health care for adolescents with CMCs and comorbid symptoms of anxiety and depression; and (3) paediatricians' experiences integrating MI into the daily clinical practice of paediatricians caring for chronically ill adolescents at a University children's hospital outpatient clinic.
The study was conducted within the multicenter project of the COACH consortium (Chronic Conditions in Adolescents: Implementation and Evaluation of Patient-Centered Collaborative Health Care), which aims to improve awareness of and access to mental health care for adolescents with CMCs. In this cluster-randomized trial with 164 adolescents with CMCs and comorbid anxiety or depression, training physicians in MI improved uptake rates of psychological counselling among adolescents; however, the results did not reach statistical significance. Our study was conducted after completion of the main study, from May to August 2021. Aims Our aim was to explore clinicians' experiences of MI training and the subsequent use of MI in the routine care of adolescents with CMCs. We therefore wanted to find out whether and how MI can be integrated into clinical practice and how training in MI should be designed. Design A mixed methods study approach with quantitative and qualitative data gathered with a pseudonymized questionnaire was employed to explore the opinions, experiences, and needs of paediatricians using MI in everyday practice. Participants and setting The COACH-MI trial was conducted at the outpatient clinics of the University Children's Hospital Düsseldorf, Germany (Endocrinology and Diabetes, Pulmonology, Cardiology, Gastroenterology, Neurology, Immunology and Rheumatology, Metabolism), as described previously. Out of 25 physicians, 20 participated in the project; five physicians left the outpatient department or the hospital before completing the first MI session. As part of the study, the doctors attended a 2-day in-person MI training course conducted by a Motivational Interviewing Network of Trainers (MINT) certified trainer, with booster sessions one year after study initiation. None of the paediatricians had specialized training in psychiatry, psychotherapy, or MI prior to study start.
The aim was to collect data from the doctors' perspective on their experiences with the MI technique; the response rate was 95% (19/20). Data collection A self-report questionnaire gathered data on the following themes: MI skills/proficiency, actual MI use in everyday practice, opinions on MI, and the need for training and framework conditions in the clinical routine. No validated questionnaire was available for evaluating experiences with MI and physicians' perceptions of the method, the technique, and the application of MI in clinical practice. Therefore, the questionnaire was developed by our study team: one author with a strong background in educational theory and questionnaire design and two other authors (in total, two paediatricians and a psychologist) developed the questionnaire in German; it included a total of 16 questions on the above-mentioned themes. In the absence of a validated instrument, we developed questions relating to factors that could be important based on our experience and informal discussions with doctors. The three-page questionnaire collected demographic and practice information, such as age, gender, qualification, and work experience, in order to characterize the sample of paediatricians. We used different question types: closed questions (yes/no), open questions, and rating scales (linear Likert scale). The questionnaire asked respondents to rate on a six-point Likert scale the extent to which nine MI conversation techniques were used before and after MI training. We chose a bipolar six-point Likert scale, reflecting agreement or disagreement, to avoid a neutral middle option. The questionnaire was reliable: Cronbach's alpha for the nine items measuring the conversation techniques used before and after the MI training was 0.860. Open-ended questions asked for suggestions for making better use of MI in everyday clinical practice and for general comments.
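The reported reliability check uses Cronbach's alpha, α = k/(k − 1) · (1 − Σs²_item / s²_total), where k is the number of items, s²_item the sample variance of each item, and s²_total the variance of the respondents' total scores. A minimal pure-Python sketch with hypothetical Likert responses (the study's actual 19 × 9 response matrix is not reproduced here):

```python
from statistics import variance  # sample variance (n - 1 denominator)

# Hypothetical Likert responses: 5 respondents x 3 items (illustration only;
# the study itself had 19 respondents rating 9 techniques, alpha = 0.860).
scores = [
    [4, 5, 4],
    [2, 2, 3],
    [5, 5, 5],
    [3, 4, 3],
    [1, 2, 2],
]

k = len(scores[0])                      # number of items
item_columns = list(zip(*scores))       # responses grouped per item
sum_item_variances = sum(variance(col) for col in item_columns)
total_score_variance = variance([sum(row) for row in scores])

# Cronbach's alpha: k / (k - 1) * (1 - sum of item variances / total variance)
alpha = k / (k - 1) * (1 - sum_item_variances / total_score_variance)
print(f"Cronbach's alpha = {alpha:.3f}")  # → Cronbach's alpha = 0.962
```

Values above roughly 0.8 are conventionally read as good internal consistency, consistent with the 0.860 reported for the nine-item scale.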
Questionnaires were completed anonymously to preserve participant privacy. The answers to open-ended questions were analyzed and assigned to labels by the first author of this paper. Study conduct Consent and complete questionnaires were provided by n = 19 of 20 paediatricians (response rate of 95%), while one physician did not consent to participate in the study.
Of these, n = 12 (63.2%) participants were female and n = 7 (36.8%) male; n = 3 (15.8%) were in residency training, n = 9 (47.4%) were specialists, and n = 7 (36.8%) were senior physicians. The average work experience was 12.2 years. Personal experiences The vast majority of respondents (94.7%) reported that they found MI helpful for clinical conversations. They stated it was important for their clinical work (Likert scale from 1 = not important to 6 = very important; M = 4.7, SD 1.2) and used it outside the COACH-MI study context (Likert scale from 1 = never to 6 = always; M = 4.1, SD 1.0). n = 7 (36.8%) physicians stated they felt more secure during patient conversations when using MI techniques, and n = 14 (73.7%) thought MI strengthened the physician-patient alliance. About two-thirds (n = 12; 63%) of the respondents perceived that, by using MI techniques, conversations with the adolescents were conducted “on equal terms”, and n = 11 (58%) physicians promoted confidence talk. About one-third (n = 6; 32%) promoted change talk and resolved ambivalences in their patients (Fig. ). Concerning MI training, more MI techniques were used after training (Likert scale from 1 = never to 6 = always; before M = 3.7, SD 1.3 vs. after M = 4.5, SD 1.1). Primarily, the following methods were increasingly applied: advising with permission (M = 2.5, SD 1.5 vs. M = 4.3, SD 1.1), reflective listening (M = 3.4, SD 1.2 vs. M = 4.8, SD 0.9), an appreciative approach (M = 3.8, SD 1.3 vs. M = 5, SD 0.8), and emphasizing autonomy (M = 3.7, SD 1.2 vs. M = 4.6, SD 0.8) (Fig. ). The following groups of patients were perceived to benefit most from MI: adolescents (47.4%), patients with CMCs (47.4%), and patients with noncompliance (26.3%). Regarding conversation types, respondents indicated that MI is beneficial for crisis conversations (52.6%), as well as for compliance issues (31.6%) and first consultations (26.3%).
It was perceived as less helpful in informed consent discussions (15.8%) and follow-up discussions (10.5%). External and internal framework conditions About one-third (n = 6; 31.6%) stated that insufficient framework conditions hampered MI conversations. Due to lack of time, only about half of the paediatricians (n = 9; 47.4%) offered second appointments to discuss critical topics further, although n = 17 (89.5%) stated that more appointments (> 1 appointment) would have been needed for sufficient MI application. To overcome the aforementioned barriers in clinical practice, respondents indicated the most important factor to be a distraction-free environment, specifically a calm, quiet room with no disturbance from other staff or telephone calls (57.9%; Fig. ), as well as more scheduled time for patient conversations (36.8%). On average, physicians reported that their MI conversations lasted about 25 min. In addition, n = 4 (21.1%) of the respondents considered establishing a safe environment, in which the patient can speak freely, to be an important general condition. Only n = 2 (10.5%) physicians stated that they had too little practical experience and did not feel sufficiently trained, while n = 4 (21.1%) physicians felt insecure about conducting MI consultations (Fig. ). Training All doctors completed the 2-day MI course. More than half of the doctors (57.9%) felt that the training was sufficient to teach the basics; however, they wanted additional interventions, e.g. in the context of booster sessions. Most of the respondents (73.7%) recommended annual workshops and booster sessions. n = 6 (31.6%) of the respondents wished for more intensive MI training with supervision, and about half (n = 10; 52.6%) suggested training via online courses. Only n = 3 (15.8%) preferred self-study using literature and video recordings. These results are presented in Fig. .
The respondents stated that MI training is important for residency (Likert scale from 1 = not important to 6 = very important; M = 4.7, SD 1.2), and n = 18 (94.7%) respondents stated that MI training should be integrated into residency training. Additionally, n = 12 (63.2%) wished for earlier conversation training during medical school, and n = 10 (52.6%) paediatricians recommended further training after residency.
There are several reasons for physicians to improve their conversational skills and attitude when communicating with patients. This might be especially true when dealing with adolescents with chronic medical conditions, e.g., type 1 diabetes, rheumatic diseases, neurological disorders, gastrointestinal diseases, or congenital heart diseases. In our main study , we were able to show that the use of MI in patients with CMCs leads to longer patient-physician conversations and lower anxiety scores at one year. Here, we evaluated paediatricians’ experiences with MI after a 2-day workshop, as well as the opportunities and challenges of integrating MI into everyday clinical practice. Paediatricians working in outpatient clinics generally considered MI helpful. In line with the results of Rubak et al. and Reinauer et al. , MI was perceived to have a positive impact on physician-patient interactions compared to traditional counselling. Also in line with previously published literature, participating physicians felt more confident when using MI techniques . Integrating MI into clinical practice comes with several challenges.
Our results support previously published findings that MI requires a time frame that is not always available in routine patient care . In our main study, MI conversations held to discuss a conspicuous mental screening result lasted an estimated 30.3 min . In the present survey, the mean conversation time was estimated at 25 min; the discrepancy between these two estimates is probably due to the questionnaires being completed one year later. The participating physicians stated that they needed more time or more appointments to talk to their patients, but that this was often not feasible in the daily clinical routine because of fixed scheduling structures. In a study by Kirschner et al. , lack of time was also mentioned as a major obstacle. MI training was associated with longer patient-physician conversations: MI conversations were significantly longer than treatment-as-usual (TAU) conversations (30.3 [16.7] vs. 16.8 [12.5] min; p < 0.001) . Additionally, half of the paediatricians scheduled second appointments with patients to sufficiently apply MI techniques. Other studies have shown that even short interventions of about 15 min can bring about behavioural changes in patients, and the likelihood of behavioural change increases with the number of conversations scheduled . Some general aggravating conditions were also criticized in our study: the MI conversations took place in consulting rooms with disturbances, such as staff entering or telephones ringing. An undisturbed atmosphere was therefore perceived as an essential factor for implementing MI. Even after the two days of MI training, some of the trained physicians still found the use of MI challenging, and regular training was suggested to avoid falling back into old patterns of behaviour. Some physicians reported feeling insecure in their MI proficiency, regardless of whether they had attended a booster session or not. More than half of the doctors (57.9%) felt that the basic training alone was not sufficient and would have liked further interventions to practice MI, such as booster sessions.
Past research has demonstrated the importance of close integration of training and practice . Keeley et al. conducted a study offering baseline training plus two refresher courses of 4 h each, along with feedback on audiotaped patient encounters; it underscored the importance of follow-up training, as basic courses alone may not be sufficient to reach MI proficiency. Miller et al. investigated the effects of feedback and coaching, as well as of self-study with training videos, after a 2-day basic course: no improvement in MI performance was achieved through self-study, whereas with regular feedback and coaching, MI skills could be consolidated and maintained. A meta-analysis by de Roten et al. likewise supported improving MI skills by adding feedback in the context of supervision or coaching, and Lindhardt et al. , Miller et al. , and Brobeck et al. also stress the importance of supervision and follow-up sessions. Surprisingly, only n = 6 (31.6%) of the study physicians indicated that supervision would be helpful. Most physicians (n = 10; 52.6%) considered 2-day basic training plus booster sessions sufficient and would additionally have considered online courses useful. The participants probably factored the everyday feasibility of specific MI training formats into their judgment; they might, for instance, find supervision too time-consuming. Nevertheless, we were able to demonstrate that a 2-day course led to changes in the applied conversation techniques, which is in line with published data . Patients also seem to benefit more from the intervention with increasing MI experience . Notably, nearly all of the physicians participating in our study felt that it was important for MI training to be integrated into residency training, and a majority thought it would be necessary to start training during medical school as well.
Most studies concentrate on medical staff such as doctors, nurses, and midwives, as in the work of Madson et al. . Poirier et al. demonstrated that teaching motivational interviewing techniques to first-year medical students can enhance students’ knowledge and confidence in counselling patients on health behaviour changes. It therefore seems reasonable to implement MI training early in medical education. Limitations When interpreting the results, some limitations must be taken into account. First, a limited number of paediatricians were recruited in our single-center study. Second, no validated questionnaire was available for evaluating paediatricians’ experiences with a two-day MI workshop; thus, the questionnaire was designed in-house to address our research questions. The different question types (open questions, closed questions, rating scales) as well as the wording of the questions can potentially influence the answers of the doctors surveyed. As our questionnaire was self-developed and not scientifically validated, the occurrence of various confounding factors cannot be ruled out and should be taken into account when interpreting the results. These confounding factors include the different question types described above, but also differing possible interpretations of the questions and/or answer options. Furthermore, this questionnaire is not a generally applicable instrument for surveying MI technique across professional sectors, but is specifically aimed at doctors. The application of MI in the study was limited to counselling adolescents with CMCs and a positive screening for anxiety and depression symptoms. The questionnaire was administered one year after the COACH-MI study was completed, and this temporal distance might have influenced the physicians’ responses and might incur substantial recall bias.
Further, querying paediatricians about their practices before and after MI training, when they know that the MI training is the studied intervention, is prone to social desirability bias. Future directions Comprehensive integration of MI into the clinical routine of physicians treating chronically ill adolescents is challenging. This is mainly attributable to the lack of time and space for practising MI in the clinical routine of a university outpatient clinic and to the lack of continued training to acquire sufficient skills. Future research is needed to determine whether supervised sessions would be accepted as a means of improving physician education, provided a corresponding time frame can be made available. Future research should focus not only on MI training but also on the implementation process in clinical settings, especially when time resources are limited.
According to the physicians who care for chronically ill adolescents, even a 2-day MI training course can sustainably improve communication behaviour with this patient group. The need to integrate basic MI knowledge into physician training at an early stage has become obvious, as has the need to offer experienced physicians advanced training opportunities and time resources. Overall, it would make sense to implement MI as a fixed component of daily routine care in healthcare systems, although the lack of dedicated time and of an undisturbed environment are seen as the main obstacles to implementation.
Editor’s Note on Recent Journal Impact Factor of | 4b39760a-d07b-445e-9a8e-94666f2c183a | 11261193 | Internal Medicine[mh] | |
Overall survival is comparable between percutaneous radiofrequency ablation and liver resection as first-line therapies for solitary 3–5 cm hepatocellular carcinoma Hepatocellular carcinoma (HCC) is the most common primary cancer of liver-cell origin and the sixth most common cancer worldwide . It is also the second leading cause of cancer-related deaths in Taiwan ( https://www.mohw.gov.tw/cp-6650-79055-1.html ). The updated Barcelona Clinic Liver Cancer (BCLC) guidelines recommend that patients with a solitary HCC without macrovascular invasion or extrahepatic spread be considered for liver resection (LR) if they do not show clinically significant portal hypertension (CSPH) . The survival benefit offered by radiofrequency ablation (RFA) in patients with HCC of ≤ 3 cm may be competitive with that offered by LR; RFA could therefore be given priority because of its lower invasiveness and cost . For patients with a solitary HCC of 3.0–5.0 cm, LR is recommended as the first-line treatment in the absence of CSPH ; however, there is little evidence to support this recommendation. In randomized controlled trials (RCTs) that reported outcomes for a total of 234 patients with HCC of 3–5 cm, there was no significant difference in overall survival (OS) or recurrence-free survival (RFS) between LR and RFA. Although RCTs provide the highest level of evidence, the case numbers in these trials were limited. A meta-analysis of RCTs compared LR and RFA for patients with HCC within the Milan criteria . Trial sequential analysis performed on these data showed that a study randomizing more than 10,000 patients would be needed to obtain stable results and confirm whether LR is superior to RFA: such a study is unlikely to be designed . To our knowledge, only two retrospective studies have compared LR and RFA for a single HCC of 3.0–5.0 cm .
Therefore, in this retrospective study, we aimed to compare the survival outcomes of patients undergoing LR or RFA for a solitary HCC of 3.0–5.0 cm. The Institutional Review Board of Chang Gung Memorial Hospital-Kaohsiung Branch approved this study (reference number: 202000398B0). Data were extracted from the Kaohsiung Chang Gung Memorial Hospital’s HCC registry. Patient enrollment In this retrospective study, we enrolled 424 patients with Child–Pugh class A liver disease and a solitary HCC of 3–5 cm at BCLC stage A; 310 of these patients underwent LR and 114 underwent percutaneous RFA (Fig. ). All patients who received LR underwent an R0 resection. The raw data for the unmatched cohort are available via the following link: https://www.dropbox.com/scl/fi/gavmbuzjraf3yvp06xahd/raw-data-single-hcc-3.0-5.0cm-unmatched.xlsx?rlkey=9ehwehvdnt0k8c5febq67iwse&st=6wx5cx5f&dl=0 The raw data for the matched cohort are available via the following link: https://www.dropbox.com/scl/fi/t2p8nb09st0da0bzatoe1/raw-data-single-hcc-3.0-5.0cm-matched.xlsx?rlkey=pd21ud8y22ivald7ps52j0e3q&st=e46tcb7y&dl=0 Decision-making about treatment modalities for patients with a solitary hepatocellular carcinoma of 3–5 cm Each patient newly diagnosed with HCC was discussed by a multidisciplinary HCC team. In general, ideal surgical candidates (i.e., patients with well-preserved liver function, without severe comorbidities, and with good performance status) were referred for LR. Variables of interest Our HCC registry data included the 7th edition American Joint Committee on Cancer (AJCC) stage and the original BCLC staging system stage . Cirrhosis was defined according to histology for patients who underwent surgery and imaging studies for patients who underwent non-surgical treatments.
Laboratory data included alpha-fetoprotein (AFP), hepatitis B surface antigen (HBsAg), anti-hepatitis C virus antibody (anti-HCV), Child–Pugh class, and the Model for End-Stage Liver Disease (MELD) score . Major resection was defined as resection of three or more liver segments. Comorbidities, etiology of chronic liver disease, post-treatment complications, recurrence modality, and treatments for recurrence were not recorded in our HCC registry data. Because of the relatively large sample size in the present study, we manually reviewed these data from medical records only for the matched cohorts; however, treatments for recurrence were manually reviewed for all patients. Patients were designated as alcoholic according to the diagnosis of the physician in charge. Non-alcoholic fatty liver disease (NAFLD) was defined as the presence of hepatic steatosis on histology or imaging studies after excluding HBsAg-positive, anti-HCV-positive, and alcoholic cases . Severe post-treatment complications were defined as Clavien–Dindo classes III–V . OS was calculated as the time elapsed from the date of treatment to the date of the last follow-up or death. RFS was defined as the time from treatment to recurrence or last follow-up. The procedure for liver resection and percutaneous radiofrequency ablation and surveillance after curative treatment for hepatocellular carcinoma The procedures for LR, and for surveillance after LR or RFA for HCC, were described in our previous publications . All RFA procedures were performed percutaneously under general anesthesia and ultrasonographic guidance, using the multiple-electrode switching system with radiofrequency electrodes (Covidien LLC, Mansfield, MA, USA).
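The MELD score listed among the laboratory variables above combines bilirubin, INR, and creatinine. The paper cites the score without restating the formula, so the sketch below assumes the widely used UNOS formulation, in which each laboratory value is floored at 1.0 and creatinine is capped at 4.0 mg/dL; the example values are illustrative, not patient data:

```python
import math

def meld(bilirubin_mg_dl, inr, creatinine_mg_dl, on_dialysis=False):
    """UNOS-style MELD: 3.78*ln(bili) + 11.2*ln(INR) + 9.57*ln(creat) + 6.43.
    Values below 1.0 are set to 1.0; creatinine is capped at 4.0 mg/dL
    (and set to 4.0 if the patient received dialysis twice in the past week)."""
    bili = max(bilirubin_mg_dl, 1.0)
    inr = max(inr, 1.0)
    creat = 4.0 if on_dialysis else min(max(creatinine_mg_dl, 1.0), 4.0)
    score = (3.78 * math.log(bili) + 11.2 * math.log(inr)
             + 9.57 * math.log(creat) + 6.43)
    return round(score)

# A hypothetical compensated Child-Pugh A patient:
print(meld(1.2, 1.1, 0.9))  # → 8, i.e. below the > 9 threshold used in the study
```

With near-normal laboratory values the score stays in the single digits, which is why a cutoff of > 9 can add granularity within Child–Pugh class A.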
Definition of well-preserved liver function in Child–Pugh class A liver disease In an Italian study that enrolled 543 patients with HCC who underwent LR, postoperative liver decompensation was independently associated with a MELD score of > 9 (odds ratio [OR] = 2.26; 95% confidence interval [CI] = 1.10–4.58; p = 0.02) . Therefore, we assumed that compensated liver function could be stratified with additional granularity by using a MELD score of > 9 for patients with HCC undergoing LR. Statistical analyses Patient characteristics are presented as numbers or medians (interquartile range [IQR]). Categorical variables were analyzed using the chi-square test, and continuous variables using the Mann–Whitney U test. The Kaplan–Meier estimator and log-rank test were used to compare OS and RFS between groups. Propensity score matching (PSM) was used to identify a cohort of patients receiving LR with preoperative characteristics similar to those of patients receiving RFA. The propensity score was estimated using a multivariate logistic regression model, with treatment approach as the dependent variable and the following preoperative characteristics as covariates: age (> 65 vs ≤ 65 years), sex, AFP (≥ 20 vs < 20 ng/ml), and MELD score (> 9 vs ≤ 9). PSM was performed with 1:1 matching without replacement, using a caliper width equal to 0.2 of the standard deviation of the propensity score. Standardized mean difference (SMD) values < 0.1 indicated a trivial difference in a covariate between treatment groups, whereas values > 0.5 indicated substantial differences. Local recurrence after treatment was analyzed in a competing-risks framework, with non-local recurrence as the competing event; non-local recurrence was analyzed analogously, with local recurrence as the competing event. Cumulative incidence functions (CIFs) were estimated according to Kalbfleisch et al. . The Gray test was performed to assess CIF differences between the LR and RFA groups.
All p-values were two-tailed, and a p-value of < 0.05 was considered statistically significant. All statistical analyses were performed using IBM SPSS Statistics, version 25.
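The 1:1 caliper matching and SMD balance check described above can be sketched in a few lines of plain Python. This is a schematic illustration (greedy nearest-neighbour matching without replacement, caliper = 0.2 of the SD of the propensity score) with invented propensity scores, not the SPSS procedure actually used in the study:

```python
import statistics

def caliper_match(treated, controls, caliper_factor=0.2):
    """Greedy 1:1 nearest-neighbour matching without replacement.
    treated/controls: dicts mapping patient id -> propensity score.
    Caliper = caliper_factor * SD of all propensity scores."""
    sd = statistics.stdev(list(treated.values()) + list(controls.values()))
    caliper = caliper_factor * sd
    available = dict(controls)
    pairs = []
    for t_id, t_ps in sorted(treated.items(), key=lambda kv: kv[1]):
        if not available:
            break
        c_id = min(available, key=lambda c: abs(available[c] - t_ps))
        if abs(available[c_id] - t_ps) <= caliper:   # accept only within caliper
            pairs.append((t_id, c_id))
            del available[c_id]                      # matching without replacement
    return pairs

def smd(xs, ys):
    """Standardized mean difference for a continuous covariate:
    |mean_x - mean_y| / sqrt((var_x + var_y) / 2); < 0.1 ~ trivial imbalance."""
    vx, vy = statistics.variance(xs), statistics.variance(ys)
    return abs(statistics.mean(xs) - statistics.mean(ys)) / ((vx + vy) / 2) ** 0.5

lr  = {"L1": 0.31, "L2": 0.45, "L3": 0.62, "L4": 0.80}   # hypothetical LR scores
rfa = {"R1": 0.30, "R2": 0.47, "R3": 0.65, "R4": 0.20}   # hypothetical RFA scores
print(caliper_match(lr, rfa))  # → [('L1', 'R1'), ('L2', 'R2'), ('L3', 'R3')]
```

Note that L4 (score 0.80) stays unmatched because its nearest control lies outside the caliper; discarding such patients is the price paid for the balanced cohorts that the SMD check then verifies.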
Characteristics of patients undergoing liver resection or percutaneous radiofrequency ablation therapy in the unmatched cohort Tumor size of the LR group was larger than that of the RFA group (p < 0.001). The proportion of male patients was higher (p = 0.005) and the proportion aged > 65 years (p < 0.001) or with a MELD score of > 9 (p < 0.001) was lower in the LR group compared to the RFA group. There were no significant differences in AFP level, HBsAg positivity, and anti-HCV positivity between groups (Table ).
Of the 310 patients who received LR, 126 (40.6%) underwent major resection; pathology data showed that 122 (39.4%) patients had AJCC stage 1, 187 (60.2%) stage 2, and 1 (0.3%) stage 3 disease; 169 (54.7%) patients were non-cirrhotic and 186 (60%) patients showed microvascular invasion (MVI). We have not reported cirrhosis prevalence in the RFA group because image-defined cirrhosis is vague and subjective. Six (1.9%) patients in the LR group and three (2.6%) patients in the RFA group eventually received liver transplants.

Treatments for recurrence in the unmatched cohort

Of the 310 patients who received LR, 101 (32.6%) developed recurrence. The patients in the LR group underwent the following treatments for recurrence: 12 (11.8%) underwent LR, 40 (39.6%) underwent RFA, 3 (3.0%) received percutaneous ethanol injection (PEI), 37 (36.6%) underwent transarterial chemoembolization (TACE), 5 (5.0%) received targeted therapies (i.e., sorafenib or lenvatinib), 1 (1.0%) received atezolizumab + bevacizumab therapy, 1 (1.0%) was enrolled in a systemic therapy clinical trial, 1 (1.0%) was lost to follow-up, and 3 (3.0%) received best supportive care (BSC). Of the 114 patients who received RFA, 58 (50.9%) developed recurrence. The patients in the RFA group underwent the following treatments for recurrence: 3 (5.2%) patients underwent LR, 31 (53.4%) underwent RFA, 1 (1.7%) received percutaneous ethanol injection (PEI), 17 (29.3%) underwent TACE, 3 (5.2%) received targeted therapies, and 1 (1.7%) received BSC.

Five-year overall survival and recurrence-free survival of the unmatched cohort

The 5-year OS of the LR group was 70% compared to 48% in the RFA group ( p < 0.001) (Fig. ). The 5-year RFS of the LR group was 52% and that of the RFA group was 19% ( p < 0.001) (Fig. ).
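The survival probabilities reported here are Kaplan–Meier estimates (see Methods). The product-limit calculation behind such curves can be sketched in a few lines of plain Python, using illustrative toy data rather than the study data:

```python
def kaplan_meier(times, events):
    """Kaplan–Meier (product-limit) survival estimates.

    times  : follow-up time of each patient
    events : 1 = event (e.g., death or recurrence), 0 = censored
    Returns (time, survival probability) pairs at each event time."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        tied = [e for tt, e in data if tt == t]  # all subjects at time t
        deaths = sum(tied)
        if deaths > 0:
            surv *= 1.0 - deaths / at_risk  # survival drops by the hazard at t
            curve.append((t, surv))
        at_risk -= len(tied)  # deaths and censored subjects leave the risk set
        i += len(tied)
    return curve

# toy example: events at months 5, 10 (twice), and 20; one patient censored at 15
print(kaplan_meier([5, 10, 10, 15, 20], [1, 1, 1, 0, 1]))
```

The log-rank test used in the study to compare such curves between groups is a separate step, not shown here.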
Five-year overall survival and recurrence-free survival of the unmatched cohort stratified by tumor size

Among all patients ( n = 424), 208 underwent LR and 94 underwent RFA with a tumor size of 3.1–4.0 cm, and 102 underwent LR and 20 underwent RFA with a tumor size of 4.1–5.0 cm. Among patients with a tumor size of 3.1–4.0 cm, 5-year OS was 72% in the LR group and 51% in the RFA group ( p = 0.0032; Fig. ), and 5-year RFS was 54% in the LR group and 22% in the RFA group ( p = 0.0001; Fig. ). Among patients with a tumor size of 4.1–5.0 cm, 5-year OS was 67% in the LR group and 31% in the RFA group ( p = 0.0012; Fig. ), and 5-year RFS was 48% in the LR group and unmeasurable in the RFA group due to a limited follow-up period ( p = 0.0005; Fig. ).

Baseline characteristics of matched cohorts

There were no significant differences in the etiology of chronic liver disease; common comorbidities, including diabetes, hypertension and cardio-cerebral-vascular diseases; age; sex; MELD score; and AFP level between the two groups (Table ). One patient who underwent LR developed a severe post-treatment complication (i.e., massive right pleural effusion for which pigtail drainage was performed), whereas no patients in the RFA group developed complications ( p = 1.000).

Five-year overall survival and recurrence-free survival of the matched cohort

The 5-year OS of the LR group was 58%, whereas that of the RFA group was 50% ( p = 0.367) (Fig. ). The 5-year RFS of the LR group was 55% and that of the RFA group was 16% ( p = 0.001) (Fig. ).

Five-year overall survival and recurrence-free survival of the matched cohort stratified by tumor size

After PSM, there were 99 patients in the LR and RFA groups. Among the 99 patients in the LR group, 64 (64.6%) had a tumor size of 3.1–4.0 cm, and 35 (35.3%) had a tumor size of 4.1–5.0 cm. Among the 99 patients in the RFA group, 81 (81.8%) had a tumor size of 3.1–4.0 cm, and 18 (18.1%) had a tumor size of 4.1–5.0 cm.
Among the patients with a tumor size of 3.1–4.0 cm, 5-year OS was 69% in the LR group and 53% in the RFA group ( p = 0.146; Fig. ), and 5-year RFS was 63% in the LR group and 18% in the RFA group ( p = 0.0007; Fig. ). Among the patients with a tumor size of 4.1–5.0 cm, 5-year OS was 33% in the LR group and 33% in the RFA group ( p = 0.6323; Fig. ), and 5-year RFS was 40% in the LR group and unmeasurable in the RFA group due to a limited follow-up period ( p = 0.0333; Fig. ).

Characteristics of tumor recurrence and treatments for recurrence after propensity score matching

Local recurrence was significantly higher in the RFA group compared to the LR group ( p = 0.005). There were no significant differences in the proportion of patients with recurrence beyond Milan criteria ( p = 0.548) and patients who underwent curative treatments ( p = 0.5) between the RFA group and the LR group (Table ). Thirty patients developed recurrence in the LR group and 52 patients in the RFA group. The details of recurrence modality are as follows: 8 (26.6%) patients were BCLC 0, 10 (33.3%) were BCLC A, 5 (16.6%) were BCLC B, and 9 (30%) were BCLC C in the LR group; 14 (26.9%) patients were BCLC 0, 23 (44.2%) were BCLC A, 7 (13.5%) were BCLC B, and 6 (11.5%) were BCLC C in the RFA group. The details of treatment modalities for recurrence are as follows: 3 (10.0%) patients underwent LR, 13 (43.3%) patients underwent RFA, 1 (3.3%) patient received PEI, 11 (36.6%) patients underwent TACE, 3 (10.0%) patients received targeted therapies (i.e., sorafenib or lenvatinib), and 1 (3.3%) patient received BSC in the LR group; 3 (5.7%) patients underwent LR, 26 (50.0%) patients underwent RFA, 1 (1.9%) patient received PEI, 16 (30.7%) patients underwent TACE, 3 (5.7%) patients received targeted therapies, and 1 (1.9%) patient received BSC in the RFA group.

Cumulative incidence of local and non-local tumor recurrence in patients after propensity score matching

The cumulative incidence of local tumor recurrence was significantly higher in the RFA group compared to the LR group ( p < 0.001) (Fig. ). The cumulative incidence of non-local recurrence did not differ between the two groups ( p = 0.7) (Fig. ).
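These cumulative incidence estimates treat local and non-local recurrence as competing events (see Methods). The underlying estimator can be sketched in plain Python with hypothetical data; a real analysis would also apply the Gray test to compare groups, which is not shown here:

```python
def cumulative_incidence(times, causes, cause):
    """Cumulative incidence function for one event type under competing risks.

    times  : follow-up time of each patient
    causes : 0 = censored, otherwise an event-type code
             (e.g., 1 = local recurrence, 2 = non-local recurrence)
    cause  : the event type of interest
    Returns (time, cumulative incidence) pairs at each event time."""
    data = sorted(zip(times, causes))
    at_risk = len(data)
    event_free = 1.0  # probability of being free of *any* event just before t
    cif = 0.0
    out = []
    i = 0
    while i < len(data):
        t = data[i][0]
        tied = [c for tt, c in data if tt == t]
        d_any = sum(1 for c in tied if c != 0)
        d_k = sum(1 for c in tied if c == cause)
        if d_k > 0:
            cif += event_free * d_k / at_risk  # hazard of the event of interest
            out.append((t, cif))
        event_free *= 1.0 - d_any / at_risk   # all-cause survival update
        at_risk -= len(tied)
        i += len(tied)
    return out

# toy data: local recurrence at month 1, non-local at month 2, censored at 3
print(cumulative_incidence([1, 2, 3], [1, 2, 0], cause=1))
```

Note that, unlike one minus a Kaplan–Meier estimate per event type, the two cause-specific incidences computed this way cannot sum to more than one.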
In the present study, the proportion of patients aged > 65 years and with a MELD score of > 9 was higher in the RFA group compared to the LR group in the unmatched cohort; these results suggest that older patients (older age being a surrogate marker of severe comorbidities) and patients with inadequate liver function reserve (i.e., a MELD score of > 9) were preferentially referred for RFA. The inherent selection bias between the two treatment modalities resulted in better OS and RFS in the LR group of the unmatched cohort. After PSM, 5-year OS did not differ between the LR group and the RFA group, although the 5-year RFS of the former was better. This result could be explained by local recurrence being higher in the RFA group ( p < 0.001), whereas non-local recurrence did not differ between the two groups ( p = 0.70). An Italian study reported that among patients with HCC who underwent RFA, the post-recurrence survival of those with local recurrence was better than that of patients with non-local recurrence; because local recurrence is considered to reflect incomplete ablation with residual tumor, re-ablation can be performed effectively. However, non-local recurrence could be partly due to occult metastasis from the primary tumor, which indicates aggressive tumor biology and, consequently, a worse outcome . Tumor size is a well-known prognostic factor for patients with HCC . Therefore, we performed subgroup analyses with tumor size stratified as 3.1–4.0 and 4.1–5.0 cm. Our results showed that 5-year OS was comparable between the two treatment groups after PSM, irrespective of tumor size; however, 5-year RFS was superior in the LR group compared to the RFA group, irrespective of tumor size. We used a MELD score of > 9 to indicate inadequate liver function reserve in the present study. Traditionally, the MELD score is used to evaluate the severity of deterioration of liver function reserve in patients with liver decompensation .
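For reference, the MELD cutoff used here derives from the original (pre-MELD-Na) formula, which combines bilirubin, INR, and creatinine. A minimal sketch, assuming the standard UNOS conventions (laboratory values below 1.0 floored at 1.0, creatinine capped at 4.0 mg/dl, score capped at 40):

```python
import math

def meld(bilirubin_mg_dl, inr, creatinine_mg_dl, on_dialysis=False):
    """Original MELD score with UNOS bounds: inputs below 1.0 are set to 1.0,
    creatinine is capped at 4.0 mg/dl (or set to 4.0 under dialysis), and the
    final score is capped at 40."""
    bili = max(bilirubin_mg_dl, 1.0)
    inr = max(inr, 1.0)
    crea = 4.0 if on_dialysis else min(max(creatinine_mg_dl, 1.0), 4.0)
    score = (3.78 * math.log(bili) + 11.2 * math.log(inr)
             + 9.57 * math.log(crea) + 6.43)
    return min(round(score), 40)

print(meld(1.0, 1.0, 1.0))  # lowest possible score: 6
print(meld(2.0, 1.5, 1.2))  # exceeds the > 9 threshold used in this study
```

Under these conventions, the minimum score of 6 corresponds to fully normal laboratory values, so the > 9 cutoff identifies only a modest degree of laboratory abnormality.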
However, numerous studies have shown its utility for patients with HCC undergoing LR, further supporting the applicability of the MELD score to patients undergoing LR for HCC. We used AFP ≥ 20 ng/ml as a covariate in the PSM. This cutoff value is from the American Association for the Study of Liver Diseases guidelines, which recommend that patients at risk of HCC undergo surveillance using contrast-enhanced computed tomography or magnetic resonance imaging if their AFP level is ≥ 20 ng/ml. Patients with HCC and AFP ≥ 20 ng/ml are also referred to as those with AFP-positive HCC . As tumor size increases, the risk of MVI also increases . Of the 310 patients who underwent LR enrolled in the present study, MVI was noted in 186 (60%). MVI indicates aggressive tumor biology and an increased risk of micro-metastasis, which could render complete tumor resection less effective. Lei et al. enrolled 72 patients undergoing LR and 50 patients undergoing RFA to treat a single HCC of 3.0–5.0 cm. Their results showed that OS and RFS were comparable between the two groups. Cox regression analysis showed that neither LR nor RFA was a significant risk factor for OS or RFS. However, the study included a limited number of cases, and 34.7% of the LR group and 26% of the RFA group had Child–Pugh class B liver disease . Ye et al. enrolled 196 patients who underwent LR and 192 patients who underwent RFA for a single HCC of 3.0–5.0 cm. After PSM, 5-year OS was 34% and 40% ( p = 0.103) and 5-year RFS was 10% and 15% ( p = 0.087) in the RFA group and LR group, respectively. In addition, 7.6% of the LR group and 8.8% of the RFA group had Child–Pugh class B liver disease . Postoperative liver decompensation is the most representative cause of morbidity and mortality in LR . Thus, the ideal candidates for LR should be those with well-preserved liver function.
With the advent of local–regional therapies, patients with early-stage HCC and inadequate liver function reserve should be referred for local–regional therapies if liver transplantation is not feasible . Accordingly, we only enrolled patients with Child–Pugh class A liver disease in the present study. In addition, the two previous studies did not analyze differences in local and non-local recurrence between the two treatment modalities, which is key to explaining the comparable OS between them. A Chinese multi-center study enrolled 1289 patients who underwent percutaneous microwave ablation (MWA) ( n = 414) or laparoscopic liver resection (LLR) ( n = 875) as the first-line therapy for a solitary HCC of 3–5 cm. After PSM, there were no differences in OS between MWA and LLR (hazard ratio [HR] = 0.88, 95% CI = 0.65–1.19, p = 0.420), and MWA was inferior to LLR in RFS (HR = 1.36, 95% CI = 1.05–1.75, p = 0.017) . Our findings are consistent with the results of the Chinese study despite the use of different thermal ablation modalities. The same group of authors conducted a study of the same patients, but with age restricted to > 60 years. The MWA group consisted of 309 patients and the LLR group of 363 patients. After PSM, OS was similar between the two groups (HR 0.98, p = 0.900) and RFS was inferior in the MWA group (HR 1.52, p = 0.007) . Microwave ablation has potential advantages compared to RFA, including the ability to achieve higher temperatures and larger ablation zones, with lower susceptibility to heat sink effects. Despite these advantages, a recent systematic review and meta-analysis reported that the efficacy of MWA, as measured by incomplete ablation and complication rates, was similar to that of RFA for HCCs smaller than 5 cm . This may be explained by the fact that the efficacy of thermal ablation is largely dependent on the operator's experience.
RFA has been introduced in clinical guidelines as a curative treatment for early-stage HCC since the early 2000s, whereas MWA has been increasingly applied in clinical practice in the last decade . However, we would not select patients with a peri-vascular tumor for RFA treatment. Stereotactic body radiotherapy has also been noted for its suitability for treating tumors located in anatomical sites where RFA would be challenging . Bridging therapies are used in patients meeting liver transplantation criteria to delay HCC progression and minimize the risk of delisting while on the waiting list . Due to the extreme shortage of donors in Taiwan, among all 424 patients in our study, only 9 (2.1%) eventually received liver transplants.

A strength of our study is that we enrolled a relatively large number of patients with a single HCC of 3.0–5.0 cm and Child–Pugh class A liver disease who underwent percutaneous RFA or LR compared to previous studies . Our results support those of previous studies . A limitation of our study is that, as a single-center retrospective investigation, it may be subject to inherent selection bias. In addition, the study lacked data on tumor location (superficial vs deep) because this was not mentioned in our imaging reports. For patients with deep-seated HCC and the presence of CSPH, up-front liver transplantation is desirable but not always available. In general, these patients would be referred for RFA.

The 5-year OS of patients with a solitary HCC of 3–5 cm was comparable between the LR and RFA groups after PSM. Moreover, the two groups did not differ in severe post-treatment complications. Accordingly, percutaneous RFA could be the first-line treatment for patients with a solitary HCC of 3–5 cm who are reluctant to undergo surgery.
The results of the present study, along with those of previous studies, can reassure physicians that the outcome of RFA is no worse than that of LR, even for patients with a single HCC of 3.0–5.0 cm. Therefore, clinicians should not recommend LR for patients who are not ideal candidates for it.
Facilitating the Sharing of Electrophysiology Data Analysis Results Through In-Depth Provenance Capture | 278ba2e5-69e2-4cc6-b214-c7cf983e852a | 11181106 | Physiology[mh] | Sharing electrophysiology data analysis results is challenging, especially in collaborative environments. The results can be understood and interpreted only with the accurate description of the individual analysis steps, the parameters, and the data flow, which can be achieved by storing the results together with detailed provenance information. We implemented the Alpaca toolbox to capture provenance during the execution of Python scripts, a typical implementation in pipelines that analyze electrophysiology datasets. Alpaca provides an easy and lightweight solution to record the relevant details of the analysis, facilitating sharing the results. Electrophysiology methods are routinely used to investigate brain function, including the measurement of extracellular potentials using microelectrodes implanted into brain tissue . The first electrophysiology experiments acquired potentials from single or few implanted electrodes, which limited the data throughput of the experiments. However, recent technological advances produced large-density electrode arrays and data acquisition systems able to record hundreds of channels from heterogeneous sources in the experiment sampled at high resolution . It is now possible to perform massive and parallel recordings during electrophysiology experiments that result in datasets that are both complex in structure and large in volume. For the analysis of such datasets, this introduces two major consequences. First, the analysis will often be partially conducted in an exploratory style, where the analysis parameters and selection of datasets are probed iteratively by the scientists. Keeping track of these choices and approaches is particularly challenging for the scientist in the context of complex data. 
Second, the analysis of modern datasets often requires advanced methods that are implemented as workflows composed of several interdependent scripts ( , for a detailed description). The highly diverse and distributed results from the parallel and intertwined processing pipelines operating on complex data must be organized and described in a manner that is comprehensible not only to the original author of the analysis workflow but also in a collaborative context. Taken together, the full workflow including iterative and pipeline approaches, starting from the experimental data acquisition to the presentation of final results, is subject to a hierarchical decision-making process, frequent changes, and a large number of processing steps. With growing complexity, these aspects are increasingly difficult to follow, especially in collaborative contexts, where results of analyses executed by different scientists are shared. The resulting lack of reproducibility undermines the scientific investigations and the public trust in the scientific method and results (cf., ). In collaborative environments, the details of an executed analysis workflow should not only be fully documented but also readily understandable by all partners. Thus, work in collaboration could be improved further by directly capturing provenance information on a coarser level of granularity that is informative of the data manipulations throughout the execution of an analysis workflow leading to a certain analysis result . By using a provenance tracking system during workflow execution, all operations performed on a given data object can be described and stored in an accessible and structured way that is comprehensible to a human. For the analysis of an electrophysiology dataset, those operations consist of specific analysis methods or processes, such as applying a bandpass filter, downsampling a specific recorded signal, or generating a plot. 
Ultimately, the details relevant for the final interpretation of the results can be captured and, ideally, stored as metadata with the analysis results. These may then represent summaries of the analysis flow and lead to a description of the results that improve findability, interoperability, and reusability of the results (FAIR principles, ). Several tools to track and record provenance within (analysis) workflows and single scripts exist, spanning different domains . The tools take different approaches depending on which type of information to capture (e.g., tracing code execution, capturing user interactions, or monitoring operating system calls), and the implementation varies according to the intended use of the captured provenance information and its granularity . Although some of these solutions might be adapted or even combined to use in the analysis of electrophysiology data, none of these are designed and optimized with the particularities of this type of analysis setting in mind. One of these particularities to consider is the ease of use with custom analysis scripts. A workflow management system (WMS) such as VisTrails , for instance, requires the construction of workflows from analysis modules implemented as part of the WMS framework or by writing plugins, when the user might need the flexibility of custom scripts. Likewise, a tool such as AiiDA provides a full workflow ecosystem that requires the development of plugins and wrappers to interface and enforces its own data types, hindering the reuse of existing code and libraries without considerable effort. A second aspect to consider is the level of detail and suitability of the provenance information. For individual scripts, tools like noWorkflow produce a provenance trail that is highly detailed and without semantic meaning, making it difficult for the scientist to extract information. 
In contrast, a tool like Sumatra will record a more global context in which a script is run in the command line (script parameters, execution environment, version history, and links to the output files), while specific operations inside the script will not be detailed. Solutions to orchestrate a series of scripts, like Snakemake , produce flow graphs that show the flow of execution for the scripts composing a workflow but lack the actions performed within each script such that more detailed provenance metadata must be manually recorded by the user without any standardization. Finally, a last aspect is the specificity of the tools for a certain scientific domain. For example, a tool like LONI Pipeline , that supports full workflows with provenance tracking for the analysis of neuroimaging data , could be readily used in some analysis scenarios. However, the specificity of the available workflow components is a disadvantage for the user that wants to implement pipelines that fall outside of the scope of the intended use. This work sets out to address the challenges associated with the analysis of an electrophysiology dataset and sharing the results. To accomplish this, a novel tool was implemented to capture the suitable scope of provenance information and store it as metadata together with results generated by analysis scripts implemented in the Python programming language. A typical analysis scenario is presented as a use case and then the tool is analyzed with respect to the challenges it aims to address. 
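To picture the level of granularity at stake — recording each analysis operation together with its actual parameters and its inputs and outputs — consider the following toy sketch. It illustrates the concept only; it is not the API of any of the tools discussed, nor of the tool presented here, and the `downsample` function is a hypothetical analysis step:

```python
import functools
import hashlib
import pickle

PROVENANCE = []  # ordered record of the analysis steps in this session

def track(func):
    """Record the function name, keyword parameters, and content hashes
    of the inputs and the output every time the function is called."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        digest = lambda obj: hashlib.sha1(pickle.dumps(obj)).hexdigest()[:8]
        PROVENANCE.append({
            "function": func.__name__,
            "inputs": [digest(a) for a in args],
            "parameters": dict(kwargs),
            "output": digest(result),
        })
        return result
    return wrapper

@track
def downsample(signal, factor=10):
    """Keep every `factor`-th sample of a signal (toy analysis step)."""
    return signal[::factor]

reduced = downsample(list(range(1000)), factor=4)
print(PROVENANCE[-1]["function"], PROVENANCE[-1]["parameters"])
```

Storing such records as structured metadata next to the result file is, in essence, what operation-level provenance capture adds beyond the per-run context recorded by tools like Sumatra.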
Challenges for provenance capture during the analysis of electrophysiology data

We argue that a tool to capture provenance information during the analysis of electrophysiology data has to deal with four scenarios: (i) the analyses often require several preprocessing steps before any analytical method is applied, (ii) the data analysis process is often not linear but intertwined and therefore exhibits a certain level of intricacy, (iii) parameters of the analysis are frequently and often iteratively probed, and (iv) the final results are likely to be published or used in shared environments. In the following, we describe these scenarios in detail and derive four associated challenges for capturing provenance.

Preprocessing is a typical step in the analysis, and is usually custom-tailored to a particular project . For instance, data from a recording session of multiple trials (e.g., repeated stimulus presentations or behavioral responses) are usually recorded as a single data stream and only during the analysis cut into the individual trial epochs relevant to the analysis goal. Due to the high level of heterogeneity in the data, this is frequently achieved using custom scripts, with parameters that are specific to the trial structure and design of the experiment (e.g., selecting only particular trials according to behavioral responses such as reaction time). The scientist's written documentation, source code, and, in many cases, the data itself would need to be inspected to understand all these steps, e.g., the chunking of the data that was performed before the core analysis. Therefore, a first challenge is to clearly document the processing in an accessible and automated manner and to provide this information as a supplement to the analysis output.

The full analysis pipeline from the dataset to a final result artifact is likely not built in one attempt, but instead involves a continuous development .
For instance, as new data are obtained, time series may need to be excluded from analysis and new hypotheses are generated. Therefore, the analysis scripts may be updated to include additional analysis steps, and the resulting code will have increasing complexity. One solution to organize this agile process is to use a WMS ( Snakemake ; ) coupled with a code versioning system such as git . For each run, the WMS will provide coarse provenance information, such as the name of the script, environment information, script parameters, and files that were used or generated. The scripts can then be tracked to specific versions knowing the git commit history. However, if multiple operations (e.g., cutting data, downsampling, and filtering) are performed inside one script, the actual parameters in each step are possibly not captured as part of the provenance. This is the case where provenance information shows only the script parameters passed on the command line. The mapping of the command line to the actual parameters used by the functions in the script relies on the correct implementation of the code, and any default parameters of the functions that are not passed on the command line will not be known. Furthermore, it is not possible to inspect each intermediate (in-memory) data object during the execution of the script. Yet, without knowledge of these data operations and the data flow, it becomes challenging to compare results generated by multiple versions of the evolving analysis script, in particular if the code structure of the script changes over time. A solution to this challenge could be to break such complex scripts into several smaller scripts, such that the coarse provenance information of the WMS could be more descriptive of each individual process and intermediate results would be saved to disk (i.e., in our example, separate scripts for cutting, downsampling, and filtering).
However, this may be inconvenient and inefficient: resource-intensive operations (e.g., file loading and writing) might be repeated across different scripts, and temporary files would have to be used between the steps, instead of efficiently manipulating data in memory. Moreover, this approach limits the expressiveness and creativity of defining data operations as opposed to the full set of operations offered by the programming language in a single script. Therefore, a second challenge is to efficiently capture the parameters and the data flow associated with the analysis steps of the script. The parameters that control the final analysis output are frequently probed iteratively . For example, the scientist performing the analysis could write a Jupyter notebook to find specific frequency cutoffs for a filtering step. In one scenario, code cells of the notebook can be run in arbitrary sequences, with some parameters being changed in the process until a result artifact (e.g., a plot) is saved in a file. In a different scenario, it is possible to generate several versions of a given file by the same notebook, each of which overwrites the previous version. At this point, the scientist performing the analysis might rely on the associated Jupyter history or versioning of the notebook/files using git . However, the relevant parameters that were used to generate results saved in the last version of the file would be difficult to recall. Ultimately, a detailed documentation by the user or retracing the source code according to an execution history is still required. Therefore, a third challenge is to retain a documentation of the iterative generation of the analysis result that is explicitly and unambiguously linked to the generated result file. The fourth challenge stems from the situation where results (e.g., plots) are likely to be published or used in collaborative environments . 
This includes files uploaded in a manuscript submission, or files deposited in a shared folder or sent between collaborators via email. The interpretation of the stored results depends on the collaboration partner's understanding of the analysis details and the relevant parameters. Moreover, searching for specific results in a large collection of shared files can be difficult: not all the relevant parameters are recorded in the file name, and they are likely stored as non-machine-readable information within the file (e.g., an axis label in a figure). In these situations, analysis provenance stored together with the shared result files as structured and comprehensible metadata should improve information transfer in the collaboration and findability of the results.

Use case scenario

As a use case scenario, we consider an analysis that computed the mean power spectral densities (PSDs) from a publicly available dataset containing massively parallel electrophysiological recordings (raw electrode signals, local field potentials, and spiking activity) in the motor cortex of monkeys in a behavioral task involving movement planning and execution. The experiment details, data acquisition setup, and resulting datasets were previously described . Briefly, two subjects (monkey N and monkey L) were implanted with one Utah electrode array (96 active electrodes) in the primary motor/premotor cortices. Subjects were trained in an instructed delayed reach-to-grasp task. In a trial, the monkey had to grasp a cubic object using either a side grip (SG) or a precision grip (PG). The SG consists of the subject grasping the object with the tip of the thumb and the lateral surface of the other fingers, on the lateral sides of the object. The PG consists of the subject placing the tips of the thumb and index finger on a groove on the upper and lower sides of the object. The monkey had to pull the object against a load that required either a low (LF) or high pulling force (HF).
The grip and force instructions were presented through a light-emitting diode (LED) panel using two different visual cue signals (CUE and GO), respectively, which were separated by a 1,000 ms delay . As a result of the combination of the grip and force conditions, four trial types were possible: side grip with low force (SGLF), side grip with high force (SGHF), precision grip with low force (PGLF), and precision grip with high force (PGHF). A recording session consisted of several repetitions of each trial type that were acquired continuously in a single recording file. Neural activity was recorded during the session using a Blackrock Microsystems Cerebus data acquisition system, with the raw electrode signals bandpass-filtered between 0.3 and 7,500 Hz at the headstage level and digitized at 30 kHz with 16-bit resolution (0.25 µV/bit, raw signal). The behavioral events were simultaneously acquired through the digital input port that stored 8-bit binary codes as received from the behavioral apparatus controller. The experimental datasets are provided in the Neuroscience Information Exchange ( NIX ) format (RRID:SCR_016196; https://nixio.readthedocs.io ), developed with the aim of providing standardized methods and models for storing neuroscience data together with their metadata . Inside the NIX file, data are represented according to the data model provided by the Neo (RRID:SCR_000634; https://neuralensemble.org/neo ) Python library . Neo provides several features to work with electrophysiology data. First, it allows loading data files written using open standards such as NIX as well as proprietary formats produced by specific recording systems (e.g., Blackrock Microsystems, Plexon, Neuralynx, among others). Second, it implements a data model to load and structure information generated by the electrophysiology experiment in a standardized representation.
This includes time series of data acquired continuously in samples (such as the signals from electrodes or analog outputs of a behavioral apparatus) or timestamps (such as spikes in an electrode or digital events produced by a behavioral apparatus). Third, Neo provides typical manipulations and transformations of the data, such as downsampling the signal from electrodes or extracting parts of the data at specific recording intervals. The objects may store relevant metadata, such as names of signal sources, channel labels, or details on the experimental protocol. In this use case scenario, Neo was used to load the datasets and manipulate the data during the analysis. The relevant parts of the structure and relationships between objects of the Neo data model are briefly represented in . The Neo library is based on two types of objects: data and containers. Different classes of data objects exist, depending on the specific information to be stored. Data objects are derived from Quantity arrays that are provided by the Python quantities package ( https://github.com/python-quantities/python-quantities ) and provide NumPy arrays with attached physical units. The AnalogSignal is used to store one or more continuous signals (i.e., time series) sampled at a fixed rate, such as the 30 kHz raw signal captured from each of the 96 electrodes in the Utah array. The Event object is used to store one or multiple labeled timestamps, such as the behavioral events throughout the trials acquired from the digital port of the recording system. The container objects are used to group data objects together, and these are accessed through specific collections (lists) present in the container. The top-level container is the Block object that stores general descriptions of the data and has one or more Segment objects accessible by the segments attribute. 
The Segment object groups data objects that share a common time axis (i.e., they start and end within the same recording time, defined by the t_start and t_stop attributes; ). The Segment object also has collections to store specific data objects: analogsignals is a list of the AnalogSignal data objects, and events is a list of the Event data objects. The Neo data model also defines a framework for metadata description as key-value pairs for its data and container objects through annotations and array annotations. Annotations may be added to any Neo object. They contain information that is applicable to the complete object, such as the hardware filter settings that apply to all channels contained in an AnalogSignal object. Array annotations may be added to Neo data objects only. They contain information stored in arrays, whose length corresponds to the number of elements in the data. They are used to provide metadata for a particular element in the data stored in the object. For instance, in the Event object representing the behavioral events in the reach-to-grasp task, the trial_event_labels array annotation stores the decoded event string associated with each event timestamp stored in the object . In the end, all the data in the NIX dataset are loaded into Neo data objects that encapsulate all the relevant metadata. In the use case scenario, the PSDs were analyzed for each subject (monkey N and monkey L), and the mean PSD was computed for each of the four trial types present in the experiment . Although a single Python script (named psd_by_trial_type.py ) was used to produce the plot (stored as R2G_PSD_all_subjects.png ), the actual analysis algorithm is complex (shown in a schematic form in ). In a typical scenario, a file such as R2G_PSD_all_subjects.png could be stored in a shared folder or even sent to collaborators by e-mail.
At this point, several key pieces of information cannot be obtained from the plot alone: (i) How were the trials defined, i.e., which time points or behavioral events were used as start and end points to cut the data in the data preprocessing? (ii) Was any filtering applied to the raw signal, before the computation of the PSD? (iii) Several methods are available to obtain the PSD estimate, each with particular features that may affect the estimation of the spectrum . Which method was used in this analysis, and what were the relevant parameters (e.g., for frequency resolution)? (iv) How was the aggregation performed (i.e., method and number of trials)? What do the shaded area intervals around the plot lines represent? In addition to these questions, the contents of a plot such as R2G_PSD_all_subjects.png may be the result of several iterations of exploratory analyses and development of psd_by_trial_type.py . In our scenario, parameters that could have been iteratively probed or improved include the identification of failed electrodes, the definition of a suitable time window for cutting the data from a full trial, or the selection of specific filter cutoffs. Therefore, R2G_PSD_all_subjects.png could be overwritten after psd_by_trial_type.py was run with different parameters or different versions of the code. Altogether, the exhaustive set of steps and definitions used for the generation of the analysis result is not apparent from R2G_PSD_all_subjects.png . Even with a good description such as the flowchart in , which could be added as accompanying documentation, the exact parameters used for function calls are still missing, especially if these were determined during run time (such as the number of trials in the dataset). The only way of getting those relevant details of the analysis is by directly inspecting psd_by_trial_type.py . The difficulties associated with this approach are illustrated in .
For a simple code snippet , which iterates over a list of trial data to apply a Butterworth filter and then downsample the signal, it is not possible to visualize the state of the data for each iteration (e.g., the array shape). In addition, the actual contents of the variables are unknown. A robust data model like Neo helps to understand which objects were accessed during each iteration. However, even when using that framework, the exact data objects and their transformations in each iteration of the for-loop are not apparent from the code given that the object instances (including attributes, such as the shape of an array) are only available during run time. One example of such information that exists only at run time is the number of trials (i.e., the number of Segment objects returned by cut_segment_by_epoch ) and the number of channels (i.e., the shape of the AnalogSignal object in each loop iteration). Unless running the script again with the same dataset and explicitly outputting this information, it is not possible to know these values. In contrast, by capturing and structuring the relevant provenance during the execution, a representation could be obtained in a way that all relevant information is accessible after the run . The detailed trace ultimately shows which part of the data and the resulting intermediate objects were used during each iteration.

Alpaca: a tool for automatic and lightweight provenance capture in Python scripts

As the analysis of electrophysiology datasets is usually based on scripts such as psd_by_trial_type.py , we set out to implement Alpaca (Automated Lightweight ProvenAnce CApture) as a tool to capture the provenance information that describes the main steps implemented in scripts that process data. The captured information can be stored as a metadata file that is associated with the result file(s) generated by the script (e.g., the plot in stored in R2G_PSD_all_subjects.png ).
Alpaca can be used for scripts written in the Python programming language as Python is free and open source, and has been gaining popularity among the neuroscience community . Python is also frequently used in the analysis of electrophysiology data, and several dedicated open source packages are available, such as the Neo and NWB (Neurodata Without Borders; RRID:SCR_015242; https://www.nwb.org ) frameworks for electrophysiology data representation, the unified spike sorting pipeline SpikeInterface (RRID:SCR_021150; https://spikeinterface.readthedocs.io ) , and Elephant (Electrophysiology Analysis Toolkit; RRID:SCR_003833; https://python-elephant.org ) for data analysis. Therefore, a tool implemented in Python will have greater impact in the neuroscience community, as no licenses or fees are required and it builds on already established state-of-the-art processing and analysis tools. The functionality of Alpaca is illustrated in . Alpaca is based on a Python function decorator (a Python decorator allows adding new functionality to existing functions without changing their behavior) that supports tracking the individual steps of the analysis and constructing a provenance trace. In addition, Alpaca serializes the captured provenance information as a metadata file encoded in the RDF format (Resource Description Framework, a general model for description and exchange of graph data; ) according to the data model defined in the W3C (World Wide Web Consortium; https://www.w3.org ) PROV standard (PROV-DM; ). PROV is an open standard that was developed to allow the interoperability of provenance information in heterogeneous environments . Finally, visualization of the provenance trace is supported by converting the PROV metadata into graphs that show the data flow within the script and allow the visual inspection of the captured provenance .
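To illustrate the decorator-based capture mechanism, consider a simplified, self-contained sketch (this is a conceptual illustration only, not Alpaca's actual implementation; the names capture_provenance, history, and downsample are hypothetical):

```python
import datetime
import functools
import uuid

# Global history of captured function executions (a conceptual stand-in
# for the internal records kept by a provenance decorator)
history = []

def capture_provenance(func):
    """Record inputs, parameters, output, and timestamps of each call."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = datetime.datetime.now()
        result = func(*args, **kwargs)
        end = datetime.datetime.now()
        history.append({
            "execution_id": str(uuid.uuid4()),   # unique ID per execution
            "function": func.__name__,
            "inputs": [type(a).__name__ for a in args],
            "parameters": dict(kwargs),
            "output": type(result).__name__,
            "start": start.isoformat(),
            "end": end.isoformat(),
        })
        return result
    return wrapper

@capture_provenance
def downsample(values, factor=2):
    # Keep every `factor`-th element of the input sequence
    return values[::factor]

downsampled = downsample(list(range(10)), factor=5)
print(history[0]["function"], history[0]["parameters"])
# prints: downsample {'factor': 5}
```

After the run, the history holds a record per execution, so the data flow can be reconstructed without rerunning the script; Alpaca additionally hashes the data objects and serializes such records to RDF, as described below.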
Alpaca is provided as a standalone open source Python package that can be installed from the Python Package Index or directly from the code repository ( https://github.com/INM-6/alpaca ). The documentation with usage examples is available online ( https://alpaca-prov.readthedocs.io ). Several design decisions were adopted in Alpaca. First, the tool captures provenance during the execution without the need for users to enhance this information with additional metadata or documentation. Second, code instrumentation is reduced to a minimum level, and users are asked to make only minor changes in the existing code to enable tracking (see the online document contained within the code repository accompanying this study ( https://github.com/INM-6/alpaca_use_case/blob/f1696ec8dceaadbed6b825636ca7eb9aee704c92/documents/code_changes.pdf ) showing the changes required to track provenance within psd_by_trial_type.py ). Third, it is flexible enough to accommodate different coding styles, and it was designed to be as compatible as possible with existing code bases. Therefore, provenance is captured in an automated and lightweight fashion. Alpaca assumes that an analysis script such as psd_by_trial_type.py is composed of several functions that are called sequentially (potentially in the context of control flow statements such as loops), each performing a step in the analysis. The functions in the script may take data as input and produce outputs based on a transformation of that data, or generate new data. Moreover, a function may have one or more parameters that are not data inputs but modify how the function generates the output. For example, in reshaping an array using the NumPy function reshape , the new shape would represent a parameter that defines how to reshape the original array (i.e., input data) into a new array (i.e., the output data).
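Following this example, the roles of input, parameter, output, and metadata can be made concrete with a short NumPy sketch (the variable names are illustrative):

```python
import numpy as np

# Input: the data object passed to the function
data = np.arange(12)

# Parameter: controls how the input is transformed, but is not itself data
new_shape = (3, 4)

# Output: a new data object generated from the input
reshaped = np.reshape(data, new_shape)

# Metadata: information accessible through attributes of the objects,
# such as the shape of the arrays
print(data.shape, reshaped.shape)
# prints: (12,) (3, 4)
```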
In Python, information is passed to a function through function arguments, which are accessed by the local code in the function body that performs the computation. Those are specified in the function declaration using the def keyword. Therefore, Alpaca utilizes the following definitions to analyze a function call in the script: Input: a file or Python object that provides data for the function. It is one of the function arguments; Output: a file or Python object generated by a function. Can be a return value of the function or one of the function arguments; Parameter: any other function argument that is neither an input nor an output; Metadata: additional information contained in the input/output. For Python objects, these can be accessible by attributes (i.e., accessed by the dot . after the object name, such as signal.shape ) or annotations stored in dictionaries accessed by special attributes, such as the ones defined in the Neo data model. For files, this is the file path.

Initializing Alpaca

The calls to the functions tracked by Alpaca are expected to be present in a single scope (i.e., the main script body or a single function such as main ). To identify the code to be tracked and start the capture, the user must insert a call to the activate function at a point in the script before the corresponding block of code. When calling activate , Alpaca identifies the current script in execution, obtains the SHA256 hash (a hash is a function that maps data of variable size to fixed-size values; SHA256 is a Secure Hash Algorithm (SHA) that can be used to verify the identity of files) of the source file storing the code, and generates a universally unique identifier (UUID) to identify the script execution ( session ID ). The source code to be tracked will be analyzed to allow the extraction of each individual code statement later, during the analysis of each function execution.
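The identification steps performed by activate can be sketched using only the Python standard library (a conceptual illustration of the behavior described above, not Alpaca's internal code; script_fingerprint is a hypothetical name):

```python
import hashlib
import uuid

def script_fingerprint(path):
    """Identify a script by its content hash and a per-run session ID."""
    # The SHA256 hash ties the provenance record to the exact version
    # of the source file
    with open(path, "rb") as source:
        file_hash = hashlib.sha256(source.read()).hexdigest()
    # A fresh UUID distinguishes this particular execution (the session ID)
    session_id = str(uuid.uuid4())
    return file_hash, session_id
```

Running the fingerprinting twice on the same file yields the same hash but different session IDs, so two executions of an unchanged script can still be told apart in the provenance records.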
Before activating the tracking, the user can set options using the alpaca_settings function. These settings operate globally within the toolbox and control how Alpaca captures and describes provenance.

Tracking the steps of the analysis

The Provenance function decorator is used to wrap each data processing function executed in the script . When applying the decorator, the argument names that are either Python object inputs, file inputs, or file outputs are identified through the decorator constructor parameters inputs , file_input , or file_output . When the script is run, for each execution of the function, the decorator: (i) generates a description of the inputs and outputs, (ii) records the parameters used in the call, (iii) generates a unique execution UUID ( execution ID ), and (iv) captures the start/end timestamps. Finally, this information is used to build a record for the function execution. Provenance has an internal global function execution counter, incremented after the execution of any function being tracked. The current value is also added to the function execution record to obtain the order of that execution. Finally, all the execution records are stored in an internal history, which will be used to serialize the information at the end. The Provenance decorator analyzes the inputs and outputs to extract the information relevant for their description and their metadata: for Python objects (e.g., an AnalogSignal object), the type information ( Python class name and the module where it is implemented), content hash, and current memory address are recorded. The content hash is computed using either the hash function from the joblib ( https://joblib.readthedocs.io ) package (using the SHA1 algorithm) or the builtin Python hash function (that uses the algorithm implemented in the __hash__ method of the object). By default, every object will be hashed using joblib .
However, it is possible to define specific packages whose objects will be hashed using the builtin hash function using the alpaca_settings function. This allows selecting hashing functionality that may already be implemented in the object (which can be faster), or avoiding sensitivity to minor changes in the object content that would produce an overly detailed provenance trace. The values of all object instance attributes (i.e., stored in the __dict__ dictionary) are recorded, together with the values of the specific attributes when present. This includes, for example, shape and dtype for NumPy arrays, or extended attributes such as units , t_start , t_stop , nix_name , and dimensionality for the AnalogSignal object of Neo representing a measurement time series. More generic attributes that could be used by other data models, such as id , pid , or create_time , are also captured if present. Currently, support for capturing extended metadata details is implemented for NumPy -based objects; for files, the SHA256 file hash is computed using the hashlib package, and the absolute file path is recorded; for the Python builtin None , the object hash is a UUID, as it is a special case where the actual object is shared throughout the execution environment. This avoids duplication. The information on the function is also extracted: name, module, and version of the package where it was implemented (if available through the metadata module from the importlib package implemented in Python 3.8 or higher). Version information is currently not recorded for user-defined functions (i.e., implemented in the script file being tracked). Finally, the inputs to a function may be accessed from container objects by subscripts (e.g., an item in a list such as signals[0] ) or attributes (e.g., segment.analogsignals ).
To capture these static relationships, the abstract syntax tree of the source code statement containing the current function call is analyzed, all container objects are identified, and the operations (subscript or attribute) are added to the execution history. In the end, the container memberships are identified and recorded if used when passing inputs to a function.

Serialization of the provenance information

The captured provenance is serialized as an RDF graph , using one of the formats supported by RDFLib ( https://github.com/RDFLib/rdflib ). The AlpacaProvDocument class is responsible for managing the serialization, based on the history captured by the Provenance decorator. For simplified usage, the serialization can be accomplished in a single step by just calling the save_provenance function at the end of the script execution, passing a destination file and serialization format. All the information currently stored in the history in Provenance will be saved to disk. For the RDF representation of the captured provenance, the PROV-O ontology was extended to incorporate properties relevant to the description of the provenance elements captured by Alpaca. shows the main classes derived from the SoftwareAgent (a subclass of Agent), Entity, and Activity classes of the PROV-O ontology, and shows the provenance relationships among the classes, as defined in PROV-O. These main classes are: DataObjectEntity : entity used to represent a Python object that was an input or output of a function; FileEntity : entity used to represent a file that was an input or output of a function; FunctionExecution : activity used to represent a single execution of one function with a set of parameters; ScriptAgent : agent used to represent the script that was run and executed several functions in sequence. In addition to the classes derived from PROV-O, two additional classes are defined in the Alpaca ontology.
They are used to represent specific information in the context of the provenance captured by Alpaca: Function: represents a Python function. It contains code that is executed to perform some action in the script, and that can take inputs, parameters, and produce outputs (e.g., in our example, the welch_psd function defined in the spectral module of the Elephant package); NameValuePair: represents information where a value is associated with a name. Name is a string and value can be any literal (e.g., integers, strings, decimal numbers). This is the main class used to store function parameters and data object metadata. The Alpaca ontology also defines specific extended properties which are used to serialize function parameters, object/file metadata, and function information. They are summarized in . For representing memberships, such as objects accessed from attributes (e.g., segment.analogsignals ), indexes (e.g., signals[0] ), or slices (e.g., signals[1:5] ), the PROV-O hasMember property is used. The DataObjectEntity representing the container object will have a hasMember property whose value is the DataObjectEntity representing the element accessed. The element will have one of the following properties to describe the membership: fromAttribute: a string storing the name of the attribute used to access the object in the container (e.g., analogsignals in segment.analogsignals ); containerIndex: a string storing the index used to access the object in the container (e.g., 0 in signals[0] ). This is not necessarily a number, as Python uses string indexes when accessing elements in dictionaries; containerSlice: a string storing the slice used to access the object (e.g., 1:5 in signals[1:5] ). In the RDF graph, each data object, file, or function execution is identified by a uniform resource name (URN) identifier . The functions and script are also represented by their own URNs. 
To compose a unique identifier, specific information captured during the script execution is used in the composition of the final URN string. The authority identifier element is a string that points to the institute or organization which has responsibility over the analysis. It can be set using the alpaca_settings function. The identifiers generated by Alpaca are summarized in . summarizes how a single function execution is stored in the serialized RDF graph using the Alpaca ontology and the PROV-O properties.

Visualization of the serialized provenance

The provenance records serialized to RDF files can be loaded as NetworkX (RRID:SCR_016864; https://networkx.org ) graph objects. Besides the functionality for graph analysis offered by NetworkX , the graph objects can be saved as GEXF (Graph Exchange XML Format; https://gexf.net ) or GraphML ( http://graphml.graphdrawing.org ) files that can be visualized by available graph visualization tools, e.g., Gephi (RRID:SCR_004293; https://gephi.org ) , or other Python -based frameworks, e.g., Pyvis ( https://pyvis.readthedocs.io ) . This takes advantage of existing free and open source solutions developed specifically for analyzing and interacting with graphs. In Alpaca, the ProvenanceGraph class is responsible for generating the NetworkX graph objects from serialized provenance data. summarizes how the visualization graph is obtained from the RDF graph. The resulting graph will have entities ( DataObjectEntity or FileEntity ) and activities ( FunctionExecution ) as nodes, identified by the respective URN. Directed edges show the data flow across the functions. Metadata and function parameters are added to the attributes dictionary of each node. A few attributes are present for all the nodes in the graph (omitted in Fig. for clarity): type: describes one of the three possible types of node: object, file, or function; label: for data objects, it is the Python class name (e.g., AnalogSignal ).
For functions, it is the function name (e.g., welch_psd ). For files, it is File ; Python_name: for data objects and functions, it is the full module path to the class or function, with respect to the package where it is implemented (e.g., neo.core.analogsignal.AnalogSignal ). For files, this attribute is not used; Time Interval: a string representing a time interval according to the standard used by Gephi that is composed from the order of the function execution. This information can be used to visualize the temporal evolution of the provenance graph, e.g., using the timeline feature of Gephi that displays only the nodes within a specified execution interval. The ProvenanceGraph class provides options to tweak the visualization. First, it is possible to select which attributes and annotations from the metadata to include in the visualization graph. Second, parameter names can be prefixed by the function name, so that they can easily be identified. Third, nodes representing the builtin Python None object (that is the default return value of a Python function) can be omitted. Finally, nodes describing a sequence of object access operations from containers (e.g., segment.analogsignals[0] , which accesses the list in the analogsignals attribute of segment , followed by retrieving its first element) can be condensed such that a single edge describing the operation is generated. These visualization options reduce clutter and facilitate the visual inspection of the recorded provenance information. Finally, the provenance graphs can become large when repeated operations are performed within the script, such as using a for loop to iterate over several data objects to perform computations. Therefore, an aggregation and summarization are available, adapted from the functionality already implemented in NetworkX (from version 2.6). 
It uses the Summarization by Grouping Nodes on Attributes and Pairwise edges (SNAP) aggregation algorithm, and was modified from the original implementation to allow the selection of specific attributes of a set of nodes. Moreover, for functions executed with distinct sets of parameters, the different values can also be taken into account when identifying the similarity of nodes in summarizing the graph. The aggregation generates supernodes that represent not a single execution and data, but several identical or similar processing nodes. The identifiers of the individual elements that were aggregated in the supernode are listed in the members node attribute. The total number of nodes aggregated into the supernode is stored in the member_count node attribute. In the end, the user can aggregate several nodes together, depending on whether they share the values of a given attribute, which allows the generation of a simplified version of the provenance trace that provides a more general overview of the analysis.

Code accessibility

The code to reproduce the analyses presented as the use case in this paper is freely available online at https://github.com/INM-6/alpaca_use_case . Some figures were manually created using Inkscape, others are direct outputs of the corresponding scripts, and the remaining figures were created from graph visualization files generated by the corresponding scripts (GEXF format). The GEXF files were loaded into Gephi (version 0.9.7) and nodes were edited for color, position, and size. The graphs were exported to Scalable Vector Graphics (SVG) files that were manually edited using Inkscape to compose the final figures. Editing involved adjusting label sizes and adding information available as node attributes in Gephi. The data used for the analysis can be found at https://gin.g-node.org/INT/multielectrode_grasp . All code used in this manuscript is also available as Extended Data. Extended Data 1 (doi: 10.1523/ENEURO.0476-23.2024.d1) is available for download as a ZIP file.
Extended Data 2 (doi: 10.1523/ENEURO.0476-23.2024.d2) is available for download as a ZIP file.

We argue that a tool to capture provenance information during the analysis of electrophysiology data has to deal with four scenarios: (i) the analyses often require several preprocessing steps before any analytical method is applied, (ii) the data analysis process is often not linear but intertwined and therefore exhibits a certain level of intricacy, (iii) parameters of the analysis are frequently and often iteratively probed, and (iv) the final results are likely to be published or used in shared environments. In the following, we describe these scenarios in detail and derive four associated challenges for capturing provenance. Preprocessing is a typical step in the analysis, and is usually custom-tailored to a particular project . For instance, data from a recording session of multiple trials (e.g., repeated stimulus presentations or behavioral responses) are usually recorded as a single data stream and are cut into the individual trial epochs relevant to the analysis goal only during the analysis. Due to the high level of heterogeneity in the data, this is frequently achieved using custom scripts, with parameters that are specific to the trial structure and design of the experiment (e.g., selecting only particular trials according to behavioral responses such as reaction time). The scientist's written documentation, source code, and, in many cases, the data itself would need to be inspected to understand all these steps, e.g., the chunking of the data that was performed before the core analysis. Therefore, a first challenge is to clearly document the processing in an accessible and automated manner and to provide this information as a supplement to the analysis output. The full analysis pipeline from the dataset to a final result artifact is likely not built in one attempt, but instead involves continuous development .
For instance, as new data are obtained, time series may need to be excluded from the analysis and new hypotheses are generated. Therefore, the analysis scripts may be updated to include additional analysis steps, and the resulting code will have increasing complexity. One solution to organize this agile process is to use a workflow management system (WMS; e.g., Snakemake ) coupled with a code versioning system such as git . For each run, the WMS will provide coarse provenance information, such as the name of the script, environment information, script parameters, and files that were used or generated. The scripts can then be tracked to specific versions knowing the git commit history. However, if multiple operations (e.g., cutting data, downsampling, and filtering) are performed inside one script, the actual parameters used in each step are possibly not captured as part of the provenance. This is the case when the provenance information shows only the script parameters passed via the command line. The mapping of command line arguments to the actual parameters used by the functions in the script relies on the correct implementation of the code, and any default parameters of a function that are not passed via the command line will not be known. Furthermore, it is not possible to inspect each intermediate (in-memory) data object during the execution of the script. Yet, without knowledge of these data operations and the data flow, it becomes challenging to compare results generated by multiple versions of the evolving analysis script, in particular if the code structure of the script changes over time. A solution to this challenge could be to break such complex scripts into several smaller scripts, such that the coarse provenance information of the WMS could be more descriptive of each individual process and intermediate results would be saved to disk (i.e., in our example, separate scripts for cutting, downsampling, and filtering).
However, this may be inconvenient and inefficient: resource-intensive operations (e.g., file loading and writing) might be repeated across different scripts, and temporary files would have to be used between the steps, instead of efficiently manipulating data in memory. Moreover, this approach limits the expressiveness and creativity of defining data operations as opposed to the full set of operations offered by the programming language in a single script. Therefore, a second challenge is to efficiently capture the parameters and the data flow associated with the analysis steps of the script. The parameters that control the final analysis output are frequently probed iteratively . For example, the scientist performing the analysis could write a Jupyter notebook to find specific frequency cutoffs for a filtering step. In one scenario, code cells of the notebook can be run in arbitrary sequences, with some parameters being changed in the process until a result artifact (e.g., a plot) is saved in a file. In a different scenario, it is possible to generate several versions of a given file by the same notebook, each of which overwrites the previous version. At this point, the scientist performing the analysis might rely on the associated Jupyter history or versioning of the notebook/files using git . However, the relevant parameters that were used to generate results saved in the last version of the file would be difficult to recall. Ultimately, a detailed documentation by the user or retracing the source code according to an execution history is still required. Therefore, a third challenge is to retain a documentation of the iterative generation of the analysis result that is explicitly and unambiguously linked to the generated result file. The fourth challenge stems from the situation where results (e.g., plots) are likely to be published or used in collaborative environments . 
This includes files uploaded in a manuscript submission, or files deposited in a shared folder or sent between collaborators via email. The interpretation of the stored results depends on the understanding of the analysis details and its relevant parameters by the collaboration partner. Moreover, searching for specific results in a large collection of shared files can be difficult: not all the relevant parameters are recorded in the file name, and are likely stored as non-machine-readable information within the file (e.g., an axis label in a figure). In these situations, analysis provenance stored together with the shared result files as structured and comprehensible metadata should improve information transfer in the collaboration and findability of the results. As a use case scenario, we consider an analysis that computed the mean power spectral densities (PSDs) from a publicly available dataset containing massively parallel electrophysiological recordings (raw electrode signals, local field potentials, and spiking activity) in the motor cortex of monkeys in a behavioral task involving movement planning and execution. The experiment details, data acquisition setup, and resulting datasets were previously described . Briefly, two subjects (monkey N and monkey L) were implanted with one Utah electrode array (96 active electrodes) in the primary motor/premotor cortices. Subjects were trained in an instructed delayed reach-to-grasp task. In a trial, the monkey had to grasp a cubic object using either a side grip (SG) or a precision grip (PG). The SG consists of the subject grasping the object with the tip of the thumb and the lateral surface of the other fingers, on the lateral sides of the object. The PG consists of the subject placing the tips of the thumb and index finger on a groove on the upper and lower sides of the object. The monkey had to pull the object against a load that required either a low (LF) or high pulling force (HF). 
The grip and force instructions were presented through a light-emitting diode (LED) panel using two different visual cue signals (CUE and GO), respectively, which were separated by a 1,000 ms delay . As a result of the combination of the grip and force conditions, four trial types were possible: side grip with low force (SGLF), side grip with high force (SGHF), precision grip with low force (PGLF), and precision grip with high force (PGHF). A recording session consisted of several repetitions of each trial type that were acquired continuously in a single recording file. Neural activity was recorded during the session using a Blackrock Microsystems Cerebus data acquisition system, with the raw electrode signals bandpass-filtered between 0.3 and 7,500 Hz at the headstage level and digitized at 30 kHz with 16-bit resolution (0.25 V/bit, raw signal). The behavioral events were simultaneously acquired through the digital input port that stored 8-bit binary codes as received from the behavioral apparatus controller. The experimental datasets are provided in the Neuroscience Information Exchange ( NIX ) format (RRID:SCR_016196; https://nixio.readthedocs.io ), developed with the aim to provide standardized methods and models for storing neuroscience data together with their metadata . Inside the NIX file, data are represented according to the data model provided by the Neo (RRID:SCR_000634; https://neuralensemble.org/neo ) Python library . Neo provides several features to work with electrophysiology data. First, it allows loading data files written using open standards such as NIX as well as proprietary formats produced by specific recording systems (e.g., Blackrock Microsystems, Plexon, Neuralynx, among others). Second, it implements a data model to load and structure information generated by the electrophysiology experiment in a standardized representation. 
This includes time series of data acquired continuously in samples (such as the signals from electrodes or analog outputs of a behavioral apparatus) or timestamps (such as spikes in an electrode or digital events produced by a behavioral apparatus). Third, Neo provides typical manipulations and transformations of the data, such as downsampling the signal from electrodes or extracting parts of the data at specific recording intervals. The objects may store relevant metadata, such as names of signal sources, channel labels, or details on the experimental protocol. In this use case scenario, Neo was used to load the datasets and manipulate the data during the analysis. The relevant parts of the structure and relationships between objects of the Neo data model are briefly represented in . The Neo library is based on two types of objects: data and containers. Different classes of data objects exist, depending on the specific information to be stored. Data objects are derived from Quantity arrays that are provided by the Python quantities package ( https://github.com/python-quantities/python-quantities ) and provide NumPy arrays with attached physical units. The AnalogSignal is used to store one or more continuous signals (i.e., time series) sampled at a fixed rate, such as the 30 kHz raw signal captured from each of the 96 electrodes in the Utah array. The Event object is used to store one or multiple labeled timestamps, such as the behavioral events throughout the trials acquired from the digital port of the recording system. The container objects are used to group data objects together, and these are accessed through specific collections (lists) present in the container. The top-level container is the Block object that stores general descriptions of the data and has one or more Segment objects accessible by the segments attribute. 
The Segment object groups data objects that share a common time axis (i.e., they start and end within the same recording time, defined by the t_start and t_stop attributes; ). The Segment object also has collections to store specific data objects: analogsignals is a list of the AnalogSignal data objects, and events is a list of the Event data objects. The Neo data model also defines a framework for metadata description as key-value pairs for its data and container objects through annotations and array annotations. Annotations may be added to any Neo object. They contain information that are applicable to the complete object, such as the hardware filter settings that apply to all channels contained in an AnalogSignal object. Array annotations may be added to Neo data objects only. They contain information stored in arrays, whose length corresponds to the number of elements in the data. They are used to provide metadata for a particular element in the data stored in the object. For instance, in the Event object representing the behavioral events in the reach-to-grasp task, the trial_event_labels array annotation stores the decoded event string associated with each event timestamp stored in the object . In the end, all the data in the NIX dataset are loaded into Neo data objects that encapsulate all the relevant metadata. In the use case scenario, the PSDs were analyzed for each subject (monkey N and monkey L), and the mean PSD was computed for each of the four trial types present in the experiment . Although a single Python script (named psd_by_trial_type.py ) was used to produce the plot (stored as R2G_PSD_all_subjects.png ), the actual analysis algorithm is complex (shown in a schematic form in ). In a typical scenario, a file such as R2G_PSD_all_subjects.png could be stored in a shared folder or even sent to collaborators by e-mail. 
At this point, several key pieces of information cannot be obtained from the plot alone: (i) How were the trials defined, i.e., which time points or behavioral events were used as start and end points to cut the data in the data preprocessing? (ii) Was any filtering applied to the raw signal before the computation of the PSD? (iii) Several methods are available to obtain the PSD estimate, each with particular features that may affect the estimation of the spectrum . Which method was used in this analysis, and what were the relevant parameters (e.g., for frequency resolution)? (iv) How was the aggregation performed (i.e., method and number of trials)? What do the shaded area intervals around the plot lines represent? In addition to these questions, the contents of a plot such as R2G_PSD_all_subjects.png may be the result of several iterations of exploratory analyses and development of psd_by_trial_type.py . In our scenario, parameters that could have been iteratively probed or improved include the identification of failed electrodes, the definition of a suitable time window for cutting the data from a full trial, or the selection of specific filter cutoffs. Therefore, R2G_PSD_all_subjects.png could be overwritten after psd_by_trial_type.py was run with different parameters or different versions of the code. Altogether, the exhaustive set of steps and definitions used for the generation of the analysis result is not apparent from R2G_PSD_all_subjects.png . Even with a good description such as the flowchart in , which could be added as accompanying documentation, the exact parameters used for function calls are still missing, especially if these were determined during run time (such as the number of trials in the dataset). The only way of getting those relevant details of the analysis is by directly inspecting psd_by_trial_type.py . The difficulties associated with this approach are illustrated in .
For a simple code snippet , which iterates over a list of trial data to apply a Butterworth filter and then downsample the signal, it is not possible to visualize the state of the data for each iteration (e.g., the array shape). In addition, the actual contents of the variables are unknown. A robust data model like Neo helps to understand which objects were accessed during each iteration. However, even when using that framework, the exact data objects and their transformations in each iteration of the for-loop are not apparent from the code, given that the object instances (including attributes, such as the shape of an array) are only available during run time. One example of such information that exists only at run time is the number of trials (i.e., the number of Segment objects returned by cut_segment_by_epoch ) and the number of channels (i.e., the shape of the AnalogSignal object in each loop iteration). Unless the script is run again with the same dataset and this information is explicitly output, it cannot be known. In contrast, by capturing and structuring the relevant provenance during the execution, a representation could be obtained in which all relevant information is accessible after the run. The detailed trace ultimately shows which part of the data and the resulting intermediate objects were used during each iteration. Python scripts As the analysis of electrophysiology datasets is usually based on scripts such as psd_by_trial_type.py , we set out to implement Alpaca (Automated Lightweight ProvenAnce CApture) as a tool to capture the provenance information that describes the main steps implemented in scripts that process data. The captured information can be stored as a metadata file that is associated with the result file(s) generated by the script (e.g., the plot stored in R2G_PSD_all_subjects.png ).
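The loop just described can be sketched with a hedged stand-in: synthetic NumPy arrays instead of Neo objects, SciPy's Butterworth implementation, and naive decimation. The trial count, array shapes, sampling rate, and cutoff frequency are assumptions chosen for illustration, not values from psd_by_trial_type.py .

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

# Synthetic stand-in for the per-trial data described in the text:
# four trials, each 3,000 samples x 96 channels (values are assumptions).
rng = np.random.default_rng(0)
trials = [rng.standard_normal((3000, 96)) for _ in range(4)]

# Fourth-order low-pass Butterworth filter (250 Hz cutoff at 30 kHz sampling).
sos = butter(4, 250, btype="low", fs=30000, output="sos")

downsampled = []
for trial in trials:
    filtered = sosfiltfilt(sos, trial, axis=0)  # filter each channel
    downsampled.append(filtered[::12])          # naive downsampling by 12

# The intermediate shapes, e.g., (3000, 96) -> (250, 96), exist only at run
# time; they cannot be recovered from the source code alone.
```

The shapes and contents of `filtered` in each iteration are exactly the kind of run-time state that provenance capture records and that static inspection of the script cannot reveal.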
Alpaca can be used for scripts written in the Python programming language as Python is free and open source, and has been gaining popularity in the neuroscience community . Python is also frequently used in the analysis of electrophysiology data, and several dedicated open source packages are available, such as the Neo and NWB (Neurodata Without Borders; RRID:SCR_015242; https://www.nwb.org ) frameworks for electrophysiology data representation, the unified spike sorting pipeline SpikeInterface (RRID:SCR_021150; https://spikeinterface.readthedocs.io ) , and Elephant (Electrophysiology Analysis Toolkit; RRID:SCR_003833; https://python-elephant.org ) for data analysis. Therefore, a tool implemented in Python will have a greater impact on the neuroscience community, as no licenses or fees are required and it builds on already established state-of-the-art processing and analysis tools. The functionality of Alpaca is illustrated in . Alpaca is based on a Python function decorator (a Python decorator allows adding new functionality to existing functions without changing their behavior) that supports tracking the individual steps of the analysis and constructing a provenance trace. In addition, Alpaca serializes the captured provenance information as a metadata file encoded in the RDF format (Resource Description Framework, a general model for the description and exchange of graph data; ) according to the data model defined in the W3C (World Wide Web Consortium; https://www.w3.org ) PROV standard (PROV-DM; ). PROV is an open standard that was developed to allow the interoperability of provenance information in heterogeneous environments . Finally, visualization of the provenance trace is supported by converting the PROV metadata into graphs that show the data flow within the script and allow the visual inspection of the captured provenance .
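The decorator-based capture can be illustrated with a minimal toy sketch. This is not Alpaca's implementation: the decorator name, record fields, and the example function are all invented for the illustration. It only mirrors the idea of recording, for each call, the function name, input hashes, parameters, output hash, timestamps, and an execution identifier into a history.

```python
import functools
import hashlib
import time
import uuid

history = []  # records of all tracked function executions

def provenance(inputs):
    """Toy decorator: track the named input arguments of each function call."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            # Map positional/keyword arguments to their parameter names
            bound = dict(zip(func.__code__.co_varnames, args), **kwargs)
            start = time.time()
            result = func(*args, **kwargs)
            history.append({
                "execution_id": str(uuid.uuid4()),
                "function": func.__name__,
                "inputs": {name: hashlib.sha256(repr(bound[name]).encode()).hexdigest()
                           for name in inputs},
                "parameters": {k: v for k, v in bound.items() if k not in inputs},
                "output_hash": hashlib.sha256(repr(result).encode()).hexdigest(),
                "start": start,
                "end": time.time(),
                "order": len(history) + 1,
            })
            return result
        return wrapper
    return decorator

@provenance(inputs=["data"])
def scale(data, factor=2):
    return [x * factor for x in data]

scaled = scale([1, 2, 3], factor=10)
```

After the call, `history` holds one record distinguishing the data input (`data`, stored as a hash) from the parameter (`factor=10`), analogous to the records Alpaca builds before serializing them to RDF.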
Alpaca is provided as a standalone open source Python package that can be installed from the Python Package Index or directly from the code repository ( https://github.com/INM-6/alpaca ). The documentation with usage examples is available online ( https://alpaca-prov.readthedocs.io ). Several design decisions were adopted in Alpaca. First, the tool captures provenance during the execution without the need for users to enhance this information with additional metadata or documentation. Second, code instrumentation is reduced to a minimum level, and users are asked to make only minor changes in the existing code to enable tracking (see the online document contained within the code repository accompanying this study ( https://github.com/INM-6/alpaca_use_case/blob/f1696ec8dceaadbed6b825636ca7eb9aee704c92/documents/code_changes.pdf ) showing the changes required to track provenance within psd_by_trial_type.py ). Third, it is flexible enough to accommodate different coding styles, and it was designed to be the most compatible with existing code bases. Therefore, provenance is captured in an automatized and lightweight fashion. Alpaca assumes that an analysis script such as psd_by_trial_type.py is composed of several functions that are called sequentially (potentially in the context of control flow statements such as loops), each performing a step in the analysis. The functions in the script may take data as input and produce outputs based on a transformation of that data, or generate new data. Moreover, a function may have one or more parameters that are not data inputs but modify the behavior on how the function is generating the output. For example, in reshaping an array using the NumPy function reshape , the new shape would represent a parameter that defines how to reshape the original array (i.e., input data) into a new array (i.e., the output data). 
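The reshape example makes the role of each argument concrete. A minimal sketch using NumPy:

```python
import numpy as np

a = np.arange(6)            # input: the data object passed to the function
b = np.reshape(a, (2, 3))   # (2, 3) is a parameter: it controls how the
                            # transformation is performed, but is not data
# b is the output: a new data object derived from the input
```

Under Alpaca's definitions, `a` would be described as the input, the tuple `(2, 3)` as a parameter, and `b` as the output of the `reshape` execution.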
In Python, information is passed to a function through function arguments, which are accessed by the local code in the function body that performs the computation. These arguments are specified in the function declaration using the def keyword. Therefore, Alpaca utilizes the following definitions to analyze a function call in the script: Input: a file or Python object that provides data for the function. It is one of the function arguments; Output: a file or Python object generated by a function. It can be a return value of the function or one of the function arguments; Parameter: any other function argument that is neither an input nor an output; Metadata: additional information contained in the input/output. For Python objects, these can be accessible by attributes (i.e., accessed by the dot . after the object name, such as signal.shape ) or annotations stored in dictionaries accessed by special attributes, such as the ones defined in the Neo data model. For files, this is the file path. Initializing Alpaca The calls to the functions tracked by Alpaca are expected to be present in a single scope (i.e., the main script body or a single function such as main ). To identify the code to be tracked and start the capture, the user must insert a call to the activate function at a point in the script before the corresponding block of code. When calling activate , Alpaca identifies the current script in execution, obtains the SHA256 hash (a hash is a function that maps data of variable size to fixed-size values; SHA256 is a Secure Hash Algorithm (SHA) that can be used to verify the identity of files) of the source file storing the code, and generates a universally unique identifier (UUID) to identify the script execution ( session ID ). The source code to be tracked is analyzed to allow the extraction of each individual code statement later, during the analysis of each function execution.
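What activate records at start-up can be mimicked with standard-library calls. This is a sketch of the described behavior, not Alpaca's code; the function name and the throwaway script file are invented for the example.

```python
import hashlib
import uuid

def start_session(script_path):
    """Mimic the capture started by activate(): compute the SHA256 hash of
    the source file and generate a UUID as the session ID."""
    with open(script_path, "rb") as f:
        source_hash = hashlib.sha256(f.read()).hexdigest()
    session_id = str(uuid.uuid4())
    return source_hash, session_id

# Example on a throwaway "script" file
with open("example_script.py", "w") as f:
    f.write("print('analysis')\n")

source_hash, session_id = start_session("example_script.py")
```

Hashing the source file ties the provenance trace to an exact version of the code, while the UUID distinguishes individual executions of that same code.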
Before activating the tracking, the user can set options using the alpaca_settings function. These settings operate globally within the toolbox and control how Alpaca captures and describes provenance. Tracking the steps of the analysis The Provenance function decorator is used to wrap each data processing function executed in the script . When applying the decorator, the argument names that are either Python object inputs, file inputs, or file outputs are identified through the decorator constructor parameters inputs , file_input , or file_output . When the script is run, for each execution of the function, the decorator: (i) generates a description of the inputs and outputs, (ii) records the parameters used in the call, (iii) generates a unique execution UUID ( execution ID ), and (iv) captures the start/end timestamps. Finally, this information is used to build a record for the function execution. Provenance has an internal global function execution counter, incremented after the execution of any function being tracked. The current value is also added to the function execution record to obtain the order of that execution. Finally, all the execution records are stored in an internal history, which will be used to serialize the information at the end. The Provenance decorator analyzes the inputs and outputs to extract the information relevant for their description and their metadata: for Python objects (e.g., an AnalogSignal object), the type information ( Python class name and the module where it is implemented), content hash, and current memory address are recorded. The content hash is computed using either the hash function from the joblib ( https://joblib.readthedocs.io ) package (using the SHA1 algorithm) or the builtin Python hash function (that uses the algorithm implemented in the __hash__ method of the object). By default, every object will be hashed using joblib . 
However, it is possible to define specific packages whose objects will be hashed using the builtin hash function using the alpaca_settings function. This allows selecting hashing functionality that may already be implemented in the object (which can be faster), or avoid sensitivity to minor changes to the object content that will produce a provenance trace that is too detailed. The values of all object instance attributes (i.e., stored in the __dict__ dictionary) are recorded, together with the values of the specific attributes when present. This includes, for example, shape and dtype for NumPy arrays, or extended attributes such as units , t_start , t_stop , nix_name , and dimensionality for the AnalogSignal object of Neo representing a measurement time series. More generic attributes that could be used by other data models, such as id , pid , or create_time , are also captured if present. Currently, the support to capture extended metadata details is implemented for NumPy -based objects; for files, the SHA256 file hash is computed using the hashlib package, and the absolute file path is recorded; for the Python builtin None , the object hash is an UUID, as it is a special case where the actual object is shared throughout the execution environment. This avoids duplication. The information on the function is also extracted: name, module, and version of the package where it was implemented (if available through the metadata module from the importlib package implemented in Python 3.8 or higher). Version information is currently not recorded for user-defined functions (i.e., implemented in the script file being tracked). Finally, the inputs to a function may be accessed from container objects by subscripts (e.g., an item in a list such as signals[0] ) or attributes (e.g., segment.analogsignals ). 
To capture these static relationships, the abstract syntax tree of the source code statement containing the current function call is analyzed, all container objects are identified, and the operations (subscript or attribute) are added to the execution history. In the end, the container memberships are identified and recorded if used when passing inputs to a function. Serialization of the provenance information The captured provenance is serialized as RDF graph , using one of the formats supported by RDFLib ( https://github.com/RDFLib/rdflib ). The AlpacaProvDocument class is responsible for managing the serialization, based on the history captured by the Provenance decorator. For simplified usage, the serialization can be accomplished in a single step by just calling the save_provenance function at the end of the script execution, passing a destination file and serialization format. All the information currently stored in the history in Provenance will be saved to the disk. For the RDF representation of the captured provenance, the PROV-O ontology was extended to incorporate properties relevant to the description of the provenance elements captured by Alpaca. shows the main classes derived from the SoftwareAgent (a subclass of Agent), Entity, and Activity classes of the PROV-O ontology, and shows the provenance relationships among the classes, as defined in PROV-O. These main classes are: DataObjectEntity : entity used to represent a Python object that was an input or output of a function; FileEntity : entity used to represent a file that was an input or output of a function; FunctionExecution : activity used to represent a single execution of one function with a set of parameters; ScriptAgent : agent used to represent the script that was run and executed several functions in sequence. In addition to the classes derived from PROV-O, two additional classes are defined in the Alpaca ontology. 
They are used to represent specific information in the context of the provenance captured by Alpaca: Function: represents a Python function. It contains code that is executed to perform some action in the script, and that can take inputs, parameters, and produce outputs (e.g., in our example, the welch_psd function defined in the spectral module of the Elephant package); NameValuePair: represents information where a value is associated with a name. Name is a string and value can be any literal (e.g., integers, strings, decimal numbers). This is the main class used to store function parameters and data object metadata. The Alpaca ontology also defines specific extended properties which are used to serialize function parameters, object/file metadata, and function information. They are summarized in . For representing memberships, such as objects accessed from attributes (e.g., segment.analogsignals ), indexes (e.g., signals[0] ), or slices (e.g., signals[1:5] ), the PROV-O hasMember property is used. The DataObjectEntity representing the container object will have a hasMember property whose value is the DataObjectEntity representing the element accessed. The element will have one of the following properties to describe the membership: fromAttribute: a string storing the name of the attribute used to access the object in the container (e.g., analogsignals in segment.analogsignals ); containerIndex: a string storing the index used to access the object in the container (e.g., 0 in signals[0] ). This is not necessarily a number, as Python uses string indexes when accessing elements in dictionaries; containerSlice: a string storing the slice used to access the object (e.g., 1:5 in signals[1:5] ). In the RDF graph, each data object, file, or function execution is identified by a uniform resource name (URN) identifier . The functions and script are also represented by their own URNs. 
To compose a unique identifier, specific information captured during the script execution is used in the composition of the final URN string. The authority identifier element is a string that points to the institute or organization which has responsibility over the analysis. It can be set using the alpaca_settings function. The identifiers generated by Alpaca are summarized in . summarizes how a single function execution is stored in the serialized RDF graph using the Alpaca ontology and the PROV-O properties. Visualization of the serialized provenance The provenance records serialized to RDF files can be loaded as NetworkX (RRID:SCR_016864; https://networkx.org ) graph objects. Besides the functionality for graph analysis offered by NetworkX , the graph objects can be saved as GEXF (Graph exchange XML format; https://gexf.net ) or GraphML ( http://graphml.graphdrawing.org ) files that can be visualized by available graph visualization tools, e.g., Gephi (RRID:SCR_004293; https://gephi.org ) , or other Python -based frameworks, e.g., Pyvis ( https://pyvis.readthedocs.io ) . This takes advantage of existing free and open source solutions developed specifically for analyzing and interacting with graphs. In Alpaca, the ProvenanceGraph class is responsible for generating the NetworkX graph objects from serialized provenance data. summarizes how the visualization graph is obtained from the RDF graph. The resulting graph will have entities ( DataObjectEntity or FileEntity ) and activities ( FunctionExecution ) as nodes, identified by the respective URN. Directed edges show the data flow across the functions. Metadata and function parameters are added to the attributes dictionary of each node. A few attributes are present for all the nodes in the graph (omitted in the figure for clarity): type: describes one of the three possible types of node: object, file, or function; label: for data objects, it is the Python class name (e.g., AnalogSignal ).
For functions, it is the function name (e.g., welch_psd ). For files, it is File ; Python_name: for data objects and functions, it is the full module path to the class or function, with respect to the package where it is implemented (e.g., neo.core.analogsignal.AnalogSignal ). For files, this attribute is not used; Time Interval: a string representing a time interval according to the standard used by Gephi that is composed from the order of the function execution. This information can be used to visualize the temporal evolution of the provenance graph, e.g., using the timeline feature of Gephi that displays only the nodes within a specified execution interval. The ProvenanceGraph class provides options to tweak the visualization. First, it is possible to select which attributes and annotations from the metadata to include in the visualization graph. Second, parameter names can be prefixed by the function name, so that they can easily be identified. Third, nodes representing the builtin Python None object (that is the default return value of a Python function) can be omitted. Finally, nodes describing a sequence of object access operations from containers (e.g., segment.analogsignals[0] , which accesses the list in the analogsignals attribute of segment , followed by retrieving its first element) can be condensed such that a single edge describing the operation is generated. These visualization options reduce clutter and facilitate the visual inspection of the recorded provenance information. Finally, the provenance graphs can become large when repeated operations are performed within the script, such as using a for loop to iterate over several data objects to perform computations. Therefore, an aggregation and summarization are available, adapted from the functionality already implemented in NetworkX (from version 2.6). 
It uses the Summarization by Grouping Nodes on Attributes and Pairwise edges (SNAP) aggregation algorithm, and was modified from the original implementation to allow the selection of specific attributes of a set of nodes. Moreover, for functions executed with distinct sets of parameters, the different values can also be taken into account when identifying similarity of nodes in summarizing the graph. The aggregation generates supernodes that represent not a single execution and data, but several identical or similar processing nodes. The identifiers of the individual elements that were aggregated in the supernode are listed in the members node attribute. The total number of nodes aggregated into the supernode is stored in the member_count node attribute. In the end, the user can aggregate several nodes together, depending on whether they share the values of a given attribute, which allows the generation of a simplified version of the provenance trace that provides a more general overview of the analysis. The calls to the functions tracked by Alpaca are expected to be present in a single scope (i.e., the main script body or a single function such as main ). To identify the code to be tracked and start the capture, the user must insert a call to the activate function at a point in the script before the corresponding block of code. When calling activate , Alpaca identifies the current script in execution, obtains the SHA256 hash (a hash is a function that maps data with variable size to fixed-size values. SHA256 is a Secure Hash Algorithm (SHA) that can be used to verify the identity of files) of the source file storing the code, and generates a universally unique identifier (UUID) to identify the script execution ( session ID ). The source code to be tracked will be analyzed to allow the extraction of each individual code statement later, during the analysis of each function execution. 
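The activation step just described can be sketched with the standard library (an illustrative approximation, not Alpaca's actual code): hash the source file and generate a session identifier.

```python
import hashlib
import tempfile
import uuid

def identify_script(path):
    # Hash the source file (SHA256) so the exact code version can be
    # verified later, and generate a UUID as the session ID for this run.
    with open(path, "rb") as f:
        file_hash = hashlib.sha256(f.read()).hexdigest()
    session_id = str(uuid.uuid4())
    return file_hash, session_id

# Demonstrate on a throwaway "script" file
with tempfile.NamedTemporaryFile(mode="wb", suffix=".py", delete=False) as tmp:
    tmp.write(b"print('hello')\n")
    script_path = tmp.name

file_hash, session_id = identify_script(script_path)
```

Because the hash is computed over the file contents, any edit to the script yields a different identifier, while the UUID distinguishes repeated runs of the same code.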
Before activating the tracking, the user can set options using the alpaca_settings function. These settings operate globally within the toolbox and control how Alpaca captures and describes provenance. The Provenance function decorator is used to wrap each data processing function executed in the script . When applying the decorator, the argument names that are either Python object inputs, file inputs, or file outputs are identified through the decorator constructor parameters inputs , file_input , or file_output . When the script is run, for each execution of the function, the decorator: (i) generates a description of the inputs and outputs, (ii) records the parameters used in the call, (iii) generates a unique execution UUID ( execution ID ), and (iv) captures the start/end timestamps. Finally, this information is used to build a record for the function execution. Provenance has an internal global function execution counter, incremented after the execution of any function being tracked. The current value is also added to the function execution record to obtain the order of that execution. Finally, all the execution records are stored in an internal history, which will be used to serialize the information at the end. The Provenance decorator analyzes the inputs and outputs to extract the information relevant for their description and their metadata: for Python objects (e.g., an AnalogSignal object), the type information ( Python class name and the module where it is implemented), content hash, and current memory address are recorded. The content hash is computed using either the hash function from the joblib ( https://joblib.readthedocs.io ) package (using the SHA1 algorithm) or the builtin Python hash function (that uses the algorithm implemented in the __hash__ method of the object). By default, every object will be hashed using joblib . 
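The idea of content-based hashing can be sketched with the standard library (an illustrative approximation; Alpaca is described to use joblib's SHA1-based hash, which handles complex objects such as NumPy arrays more robustly than the pickle-based stand-in below):

```python
import hashlib
import pickle

def content_hash(obj):
    # Content-based identity: two objects with equal content get the same
    # hash even if they live at different memory addresses.
    return hashlib.sha1(pickle.dumps(obj)).hexdigest()

a = [1, 2, 3]
b = [1, 2, 3]   # distinct object, same content
c = [1, 2, 4]   # different content
```

Because the hash depends only on content, re-running a function on an equal copy of the data can be recognized as operating on the same entity in the provenance trace.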
However, it is possible to define specific packages whose objects will be hashed using the builtin hash function using the alpaca_settings function. This allows selecting hashing functionality that may already be implemented in the object (which can be faster), or avoiding sensitivity to minor changes in the object content that would produce an overly detailed provenance trace. The values of all object instance attributes (i.e., stored in the __dict__ dictionary) are recorded, together with the values of specific attributes when present. This includes, for example, shape and dtype for NumPy arrays, or extended attributes such as units , t_start , t_stop , nix_name , and dimensionality for the AnalogSignal object of Neo representing a measurement time series. More generic attributes that could be used by other data models, such as id , pid , or create_time , are also captured if present. Currently, the support to capture extended metadata details is implemented for NumPy -based objects. For files, the SHA256 file hash is computed using the hashlib package, and the absolute file path is recorded. For the Python builtin None , the object hash is a UUID, as it is a special case where the actual object is shared throughout the execution environment; this avoids duplication. The information on the function is also extracted: name, module, and version of the package where it was implemented (if available through the metadata module from the importlib package implemented in Python 3.8 or higher). Version information is currently not recorded for user-defined functions (i.e., implemented in the script file being tracked). Finally, the inputs to a function may be accessed from container objects by subscripts (e.g., an item in a list such as signals[0] ) or attributes (e.g., segment.analogsignals ).
To capture these static relationships, the abstract syntax tree of the source code statement containing the current function call is analyzed, all container objects are identified, and the operations (subscript or attribute) are added to the execution history. In the end, the container memberships are identified and recorded if used when passing inputs to a function. The captured provenance is serialized as an RDF graph , using one of the formats supported by RDFLib ( https://github.com/RDFLib/rdflib ). The AlpacaProvDocument class is responsible for managing the serialization, based on the history captured by the Provenance decorator. For simplified usage, the serialization can be accomplished in a single step by calling the save_provenance function at the end of the script execution, passing a destination file and serialization format. All the information currently stored in the history in Provenance will be saved to disk. For the RDF representation of the captured provenance, the PROV-O ontology was extended to incorporate properties relevant to the description of the provenance elements captured by Alpaca. shows the main classes derived from the SoftwareAgent (a subclass of Agent), Entity, and Activity classes of the PROV-O ontology, and shows the provenance relationships among the classes, as defined in PROV-O. These main classes are: DataObjectEntity : entity used to represent a Python object that was an input or output of a function; FileEntity : entity used to represent a file that was an input or output of a function; FunctionExecution : activity used to represent a single execution of one function with a set of parameters; ScriptAgent : agent used to represent the script that was run and executed several functions in sequence. In addition to the classes derived from PROV-O, two additional classes are defined in the Alpaca ontology.
They are used to represent specific information in the context of the provenance captured by Alpaca: Function: represents a Python function. It contains code that is executed to perform some action in the script, and that can take inputs, parameters, and produce outputs (e.g., in our example, the welch_psd function defined in the spectral module of the Elephant package); NameValuePair: represents information where a value is associated with a name. Name is a string and value can be any literal (e.g., integers, strings, decimal numbers). This is the main class used to store function parameters and data object metadata. The Alpaca ontology also defines specific extended properties which are used to serialize function parameters, object/file metadata, and function information. They are summarized in . For representing memberships, such as objects accessed from attributes (e.g., segment.analogsignals ), indexes (e.g., signals[0] ), or slices (e.g., signals[1:5] ), the PROV-O hasMember property is used. The DataObjectEntity representing the container object will have a hasMember property whose value is the DataObjectEntity representing the element accessed. The element will have one of the following properties to describe the membership: fromAttribute: a string storing the name of the attribute used to access the object in the container (e.g., analogsignals in segment.analogsignals ); containerIndex: a string storing the index used to access the object in the container (e.g., 0 in signals[0] ). This is not necessarily a number, as Python uses string indexes when accessing elements in dictionaries; containerSlice: a string storing the slice used to access the object (e.g., 1:5 in signals[1:5] ). In the RDF graph, each data object, file, or function execution is identified by a uniform resource name (URN) identifier . The functions and script are also represented by their own URNs. 
To compose a unique identifier, specific information captured during the script execution is used in the composition of the final URN string. The authority identifier element is a string that points to the institute or organization which has responsibility over the analysis. It can be set using the alpaca_settings function. The identifiers generated by Alpaca are summarized in . summarizes how a single function execution is stored in the serialized RDF graph using the Alpaca ontology and the PROV-O properties.
Visualization of the serialized provenance
The provenance records serialized to RDF files can be loaded as NetworkX (RRID:SCR_016864; https://networkx.org ) graph objects. Besides the functionality for graph analysis offered by NetworkX , the graph objects can be saved as GEXF (Graph exchange XML format; https://gexf.net ) or GraphML ( http://graphml.graphdrawing.org ) files that can be visualized by available graph visualization tools, e.g., Gephi (RRID:SCR_004293; https://gephi.org ) , or other Python -based frameworks, e.g., Pyvis ( https://pyvis.readthedocs.io ) . This takes advantage of existing free and open source solutions developed specifically for analyzing and interacting with graphs. In Alpaca, the ProvenanceGraph class is responsible for generating the NetworkX graph objects from serialized provenance data. summarizes how the visualization graph is obtained from the RDF graph. The resulting graph will have entities ( DataObjectEntity or FileEntity ) and activities ( FunctionExecution ) as nodes, identified by the respective URN. Directed edges show the data flow across the functions. Metadata and function parameters are added to the attributes dictionary of each node. A few attributes are present for all the nodes in the graph (omitted in Fig. for clarity): type: describes one of the three possible types of node: object, file, or function; label: for data objects, it is the Python class name (e.g., AnalogSignal ). For functions, it is the function name (e.g., welch_psd ).
For files, it is File ; Python_name: for data objects and functions, it is the full module path to the class or function, with respect to the package where it is implemented (e.g., neo.core.analogsignal.AnalogSignal ). For files, this attribute is not used; Time Interval: a string representing a time interval according to the standard used by Gephi that is composed from the order of the function execution. This information can be used to visualize the temporal evolution of the provenance graph, e.g., using the timeline feature of Gephi that displays only the nodes within a specified execution interval. The ProvenanceGraph class provides options to tweak the visualization. First, it is possible to select which attributes and annotations from the metadata to include in the visualization graph. Second, parameter names can be prefixed by the function name, so that they can easily be identified. Third, nodes representing the builtin Python None object (that is the default return value of a Python function) can be omitted. Finally, nodes describing a sequence of object access operations from containers (e.g., segment.analogsignals[0] , which accesses the list in the analogsignals attribute of segment , followed by retrieving its first element) can be condensed such that a single edge describing the operation is generated. These visualization options reduce clutter and facilitate the visual inspection of the recorded provenance information. In addition, the provenance graphs can become large when repeated operations are performed within the script, such as using a for loop to iterate over several data objects to perform computations. Therefore, aggregation and summarization functionality is available, adapted from the functionality already implemented in NetworkX (from version 2.6).
It uses the Summarization by Grouping Nodes on Attributes and Pairwise edges (SNAP) aggregation algorithm, and was modified from the original implementation to allow the selection of specific attributes of a set of nodes. Moreover, for functions executed with distinct sets of parameters, the different values can also be taken into account when identifying similarity of nodes in summarizing the graph. The aggregation generates supernodes that represent not a single execution and data, but several identical or similar processing nodes. The identifiers of the individual elements that were aggregated in the supernode are listed in the members node attribute. The total number of nodes aggregated into the supernode is stored in the member_count node attribute. In the end, the user can aggregate several nodes together, depending on whether they share the values of a given attribute, which allows the generation of a simplified version of the provenance trace that provides a more general overview of the analysis. The code to reproduce the analyses presented as a use case in this paper is freely available online at https://github.com/INM-6/alpaca_use_case . Several figures were manually created using Inkscape; others are direct outputs of the corresponding scripts; the remaining figures were created from graph visualization files generated by the corresponding scripts (GEXF format). The GEXF files were loaded into Gephi (version 0.9.7) and nodes were edited for color, position, and size. The graphs were exported to Scalable Vector Graphics (SVG) files that were manually edited using Inkscape to compose the final figures. Editing involved adjusting label sizes and adding information available as node attributes in Gephi. The data used for the analysis can be found at https://gin.g-node.org/INT/multielectrode_grasp . All codes used in this manuscript are also available as Extended Data. Extended Data 1: 10.1523/ENEURO.0476-23.2024.d1 (ZIP file).
Extended Data 2: 10.1523/ENEURO.0476-23.2024.d2 (ZIP file). In the following, we will describe and evaluate the analysis provenance captured by Alpaca in the use case scenario described in Section 2.2. After running psd_by_trial_type.py with the code modified to use Alpaca, a detailed provenance trace was obtained and stored as R2G_PSD_all_subjects.ttl . Corresponding GEXF graph files for visualization were generated, with distinct levels of aggregation and granularity of the steps in psd_by_trial_type.py , ranging from a fine-grained view to a summarizing bird's-eye view. The interactive analysis of those graphs using Gephi is presented in the form of a video (accessible at https://purl.org/alpaca/video ). Here, we will present the main features of the provenance trace using several Gephi graph exports. Then, we detail how they address the four challenges for tracking provenance of the analysis we identified in the Materials and Methods and .
Overview of the captured provenance
shows the overview of the graph generated from R2G_PSD_all_subjects.ttl ( None objects returned by functions were removed). Overall, 3,579 nodes and 4,313 edges are present, and the graph has eight colored regions. Each region corresponds to the iterations of the two outer loops in psd_by_trial_type.py (i.e., loop over two subjects × loop over four trial types resulting in eight iterations; ). For the remainder of this study, the visualization is optimized to remove memberships due to the access of Neo objects in containers that introduces extra nodes in the graph. This simplification is illustrated in . Using the timeline feature of Gephi , it is possible to isolate specific parts of the graph based on the execution order of statements in the Python code.
Here, we single out the time window that corresponds to the processing of a single trial in a loop iteration and then inspect individual attributes of the objects and parameters of the functions involved until the computation of the PSD. It is possible to inspect the start and end time points of the trial segment with respect to the recording time in the dataset using the t_start and t_stop attributes of the Segment object at the beginning of the trace, thus uniquely identifying the analyzed data segment. It is also possible to review the AnalogSignal object containing the data that were later processed and used to compute the PSD by the welch_psd function of Elephant . General attributes, such as the shape of the data array of the AnalogSignal object, can be accessed together with specific metadata, such as the names of the channels associated with the time series in the data. Finally, for these intermediate steps, it is possible to inspect specific parameters passed to each function: the attributes of FunctionExecution graph nodes (shown example: butter ) corresponding to function parameters are prefixed by the function name, followed by the name of the argument as defined in the Python function definition (cf., ). Taken together, Alpaca captured these types of information for each individual step throughout the execution of psd_by_trial_type.py such that each iteration of the central analysis can be traced in detail after completion of the script. It is possible to retrace the first steps after loading the two data files . A function called load_data (defined in psd_by_trial_type.py ) was called with the neural data file (available as a dataset in the NIX file format) of one particular subject as input and returning a Block object with all the data of that recording session. We can inspect the subject_name annotation of Block and identify in human-readable text which subject corresponds to each object. 
We can alternatively bind each Block to the specific source data file, by inspecting the File node associated with each object, and obtain the SHA256 hash ( data_hash node attribute). Although the actual path used in the analysis ( File_path node attribute) will point to the actual location of the file in the system where the script was run, the hash will allow the identification of the file regardless of its name and location. Moreover, the graph shows that the first Segment stored in the Block was accessed (through the segments attribute of Block ), and this was the main source for all subsequent analysis done for each monkey. By inspecting the node of the Segment object, we have access to its attributes and annotations, such as the start and end times of the data in recording time (−0.0021 and 1003.2122 s for subject monkey N, and 0.0 and 709.2480 s for monkey L; ). Similarly to the reading of the input data described above, we can inspect the generation of the output file . It was obtained from a matplotlib Figure object that was initialized by a function at the beginning of the script execution ( create_main_plot_objects ), was successively filled with graphs as power spectra were calculated, and finally saved to disk as a PNG file using a function called save_plot .
Understanding the data preprocessing
shows the sequence of steps applied to the Segment object that contains the full data for one subject. When aggregated by function parameters (i.e., simplified based on similarity of function parameters), the graph shows four separate paths that start from each of the two Segment objects (one per subject). Each comprises the Neo functions get_event , add_epoch , and cut_segment_by_epoch .
Each of those functions performs a specific action: identify specific events during the recording (stored in an Event object) according to selection criteria, select a window of data around these identified event timestamps (stored in an Epoch object), and finally use the windows stored in the Epoch object to cut the large Segment , producing one Segment object per epoch containing a window. We can now analyze the captured provenance to verify the detailed parameters used in each of those preprocessing functions. get_event used a parameter called properties , together with the Segment object as input. That parameter defines a dictionary with keys and values that are compared to the annotations or attributes of a Neo Event object in order to select the desired subset of all events recorded during the experiment. All four paths considered the CUE-OFF event of correct trials (defined by the trial_event_labels = ‘CUE-OFF’ and performance_in_trial_str = ‘correct_trial’ dictionary entries). However, in each path, the function was called with the belongs_to_trialtype value containing one of the four possible trial labels: PGHF, PGLF, SGHF, or SGLF. Therefore, each Event object returned by get_event will contain the times of CUE-OFF of all correct trials of one of the four trial types. The times of the generated Event objects were used to define epochs and cut the data to obtain segments of the trials of a particular type. Inspecting the subsequent executions of the functions add_epoch and cut_segment_by_epoch shows that epochs were defined as 500 ms after the CUE-OFF event ( pre = 0.0 ms and post = 500.0 ms), and the absolute recording times were preserved when cutting ( reset_time=False ). Therefore, for each subject, we can partition the provenance graph into four separate paths, each dealing with processing data of a particular trial type (the outer loops of psd_by_trial_type.py ; ).
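The windowing semantics recovered from these parameters, a window from `pre` to `post` around each selected event time, can be sketched in plain Python. This is an illustrative stand-in, not the Neo implementation, using toy event times and a toy signal:

```python
def add_epoch(event_times, pre=0.0, post=500.0):
    # One (t_start, t_stop) window per event, in ms, mirroring the
    # pre/post semantics described in the text.
    return [(t + pre, t + post) for t in event_times]

def cut_segment_by_epoch(samples, epochs, sampling_period=1.0):
    # Cut a time series (one sample per sampling_period ms, starting at
    # t=0) into one sub-list per epoch window.
    cuts = []
    for t_start, t_stop in epochs:
        i0 = int(t_start / sampling_period)
        i1 = int(t_stop / sampling_period)
        cuts.append(samples[i0:i1])
    return cuts

# Toy data: CUE-OFF events of two "correct trials" at 1000 ms and 2500 ms
cue_off_times = [1000.0, 2500.0]
epochs = add_epoch(cue_off_times, pre=0.0, post=500.0)
signal = list(range(4000))  # 4000 ms of data at 1 kHz
trials = cut_segment_by_epoch(signal, epochs)
```

Each resulting trial segment covers exactly the 500 ms following its event, which is the relationship the provenance trace makes verifiable after the fact.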
Not only are these selection criteria for extracting the data retained by the provenance trail, but we can also retrieve the precise time points used for cutting the data, which are calculated only at run time based on the loaded data on a trial-by-trial basis (by inspecting the t_start and t_stop attributes of each Segment generated by cut_segment_by_epoch ). Overall, Alpaca allowed us to understand the initial data preprocessing and trial definitions, addressing challenges 1 and 2.
Inspecting the data flow used to generate a result
The figure stored in R2G_PSD_all_subjects.png and shown in could have been produced by different versions of psd_by_trial_type.py , with steps in different order or new steps added. A likely scenario is the necessity to filter out some channels for one of the datasets. In , we see that for each subject, a user-defined function called select_channels was applied to the data. For monkey L, it is apparent from the shapes of the data arrays that two recording channels were excluded (due to signal quality), such that only 94 of the 96 recording channels were used. The provenance track captured by Alpaca shows this, as the returned AnalogSignal object is different from the object containing all the channels, and the shape attribute shows the removal of the two channels. At this point, it is possible to bind R2G_PSD_all_subjects.ttl to R2G_PSD_all_subjects.png through the SHA256 hash of the file written by the function save_plot . R2G_PSD_all_subjects.ttl will also have all the function executions linked to the script identifier, obtained from the hash of psd_by_trial_type.py and session ID (cf. ). Thus, it was possible to record all operations within a single script together with the actual parameters used. In this way, the provenance information can be used to automatically capture and retain the ongoing development process from the perspective of the generated results, addressing challenges 2 and 3.
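The channel exclusion revealed by the provenance trace can be sketched as follows. Since select_channels is user-defined in psd_by_trial_type.py , this is a hypothetical reconstruction of its effect (dropping named channels while keeping channel names and data rows aligned), not the original function:

```python
def select_channels(data, channel_names, exclude):
    # data: list of per-channel sample lists. Drop excluded channels,
    # so the recorded 'shape' and 'channel_names' metadata shrink together.
    kept = [(name, row) for name, row in zip(channel_names, data)
            if name not in exclude]
    names = [name for name, _ in kept]
    rows = [row for _, row in kept]
    return names, rows

# Toy stand-in for monkey L: 96 channels, two excluded for signal quality
names = [f"ch{i}" for i in range(1, 97)]
data = [[0.0] * 10 for _ in names]
kept_names, kept_data = select_channels(data, names, exclude={"ch2", "ch4"})
```

The change in array shape (96 to 94 rows) is exactly the kind of difference between input and output entities that makes the exclusion visible in the captured provenance.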
Reviewing analysis parameters
In between runs of a single version of psd_by_trial_type.py , the analysis parameters could also have been changed, leading to alternate versions of the PSD estimates in R2G_PSD_all_subjects.png generated by each run. A scenario where this is likely to occur is one where the scientist performing the analysis may have iterated the code execution several times to find a set of parameters that allowed a good visualization of the power spectra. The provenance track captured by Alpaca allows inspecting the parameter values of each individual function call. From the AnalogSignal object after channel selection, there is a common pathway in the aggregated graph for both subjects . The functions butter , AnalogSignal.downsample , and finally welch_psd were called sequentially. Those correspond to the filtering, downsampling, and computation of the PSD using the Welch method. Each of those functions has key parameters that will affect the PSD estimate, and the parameters were captured automatically. We can use the provenance information to verify that a 250 Hz low-pass cutoff was used for the filtering (from the parameter passed to the Elephant butter function). Moreover, we verify that the signal was downsampled by a factor of 60 (method downsample from the AnalogSignal object). By inspecting the shapes of the AnalogSignal objects that are input and output of the function, we can verify the downsample operation: the input object had 15,000 samples and, after AnalogSignal.downsample , the number was reduced to 250. Finally, it is possible to inspect all the parameters for the PSD computation using the Elephant welch_psd function: a Hanning window was used, for an estimate with a 2 Hz frequency resolution. The resulting objects storing the frequency bins and power estimates ( Quantity arrays) are discernible by the units attribute.
The frequency array has a dimension of 126, which is expected for a PSD of a continuous signal downsampled to 500 Hz and with a frequency resolution of 2 Hz. It is also possible to observe that the power estimates are a two-dimensional array with first dimensions of 96 (for monkey N) and 94 (for monkey L), which agree with the source AnalogSignal objects and indicate the number of channels. Therefore, the power estimates were obtained for each channel as a single array. Addressing challenges 1 and 2, it is possible to retrieve the value of any parameter that may have resulted from trial-and-error iterations during the development of psd_by_trial_type.py , as the provenance information shows the detailed history of the generation of the data objects that were ultimately used by the plotting function.
Facilitating sharing of analysis results
When sharing R2G_PSD_all_subjects.png with others, some parts of the figure leave guesswork to the collaborator. However, R2G_PSD_all_subjects.ttl contains several pieces of information that are not accessible from the figure stored in R2G_PSD_all_subjects.png alone. In addition to the details of the analysis steps presented above, it is also possible to know the last steps used to transform the data before plotting the lines and intervals using the plot_lfp_psd function . First, an average of the power across all channels was obtained for each trial. The NumPy mean function was applied to the array with the per-channel power estimates, over the first axis ( axis = 0 parameter). Then, the channel averages of all trials of the same trial type of a single subject were averaged in a grand mean (using the NumPy mean function). The individual trial averages were also used to obtain an SEM estimate (using the SciPy sem function).
Finally, the grand mean and SEM were passed to the plot_lfp_psd function that performed the plotting in the AxesSubplot object corresponding to the graph panel for that subject, taking the multiplier 1.96 as a parameter to define the width of the intervals. Not only are all these steps now apparent, but it is also possible to know how many trials were used for each subject when plotting (monkey N: PGHF = 36, PGLF = 35, SGHF = 36, and SGLF = 35; monkey L: PGHF = 33, PGLF = 31, SGHF = 30, and SGLF = 41; and B ). In addition, for each call of plot_lfp_psd it is possible to inspect the parameter providing the legend label with respect to the source of the mean, SEM, and frequency data used as inputs. As mentioned above, two electrode channels were excluded in the analysis of monkey L data. The provenance information in R2G_PSD_all_subjects.ttl makes it possible to check the channel_names annotations of each AnalogSignal object used in each iteration when computing the PSD . The inspected labels show that channels 2 and 4 were excluded for this monkey. An additional scenario to illustrate how to make use of the captured provenance in a shared environment is presented in . Here, a plot resembling the one presented in is stored in R2G_PSD_all_subjects.png . However, the lines and interval area boundaries appear smoothed, suggesting the plot was generated by an alternate version of psd_by_trial_type.py . The provenance captured by Alpaca reveals steps after the aggregation of the power estimates across trials. Spline smoothing objects from the SciPy package were used to generate new arrays that were the inputs to the plotting function plot_lfp_psd . With this information, collaborators receiving R2G_PSD_all_subjects.png can clearly identify that the plot is not showing the actual estimates but a smoothed version.
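The aggregation recovered from the provenance (per-trial channel averages, a grand mean across trials, an SEM estimate, and the 1.96 multiplier defining the interval width) can be reproduced on toy numbers with the standard library. This is an illustrative sketch; the script itself is described to use NumPy and SciPy:

```python
import math
import statistics

def grand_mean_and_ci(trial_values, multiplier=1.96):
    # trial_values: one per-trial average per entry (e.g., power at one
    # frequency bin). SEM = stdev / sqrt(n); interval = mean +/- 1.96 * SEM.
    mean = statistics.mean(trial_values)
    sem = statistics.stdev(trial_values) / math.sqrt(len(trial_values))
    return mean, (mean - multiplier * sem, mean + multiplier * sem)

# Toy per-trial averages for one frequency bin
values = [10.0, 12.0, 11.0, 13.0]
mean, (lo, hi) = grand_mean_and_ci(values)
```

Seeing these steps in the provenance record is what allows a collaborator to distinguish a confidence-interval band from, e.g., a min/max envelope.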
In summary, addressing challenge 4, the provenance information captured by Alpaca facilitates sharing R2G_PSD_all_subjects.png as it provides additional information for finding and understanding the results without requiring extra work by the scientist performing the analysis.
Provenance capture in parallelization and multiple-script scenarios
The complexity of electrophysiology analysis workflows can increase in multiple ways to accommodate the demands on data size and computational load of a particular analysis. In the following, we explore to what extent Alpaca can be integrated in two such scenarios, illustrated by the use case example . One way is to adopt parallelization approaches inside a single script, such as using the message passing interface (MPI). In this approach, a script is run multiple times as separate simultaneous processes, and each run is given an identifier (rank). A script such as psd_by_trial_type.py could have the control flow modified, for example, such that each iteration in the main for loop processing the two subject files (i.e., monkey N or monkey L; ) would be executed in different processes according to the rank value defined for that script execution . At the end of the loop iteration, the in-memory arrays with the computed PSDs are transferred to the main process (rank 0) using MPI routines to produce the plots and the final PNG file R2G_PSD_all_subjects.png . This approach allows the distribution of the execution of each iteration among the different compute cores. A second way consists of breaking a complex script into smaller scripts that perform more atomic parts of the analysis, a common approach for electrophysiology data analysis pipelines. In this scenario, inputs of later scripts are the outputs of earlier scripts in the pipeline (i.e., there is a sequential dependence among the scripts).
In our example, psd_by_trial_type.py could be broken into two main steps: the first reads an experimental dataset and computes the PSDs for each trial type, and the second creates the plot objects, takes the PSD data from both datasets, plots it using matplotlib , and saves the plot as R2G_PSD_all_subjects.png . Although this requires saving the data with the computed PSDs into intermediate files (which adds a file input/output performance cost), the workflow can be orchestrated by management systems such as Snakemake that control the parallel or sequential execution of the steps according to the file dependencies (i.e., the PSDs of either monkey N or monkey L can be estimated simultaneously, but the final plotting step must wait for the availability of the PSD data from both subjects). Snakemake can distribute the execution of parts of the analysis to specific compute cores and reuses data from previous steps if changes are made to a script in a later step. As Alpaca tracks the provenance of single-script runs, we implemented the two scenarios described above to demonstrate how to use the tool to track provenance in complex multi-script or parallelization scenarios. Each scenario uses the same functions as the psd_by_trial_type.py script described for the use case example, and is instrumented with Alpaca in the same way. Modifications were introduced only to accommodate the requirements for parallelization or breaking the steps into multiple scripts. For MPI, the control flow is modified to process a single subject loop iteration, to plot only on rank 0, and to perform an MPI send/receive operation before the execution of the plotting functions. For Snakemake , steps were added to save/load intermediate PSD data as (pickle) files. Each script execution generates an RDF file containing the provenance of that single execution. In the MPI example, 2 RDF files are saved (from rank 0 and 1 executions, respectively). 
shows the overview of the graph generated from R2G_PSD_all_subjects.ttl (None objects returned by functions were removed). Overall, 3,579 nodes and 4,313 edges are present, and the graph has eight colored regions. Each region corresponds to the iterations of the two outer loops in psd_by_trial_type.py (i.e., loop over two subjects × loop over four trial types resulting in eight iterations; ). For the remainder of this study, the visualization is optimized to remove memberships due to the access of Neo objects in containers, which introduces extra nodes in the graph. This simplification is illustrated in . Using the timeline feature of Gephi, it is possible to isolate specific parts of the graph based on the execution order of statements in the Python code.
Here, we single out the time window that corresponds to the processing of a single trial in a loop iteration and then inspect individual attributes of the objects and parameters of the functions involved until the computation of the PSD. It is possible to inspect the start and end time points of the trial segment with respect to the recording time in the dataset using the t_start and t_stop attributes of the Segment object at the beginning of the trace, thus uniquely identifying the analyzed data segment. It is also possible to review the AnalogSignal object containing the data that were later processed and used to compute the PSD by the welch_psd function of Elephant. General attributes, such as the shape of the data array of the AnalogSignal object, can be accessed together with specific metadata, such as the names of the channels associated with the time series in the data. Finally, for these intermediate steps, it is possible to inspect specific parameters passed to each function: the attributes of FunctionExecution graph nodes (shown example: butter) corresponding to function parameters are prefixed by the function name, followed by the name of the argument as defined in the Python function definition (cf., ). Taken together, Alpaca captured these types of information for each individual step throughout the execution of psd_by_trial_type.py such that each iteration of the central analysis can be traced in detail after completion of the script. It is possible to retrace the first steps after loading the two data files. A function called load_data (defined in psd_by_trial_type.py) was called with the neural data file (available as a dataset in the NIX file format) of one particular subject as input, returning a Block object with all the data of that recording session. We can inspect the subject_name annotation of Block and identify in human-readable text which subject corresponds to each object.
We can alternatively bind each Block to the specific source data file by inspecting the File node associated with each object and obtain the SHA256 hash (data_hash node attribute). Although the path used in the analysis (File_path node attribute) points to the actual location of the file on the system where the script was run, the hash allows the file to be identified regardless of its name and location. Moreover, the graph shows that the first Segment stored in the Block was accessed (through the segments attribute of Block), and this was the main source for all subsequent analysis done for each monkey. By inspecting the node of the Segment object, we have access to its attributes and annotations, such as the starting and end times of the data in recording time (−0.0021 and 1003.2122 s for subject monkey N, and 0.0 and 709.2480 s for monkey L; ). Just as for the reading of the input data described above, we can inspect the generation of the output file. It was obtained from a matplotlib Figure object that was initialized by a function at the beginning of the script execution (create_main_plot_objects), successively filled with graphs as power spectra were calculated, and finally saved to disk as a PNG file using a function called save_plot. shows the sequence of steps applied to the Segment object that contains the full data for one subject. When aggregated by function parameters (i.e., simplified based on similarity of function parameters), the graph shows four separate paths that start from each of the two Segment objects (one per subject). Each comprises the Neo functions get_event, add_epoch, and cut_segment_by_epoch.
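The content-based file identity described above can be sketched with the Python standard library. The helper below is a generic illustration (the file names are hypothetical), not Alpaca's internal implementation; it shows why a SHA256 digest identifies a file regardless of its path:

```python
import hashlib
import tempfile
from pathlib import Path

def sha256_of_file(path, chunk_size=65536):
    """Compute the SHA256 digest of a file's content in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# The same content stored under two different (hypothetical) names and
# locations yields the same hash, so the file remains identifiable no
# matter where the analysis script found it.
with tempfile.TemporaryDirectory() as tmp:
    a = Path(tmp) / "subject_N.nix"
    b = Path(tmp) / "copied_elsewhere.nix"
    a.write_bytes(b"example recording content")
    b.write_bytes(b"example recording content")
    assert sha256_of_file(a) == sha256_of_file(b)
```

Storing this digest as the data_hash attribute is what decouples provenance from the File_path attribute mentioned above.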
Each of those functions performs a specific action: identify specific events during the recording (stored in an Event object) according to selection criteria, select a window of data around these identified event timestamps (stored in an Epoch object), and finally use the windows stored in the Epoch object to cut the large Segment, producing one Segment object per epoch containing a window. We can now analyze the captured provenance to verify the detailed parameters used in each of those preprocessing functions. get_event used a parameter called properties, together with the Segment object as input. That parameter defines a dictionary with keys and values that are compared to the annotations or attributes of a Neo Event object in order to select the desired subset of all events recorded during the experiment. All four paths considered the CUE-OFF event of correct trials (defined by the trial_event_labels = ‘CUE-OFF’ and performance_in_trial_str = ‘correct_trial’ dictionary entries). However, in each path, the function was called with the belongs_to_trialtype value containing one of the four possible trial labels: PGHF, PGLF, SGHF, or SGLF. Therefore, each Event object returned by get_event will contain the times of CUE-OFF of all correct trials of one of the four trial types. The times of the generated Event objects were used to define epochs and cut the data to obtain segments of the trials of a particular type. Inspecting the subsequent executions of the functions add_epoch and cut_segment_by_epoch, we see from their parameters that epochs were defined as 500 ms after the CUE-OFF event (pre = 0.0 ms and post = 500.0 ms) and that the absolute recording times were preserved when cutting (reset_time=False). Therefore, for each subject, we can partition the provenance graph into four separate paths, each dealing with the processing of data of a particular trial type (the outer loops of psd_by_trial_type.py; ).
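The selection and windowing logic recorded in the provenance can be emulated in plain Python. The snippet below is a simplified stand-in, not the actual Neo API: a list of dictionaries replaces the annotations of a Neo Event object, and the event times are hypothetical.

```python
# Hypothetical event table standing in for the annotations of a Neo
# Event object; the values mirror the selection criteria described above.
events = [
    {"time_s": 12.0, "trial_event_labels": "CUE-OFF",
     "performance_in_trial_str": "correct_trial", "belongs_to_trialtype": "PGHF"},
    {"time_s": 14.5, "trial_event_labels": "CUE-OFF",
     "performance_in_trial_str": "correct_trial", "belongs_to_trialtype": "SGLF"},
    {"time_s": 17.1, "trial_event_labels": "CUE-ON",
     "performance_in_trial_str": "correct_trial", "belongs_to_trialtype": "PGHF"},
]

def get_matching_times(events, properties):
    """Select event times whose annotations match all key/value criteria."""
    return [ev["time_s"] for ev in events
            if all(ev.get(k) == v for k, v in properties.items())]

properties = {"trial_event_labels": "CUE-OFF",
              "performance_in_trial_str": "correct_trial",
              "belongs_to_trialtype": "PGHF"}
cue_off_times = get_matching_times(events, properties)

# Epochs: windows from 0 ms before to 500 ms after each selected event,
# corresponding to the pre/post parameters of add_epoch.
pre, post = 0.0, 0.5
epochs = [(t - pre, t + post) for t in cue_off_times]
assert epochs == [(12.0, 12.5)]
```

Running the same selection once per belongs_to_trialtype value reproduces the four paths per subject seen in the provenance graph.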
Not only are these selection criteria for extracting the data retained by the provenance trail, but we can also retrieve the precise time points used for cutting the data, which were calculated only at run time from the loaded data on a trial-by-trial basis (by inspecting the t_start and t_stop attributes of each Segment generated by cut_segment_by_epoch). Overall, Alpaca allowed us to understand the initial data preprocessing and trial definitions, addressing challenges 1 and 2. The figure stored in R2G_PSD_all_subjects.png and shown in could have been produced by different versions of psd_by_trial_type.py, with steps in a different order or new steps added. A likely scenario is the need to filter out some channels for one of the datasets. In , we see that for each subject, a user-defined function called select_channels was applied to the data. For monkey L, it is apparent from the shapes of the data arrays that two recording channels were excluded (due to signal quality), such that only 94 of the 96 recording channels were used. The provenance track captured by Alpaca shows this, as the returned AnalogSignal object is different from the object containing all the channels, and the shape attribute shows the removal of the two channels. At this point, it is possible to bind R2G_PSD_all_subjects.ttl to R2G_PSD_all_subjects.png through the SHA256 hash of the file written by the function save_plot. R2G_PSD_all_subjects.ttl will also have all the function executions linked to the script identifier, obtained from the hash of psd_by_trial_type.py and the session ID (cf. ). Thus, it was possible to record all operations within a single script together with the actual parameters used. In this way, the provenance information can be used to automatically capture and retain the ongoing development process from the perspective of the generated results, addressing challenges 2 and 3.
In between runs of a single version of psd_by_trial_type.py, the analysis parameters could also have been changed, leading to alternate versions of the PSD estimates in R2G_PSD_all_subjects.png generated by each run. A scenario where this is likely to occur is one where the scientist performing the analysis iterated the code execution several times to find a set of parameters that allowed a good visualization of the power spectra. The provenance track captured by Alpaca allows the values used in each individual function call to be inspected. From the AnalogSignal object after channel selection, there is a common pathway in the aggregated graph for both subjects . The functions butter, AnalogSignal.downsample, and finally welch_psd were called sequentially. Those correspond to the filtering, downsampling, and computation of the PSD using the Welch method. Each of those functions has key parameters that affect the PSD estimate, and the parameters were captured automatically. We can use the provenance information to verify that a 250 Hz low-pass cutoff was used for the filtering (from the parameter passed to the Elephant butter function). Moreover, we verify that the signal was downsampled by a factor of 60 (method downsample from the AnalogSignal object). By inspecting the shapes of the AnalogSignal objects that are input and output of the function, we can verify the downsample operation: the input object had 15,000 samples and, after AnalogSignal.downsample, the number was reduced to 250. Finally, it is possible to inspect all the parameters for the PSD computation using the Elephant welch_psd function: a Hanning window was used, for an estimate with a 2 Hz frequency resolution. The resulting objects storing the frequency bins and power estimates (Quantity arrays) are discernible by the units attribute.
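The parameters recovered from the provenance record are enough to sketch this processing chain directly with SciPy. The snippet below is an illustration using scipy.signal rather than the Elephant/Neo functions named above, with naive slicing standing in for AnalogSignal.downsample; the 30 kHz sampling rate is an assumption inferred from the 15,000 samples per 500 ms trial.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, welch

rng = np.random.default_rng(0)
fs = 30000.0                       # assumed raw sampling rate
trial = rng.standard_normal(15000) # one 500 ms trial segment (synthetic)

# Low-pass Butterworth filter with the 250 Hz cutoff recorded in the
# provenance (filter order chosen arbitrarily for this sketch).
sos = butter(4, 250.0, btype="low", fs=fs, output="sos")
filtered = sosfiltfilt(sos, trial)

# Downsample by a factor of 60: 15,000 samples -> 250 samples at 500 Hz.
downsampled = filtered[::60]
fs_down = fs / 60

# Welch PSD with a Hann window and 2 Hz frequency resolution
# (segment length = fs_down / 2 Hz = 250 samples).
freqs, psd = welch(downsampled, fs=fs_down, window="hann", nperseg=250)
assert freqs.shape == (126,)       # 250 // 2 + 1 frequency bins
```

The 126 frequency bins produced here match the array dimension reported in the provenance record discussed next.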
The frequency array has a dimension of 126, which is expected for a PSD of a continuous signal downsampled to 500 Hz and with a frequency resolution of 2 Hz. It is also possible to observe that the power estimates are a two-dimensional array with first dimensions of 96 (for monkey N) and 94 (for monkey L), which agree with the source AnalogSignal objects and indicate the number of channels. Therefore, the power estimates were obtained for each channel as a single array. Addressing challenges 1 and 2, it is possible to retrieve the value of any parameter that may have resulted from trial and error iterations during the development of psd_by_trial_type.py, as the provenance information shows the detailed history of the generation of the data objects that were ultimately used by the plotting function. When sharing R2G_PSD_all_subjects.png with others, some parts of the figure leave the collaborator guessing. However, R2G_PSD_all_subjects.ttl provides several missing pieces of information that are not accessible from the figure stored in R2G_PSD_all_subjects.png alone. In addition to the details of the analysis steps presented above, it is also possible to know the last steps used to transform the data before plotting the lines and intervals using the plot_lfp_psd function . First, an average of the power across all channels was obtained for each trial. The NumPy mean function was applied to the array with the per-channel power estimates, over the first axis (axis = 0 parameter). Then, the channel averages of all trials of the same trial type of a single subject were averaged in a grand mean (using the NumPy mean function). The individual trial averages were also used to obtain a SEM estimate (using the SciPy sem function).
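These aggregation steps take only a few lines of NumPy/SciPy. The snippet below is a sketch on synthetic data whose shapes are hypothetical stand-ins matching the numbers reported (36 trials, 96 channels, and 126 frequency bins for one trial type of monkey N):

```python
import numpy as np
from scipy.stats import sem

rng = np.random.default_rng(1)
n_trials, n_channels, n_freqs = 36, 96, 126
psd_per_trial = rng.random((n_trials, n_channels, n_freqs))  # synthetic PSDs

# Per-trial average across channels (NumPy mean over axis 0 of each
# per-channel power array).
trial_averages = np.stack([np.mean(trial, axis=0) for trial in psd_per_trial])

# Grand mean across trials, plus the SEM of the trial averages; the 1.96
# multiplier mentioned for the plotting step widens the SEM into an interval.
grand_mean = np.mean(trial_averages, axis=0)
interval = 1.96 * sem(trial_averages, axis=0)
lower, upper = grand_mean - interval, grand_mean + interval

assert grand_mean.shape == (n_freqs,)
```

grand_mean, lower, and upper correspond to the line and interval boundaries that end up in the figure.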
Finally, the grand mean and SEM were passed to the plot_lfp_psd function that performed the plotting in the AxesSubplot object corresponding to the graph panel for that subject, taking the multiplier 1.96 as a parameter to define the width of the intervals. Not only are all these steps now apparent, but it is also possible to know how many trials were used for each subject when plotting (monkey N: PGHF = 36, PGLF = 35, SGHF = 36, or SGLF = 35; monkey L: PGHF = 33, PGLF = 31, SGHF = 30, or SGLF = 41; and B ). In addition, for each call of plot_lfp_psd, it is possible to inspect the parameter providing the legend label together with the source of the mean, SEM, and frequency data used as inputs. As mentioned above, two electrode channels were excluded in the analysis of monkey L data. The provenance information in R2G_PSD_all_subjects.ttl makes it possible to check the channel_names annotations of each AnalogSignal object used in each iteration when computing the PSD . The inspected labels show that channels 2 and 4 were excluded for this monkey. An additional scenario to illustrate how to make use of the captured provenance in a shared environment is presented in . Here, a plot resembling the one presented in is stored in R2G_PSD_all_subjects.png. However, the lines and interval area boundaries appear smoothed, suggesting that the plot was generated by an alternate version of psd_by_trial_type.py. The provenance captured by Alpaca reveals additional steps after the aggregation of the power estimates across trials. Spline smoothing objects from the SciPy package were used to generate new arrays that were the inputs to the plotting function plot_lfp_psd. With this information, collaborators receiving R2G_PSD_all_subjects.png can clearly identify that the plot is not showing the actual estimates but a smoothed version.
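The smoothing step uncovered in this alternate scenario can be illustrated with SciPy's smoothing splines. The smoothing factor and the synthetic spectrum below are arbitrary choices for demonstration, not values recovered from the actual analysis:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

freqs = np.arange(0, 252, 2.0)     # 126 frequency bins at 2 Hz resolution
rng = np.random.default_rng(2)
# Synthetic, noisy power spectrum standing in for a grand-mean PSD.
psd = np.exp(-freqs / 50.0) + 0.01 * rng.standard_normal(freqs.size)

# Smoothing spline evaluated on the original frequency grid: the curve
# handed to the plotting function no longer equals the raw Welch estimate.
spline = UnivariateSpline(freqs, psd, s=0.05)
smoothed = spline(freqs)
assert smoothed.shape == psd.shape
```

Without the provenance record, nothing in the final PNG would reveal that such a transformation sits between the PSD estimates and the plotted lines.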
In summary, addressing challenge 4, the provenance information captured by Alpaca facilitates sharing R2G_PSD_all_subjects.png as it provides additional information for finding and understanding the results without requiring extra work by the scientist performing the analysis.

Provenance capture in parallelization and multiple-script scenarios

The complexity of electrophysiology analysis workflows can increase in multiple ways to accommodate the demands on data size and computational load of a particular analysis. In the following, we explore to what extent Alpaca can be integrated in two such scenarios, illustrated by the use case example . One way is to adopt parallelization approaches inside a single script, such as using the message passing interface (MPI). In this approach, a script is run multiple times as separate simultaneous processes, and each run is given an identifier (rank). A script such as psd_by_trial_type.py could have the control flow modified, for example, such that each iteration in the main for loop processing the two subject files (i.e., monkey N or monkey L; ) would be executed in different processes according to the rank value defined for that script execution . At the end of the loop iteration, the in-memory arrays with the computed PSDs are transferred to the main process (rank 0) using MPI routines to produce the plots and the final PNG file R2G_PSD_all_subjects.png. This approach allows the distribution of the execution of each iteration among the different compute cores. A second way consists of breaking a complex script into smaller scripts that perform more atomic parts of the analysis, a common approach for electrophysiology data analysis pipelines. In this scenario, inputs of later scripts are the outputs of earlier scripts in the pipeline (i.e., there is a sequential dependence among the scripts).
In our example, psd_by_trial_type.py could be broken into two main steps: the first reads an experimental dataset and computes the PSDs for each trial type, and the second creates the plot objects, takes the PSD data from both datasets, plots it using matplotlib , and saves the plot as R2G_PSD_all_subjects.png . Although this requires saving the data with the computed PSDs into intermediate files (which adds a file input/output performance cost), the workflow can be orchestrated by management systems such as Snakemake that control the parallel or sequential execution of the steps according to the file dependencies (i.e., the PSDs of either monkey N or monkey L can be estimated simultaneously, but the final plotting step must wait for the availability of the PSD data from both subjects). Snakemake can distribute the execution of parts of the analysis to specific compute cores and reuses data from previous steps if changes are made to a script in a later step. As Alpaca tracks the provenance of single-script runs, we implemented the two scenarios described above to demonstrate how to use the tool to track provenance in complex multi-script or parallelization scenarios. Each scenario uses the same functions as the psd_by_trial_type.py script described for the use case example, and is instrumented with Alpaca in the same way. Modifications were introduced only to accommodate the requirements for parallelization or breaking the steps into multiple scripts. For MPI, the control flow is modified to process a single subject loop iteration, to plot only on rank 0, and to perform an MPI send/receive operation before the execution of the plotting functions. For Snakemake , steps were added to save/load intermediate PSD data as (pickle) files. Each script execution generates an RDF file containing the provenance of that single execution. In the MPI example, 2 RDF files are saved (from rank 0 and 1 executions, respectively). 
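A minimal Snakefile for the two-step split might look as follows. The rule names, paths, and intermediate pickle files are hypothetical stand-ins for the workflow described above, not the actual files of the use case:

```
rule all:
    input:
        "R2G_PSD_all_subjects.png"

# Step 1: compute the PSDs per subject; the two instances of this rule
# (monkey_N, monkey_L) can run in parallel.
rule compute_psd:
    input:
        "data/{subject}.nix"
    output:
        "intermediate/psd_{subject}.pkl"
    script:
        "compute_psd.py"

# Step 2: plotting waits until the PSD files of both subjects exist.
rule plot:
    input:
        expand("intermediate/psd_{subject}.pkl",
               subject=["monkey_N", "monkey_L"])
    output:
        "R2G_PSD_all_subjects.png"
    script:
        "plot_psds.py"
```

The file dependencies encode exactly the constraint stated above: the PSDs can be estimated simultaneously, but the plotting rule only runs once both intermediate files are available.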
In the Snakemake example, 3 RDF files are obtained: two in the step to compute the PSDs (for either monkey N or monkey L) and one in the step for plotting. The data from all RDF files of either MPI (2 files) or Snakemake (3 files) examples are trivially combined into a single RDF graph to visualize provenance as a graph. compares the graphs obtained for each scenario after aggregation. Due to the unique identifiers generated by Alpaca, when the provenance data of the distributed executions were combined, a fully connected graph describing the whole analysis emerged (e.g., in the Snakemake example, the identifiers of the files saved in step 1 are the same when read by step 2). The resulting provenance tracks of the MPI and the Snakemake scenarios are highly similar to the single-script scenario , with minor differences due to the changes needed to accommodate the parallelization or multi-script orchestration (e.g., reading additional files). Therefore, the script-based tracking of provenance using Alpaca can be used in these more complex and distributed scenarios, yielding a merged provenance record that provides the overview of the whole analysis process. We presented Alpaca, a toolbox to capture fine-grained provenance information when executing Python code, with a specific focus on scripts that analyze data. The information is saved as a metadata file that represents a sidecar file to the saved analysis results. Using a realistic use case analysis of calculating power spectra estimates in a massively parallel electrophysiology dataset, we showed how this captured provenance metadata helps in understanding an electrophysiology analysis result that could ultimately be shared among collaborators. With the help of graph visualizations, it is possible to inspect the data flow across functions together with other details that were available at run time, such as object attributes and annotations and function parameters. 
The toolbox takes advantage of existing standards to represent electrophysiology data in Python (e.g., Neo) by also capturing relevant object metadata into the provenance records. In the end, it was possible to obtain detailed information that was not available from the result file alone. This provides better context for the interpretation of an analysis result and adds rigor to its reuse. In the beginning, we introduced four challenges associated with the analysis of electrophysiology datasets that we aimed to consider in designing a toolbox to capture provenance. We then showed, using our concrete use case, that Alpaca addresses these challenges. First, the customized data preprocessing routine using functionality of the Neo package was described in the provenance record with all the relevant parameters. Second, the state of the parameters of the functions called in the script, together with the data flow, is automatically recorded alongside the results, allowing detailed comparisons as the script is developed and adapted over time. Third, in agile, iterative analysis scenarios, changes to the source code or to the execution order of code blocks lead to different result files and to different provenance tracks that can be bound to the result files and code by the file and script identifiers, respectively. Finally, Alpaca provided a structured provenance record describing the history of generation of R2G_PSD_all_subjects.png as an additional file that is suitable for sharing together with the results. This serialized provenance makes not only information available in the plot (e.g., subject names, units) but also information that was not apparent at all (e.g., the annotations employed to select the timestamps of the CUE-OFF events that define the start time of the trial data used) accessible in a machine-readable format that can be inspected by scientists receiving the shared analysis results.
Overall, the provenance information captured by Alpaca delivers the information required for understanding and interpreting an electrophysiology analysis result, facilitating especially work in collaborative environments. Trust is a key factor in experimental data analysis, especially in collaborative contexts. Result artifacts (files, figures, etc.) are useful as long as the processes that generated them fit the hypotheses and research questions that guided the analysis in the first place. As provenance information describes the data and its transformations, it is expected to help in building trust in the analysis of electrophysiology data. Improving trust in the analysis is one of the focuses of Alpaca, and the provenance information captured as a metadata file helps in that direction. With the example presented in this paper, we demonstrated that the toolbox describes the analysis processes in detail, reducing uncertainty about every step of the data analysis. Data loading, preprocessing, signal processing, obtaining the actual PSD estimates, and preparing the data for plotting and saving the result file were apparent when analyzing the provenance records saved as R2G_PSD_all_subjects.ttl. In addition, the key parameters that determine each intermediate result are clearly defined. In the end, Alpaca contributes to building trust in the process of analyzing data in collaborative environments and sharing results among peers. Alpaca might also improve the reproducibility of the results when analyzing electrophysiology data. Considering reproducibility as the ability to reproduce a given analysis result by different individuals in different settings, the detailed information captured by Alpaca provides a good description of the processes involved in the generation of the analysis result even in the absence of the original script.
Although a full re-execution or reconstruction of the source code is neither possible nor the goal of the tool, it is still possible to know the sequence of functions used, their source packages and versions, and the relevant parameters at a level of detail that would help in any reimplementation of the analysis pipeline from scratch. The provided identifiers and hashes would also help in checking whether the data objects are equivalent between runs, without having to serialize the full object data at each step. In the end, although the generation of the exact result file will require the re-execution of the original script, the information summarized by Alpaca already makes any attempt to reproduce the results using different code more likely to succeed. Alpaca also contributes to making the electrophysiology data analysis results more compliant with the FAIR principles . These were developed to provide recommendations and requisites to increase the findability, accessibility, interoperability, and reusability of data. While typically considered in the context of the source data files obtained from an experiment, the principles could be extended to include artifacts such as a result stored in R2G_PSD_all_subjects.png. Indeed, increasing the FAIRness of such electrophysiology analysis results would bring several benefits. First, if the results are findable, it is easier to navigate among a collection of results, such as hundreds of files in a shared folder. Second, interoperability would allow for the comparison of similar results produced by different implementations of a single method (as in the case of different Python toolboxes providing similar analysis functions, such as the computation of a PSD using the Welch method, which is available in Elephant, SciPy, MNE, and many others). Finally, the reusability of the results would eliminate the need to repeat analyses that were already performed.
In the use case presented in this paper, a collaborator might be interested in using the PSD estimates as a starting point for further analyses of the same experimental datasets. If the existing R2G_PSD_all_subjects.png already provided an adequate analysis with respect to the preprocessed trial data, signal processing, parameters of the PSD estimates, and aggregation over channels and trials, she could simply reuse it to make any required inferences before starting her analysis. Alpaca provides advances mainly with respect to the reusability FAIR principle, as the analysis results are obtained with detailed provenance, and the results are also described with accurate and relevant attributes, such as the annotations present in the Neo data objects. However, Alpaca also improves the interoperability and findability of the results. Regarding interoperability, the provenance information is first structured in a machine-readable format, using the PROV provenance model that defines a broadly used vocabulary for provenance representation. Moreover, the metadata (in the form of attributes and annotations of the data objects) and function parameters (which can be seen as a special kind of metadata when considering what is proposed in the FAIR principles) are also structured in a machine-readable format defined formally in the Alpaca ontology. Finally, the findability of the results is improved, as Alpaca binds the identifiers of the individual data objects, files, script, functions, and function executions to the analysis outcome, making it queryable via, e.g., the functions used in generating the outcome or specific parameter settings. In the end, although a fully FAIR-compliant solution requires the development of additional resources, such as controlled vocabularies and ontologies to represent the electrophysiology data analysis processes, Alpaca already provides increased adherence of the electrophysiology analysis result to the FAIR principles.
Besides those improvements associated with the machine-readability of the captured provenance, Alpaca also facilitates access to the provenance of the analysis results by humans. Facilitating data interpretation is one of the primary focuses of Alpaca. The visualization graphs generated from the RDF files eliminate the need for complicated tools, such as SPARQL Protocol and RDF Query Language (SPARQL) queries, to extract and interact with the captured provenance. This ability to explore the provenance graphs and inspect data object attributes and annotations as well as function parameters allows the scientist to visually understand the details of each individual data transformation, which facilitates the interpretation and understanding of the analysis result. This is complemented by the possibility to aggregate similar nodes in the graphs, producing summarizations. While these lose the fine-grained details, they provide a high-level overview that is more descriptive of the analysis process than any accompanying textual documentation or the script source code. Ultimately, Alpaca not only records the provenance information for documentation purposes but also helps in understanding and interpreting the analysis result. Users familiar with graph databases can insert the generated RDF files into triple stores and use the SPARQL query language to introspect the analysis results without relying on the visual graphs. This complements the graph visualizations by providing more direct answers to specific questions about the provenance of the result (e.g., obtaining the list of all distinct functions used to generate a file). One design feature of Alpaca is that it does not provide a description of the control flow in the script. This is apparent from the main structure of the provenance graph of the example presented in , where each iteration of a for loop appeared as a separate path starting from the function that generated the objects accessed in the loop.
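As an illustration of such a query, the sketch below retrieves all distinct functions involved in generating a given file. The alpaca: prefix IRI and the property names are placeholders chosen for readability; they are not the exact terms of the published Alpaca ontology, and the derivation path is a simplified assumption:

```sparql
# Sketch: list all distinct functions that participated in the
# generation of a given output file (hypothetical vocabulary).
PREFIX prov:   <http://www.w3.org/ns/prov#>
PREFIX alpaca: <http://example.org/alpaca#>

SELECT DISTINCT ?functionName
WHERE {
    ?file alpaca:filePath "R2G_PSD_all_subjects.png" ;
          prov:wasGeneratedBy ?execution .
    ?execution (prov:used/prov:wasGeneratedBy)* ?upstream .
    ?upstream a alpaca:FunctionExecution ;
              alpaca:usedFunction ?function .
    ?function alpaca:functionName ?functionName .
}
```

The property path walks backwards from the file through used entities and their generating function executions, which is the kind of traversal a triple store performs over the merged provenance graph.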
From the implementation perspective, the same graph would be obtained if the source code were structured in a way that the access of individual elements was done without a loop (i.e., instead of looping over a container with N elements, inserting N function calls, each using a different element from the container). Therefore, at this point, it is not possible to use the saved provenance to make inferences about the code. In contrast, the data-centric approach taken by Alpaca was developed with the aim of exposing the data and its transformations, together with relevant parameters and metadata. Thus, we consider that the resulting provenance avoids unnecessary complexity while making the data flow clear, regardless of the control flow used to achieve it. The analysis of electrophysiology data frequently involves workflows more complex than the single script presented in the example. We demonstrated in that Alpaca can track provenance in multi-script workflows where parallelization is involved. Therefore, the tool is helpful in highly parallelized environments where scripts are frequently used, such as high-performance clusters. The script-based approach could also be useful in cloud-based scenarios where Python scripts can be executed, such as Amazon Web Services (AWS) Elastic Cloud instances, or dedicated services for scientific computing, such as Code Ocean and EBRAINS . Code not implemented in a functional programming style is still poorly supported in this initial version, and this capability is a point to be addressed in future versions of the tool. However, the current functionality is expected to accommodate several typical use cases for analyzing electrophysiology data.

Comparison with existing tools

There are existing tools that aim to capture and describe provenance during the execution of scripts, and each tool has a distinct technical approach and aims to accomplish distinct objectives ( for a review).
One approach is to capture provenance during the script run time, as adopted by Alpaca. In this context, we highlight noWorkflow , as it is intended for a scenario similar to that of Alpaca, i.e., the execution of standalone Python scripts that analyze data and produce output files. However, in contrast to Alpaca, noWorkflow does not require code instrumentation, but relies on a custom command line tool to run the script. The noWorkflow tool performs an a priori analysis of the code together with tracing during the script execution to provide a very in-depth description of the sequence of functions called and to generate a detailed call graph as provenance information. All the information is captured and saved in a local database. The focus of noWorkflow is storing and describing repeated runs of the code (trials), highlighting the differences and evolution across trials.

Although noWorkflow provides a very detailed description of the analysis process at the level of every function call (which is not possible for Alpaca, as it tracks only the functions identified by the decorator), it falls short in some aspects introduced by Alpaca. First, we decided to save provenance using a data model derived from PROV, which increases interoperability, while noWorkflow currently relies on a custom relational database to structure the information on the function executions. Moreover, Alpaca aims to provide an extended description of the data objects across the script execution, which was implemented in the ontology used in the RDF serialization. Together with the description of the sequence of functions executed, this additional information is relevant for understanding the analysis result, especially regarding metadata provided as annotations. An example in the presented use case is the identification of the data pertaining to the individual trial types.
noWorkflow would have shown the loops and the sequence of Neo functions used to cut the data into the smaller trial segments, but the annotations identifying each Event object used for the preprocessing with those functions would not be accessible. In the end, this relevant information is accessible from the provenance records provided by Alpaca. Overall, Alpaca captures provenance with a different perspective on the analysis process, one that is more relevant to the particularities of electrophysiology data analysis as introduced at the beginning of this paper.

AiiDA is another tool that can be used to capture provenance in data analysis workflows implemented in Python . It was developed as a complete solution for the automation, management, persistence, sharing, and reproducibility of complex workflows. With respect to data provenance, AiiDA tracks and records the inputs, outputs, and metadata of computations and produces a complete provenance graph. The technical approach is similar to that of Alpaca, since it also uses decorators to instrument the code. However, AiiDA has other design features: (i) it saves provenance in a centralized storage; (ii) as part of the provenance tracking, any data object can be saved to the database with a unique identifier, allowing its later retrieval for reuse together with the lineage. In the end, AiiDA is a more holistic tool for reproducibility than Alpaca, as it is possible to re-execute the analysis using the same data objects previously stored.

However, we also identify limitations in comparison to Alpaca. First, AiiDA requires any existing data objects (such as the ones provided by the Neo framework) to be wrapped by custom objects so that the system can identify and serialize their content to the database, which can be achieved through a plugin system. This means that the user must implement this interface for each specific data object of a given framework.
This not only requires a considerable amount of effort, but may also introduce maintenance complexity: as the data framework evolves, the user needs to ensure that the wrappers retain compatibility. With the approach taken by Alpaca, we tried to keep the original Python objects without any fundamental transformation of their structure, and therefore we focused on identifying them using the URNs so that the lineage graph can be constructed, together with the description of their relevant metadata.

An additional limitation of AiiDA is the overall setup of the system required to obtain the provenance information. In the approach taken by Alpaca, the provenance information is saved locally as RDF in an additional file that should accompany the actual results produced by the script, using the interoperable PROV data model. Although sharing the information requires the user to also share the provenance metadata file, which is less convenient than just querying a database using a command line tool such as the one provided by AiiDA , this keeps the tool simple to use, as no special services need to be set up on the user's system. It is important to note that, at this point, the individual RDF files produced by Alpaca could also be stored in a centralized RDF triple store system (either local or remote) in order to provide similar functionality, if desired.

Finally, a third limitation is the use of a non-interoperable standard for the description of provenance: the provenance graphs of AiiDA rely on a custom description of the data and control flows, and obtaining them requires the user to query the information using the specific AiiDA application programming interface (API), as opposed to using a standard such as SPARQL.
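The URN-based identification of unmodified Python objects mentioned above could be sketched as follows. This is an illustrative scheme only: the authority name and the URN layout are hypothetical, and the actual format used by Alpaca may differ.

```python
import hashlib
import pickle

def object_urn(obj, authority="example-lab"):
    """Identify a Python object by a URN derived from a hash of its content.

    Illustrative only: the authority and URN layout are hypothetical.
    """
    digest = hashlib.sha256(pickle.dumps(obj)).hexdigest()
    return f"urn:{authority}:object:{type(obj).__name__}:{digest}"

# Two objects with identical content map to the same identifier, so lineage
# edges can reference the data without wrapping or storing the objects.
a = [1.0, 2.0, 3.0]
b = [1.0, 2.0, 3.0]
assert object_urn(a) == object_urn(b)
```

Content-based identifiers of this kind let the lineage graph reference data objects across function calls without the database wrappers required by AiiDA.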
In the end, in comparison to AiiDA , Alpaca has a reduced entry barrier for implementing provenance tracking in existing scripts, which may be relevant for the average electrophysiology lab to start benefiting from provenance capture during the analysis of their experimental data. It is likely that each of the two tools focuses on the needs of different application scenarios, such as a small lab versus a large research institute. For the small lab, improvements in the collaborative analysis of electrophysiology data might be quickly achieved by capturing more detailed provenance with a tool like Alpaca.

Recently, CAESAR (CollAborative Environment for Scientific Analysis with Reproducibility) was proposed as a solution for the end-to-end description of provenance in scientific experiments . The overarching goal of CAESAR is to capture, query, and visualize the complete path of a scientific experiment, from the design to the results, while providing interoperability. This was achieved by the implementation of the REPRODUCE-ME model for provenance , based on existing ontologies such as PROV-O and P-Plan . A solution called ProvBook is also provided in order to support reproducibility and to describe the provenance of the analysis part of the experiment implemented as Jupyter notebooks. Alpaca shares similar concepts with CAESAR , as we extended PROV-O to obtain an interoperable description of provenance. However, the provenance information provided by Alpaca is more detailed with respect to the analysis part, which is the main goal of the tool. While CAESAR / ProvBook provides overall descriptions of changes in the source code of Jupyter notebook cells (and of the associated results produced by those changes), the details of the functions called inside each cell are not described with the same level of detail as in Alpaca.
Moreover, although CAESAR supports the capture and interoperable serialization of metadata throughout the experiment, Alpaca structures metadata for data objects throughout the code execution during the analysis (e.g., the annotations and attributes of Neo objects), which provides a more fine-grained description of the data evolution (e.g., the removal of the two channels from the data from monkey L in the use case example). In the end, CAESAR is a useful tool to capture overall aspects of provenance during the execution of an analysis in the context of an electrophysiology experiment. However, the additional level of detail provided by Alpaca is complementary and could be used to provide additional levels to the provenance, while retaining interoperability.

The fairworkflows library aims to make workflows implemented within Jupyter notebooks more compliant with the FAIR principles . The library uses decorators to add semantic information to the Python code. After their execution, fairworkflows constructs RDF graphs describing the workflows using P-Plan and other ontologies defined by the user in the annotations . This is linked to the provenance information that is captured during the execution, structured using PROV-O, and can be published in the form of nanopublications . The use of decorators to instrument the functions is similar to Alpaca, and the decorators of fairworkflows might be used within scripts such as psd_by_trial_type.py . However, while Alpaca distinguishes inputs, outputs, and parameters (from the arguments that a Python function can take and its return values), fairworkflows maps arguments directly to inputs and function returns to outputs. Therefore, the semantic model for provenance in Alpaca emphasizes the identification of the parameters relevant to control the execution of particular functions.
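One simple way to picture this distinction is a heuristic that sorts a call's arguments into data inputs and control parameters. The sketch below is purely illustrative and is not Alpaca's actual classification mechanism; the argument names are hypothetical.

```python
def split_inputs_and_parameters(call_args):
    """Separate bulk data (inputs) from scalar settings (parameters).

    Illustrative heuristic only: container/array-like values are treated
    as data inputs, everything else as control parameters.
    """
    inputs, parameters = {}, {}
    for name, value in call_args.items():
        if hasattr(value, "__len__") and not isinstance(value, str):
            inputs[name] = value
        else:
            parameters[name] = value
    return inputs, parameters

# Arguments of a hypothetical spectral estimation call.
call_args = {"signal": [0.1, 0.5, 0.2, 0.4], "frequency_resolution": 2.0}
inputs, parameters = split_inputs_and_parameters(call_args)
print(sorted(inputs))  # ['signal']
print(parameters)      # {'frequency_resolution': 2.0}
```

Under a direct argument-to-input mapping, both entries would count as inputs; a model that separates them makes queries such as "which frequency resolution was used?" straightforward.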
For example, in the computation of the PSD using welch_psd , fairworkflows would consider the 2 Hz value an input to the function, whereas Alpaca records it as the special property hasParameter . This is particularly relevant when querying the information using SPARQL, for instance. Moreover, Alpaca also captures and describes detailed information about the objects, which we showed to be relevant for the correct interpretation of the results. However, the extra information from the semantic annotations in fairworkflows could be combined with Alpaca to provide more descriptive provenance, published using the nanopublication engine.

Computational models are frequently used together with electrophysiology experiments to understand brain function and dynamics. Several state-of-the-art simulation engines (e.g., NEural Simulation Tool, ; NEURON, ; Brian, ) are available, and many are implemented in Python or provide high-level Python interfaces in which neuronal models of different complexities and biological details can be easily constructed using Python scripts (e.g., by using an interface such as PyNN; ). In this context, Alpaca might be useful to track the sequence of functions and respective parameters used to instantiate the models in the simulator and run the simulations. This could complement tools such as Sumatra , which functions as an electronic lab notebook for simulations, capturing coarse-level provenance when executing simulation scripts. Another example is a tool such as beNNch , which implements a modular workflow for performance benchmarking of neuronal network simulations and could profit from a more fine-grained capture of details in the model and configuration step. Therefore, there is the possibility of also using Alpaca outside of experimental scenarios.

A useful tool for electrophysiology data analysis pipelines is a WMS such as Snakemake .
A particularity of Snakemake as a WMS is that it orchestrates the execution of different steps that can take the form of custom Python scripts, instead of modular and specific workflow elements such as the ones provided by a WMS such as LONI Pipeline . This is attractive when working with electrophysiology data, as different aspects of the analysis process (as mentioned in Section 1) can be accommodated while still providing modular and reusable elements . The Snakemake WMS is based on binding input and output files as dependencies to each script executed in sequence. Therefore, one could envision a scenario where a script such as psd_by_trial_type.py would have all parameters passed via the command line and the execution was controlled by Snakemake . In this scenario, Snakemake would describe the NIX files and the file R2G_PSD_all_subjects.png as inputs and output of psd_by_trial_type.py , respectively, together with the description of the command line parameters. However, this would still rely on the correct mapping of all command line parameters to the actual Python functions (such as the filter cutoff in butter or the frequency resolution in welch_psd ). Any parameters hard coded directly into the function calls would not be captured and would result in a wrong or incomplete description of provenance. In contrast, all function-level parameters are tracked automatically with Alpaca. We successfully demonstrated that Alpaca integrates with Snakemake , providing detailed provenance of the operations within the scripts while taking advantage of the WMS orchestration capabilities . Finally, the provenance description of a Snakemake execution in the form of directed acyclic graphs is currently stored in a non-interoperable format. Therefore, Alpaca can be a complementary solution to use with Snakemake in more complex analysis scenarios, such as the ones that require multiple scripts.
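The hard-coded-parameter problem can be illustrated with a minimal sketch. The function names below are hypothetical stand-ins, not the actual analysis code.

```python
def lowpass_filter(signal, cutoff):
    """Stand-in for a real filter; merely reports what it was called with."""
    return {"n_samples": len(signal), "cutoff_hz": cutoff}

def preprocess(signal):
    # The cutoff is fixed inside the call, so it never appears on the
    # command line and is invisible to file-level provenance.
    return lowpass_filter(signal, cutoff=250.0)

result = preprocess([0.0] * 1000)
print(result["cutoff_hz"])  # 250.0
```

A file-level record of such a script would list only its input and output files, while function-level capture additionally records cutoff=250.0 as a parameter of the filtering call.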
However, the provenance description is enhanced: while the coarse provenance at the file/script level can be provided by Snakemake , the additional metadata file produced by Alpaca provides a more fine-grained level of detail regarding each step of the workflow, while adding interoperability.

Alpaca might also complement existing technologies frequently used to analyze electrophysiology data, especially in cloud-based and collaborative environments. DataJoint (RRID:SCR_014543; https://datajoint.com ) is a database-centered approach to computing and storing analysis results using tailored relational models . Workflows for the analysis of neurophysiology data can be implemented using MATLAB - or Python -based APIs using reusable and curated components . We could expect that Alpaca would track and describe the individual operations performed by the Python objects modeling the underlying database and analyses according to the DataJoint framework. However, the challenges of a deeper integration warrant additional investigation.

In addition, Code Ocean (RRID:SCR_015532; https://codeocean.com ) is a cloud-based service for computational reproducibility, providing the execution environment in containers that integrate code and data into a “compute capsule.” This ensures the reproducibility of the code execution, and the history of the executions is tracked together with the results, all accessible through a Web interface. At this point, the provenance provided by Code Ocean exposes details of capsule executions and the files produced. In parallel, Alpaca can be used to extract detailed information on the execution inside the capsule’s code. This additional provenance could be linked to the coarse provenance provided by Code Ocean . Additional investigation is required to align the provenance information between Alpaca and the different execution and workflow environments and database frameworks.
Limitations

The initial implementation of Alpaca described in this article has some limitations with respect to the scope and visualization of the captured provenance. Here, we describe these and suggest remedies.

First, Alpaca does not capture and save information regarding the execution environment, such as Python interpreter information, installed packages, operating system, and hardware details. However, there are existing tools for that purpose, which could be used to run a script instrumented with Alpaca (e.g., Sumatra ; ). Moreover, Alpaca could be integrated with such tools to include the information provided by them in the saved provenance records. In the end, we focused on adding granularity instead of reimplementing the functionality of existing tools, as this granular information is more relevant for understanding and sharing the electrophysiology analysis result.

Second, the Alpaca ontology is currently not structured to allow the description of the execution environment. It could be further expanded to include any information regarding the environment: one could envision a revised Alpaca provenance model and ontology with a PROV Agent subclass related to ScriptAgent , whose properties would describe the relevant aspects of the environment. Moreover, the description could be further improved by integration with other ontologies developed specifically for the detailed description of experimental workflows, such as P-Plan and REPRODUCE-ME . Therefore, although not present in this initial implementation, the approach adopted allows easy expansion and integration of additional features.

Third, some steps are visible from the data flow perspective but are not yet fully descriptive and understandable. One example is a user-defined function, such as plot_lfp_psd in psd_by_trial_type.py .
As a plotting function, the user might be interested in knowing additional details on how the inputs (i.e., the matplotlib AxesSubplot object and the arrays with the data) were handled. The current implementation tracks code in a single scope, and therefore the execution of a function such as plot_lfp_psd is treated as a “black box.” It would be interesting to also capture the execution of some functions with an even finer description of the operations inside those functions. This could be achieved by expanding the functionality to automatically include functions in levels lower than the primary capture scope. However, even in the current implementation of Alpaca, although such fine descriptions from inside plot_lfp_psd are not available, the provenance stored in the generated metadata file already points to where the function was implemented. In this way, the user can focus on inspecting the implementation of the function plot_lfp_psd and does not have to check the full source code.

Fourth, only a generic visualization graph is currently provided in Alpaca. The initial version of Alpaca is intended to provide the basic model and functionality to capture and describe provenance when analyzing electrophysiology data, while providing essential visualization. Although we chose to leverage open source graph visualization tools such as Gephi , the visualization of the captured provenance is not optimized (e.g., showing only the parameters of the selected function or object). Such optimized visualization can be incorporated as an additional feature in Alpaca without any changes to the captured information or its serialization as RDF, by using existing graph visualization frameworks such as Pyvis to build a customized visualization environment based on the information in the RDF graphs and the Alpaca provenance model. In addition, there are existing tools that specifically deal with the visualization of provenance graphs.
One example is AVOCADO , an interactive provenance graph visualization tool that exploits the topological structure of the graph to provide a visual aggregation. Although Alpaca provides basic aggregation using functionality adapted from NetworkX , we could also leverage a tool like AVOCADO to provide visualization functionality more tailored to the features of a provenance graph, such as its hierarchical structure (e.g., all the steps in a single-trial-processing loop grouped in a single node) and temporal evolution (e.g., isolating the visualization of the analyses performed on the first or the second dataset). However, the technical challenges of such an integration are unknown at this point.

Fifth, although the design of Alpaca allows capturing and describing any Python object used by a function, the serialization of extended details according to the Alpaca PROV model (i.e., attributes and annotations) is currently limited to NumPy -based objects such as NumPy arrays, quantities arrays, and Neo objects. With this initial version of Alpaca, we aimed to establish the foundational capabilities to describe data object metadata in the captured provenance, as this is an essential feature to understand and interpret the analysis result, without focusing on extensive coverage of the data models currently available in Python . It is important to mention that the functionality to describe the data objects in detail is already implemented as a plugin system, where a Python package can insert a specific function to fetch information from objects used by that package. Therefore, support for capturing detailed information beyond those selected cases (e.g., NWB or Pandas DataFrames ) can be achieved by implementing the relevant function for the package and adding a new interface for the user to define the attributes of a particular object to be captured.

Finally, Alpaca does not allow rerunning the code to reproduce the analysis result fully.
This was not the focus of the tool, and such functionality could be achieved by integrating with existing tools that allow code re-execution. One candidate is Sumatra , as it not only captures the information on the environment but also allows re-executing the script with the same parameters as the original run. Moreover, we demonstrated that Alpaca can easily integrate with a script-based WMS such as Snakemake that supports re-executing the code. Rerunning the analysis can also be accomplished within systems that control script execution, such as Code Ocean . In the end, any existing tool that properly handles environment management and script invocation might be used to rerun the code, while Alpaca adds a level of detail to the captured provenance aimed at increasing interpretability.

Future directions

Several improvements are planned for Alpaca in the future. First, we plan to expand the toolbox to also capture provenance for analyses implemented using Jupyter notebooks. Not only is Jupyter extensively used for exploratory data analysis, but the repeated execution of code cells and the subsequent substitution of data objects in memory also require detailed provenance tracking for a reliable description of any analysis result produced by a notebook. In addition, the provenance records currently lack semantic information that is relevant for understanding electrophysiology data and metadata. Therefore, a further improvement is to allow the inclusion of classes and vocabularies defined in domain-specific ontologies in the provenance records, which will further improve the FAIRness of electrophysiology analysis results. Using semantic information will improve the interpretation of the captured provenance by scientists unfamiliar with the script code and toolboxes used in the analysis.
For instance, the graph visualizations could be improved with this information to display a human-readable, programming-language-independent label defined in the ontology class instead of the function names defined in the Python code. This would help in understanding steps that use functions defined in analysis toolboxes (e.g., Elephant and Neo ) or user-defined functions, which otherwise requires referring to the original code. This would also allow an easier assessment of differences and similarities when comparing provenance from different analyses and further simplify understanding the provenance outside the context of the original code.

The functionality will also be improved to capture information about the execution environment, together with information from version control systems such as git , to provide more detailed information about the source code that originated the analysis result. Planned improvements include automatically capturing information on the Python interpreter, the operating system and hardware, and details of the Python packages where the functions are implemented (cf., e.g., Sumatra ). Furthermore, we propose to integrate a dedicated tool for comparing different provenance files, to facilitate identifying differences between analyses. The goal is to leverage the information provided by the provenance model implemented by Alpaca, especially the metadata captured as attributes and annotations, in order to help scientists draw informed conclusions based on differences among a set of results. We aim to further improve the interaction with and analysis of the captured provenance by developing a custom visualization and search interface based on the serialized RDF graphs.
This tailored visualization interface is planned to be aware of the provenance model implemented in Alpaca, and use more user-friendly resources such as floating labels to show annotations and attributes of the data or function parameters, or interactive visualization controls such as graph expansion/aggregation on demand. Finally, we aim to investigate how the captured provenance can be integrated with existing tools in the neurophysiology data ecosystem. A potential integration is how to incorporate the generated provenance metadata into standards to share neurophysiology data, such as NIX and NWB file formats. Files written using these standards could easily embed the RDF files or their information as metadata. In addition, Alpaca could be integrated with Python packages used in the manipulation, preprocessing, and analysis of electrophysiology data (e.g., Neo , SpikeInterface , Elephant ) to provide embedded provenance capture functionality, eliminating the requirement for the user to instrument functions from packages that are frequently used. Conclusions We implemented Alpaca, a toolbox for lightweight provenance capture during the execution of Python scripts used for the analysis of electrophysiology data. Alpaca captures more detailed information about the analysis processes, including not only the lineage of the data but also embedded metadata relevant for the description of data objects during the processing pipeline. In the end, this makes the electrophysiology analysis result artifacts more compliant to the FAIR principles. This may improve research reproducibility and the trust in the results, especially in collaborative environments. Therefore, Alpaca may be a valuable tool to facilitate sharing electrophysiology data analysis results. There are existing tools that aim to capture and describe provenance during the execution of scripts, and each tool has distinct technical approaches and aims to accomplish distinct objectives ( for a review). 
One approach is to capture provenance during the script run time, as adopted by Alpaca. In this context, we highlight noWorkflow , as it was intended to be used in a similar scenario than Alpaca, i.e., the execution of standalone Python scripts that analyze data and produce output files. However, in contrast to Alpaca, noWorkflow does not require code instrumentation, but relies on a custom command line tool to run the script. The noWorkflow tool performs an a priori analysis of the code together with tracing during the script execution to provide a very in-depth description of the sequence of functions called and to generate a detailed call graph as provenance information. All the information is captured and saved in a local database. The focus of noWorkflow is storing and describing repeated runs of the code (trials), highlighting the differences and evolution across trials. Although noWorkflow provides a very detailed description of the analysis process at the level of every function call (which is not possible for Alpaca as it tracks only the functions identified by the decorator), it falls short for some aspects introduced by Alpaca. First, we decided to save provenance using a data model derived from PROV, which increases interoperability, while noWorkflow currently relies on a custom relational database to structure the information on the function executions. Moreover, Alpaca aims to provide an extended description of the data objects across the script execution, which was implemented in the ontology used in the RDF serialization. Together with the description of the sequence of functions executed, this additional information is relevant for the understanding of the analysis result, especially regarding metadata provided as annotations. An example in the presented use case is the identification of the data pertaining to the individual trial types. 
noWorkflow would have shown the loops and sequence of Neo functions used to cut the data into the smaller trial segments, but the annotations identifying each Event object used for the preprocessing using those functions would not be accessible. In the end, this relevant information is accessible from the provenance records provided by Alpaca. Overall, Alpaca captures provenance with a different perspective on the analysis process, that is more relevant for the particularities of electrophysiology data analysis as introduced at the beginning of this paper. AiiDA is another tool that can be used to capture provenance in data analysis workflows implemented in Python . It was developed as a complete solution for the automation, management, persistence, sharing, and reproducibility of complex workflows. With respect to data provenance, AiiDA tracks and records the inputs, outputs, and metadata of computations and produces a complete provenance graph. The technical approach is similar to Alpaca since it also uses decorators to instrument the code. However, AiiDA has other design features: (i) it saves provenance in a centralized storage; (ii) as part of the provenance tracking, any data object can be saved to the database with a unique identifier, allowing its retrieval later for reuse together with the lineage. In the end, AiiDA is a more holistic tool for reproducibility than Alpaca, as it is possible to re-execute the analysis using the same data objects previously stored. However, we also identify limitations in comparison to Alpaca. First, AiiDA requires any existing data objects (such as the ones provided by the Neo framework) to be wrapped by custom objects so that the system can identify and serialize their content to the database, which can be achieved through a plugin system. This means that the user must implement this interface for any and every specific data object in a custom framework. 
This not only requires a considerable amount of effort but this may also introduce a level of maintenance complexity as the data framework evolves and the user needs to ensure that the wrappers retain compatibility in the future. With the approach taken by Alpaca, we tried to keep the original Python objects without any fundamental transformation in their structure, and therefore we focused on identifying them using the URNs so that the lineage graph can be constructed, together with the description of their relevant metadata. An additional limitation of AiiDA is the overall setup of the system to obtain the provenance information. In the approach taken by Alpaca, the provenance information is saved locally as RDF in an additional file that should accompany the actual results produced by the script, using the interoperable PROV data model. Although sharing the information requires the user to also share the provenance metadata file together, which is less convenient than just querying a database using a command line tool such as the one provided by AiiDA , this adds simplicity to use the tool as no special services are required to be set up at the user system. It is important to note that, at this point, the individual RDF files produced by Alpaca could also be stored into a centralized RDF triple store system (either locally or remote) in order to provide similar functionality, if desired. Finally, a third limitation is the use of a non-interoperable standard for description of provenance, as the provenance graphs by AiiDA rely on a custom description of the data and control flows, and obtaining the provenance graphs requires the user to query the information using the specific AiiDA application programming interface (API) as opposed as using a standard such as SPARQL. 
In the end, in comparison to AiiDA, Alpaca has a lower entry barrier for implementing provenance tracking in existing scripts, which may be relevant for the average electrophysiology lab to start benefiting from provenance capture during the analysis of their experimental data. It is likely that each of the two tools focuses on the needs of different application scenarios, such as a small lab versus a large research institute. For the small lab, improvements in collaborative work in the analysis of electrophysiology data by capturing more detailed provenance might be quickly achieved by using a tool like Alpaca. Recently, CAESAR (CollAborative Environment for Scientific Analysis with Reproducibility) was proposed as a solution for the end-to-end description of provenance in scientific experiments. The overarching goal of CAESAR is to capture, query, and visualize the complete path of a scientific experiment, from the design to the results, while providing interoperability. This was achieved by the implementation of the REPRODUCE-ME model for provenance, based on existing ontologies such as PROV-O and P-Plan. A solution called ProvBook is also provided in order to support reproducibility and to describe the provenance of the analysis part of the experiment implemented as Jupyter notebooks. Alpaca shares similar concepts with CAESAR, as we extended PROV-O to obtain an interoperable description of provenance. However, the provenance information provided by Alpaca is more detailed with respect to the analysis part, which is the main goal of the tool. While CAESAR / ProvBook provides overall descriptions of changes in the source code of Jupyter notebook cells (and the associated results produced by those changes), the details of the functions called inside each cell are not described at the same level of detail as in Alpaca.
Moreover, although CAESAR supports the capture and interoperable serialization of metadata throughout the experiment, Alpaca structures metadata for data objects throughout the code execution during the analysis (e.g., the annotations and attributes of Neo objects), which provides a more fine-grained description of the data evolution (e.g., the removal of the two channels from the data from monkey L in the use case example). In the end, CAESAR is a useful tool to capture overall aspects of provenance during the execution of an analysis in the context of an electrophysiology experiment. However, the additional level of detail provided by Alpaca is complementary and could be used to provide additional levels to the provenance, while retaining interoperability. The fairworkflows library aims to make workflows implemented within Jupyter notebooks more compliant with the FAIR principles . The library uses decorators to add semantic information to the Python code. After their execution, fairworkflows constructs RDF graphs describing the workflows using P-Plan and other ontologies defined by the user in the annotations . This is linked to the provenance information that is captured during the execution and structured using PROV-O and can be published in the form of nanopublications . The use of decorators to instrument the functions is similar to Alpaca, and the decorators of fairworkflows might be used within scripts such as psd_by_trial_type.py . However, while Alpaca makes a distinction between inputs, outputs, and parameters (from the arguments that a Python function can take and its return values), fairworkflows makes a direct mapping of arguments as inputs and function returns as outputs. Therefore, the semantic model for provenance in Alpaca emphasizes the identification of the parameters relevant to control the execution of particular functions. 
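As a rough sketch of how a capture decorator can make this distinction, the toy example below classifies positional arguments as data inputs and keyword arguments as parameters. This is a plain-Python illustration only, not the actual implementation or API of Alpaca or fairworkflows; the function and variable names are invented for the example.

```python
import functools

captured = []  # toy in-memory provenance log; real tools serialize to RDF


def capture(func):
    """Record one provenance entry per call, separating inputs from parameters."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        captured.append({
            "activity": func.__name__,
            # data objects, identified here by URN-like strings
            "inputs": [f"urn:obj:{id(a)}" for a in args],
            # values that control the execution, kept separate from inputs
            "parameters": dict(kwargs),
            "output": f"urn:obj:{id(result)}",
        })
        return result
    return wrapper


@capture
def welch_like_psd(signal, frequency_resolution=1.0):
    # stand-in for a spectral estimate; not the real welch_psd from Elephant
    return [v / frequency_resolution for v in signal]


psd = welch_like_psd([4.0, 8.0], frequency_resolution=2.0)
```

A flat mapping of all arguments to inputs would instead place frequency_resolution in the inputs list; recording it as a parameter keeps questions such as "which executions used a 2 Hz resolution?" directly answerable from the provenance record.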
For example, in the computation of the PSD using welch_psd, fairworkflows would consider the 2 Hz value an input to the function, whereas Alpaca records it as the special property hasParameter. This is particularly relevant when querying the information using SPARQL, for instance. Moreover, Alpaca also captures and describes detailed information about the objects, which we showed to be relevant for the correct interpretation of the results. However, the extra information from the semantic annotations in fairworkflows could be combined with Alpaca to provide more descriptive provenance and published using the nanopublication engine. Computational models are frequently used together with electrophysiology experiments to understand brain function and dynamics. Several state-of-the-art simulation engines (e.g., NEural Simulation Tool; NEURON; Brian) are available, and many are implemented in Python or provide high-level Python interfaces where neuronal models with different complexities and biological details can be easily constructed using Python scripts (e.g., by using an interface such as PyNN). In this context, Alpaca might be useful to track the sequence of functions and respective parameters used to instantiate the models in the simulator and run the simulations. This could be used as a complement to tools such as Sumatra, which functions as an electronic lab notebook for simulations, capturing coarse-level provenance when executing simulation scripts. Another example is a tool such as beNNch, which implements a modular workflow for performance benchmarking of neuronal network simulations and could profit from a more fine-grained capture of details in the model and configuration step. Therefore, there is the possibility of also using Alpaca outside of experimental scenarios. A useful tool for electrophysiology data analysis pipelines is a WMS such as Snakemake.
A particularity of Snakemake as a WMS is that it orchestrates the execution of different steps that can take the form of custom Python scripts, instead of modular and specific workflow elements such as the ones provided by a WMS such as LONI Pipeline. This is attractive when working with electrophysiology data as different aspects of the analysis process (as mentioned in Section 1) can be accommodated while still providing modular and reusable elements. The Snakemake WMS is based on binding input and output files as dependencies to each script executed in sequence. Therefore, one could envision a scenario where a script such as psd_by_trial_type.py would have all parameters passed by command line and the execution controlled by Snakemake. In this scenario, Snakemake would describe the NIX files and the file R2G_PSD_all_subjects.png as inputs and output of psd_by_trial_type.py, respectively, together with the description of the command line parameters. However, this would still rely on the correct mapping of all command line parameters to the actual Python functions (such as the filter cutoff in butter or frequency resolution in welch_psd). Any parameters potentially hard-coded directly into the function calls would not be captured and would result in a wrong or incomplete description of provenance. In contrast, all function-level parameters are tracked automatically with Alpaca. We successfully demonstrated that Alpaca integrates with Snakemake, providing detailed provenance of the operations within the scripts while taking advantage of the WMS orchestration capabilities. Finally, the provenance description of a Snakemake execution in the form of directed acyclic graphs is currently stored in a non-interoperable format. Therefore, Alpaca can be a complementary solution to use with Snakemake in more complex analysis scenarios, such as the ones that require multiple scripts.
However, the provenance description is enhanced: while the coarse provenance at the file/script level can be provided by Snakemake , the additional metadata file produced by Alpaca provides a more fine-grained level of detail regarding each step of the workflow, while adding interoperability. Alpaca might also complement existing technologies frequently used to analyze electrophysiology data, especially in cloud-based and collaborative environments. DataJoint (RRID:SCR_014543; https://datajoint.com ) is a database-centered approach to computing and storing analysis results using tailored relational models . Workflows for the analysis of neurophysiology data can be implemented using MATLAB or Python -based APIs using reusable and curated components . We could expect that Alpaca would track and describe the individual operations performed by the Python objects modeling the underlying database and analyses according to the DataJoint framework. However, the challenges of a deeper integration will warrant additional investigation. In addition, Code Ocean (RRID:SCR_015532; https://codeocean.com ) is a cloud-based service for computational reproducibility, providing the execution environment in containers that integrate code and data into a “compute capsule.” This ensures the reproducibility of the code execution, and the history of the executions is tracked together with the results, all accessible through a Web interface. At this point, the provenance provided by Code Ocean will expose details of capsule executions and files produced. In parallel, Alpaca can be used to extract detailed information on the execution inside the capsule’s code. This additional provenance could be linked to the coarse provenance provided by Code Ocean . Additional investigation is required to align the provenance information between Alpaca and different execution and workflow environments and database frameworks. 
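To make the interoperability point concrete: because Alpaca serializes provenance as standard RDF, the recorded activities can be listed with an ordinary SPARQL query against any off-the-shelf triple store. The sketch below is illustrative; the alpaca: namespace IRI is an assumption, and hasParameter follows the property named earlier in this section rather than a verified term of the published ontology.

```sparql
PREFIX prov:   <http://www.w3.org/ns/prov#>
PREFIX alpaca: <http://example.org/alpaca#>   # illustrative namespace

# list every recorded activity together with its inputs and parameters
SELECT ?execution ?input ?parameter
WHERE {
    ?execution a prov:Activity ;
               prov:used ?input ;
               alpaca:hasParameter ?parameter .
}
```

A query like this works against any PROV-compliant record regardless of which tool produced it, in contrast to provenance that is only reachable through a tool-specific API.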
The initial implementation of Alpaca described in this article has some limitations with respect to the scope and visualization of the captured provenance. Here, we describe these and suggest remedies. First, Alpaca does not capture and save information regarding the execution environment, such as Python interpreter information, installed packages, operating system, and hardware details. However, there are existing tools that serve that purpose and could be used to run a script instrumented with Alpaca (e.g., Sumatra). Moreover, Alpaca could be integrated with such tools to use the information provided by them in the saved provenance records. In the end, we focused on adding granularity instead of reimplementing functionality of existing tools, as this information is more relevant for understanding and sharing the electrophysiology analysis result. Second, the Alpaca ontology is currently not structured to allow the description of the execution environment. It could be further expanded to include any information regarding the environment, as one could envision a revised Alpaca provenance model and ontology with a PROV Agent subclass that would be related to ScriptAgent, and whose properties would describe the relevant aspects of the environment. Moreover, the description could be further improved by integration with other ontologies developed specifically for the detailed description of experimental workflows, such as P-Plan and REPRODUCE-ME. Therefore, although not present in this initial implementation, the approach adopted allows easy expansion and integration of additional features. Third, some steps are visible from the data flow perspective but are not yet fully descriptive and understandable. One example is a user-defined function, such as plot_lfp_psd in psd_by_trial_type.py.
As a plotting function, the user might be interested in knowing additional details on how the inputs (i.e., the matplotlib AxesSubplot object and the arrays with the data) were handled. The current implementation tracks code in a single scope, and therefore the execution of a function such as plot_lfp_psd is treated as a “black box.” It would be interesting to also capture the execution of some functions with an even finer description of the operations inside those functions. This could be achieved by expanding the functionality to automatically include functions in levels lower than the primary capture scope. However, even in the current implementation of Alpaca, although such fine descriptions from inside of plot_lfp_psd are not available, the provenance stored in the generated metadata file already points to where the function was implemented. In this way, the user can focus on inspecting the implementation of the function plot_lfp_psd and does not have to check the full source code. Fourth, only a generic visualization graph is currently provided in Alpaca. The initial version of Alpaca is intended to provide the basic model and functionality to capture and describe provenance when analyzing electrophysiology data while providing essential visualization. Although we took the approach of leveraging open-source graph visualization tools such as Gephi, the visualization of the captured provenance is not optimized (e.g., showing only parameters of the selected function or object). Such optimized visualization can be incorporated as an additional feature in Alpaca without any changes to the captured information or serialization as RDF, by using existing graph visualization frameworks such as Pyvis to build a customized visualization environment based on the information in the RDF graphs and the Alpaca provenance model. In addition, there are existing tools that specifically deal with the visualization of provenance graphs.
One example is AVOCADO, implemented as an interactive provenance graph visualization tool that exploits the topological structure of the graph to provide a visual aggregation. Although Alpaca provides basic aggregation using functionality adapted from NetworkX, we could also leverage a tool like AVOCADO to provide visualization functionality more tailored to the features of a provenance graph, such as hierarchical structure (e.g., all the steps in a single-trial-processing loop grouped in a single node) and temporal evolution (isolating the visualization of the analyses performed in the first or the second dataset). However, the technical challenges of such integration are unknown at this point. Fifth, although the design of Alpaca allows capturing and describing any Python object used by a function, the serialization of extended details according to the Alpaca PROV model (i.e., attributes and annotations) is currently limited to NumPy-based objects such as NumPy arrays, quantities arrays, and Neo objects. With this initial version of Alpaca, we aimed to establish the foundational capabilities to describe data object metadata in the captured provenance, as this is an essential feature to understand and interpret the analysis result, without focusing on extensive coverage of the data models currently available in Python. It is important to mention that the functionality to describe the data objects in detail is already implemented as a plugin system, where a Python package can insert a specific function to fetch information from objects used by that package. Therefore, support for capturing detailed information beyond those selected cases (e.g., NWB or Pandas DataFrames) can be achieved by implementing the relevant function for the package and adding a new interface for the user to define attributes of a particular object to be captured. Finally, Alpaca does not allow rerunning the code to reproduce the analysis result fully.
This was not the focus of the tool, and such functionality could be achieved by integrating with existing tools that allow code re-execution. One candidate is Sumatra, as it not only captures the information on the environment but also allows re-executing the script with the same parameters as the original run. Moreover, we demonstrated that Alpaca can easily integrate with a script-based WMS such as Snakemake that supports re-executing the code. Rerunning the analysis can also be accomplished within systems that control script execution, such as Code Ocean. In the end, any existing tool that properly handles environment management and script invocations might be used to rerun the code, while Alpaca adds an additional level of detail to the captured provenance aimed at increasing interpretability. Several improvements are planned for Alpaca in the future. First, we plan to expand the toolbox to also capture provenance for analyses implemented using Jupyter notebooks. Not only is Jupyter extensively used for exploratory data analysis, but also the repeated execution of code cells and subsequent substitution of data objects in memory requires detailed provenance tracking for a reliable description of any analysis result produced by a notebook. Also, the provenance records lack semantic information that is relevant for understanding electrophysiology data and metadata. Therefore, a further improvement is to allow the inclusion of classes and vocabularies defined in domain-specific ontologies in the provenance records, which will bring further improvements to the FAIRness of electrophysiology analysis results. Using semantic information will improve the interpretation of the captured provenance by scientists unfamiliar with the script code and toolboxes used in the analysis.
For instance, the graph visualizations could be improved with this information to display a human-readable, programming-language-independent label defined in the ontology class instead of the function names defined in the Python code. This would help users understand steps using functions defined in analysis toolboxes (e.g., Elephant and Neo) and user-defined functions, whose understanding requires referring to the original code. This would also allow an easier assessment of differences and similarities when comparing provenance from different analyses and further simplify understanding the provenance outside the context of the original code. The functionality will also be improved to capture information about the execution environment, together with information from version control systems such as git, to provide more detailed information about the source code that originated the analysis result. Planned improvements include automatically capturing information on the Python interpreter, operating system and hardware, and details of the Python packages where the functions are implemented (cf., e.g., Sumatra). Furthermore, we propose to integrate a specific tool to aid in comparing different provenance files to facilitate identifying differences between analyses. The goal is to leverage information provided by the provenance model implemented by Alpaca, especially the metadata captured as attributes and annotations, in order to help scientists draw informed conclusions based on differences among a set of results. We aim to further improve the interaction and analysis of the captured provenance by developing a custom visualization and search interface based on the serialized RDF graphs.
This tailored visualization interface is planned to be aware of the provenance model implemented in Alpaca, and to use more user-friendly resources such as floating labels to show annotations and attributes of the data or function parameters, or interactive visualization controls such as graph expansion/aggregation on demand. Finally, we aim to investigate how the captured provenance can be integrated with existing tools in the neurophysiology data ecosystem. One potential integration is incorporating the generated provenance metadata into standards for sharing neurophysiology data, such as the NIX and NWB file formats. Files written using these standards could easily embed the RDF files or their information as metadata. In addition, Alpaca could be integrated with Python packages used in the manipulation, preprocessing, and analysis of electrophysiology data (e.g., Neo, SpikeInterface, Elephant) to provide embedded provenance capture functionality, eliminating the requirement for the user to instrument functions from packages that are frequently used. We implemented Alpaca, a toolbox for lightweight provenance capture during the execution of Python scripts used for the analysis of electrophysiology data. Alpaca captures more detailed information about the analysis processes, including not only the lineage of the data but also embedded metadata relevant for the description of data objects during the processing pipeline. In the end, this makes the electrophysiology analysis result artifacts more compliant with the FAIR principles. This may improve research reproducibility and trust in the results, especially in collaborative environments. Therefore, Alpaca may be a valuable tool to facilitate sharing electrophysiology data analysis results.
Exploring Perceptions of Anti-vaping Message Themes: A Qualitative Study of Australian Adolescents and Adults | 9c9a9ffc-ce73-4c8e-87d4-5e6c2562b7f5 | 11750735 | Health Communication[mh] | The use of electronic cigarettes (e-cigarettes; also known as vaping) is increasing rapidly. From 2012 to 2023, the estimated number of people who use e-cigarettes quadrupled from 21.3 to 86.1 million globally. Of particular concern is the global increase in e-cigarette use among youth and those who have never smoked, , with these individuals at increased risk of experiencing multiple health harms. Minimizing e-cigarette uptake and encouraging vaping cessation have thus become important components of public health agendas. Health communication campaigns are widely used to influence health-related behaviors. In the tobacco control space, hard-hitting campaigns are considered effective and essential, with well-designed campaigns found to increase quit attempts and reduce youth initiation rates, tobacco use, and secondhand smoke exposure. In Australia, the context of the present study, campaigns addressing the harms associated with tobacco smoking and encouraging those who smoke to quit have led to increases in (1) negative attitudes towards smoking and (2) quitting-related intentions and behaviors. Given the success of anti-smoking campaigns, and in light of the aforementioned increases in e-cigarette use, attention has turned to developing health communication campaigns that target vaping. , Research conducted in the United States, Canada, and England suggests more than half of youth have noticed anti-vaping campaigns, with significant increases in campaign awareness coinciding with increased investment in campaign dissemination. 
Campaigns that address the drivers of e-cigarette use—such as positive beliefs about e-cigarettes and the e-cigarette industry, lower health and addiction risk perceptions, and social norms—appear promising for changing use-related attitudes and behaviors. For example, previous work among adolescents and young adults has demonstrated that messages focusing on the health harms of e-cigarette use are considered appealing, increase risk perceptions and intentions to quit, and lower intentions to use e-cigarettes. Other messages suggested to be effective for these age groups include those that (1) focus on the chemical ingredients of e-cigarettes and the impact of use on mood and cognitive functioning , and (2) feature anti-industry sentiment. Studies exploring the effectiveness of messages that focus on the risk of addiction and the social acceptability of e-cigarette use have produced mixed results. For example, when asked to consider messages that may discourage vaping, adolescent and young adult participants have generated or endorsed messages that focused on social undesirability and addiction. , By contrast, college students reported that the widespread use of e-cigarettes in their college community contradicted messages about social undesirability, and such messages have therefore been rated poorly by members of this population cohort. They also reported that a nicotine addiction message would elicit defensiveness, noting that nicotine is less harmful than other substances. Little research has been done with adults, although available work suggests that adults consider the risk of nicotine addiction to be the least discouraging health harm compared to respiratory, cardiovascular, chemical, and explosion harms. Although prior work has identified a range of potentially effective messages to minimize e-cigarette use, gaps in the literature remain. 
First, few studies have identified what message content participants believe will be well-received by the target audience, with many instead exploring existing beliefs about e-cigarettes and motivators/discouragers of use and then drawing implications for health communications. , Among the few studies that assessed perceived message effectiveness, participants were provided with a specific hypothetical message rather than a message theme. , In these studies, reactions to the content of a particular message may be confounded by the techniques used to communicate the message (eg, humor). The presentation of a message theme has the potential to reduce the influence of confounding factors, such as message execution, on assessments of effectiveness. The assessment of message themes—rather than hypothetical messages—has been utilized in the development of anti-smoking campaigns, but appears to be lacking in the context of anti-vaping campaigns. Second, most research to date has been conducted with individuals who currently use tobacco or e-cigarettes, , , or has not stratified by use status. Given the importance of reducing the uptake of e-cigarette use among youth and those who have never smoked, exploring the potential effectiveness of message themes among these individuals is critical to prevention efforts. Finally, the closed-response survey approach typically used in prior work limits the extent to which participants can provide feedback on the specific elements of an anti-vaping message they believe to be effective or ineffective. Qualitative approaches that allow for the collection of rich, detailed data to understand whether and how message themes can be executed effectively are needed. The present study sought to address these gaps in the literature by using a qualitative focus group (FG) design to obtain a comprehensive understanding of the potential effectiveness of a range of anti-vaping message themes among adolescents, young adults, and adults. 
To identify message themes that may be effective for prevention and cessation, both those who vape and those who do not vape were sampled. We aimed to (1) identify message themes that are perceived to be effective at minimizing e-cigarette uptake and encouraging cessation among youth and those who have never smoked and (2) elicit opinions on how these message themes could be optimized in terms of execution and delivery to these populations.

Recruitment and Sample

As part of a larger project exploring Australians’ experiences with e-cigarettes, a social research agency was commissioned to recruit a purposive sample of 14- to 39-year-olds to participate in one of 16 FGs conducted in Melbourne and Sydney, the two most populous cities in Australia. Participants were recruited via email through the agency’s database, which comprised Australians who had provided consent to being approached to participate in research studies. Age was the only eligibility criterion. As individuals are more likely to feel comfortable sharing genuine perspectives in a homogenous group, groups were stratified by gender (men, women), age (14- to 15-year-olds, 16- to 17-year-olds, 18- to 24-year-olds, and 25- to 39-year-olds), and e-cigarette use status (current/former e-cigarette use, never use). We conducted FGs with younger (14- to 15-year-olds) and older (16- to 17-year-olds) adolescents separately given the distinct social and developmental differences between these age groups and the potential impact this may have on participants’ willingness to contribute to FG discussions. The composition of each FG is presented in .

Procedure

This study was approved by a university Human Research Ethics Committee (The University of Melbourne #24865). All participants (and caregivers of those aged <16 years) provided written informed consent. Groups were held in March 2023 and facilitated by a Principal Research Fellow with a PhD in clinical psychology (MJ).
The average duration of the FGs was 70 minutes (range: 57 to 88 minutes) and the average group comprised eight participants (range: 6–10; total number of participants across all groups = 139). Participants were reimbursed AUD120 for time and costs associated with participation, and caregivers were reimbursed AUD30. Caregivers were not present during the FGs. While waiting for their FG to begin, participants completed a short survey that assessed their sociodemographic characteristics (eg, gender, age). A semi-structured interview guide comprising open-ended questions was then followed. To allow idea generation with minimal facilitator influence, participants were initially asked to discuss what could be done to reduce e-cigarette use. Probing questions were used to further explore (1) what messages would reduce e-cigarette use; (2) who, or which source, should deliver these messages; and (3) through which platforms or mediums messages should be delivered. The facilitator then presented participants with 14 message themes that were developed by the research team based on previous research and consultations with two experts in tobacco control. Participants were asked how effective they believed each message theme would be in encouraging the prevention or cessation of e-cigarette use. Probing questions were used to explore why message themes would or would not be effective and how effectiveness could be increased. Following the discussion of each message theme, participants were asked to rate message themes on a 10-point scale, with 1 indicating the lowest level of effectiveness and 10 the highest. At the conclusion of each FG, the facilitator and supporting researchers met to discuss the content, their interpretation of discussions, and any observations.

Data Analysis

Qualitative Analysis

FGs were audio recorded and transcribed verbatim by an independent and ISO-accredited transcription agency. Transcripts were then imported into NVivo for coding and analysis.
Consistent with the aims of the study, data were coded under the following topics: (1) message content (unprompted and prompted), (2) execution, and (3) delivery methods. Given the data-driven nature of the study, we undertook thematic analysis using an inductive approach. One researcher (MEB) coded and analyzed all transcripts in a manner consistent with Braun et al.’s reflexive thematic analysis framework. This is an iterative process involving data familiarization; code and theme generation; reviewing, defining, and naming themes; and synthesis of themes into a manuscript. MEB is a postdoctoral research fellow with a background in clinical and health psychology. MEB regularly consulted the facilitator of the FGs and listened to the audio recordings to ensure data interpretation accurately reflected group discussions. While the stratification of adolescents into separate age groups for the purposes of FG delivery was appropriate (as discussed above), data from adolescent FGs of the same gender and vaping status were merged given the findings did not differ. The merging of these groups also facilitated comparisons to previous research. As such, a total of 12 groups differentiated by age (adolescents cf. young adults cf. adults aged ≥25 years), gender (women cf. men), and vaping status (current/former vaping cf. those who have never vaped) were formed from the original 16 FGs. The Matrix Query function in NVivo was used to explore differences between groups in terms of responses to message themes and suggestions regarding message execution and delivery. We include illustrative quotes throughout the results section that follows. Each quote is followed by details of the participant’s respective FG: FG number (eg, FG #1); adolescents or young adults or adults; W = Women or M = Men; V = those who currently vape or have previously vaped or NV = those who have never vaped.
Quantitative Analysis

Descriptive statistics for the perceived effectiveness of each message theme for each group were computed using SPSS Version 29. These results are presented in .

As part of a larger project exploring Australians’ experiences with e-cigarettes, a social research agency was commissioned to recruit a purposive sample of 14- to 39-year-olds to participate in one of 16 FGs conducted in Melbourne and Sydney, the two most populous cities in Australia. Participants were recruited via email through the agency’s database, which comprised Australians who had provided consent to being approached to participate in research studies. Age was the only eligibility criterion. As individuals are more likely to feel comfortable sharing genuine perspectives in a homogenous group, groups were stratified by gender (men, women), age (14- to 15-year-olds, 16- to 17-year-olds, 18- to 24-year-olds, and 25- to 39-year-olds), and e-cigarette use status (current/former e-cigarette use, never use). We conducted FGs with younger (14- to 15-year-olds) and older (16- to 17-year-olds) adolescents separately given the distinct social and developmental differences between these age groups and the potential impact this may have on participants’ willingness to contribute to FG discussions. The composition of each FG is presented in . This study was approved by a university Human Research Ethics Committee (The University of Melbourne #24865). All participants (and caregivers of those aged <16 years) provided written informed consent. Groups were held in March 2023 and facilitated by a Principal Research Fellow with a PhD in clinical psychology (MJ).
Unprompted Ideas for Anti-vaping Message Themes

When given the opportunity to discuss what could be done to reduce e-cigarette use, participants suggested a range of policy and practice approaches. Given the aims of the present study, we report only on the discussions in which participants focused on health communications. Discussions in which participants focused on regulation-based approaches are reported elsewhere. All groups reported that disseminating information about the health harms associated with e-cigarette use would be effective in reducing vaping:

Saying to them, “You can vape now and you can have fun with it but in 20 years - you don’t know what’s going to happen to you in the future.” – FG#4, adolescents 14–15 years old, M, NV.

Messages focusing on social norms were also considered useful, with almost all groups recommending such a theme:

A lot of people started vaping because they think it’d make them look cool... I think showing them what everyone else sees them as would probably get them to stop. – FG#4, adolescents 14–15 years old, M, NV.

Approximately half of the groups—mostly adolescents and young adults—suggested that messages focusing on the risk of addiction, the impact of dependence, and the chemical composition and manufacturing of e-cigarettes would be effective. Some groups proposed messages that focused on the impact of vaping on others, such as family, friends, and partners:

A lot of the reasons why you can’t do it indoors is [to protect] people with asthma and stuff… show the effects it has on other people. – FG#9, young adults 18–24 years old, F, V.

Talk about secondhand smoke and how it can hurt your family. – FG#7, adolescents 16–17 years old, M, V.
Perceptions of Hypothetical Anti-vaping Message Themes

Message Themes Considered Effective

Of the 14 hypothetical message themes presented, those considered most likely to be effective aligned with those mentioned spontaneously by groups (discussed above) and included the health harms associated with use, the chemical ingredients of e-cigarettes, the social consequences of vaping, and the impact of dependence. Driving positive sentiment towards most of these message themes were the high levels of certainty that these outcomes would occur and the perceived severity of the impact on one’s life. Results relating to each of these themes are now presented. Given the similarities in responses to the health harms and chemical ingredients themes, these are discussed together.

Health Harms of E-cigarette Use and Chemical Ingredients of E-cigarettes

All groups reported that messages focusing on the health harms and chemical ingredients of e-cigarettes would be effective, with most noting that such messages would provide the public with information of which they were not previously aware. Groups also discussed the potential for these messages to debunk misinformation that e-cigarette use is not harmful; a misconception that young adult and adult participants believed was due to the attractive flavorings that were available.

I think people don’t know what’s in it so bringing awareness to that may make more people disgusted at what they’re having. – FG#8, adolescents 16–17 years old, M, NV (chemical ingredients)

That’s a really good approach because a lot of people have this misconception that it’s healthier [than cigarettes] somehow so saying “it’s not actually healthy” would be helpful. – FG#16, adults 25–39 years old, M, NV (health harms)

It definitely takes the tarnish off… smoking green apple or whatever… having a message coming across “I’m actually inhaling all of these different chemicals”. It doesn’t sound as fun anymore.
– FG#16, adults 25–39 years old, M, NV (chemical ingredients)

Groups provided insights into how they believed the health harm and chemical ingredient message themes could be improved to maximize effectiveness. In terms of the latter, all groups spoke of the importance of providing plain language information about the chemicals in e-cigarettes, disseminating information about other harmful products that contain these chemicals, and describing their use in other contexts. For example, participants commented that it was not enough to inform the public that e-cigarettes contain formaldehyde; information detailing the health harms of formaldehyde and that it is used to preserve dead bodies was needed. In terms of the health harms theme, most groups reported that information on the short-term or proximal harms of vaping would be particularly useful for reducing e-cigarette use. The specific health harms discussed varied across age groups and included respiratory health, sexual health, oral health, and cancer risk. Most groups—mainly young adults and adults—suggested drawing comparisons between e-cigarettes and combustible cigarettes given greater public knowledge of the chemical ingredients and health impacts of the latter.

You could do a comparison [whereby] ‘X’ number of puffs of the average vape is the equivalent to smoking ‘X’ cigarettes… the amount of campaign energy over the last 50 years that has gone into anti-smoking… I think people generally accept that smoking is bad for you. – FG#14, adults 25–39 years old, F, NV

In terms of message execution, more than half of the groups perceived the use of imagery or stories that elicited strong emotions, such as disgust or fear, to be effective:

“You don’t want your mum to outlive you”. Stuff like that would be scary. – FG#9, young adults 18–24 years old, F, V

If you see really grotesque images of someone suffering really badly from vaping then people would be like “I don’t want that to happen to me”.
– FG#8, adolescents 16–17 years old, M, NV.

Vaping Dependence

Messages that focus on the impact of vaping dependence were considered effective by most groups. It was reported that messages with such a theme would bring attention to the pervasive impacts of substance dependence, which may otherwise go unrecognized.

I think that’s good because it will make people realise how much they depend on it… how often they do it and where they do it and how it’s not good, takes over their life. – FG#2, adolescents 14–15 years old, F, NV

Many groups offered a range of examples of the impacts of dependence that could be incorporated into communications featuring this theme to maximize effectiveness. Most commonly, groups (regardless of age, gender, or vaping status) discussed the negative impact of vaping dependence on one’s social life:

Friendship would be a good one because that’s a big thing for teenagers. No one likes being alone, they always want to be doing something… Consequences like making you not feel like going out with your friends and all that. – FG#3, adolescents 14–15 years old, M, V

People who are so addicted might not want to go to the movies because they have to sit there for three hours and they can’t vape… They don’t want to go to their partner’s house because they can’t use it there. – FG#9, young adults 18–24 years old, F, V

Some groups spoke of the financial consequences associated with keeping up a vaping habit, difficulties exercising and traveling, poor academic achievement, impact on work, reduced enjoyment of activities, and increased distress associated with withdrawal:

I really want to have a good job and I want to have money and I want to be able to afford things and be able to go on holidays and travel. If vaping would impact that, I think I’d be more into not doing it. – FG#1, adolescents 14–15 years old, F, V

My friend lost hers at a footy game and had to sit through the next two hours. She was like “I can’t do it, what do I do?”.
You could just see this shift in how much she was enjoying being there. It changed really quickly. – FG#9, young adults 18–24 years old, F, V

Social Consequences of E-cigarette Use

A message theme focused on social norms and the social consequences of e-cigarette use was particularly well-received by adolescents and young adults, as well as adults who had never vaped. The acceptance of e-cigarette use among young people was highlighted as a driver of uptake and continued use, and thus messages that aim to change social norms were perceived to be effective.

People often do it to present themselves and to look a certain way so if you were telling people, “Actually no, you don’t come across how you think you do”, then maybe they won’t want to do it as much. – FG#2, adolescents 14–15 years old, F, NV

Adults who vaped did not endorse the same views. Although they recognized the importance of changing the social acceptability of vaping, they tended to consider this norm intractable and therefore suggested that a message focused on this would be ineffective.

From my understanding, at the moment it is a cool thing to do. I think it’s a hard sell to advertise it’s actually not cool when the general consensus in that age group [adolescence] is it is cool. – FG#16, adults 25–39 years old, M, NV

You’re not out in the cold in the pouring rain, you’re – all your mates are inside vaping, so there is no loner aspect. – FG#13, adults 25–39 years old, F, V.

To deliver a message focused on social norms most effectively, half of the FGs noted that the message source would form a particularly important aspect of the campaign (discussed in further detail below). Adolescents and young adults suggested using specific terms to describe vaping (“just pathetic,” “silly,” “cringe”) and those who vape (“you’re not as cool as you think,” “nobody thinks you’re cool, you just look disgusting,” “if you do this [vape], you’re a…idiot”).
They believed these terms needed to be condescending and that they would be well-received.

“You look like an idiot, stop”. I feel like I’d be like…it would get through people’s head that they look dumb. – FG#1, adolescents 14–15 years old, F, V

Being made fun of a little bit makes you go “oh, yeah, I can see why it’s a little bit silly now.” – FG#11, young adults 18–24 years old, M, V

There’s nothing less masculine than a buff guy walking past you smelling like strawberry shortcake. Just making the connection that no one sees you as cool and it’s just a stupid image to have. – FG#12, young adults 18–24 years old, M, NV

Message Themes Considered Ineffective

Message themes that were met with ambivalence among groups included industry marketing tactics, the industry targeting a new generation of Australians, the environmental impacts of e-cigarettes, the risk of addiction, and the mislabeling of nicotine products. Message themes that were considered largely ineffective were those relating to the involvement of organized crime in the sale of products, e-cigarettes as medical devices, the risk of burns and injuries while using an e-cigarette, and the presence of counterfeit/fake products on the market. These themes were perceived to be ineffective because (1) the risk of being impacted by the issue communicated was considered rare and avoidable, (2) the themes did not directly target vaping behavior, and/or (3) the themes may have the unintended consequence of encouraging vaping. For example, the risk of burns or injuries from e-cigarette use was considered uncommon, and many groups believed that susceptibility to nicotine addiction can be controlled by the individual and is therefore avoidable.
In terms of the message themes that did not directly target vaping behavior—such as those relating to industry involvement, organized crime, and e-cigarettes as medical devices—groups comprising those who vape reported that they did not purchase e-cigarettes through social media and/or noted that e-cigarettes could be purchased outside of organized crime. They also reported that such a message would simply encourage those who use e-cigarettes to source their vapes more responsibly rather than promote cessation.

That would just take them to other places to get them. I’ve had people go behind alleys to get them, and then they realised that that’s dumb. I think that [message theme] is just a way to let people know to not do that, so they’ll do it a different way. – FG#6, adolescents 16–17 years old, F, NV

Message themes that were thought to have the potential to encourage e-cigarette use were those relating to counterfeit devices and mislabeling. For example, younger groups reported that they would not be concerned about using counterfeit devices because “they’re getting it cheaper” (FG#2, adolescents 14–15 years old, F, NV) and “as long as it works, it’s good” (FG#3, adolescents 14–15 years old, M, V). Messages that focused on nicotine mislabeling were considered to be especially counterproductive as the presence of nicotine is sought after (“Most young people would see that as a bonus.”—FG#15, adults 25–39 years old, M, V).

There were some differences observed between groups for the theme relating to the environmental impact of e-cigarettes. All adult groups, as well as young adult women who vaped, considered this theme to be effective. This positive sentiment was based on personal concerns for the environment and the belief that the broader population held similar values. It was noted, however, that messages featuring this theme may encourage more responsible e-cigarette use as opposed to cessation.

I think it’s pretty important.
More or less everyone cares about the environment… It may stop people from using it or may push them to use reusable. – FG#16, adults 25–39 years old, M, NV

By contrast, younger participants (adolescents and other young adults) believed this message theme would appeal to only a small proportion of their age group. They also reported being desensitized to environmental issues.

You’re targeting a very small audience considering the amount of people our age who actually give a damn, compared to the amount of people who don’t. You’re targeting a minority. – FG#3, adolescents 14–15 years old, M, V

When you hear about something so much you stop caring. It’s not like if I litter one gum packet one time it’s going to make the whole world collapse. – FG#1, adolescents 14–15 years old, F, V

Message Delivery

Groups discussed how messages should be delivered and by whom to optimize outcomes. Almost all groups believed that messages should be distributed via social media. TikTok, Instagram, YouTube, and SnapChat were frequently mentioned platforms. A few groups discussed their tendency to “skip” advertisements and highlighted the importance of creating health communications that got to the point quickly. Some also suggested creating advertisements that were not skippable. Regardless of age, groups generally reported that television is not an effective delivery medium. Most groups believed messages should be communicated via educational institutions. Adolescents and young adults suggested school education sessions or using schools as platforms for distributing messages. However, a few groups noted that the social acceptance of e-cigarette use in school settings would interfere with message engagement (“I’ve seen so many of my mates just laugh at it and just brush it off”—FG#7, adolescents 16–17 years old, M, V). Many groups suggested displaying messages in public such as on transport, in places where people tend to vape, and in shopping centers.
Some groups reported that messages displayed at point of sale would also help to reduce e-cigarette use. In terms of message source, groups discussed the importance of having messages delivered by individuals to whom they could relate; for example, someone they see often or with whom they share similar characteristics (eg, age). Most groups discussed the effectiveness of personal testimonies from individuals (of the same age as the group being targeted) who use or used e-cigarettes. They reported that this would provide a sense of realism and increase their perceptions of the risks of e-cigarette use.

If they could show an actual young person that’s had negative effects from it, then I feel that is a wake-up call. I don’t know anyone that’s had any problems from vaping, even if they’ve done it for many, many years. No-one, I feel, understands until it happens to someone that it has harmful effects. – FG#5, adolescents 16–17 years old, F, V

Most groups reported on the likely effectiveness of social media “influencers” (including athletes and popular gamers), noting that the followers of these individuals are the target audience of anti-vaping campaigns. Social influencers were thought to have the trust of the target audience and considerable influence over social norms.

If I had an influencer who I respect and have followed for a while relay a message like that, I would be more likely to look into it a bit more in comparison to an ad that I can leave instantly. – FG#10, young adults 18–24 years old, F, NV

Finally, a substantial minority of groups—all adolescents and young adults—discussed being influenced by well-recognized organizations and medical professionals including Cancer Councils, medical doctors, and health experts.

I’d still probably listen to [Cancer Councils] as well to be honest. I find them really credible. Everyone knows who the Cancer Council is.
– FG#9, young adults 18–24 years old, F, V
Message Themes Considered Effective Of the 14 hypothetical message themes presented, those considered most likely to be effective aligned with those mentioned spontaneously by groups (discussed above) and included the health harms associated with use, the chemical ingredients of e-cigarettes, the social consequences of vaping, and the impact of dependence. Driving positive sentiment towards most of these message themes were the high levels of certainty that these outcomes would occur and the perceived severity of the impact on one’s life. Results relating to each of these themes are now presented. Given the similarities in responses to the health harms and chemical ingredients themes, these are discussed together. Health Harms of E-cigarette Use and Chemical Ingredients of E-cigarettes All groups reported that messages focusing on the health harms and chemical ingredients of e-cigarettes would be effective, with most noting that such messages would provide the public with information of which they were not previously aware. Groups also discussed the potential for these messages to debunk misinformation that e-cigarette use is not harmful; a misconception that young adult and adult participants believed was due to the attractive flavorings that were available. I think people don’t know what’s in it so bringing awareness to that may make more people disgusted at what they’re having. – FG#8, adolescents 16–17 years old, M, NV (chemical ingredients) That’s a really good approach because a lot of people have this misconception that it’s healthier [than cigarettes] somehow so saying “it’s not actually healthy” would be helpful. – FG#16, adults 25–39 years old, M, NV (health harms) It definitely takes the tarnish off… smoking green apple or whatever… having a message coming across “I’m actually inhaling all of these different chemicals”. It doesn’t sound as fun anymore. 
– FG#16, adults 25–39 years old, M, NV (chemical ingredients) Groups provided insights into how they believed the health harm and chemical ingredient message themes could be improved to maximize effectiveness. In terms of the latter, all groups spoke of the importance of providing plain language information about the chemicals in e-cigarettes, disseminating information about other harmful products that contain these chemicals, and describing their use in other contexts. For example, participants commented that it was not enough to inform the public that e-cigarettes contain formaldehyde; information detailing the health harms of formaldehyde and that it is used to preserve dead bodies was needed. In terms of the health harms theme, most groups reported that information on the short-term or proximal harms of vaping would be particularly useful for reducing e-cigarette use. The specific health harms discussed varied across age groups and included respiratory health, sexual health, oral health, and cancer risk. Most groups—mainly young adults and adults—suggested drawing comparisons between e-cigarettes and combustible cigarettes given greater public knowledge of the chemical ingredients and health impacts of the latter. You could do a comparison [whereby] ‘X’ number of puffs of the average vape is the equivalent to smoking ‘X’ cigarettes… the amount of campaign energy over the last 50 years that has gone into anti-smoking… I think people generally accept that smoking is bad for you. – FG#14, adults 25–39 years old, F, NV In terms of message execution, more than half of the groups perceived the use of imagery or stories that elicited strong emotions, such as disgust or fear, to be effective: “You don’t want your mum to outlive you”. Stuff like that would be scary. – FG#9, young adults 18–24 years old, F, V If you see really grotesque images of someone suffering really badly from vaping then people would be like “I don’t want that to happen to me”. 
– FG#8, adolescents 16–17 years old, M, NV. Vaping Dependence Messages that focus on the impact of vaping dependence were considered effective by most groups. It was reported that messages with such a theme would bring attention to the pervasive impacts of substance dependence, which may otherwise go unrecognized. I think that’s good because it will make people realise how much they depend on it… how often they do it and where they do it and how it’s not good, takes over their life. – FG#2, adolescents 14–15 years old, F, NV Many groups offered a range of examples of the impacts of dependence that could be incorporated into communications featuring this theme to maximize effectiveness. Most commonly, groups (regardless of age, gender, or vaping status) discussed the negative impact of vaping dependence on one’s social life: Friendship would be a good one because that’s a big thing for teenagers. No one likes being alone, they always want to be doing something… Consequences like making you not feel like going out with your friends and all that. – FG#3, adolescents 14–15 years old, M, V People who are so addicted might not want to go to the movies because they have to sit there for three hours and they can’t vape… They don’t want to go to their partner’s house because they can’t use it there. – FG#9, young adults 18–24 years old, F, V Some groups spoke of the financial consequences associated with keeping up a vaping habit, difficulties exercising and traveling, poor academic achievement, impact on work, reduced enjoyment of activities, and increased distress associated with withdrawal: I really want to have a good job and I want to have money and I want to be able to afford things and be able to go on holidays and travel. If vaping would impact that, I think I’d be more into not doing it. – FG#1, adolescents 14–15 years old, F, V My friend lost hers at a footy game and had to sit through the next two hours. She was like “I can’t do it, what do I do?”. 
You could just see this shift in how much she was enjoying being there. It changed really quickly. – FG#9, young adults 18–24 years old, F, V

Social Consequences of E-cigarette Use

A message theme focused on social norms and the social consequences of e-cigarette use was particularly well-received by adolescents and young adults, as well as adults who had never vaped. The acceptance of e-cigarette use among young people was highlighted as a driver of uptake and continued use, and thus messages that aim to change social norms were perceived to be effective.

People often do it to present themselves and to look a certain way so if you were telling people, “Actually no, you don’t come across how you think you do”, then maybe they won’t want to do it as much. – FG#2, adolescents 14–15 years old, F, NV

Adults who vaped did not endorse the same views. Although they recognized the importance of changing the social acceptability of vaping, they tended to consider this norm intractable and therefore suggested that a message focused on this would be ineffective.

From my understanding, at the moment it is a cool thing to do. I think it’s a hard sell to advertise it’s actually not cool when the general consensus in that age group [adolescence] is it is cool. – FG#16, adults 25–39 years old, M, NV

You’re not out in the cold in the pouring rain, you’re – all your mates are inside vaping, so there is no loner aspect. – FG#13, adults 25–39 years old, F, V

To deliver a message focused on social norms most effectively, half of the FGs noted that the message source would form a particularly important aspect of the campaign (discussed in further detail below). Adolescents and young adults suggested using specific terms to describe vaping (“just pathetic,” “silly,” “cringe”) and those who vape (“you’re not as cool as you think,” “nobody thinks you’re cool, you just look disgusting,” “if you do this [vape], you’re a…idiot”).
They believed these terms needed to be condescending and would be well-received.

“You look like an idiot, stop”. I feel like I’d be like…it would get through people’s head that they look dumb. – FG#1, adolescents 14–15 years old, F, V

Being made fun of a little bit makes you go “oh, yeah, I can see why it’s a little bit silly now.” – FG#11, young adults 18–24 years old, M, V

There’s nothing less masculine than a buff guy walking past you smelling like strawberry shortcake. Just making the connection that no one sees you as cool and it’s just a stupid image to have. – FG#12, young adults 18–24 years old, M, NV

Message themes met with ambivalence among groups included industry marketing tactics, the industry targeting a new generation of Australians, the environmental impacts of e-cigarettes, the risk of addiction, and the mislabeling of nicotine products. Message themes that were considered largely ineffective were those relating to the involvement of organized crime in the sale of products, e-cigarettes as medical devices, the risk of burns and injuries while using an e-cigarette, and the presence of counterfeit/fake products on the market.
These themes were perceived to be ineffective because (1) the risk of being impacted by the issue communicated was considered rare and avoidable, (2) the themes did not directly target vaping behavior, and/or (3) the themes may have the unintended consequence of encouraging vaping. For example, the risk of burns or injuries from e-cigarette use was considered uncommon, and many groups believed that susceptibility to nicotine addiction can be controlled by the individual and is therefore avoidable.

In terms of the message themes that did not directly target vaping behavior—such as those relating to industry involvement, organized crime, and e-cigarettes as medical devices—groups comprising those who vape reported that they did not purchase e-cigarettes through social media and/or noted that e-cigarettes could be purchased outside of organized crime. They also reported that such a message would simply encourage those who use e-cigarettes to source their vapes more responsibly rather than promote cessation.

That would just take them to other places to get them. I’ve had people go behind alleys to get them, and then they realised that that’s dumb. I think that [message theme] is just a way to let people know to not do that, so they’ll do it a different way. – FG#6, adolescents 16–17 years old, F, NV

Message themes that were thought to have the potential to encourage e-cigarette use were those relating to counterfeit devices and mislabeling. For example, younger groups reported that they would not be concerned about using counterfeit devices because “they’re getting it cheaper” (FG#2, adolescents 14–15 years old, F, NV) and “as long as it works, it’s good” (FG#3, adolescents 14–15 years old, M, V). Messages that focused on nicotine mislabeling were considered to be especially counterproductive as the presence of nicotine is sought after (“Most young people would see that as a bonus.”—FG#15, adults 25–39 years old, M, V).
There were some differences observed between groups for the theme relating to the environmental impact of e-cigarettes. All adult groups, as well as young adult women who vaped, considered this theme to be effective. This positive sentiment was based on personal concerns for the environment and the belief that the broader population held similar values. It was noted, however, that messages featuring this theme may encourage more responsible e-cigarette use as opposed to cessation.

I think it’s pretty important. More or less everyone cares about the environment… It may stop people from using it or may push them to use reusable. – FG#16, adults 25–39 years old, M, NV

By contrast, younger participants (adolescents and other young adults) believed this message theme would appeal to only a small proportion of their age group. They also reported being desensitized to environmental issues.

You’re targeting a very small audience considering the amount of people our age who actually give a damn, compared to the amount of people who don’t. You’re targeting a minority. – FG#3, adolescents 14–15 years old, M, V

When you hear about something so much you stop caring. It’s not like if I litter one gum packet one time it’s going to make the whole world collapse. – FG#1, adolescents 14–15 years old, F, V

Groups discussed how messages should be delivered and by whom to optimize outcomes. Almost all groups believed that messages should be distributed via social media. TikTok, Instagram, YouTube, and SnapChat were frequently mentioned platforms. A few groups discussed their tendency to “skip” advertisements and highlighted the importance of creating health communications that got to the point quickly. Some also suggested creating advertisements that were not skippable. Regardless of age, groups generally reported that television is not an effective delivery medium. Most groups believed messages should be communicated via educational institutions.
Adolescents and young adults suggested school education sessions or using schools as platforms for distributing messages. However, a few groups noted that the social acceptance of e-cigarette use in school settings would interfere with message engagement (“I’ve seen so many of my mates just laugh at it and just brush it off”—FG#7, adolescents 16–17 years old, M, V). Many groups suggested displaying messages in public such as on transport, in places where people tend to vape, and in shopping centers. Some groups reported that messages displayed at point of sale would also help to reduce e-cigarette use.

In terms of message source, groups discussed the importance of having messages delivered by individuals to whom they could relate; for example, someone they see often or with whom they share similar characteristics (eg, age). Most groups discussed the effectiveness of personal testimonies from individuals (of the same age as the group being targeted) who use or used e-cigarettes. They reported that this would provide a sense of realism and increase their perceptions of the risks of e-cigarette use.

If they could show an actual young person that’s had negative effects from it, then I feel that is a wake-up call. I don’t know anyone that’s had any problems from vaping, even if they’ve done it for many, many years. No-one, I feel, understands until it happens to someone that it has harmful effects. – FG#5, adolescents 16–17 years old, F, V

Most groups reported on the likely effectiveness of social media “influencers” (including athletes and popular gamers), noting that the followers of these individuals are the target audience of anti-vaping campaigns. Social influencers were thought to have the trust of the target audience and considerable influence over social norms.

If I had an influencer who I respect and have followed for a while relay a message like that, I would be more likely to look into it a bit more in comparison to an ad that I can leave instantly.
– FG#10, young adults 18–24 years old, F, NV

Finally, a substantial minority of groups—all adolescents and young adults—discussed being influenced by well-recognized organizations and medical professionals including Cancer Councils, medical doctors, and health experts.

I’d still probably listen to [Cancer Councils] as well to be honest. I find them really credible. Everyone knows who the Cancer Council is. – FG#9, young adults 18–24 years old, F, V

To inform the development of health communications aimed at reducing e-cigarette use among youth and those who have never smoked, we explored adolescents’, young adults’, and adults’ (1) perceptions of the effectiveness of a variety of message themes and (2) ideas for how message themes could be executed and delivered to the public to optimize outcomes. Message themes perceived to be the most effective by participants of the FGs were those that focused on (1) the health harms associated with e-cigarette use and (2) the chemical ingredients in e-cigarette products, other uses for these chemicals, and their impact on the body when inhaled. Although our analysis was not theory-driven, the expressed reasons for believing these two themes to be particularly effective at reducing e-cigarette use are consistent with the Health Action Process Approach, which considers outcome expectancies and risk perceptions to be important predictors of behavior change. The findings are also consistent with prior work that has highlighted the potential effectiveness of messages relating to health harms and chemical ingredients. The present study extends this work by identifying the specific health harms considered most likely to discourage e-cigarette use and how these should be presented to maximize effectiveness. Results suggest that individuals may be discouraged from using e-cigarettes when presented with the short-term risks of use.
Participants also recommended focusing on the respiratory health, sexual health, oral health, and cancer risks associated with vaping and noted that communications featuring the personal testimonies of those who use e-cigarettes and have experienced these or other health harms would be most impactful. In terms of the chemical ingredients theme, participants recommended that communications featuring this theme go further than simply informing the target audience that e-cigarettes contain chemicals. They highlighted the importance of explaining what the chemical does to one’s body and providing information on other products in which the chemical can be found, which is consistent with previous research in this area indicating that unknown chemical names can cause confusion.

Other message themes considered likely to be effective were those that focused on the negative social consequences associated with e-cigarette use and the impact of dependence on daily functioning. In terms of the former, mixed results had previously been found for messages relating to social norms, with some work finding that adolescents and young adults consider these types of messages to be effective and other work suggesting they do not. Participants in the present study reported that social attitudes towards e-cigarette use would be difficult to change but believed this to be possible by using social media influencers and/or a source the target audience finds relatable and credible. Personal testimonies were considered particularly important.

In terms of the theme relating to dependence, participants considered the following issues to be particularly relevant: social, work, academic, and financial consequences; reduced enjoyment of activities; increased distress from withdrawal symptoms; and difficulties traveling and undertaking physical exercise.
Interestingly, the dependence theme was considered more effective than the theme relating to addiction, with many groups noting that nicotine addiction can be controlled by the individual and is therefore avoidable (i.e., “it won’t happen to me”). Although speculative, it could also be that young Australians have become desensitized to the terms “addiction” or “addicted” as a result of their use in the everyday vernacular (eg, “I’m addicted to TikTok”). Developers of health communications relating to e-cigarette use may wish to consider focusing on dependence or the impact of addiction without using the term itself. Personally relevant examples of dependence that relate to the aforementioned issues are likely to be most impactful.

Message themes that did not specifically relate to e-cigarette use or that presented information the target audience considered unlikely—such as anti-industry sentiment, the involvement of organized crime, and the risk of burns and injuries—were perceived as having little impact on vaping behaviors. This is somewhat inconsistent with prior work that found anti-industry sentiment to be potentially effective. Finally, a message theme focusing on the environmental harms associated with e-cigarettes drew mixed responses, with some groups considering such a theme to be effective and others believing it would resonate with only a few individuals.

Results from the present study have the potential to inform future research directions and the development of anti-vaping health communication campaigns. Given themes related to health harms, chemical ingredients, and dependence were well-received by all groups, rotating messages that utilize these themes may be effective in maintaining population attention and interest while continuing to pursue a consistent public health agenda of reducing e-cigarette use.
Messages featuring themes relating to social norms and the environmental impacts of vaping may require more targeted dissemination, with health communication campaigns addressing the former likely to be well-received by adolescents and young adults but not adults, and the latter best disseminated to adults. If anti-vaping campaigns addressing themes outside of those assessed in the current study are developed, our findings suggest these should highlight short-term risks that are perceived as being difficult to avoid and have significant unwanted outcomes. We note that young people in our study suggested several specific terms for use in campaigns targeting social norms that may be stigmatizing to those experiencing nicotine dependence. Continued co-design is important to execute this message theme effectively, and feedback should be sought from those who currently vape to ensure messages that attempt to dislodge embedded social practices and encourage cessation are not stigmatizing.

This study has some limitations. First, we did not assess message themes according to a validated model of perceived effectiveness or behavior change. While findings appear consistent with relevant models in these areas, future assessments of message themes may wish to use established measures of perceived effectiveness that assess various behavior change constructs (eg, self-efficacy). Second, the qualitative nature of this research means that findings only reflect the views of the participants sampled and caution should be exercised when generalizing to the broader population. Third, the data we obtained may have been impacted by FG dynamics (eg, participants not wishing to disagree with the group). Our use of stratification to create more homogeneous groups is an effective means of mitigating this risk. Fourth, our analytical approach involved a single coder.
While this is standard practice in reflexive thematic analysis, it does prevent the calculation of inter-rater reliability and we acknowledge the subjectivity associated with this type of analysis. Trustworthiness was enhanced, however, by the involvement of the FG facilitator in the construction and refinement of the coding hierarchy. In addition, the coder listened to all audio recordings, read the group transcripts in their entirety, and engaged in line-by-line coding. These processes facilitated accurate and genuine interpretation of the data. Finally, as the purpose of this study was to consider message themes that may be effective at discouraging e-cigarette use among youth and those who have never smoked, we did not explore perceptions of the likely impact of message themes on those who do smoke. The development of message themes for anti-vaping campaigns should consider the information needs of those who smoke to ensure those who may benefit from using e-cigarettes to quit smoking are not dissuaded from seeking health advice. Given the increasing number of individuals who regularly vape, consideration should also be given to messages that provide support and information on how to quit. In terms of study strengths, the qualitative nature of the work allowed for the collection of more detailed data than seen in previous studies in this area. Second, we were able to assess message themes that were largely evidence-based, that is, stemming from previous research on the drivers of e-cigarette use. Third, novel information was collected on how message themes could be improved and best executed and delivered to the public. Finally, prior to the current study, little research on anti-vaping messages had included adults and those who have never used e-cigarettes. Exploring the effectiveness of message themes among members of these population segments is important given e-cigarette use continues to increase among adults and prevention is key to reducing uptake.
The present study identified several anti-vaping message themes that may be effective at reducing e-cigarette use, providing developers of health communication campaigns with key insights into messages that are likely to be well-received. FG participants considered several themes to be effective at reducing e-cigarette use, suggesting that developers of health communications relating to vaping could use a series of rotating messages. This technique has been used in tobacco control campaigns to sustain the attention of the target audience and reduce desensitization over time. Further work to develop a range of specific anti-vaping messages through iterative co-design with the target audience is warranted. Supplementary material is available at Nicotine and Tobacco Research online.
Comparative experimental anesthesia efficacy study of epidural injection at the sacrococcygeal space using ultrasound guidance versus blind technique in Egyptian donkeys ( Equus asinus )
Sacrococcygeal epidural injection using a blind technique can be challenging, particularly in small animals with high body condition scores, due to abundant subcutaneous fat . Caudal epidural analgesia is frequently used on standing sedated animals during surgical operations since it preserves the mobility of the pelvic limbs . Excellent accuracy can be achieved when injecting the cervical nerve roots of horses under ultrasound guidance, with the injectate deposited in direct contact with up to 75% of the C3-C8 cervical nerve roots . In seven obese dogs, the needle insertion site for lumbosacral epidural injection was determined by ultrasound imaging . Ultrasound guidance is frequently used in human anesthesia to guide the placement of caudal epidural needles , and the ultrasound-guided epidural injection technique has recently been introduced into veterinary surgery . Ultrasound evaluation prior to injection increases the success rate of anesthetists administering epidural anesthesia and shortens their learning curve . A recent parturient study found that prepuncture ultrasonographic evaluation of the lumbar region was linked to a notably lower number of puncture attempts and a quicker procedure time; these benefits were especially noticeable in individuals who were obese. The first-attempt success rate for obese patients under ultrasound guidance was 92%, compared to 44% with a traditional blind approach . Although the relevance and value of this image-guided procedure for less-experienced veterinary operators, compared with the traditional landmark-based blind method, have not been examined, it has potential value . To the authors' knowledge, no veterinary study has examined ultrasound-guided epidural injection at the donkey's sacrococcygeal region.
Many ultrasound-guided methods define needle insertion at the cisterna magna in horses for CSF collection , between the first and second cervical vertebrae , and in the lumbosacral space . One paper has described ultrasound-guided sacrococcygeal epidural injection in dogs . It was hypothesized that using ultrasound guidance to optimize needle placement and to visualize the sacrococcygeal space and adjacent structures would enhance the technique's safety and success rate compared with the blind technique. The goal of this investigation was to evaluate the efficacy of blind and ultrasound-guided epidural injection in the donkey's sacrococcygeal region in cadavers and clinical cases, and to describe the ultrasonographic anatomy of the sacrococcygeal region in donkeys. Finally, the current findings were compared with previously published data on epidural injection in the sacrococcygeal area of different domestic species.
Animals collection and study design
This study was conducted on a total of twenty adult Egyptian donkeys ( Equus asinus ) of both sexes (sexes not recorded) that weighed (mean ± SD) 130.4 ± 3.59 kg. To ensure their suitability for the experiment, the musculoskeletal system of each donkey was examined physically, radiographically, and ultrasonographically; all donkeys were free from vertebral column anatomical abnormalities. The donkeys were divided into two experimental groups: the first comprised ten cadavers from humanely euthanized donkeys, and the second comprised ten live, healthy adult donkeys used to assess the efficacy and time to onset of analgesia for blind and ultrasound-guided epidural injections at the sacrococcygeal region. The donkeys were obtained directly from their owners, and informed consent was duly obtained. The anatomical terms were applied according to Nomina Anatomica Veterinaria .
Cadaveric study
This study was conducted on ten donkey cadavers within 1–2 h of humane euthanasia by rapid intravenous injection of thiopental sodium (1-g vial; EPICO, Egypt) at a dose of 35 mg/kg BW, as illustrated by Hamed et al. . Cadavers were randomly subdivided according to the method of epidural injection into blind ( n = 5) or US-guided ( n = 5) epidural injection at the sacrococcygeal region. An area extending from the initial third of the tail to the lumbosacral region was clipped and prepared aseptically. The blind and ultrasound-guided epidural injection techniques were then applied at the sacrococcygeal space as follows:
Blind epidural injection
Blind injection was carried out by palpating the sacrococcygeal space and inserting a 20-gauge needle (Med, Eldawlia ico, Egypt) into the space, aiming precisely for a successful injection. Once the surgeon confirmed that the correct position had been achieved, 6 mL of 1% methylene blue solution was injected into the spinal canal at the sacrococcygeal space (Fig. ).
US-guided epidural injection
An ultrasound machine (Mindray DP-2200Vet., PR China) with a 7.0–10.0 MHz convex probe was used to carry out US-guided injection in both transverse and longitudinal planes. Initially, the transducer was positioned transversely across the caudal sacral area, and the sacrococcygeal space was located and viewed by moving the probe craniocaudally along the vertebral column. The needle was introduced under ultrasound guidance and, with the transducer in place, the needle tip was advanced toward the sacrococcygeal area until it approached the spinal canal floor. The needle tip placement in relation to the sacrococcygeal space was examined on ultrasonography, and adjustments were made if required. The dye's entry into the space was seen as an anechoic fluid wave on the ultrasonographic image. After epidural injection, either blind or US-guided, careful anatomic dissection of the preserved specimens was performed, documented, and photographed to look for methylene blue in the vertebral canal. The presence of methylene blue in the spinal canal on dissection was taken as evidence of a successful injection (Fig. E).
In vivo study
Ten healthy adult donkeys were employed to evaluate the effectiveness and precision of blind and ultrasound-guided epidural injections at the sacrococcygeal region. All donkeys were kept in stocks for adequate restraint, and the examination area was then aseptically cleaned and shaved. Sedation was induced by intravenous administration of a combination of acepromazine (Vetranquil, 0.02 mg/kg body weight IV) and detomidine (Detogesic, 0.01 mg/kg body weight IV), with a 15-minute interval. A subcutaneous injection of 2 mL of lidocaine HCl was then administered. The precise location was determined by palpating the indentation between the sacrum and the first caudal vertebra while moving the tail up and down. The 20G needle was inserted through the skin at a 35° angle to the median plane. To verify correct needle placement, the hanging drop technique was used to identify negative pressure, and the injection proceeded without resistance. After negative aspiration, 6 mL of 2% lidocaine hydrochloride (Lidocain Ccl, B. Braun Melsungen AG, Germany) was injected epidurally into the sacrococcygeal space (Fig. ). The blind or ultrasound-guided approach was considered successful when cutaneous sensation in the perineal region was completely lost. Sensation was tested at the perineal region with a pinprick test using a 22-gauge (2.5 cm) needle prior to lidocaine injection (baseline) and at two-minute intervals thereafter until sensation was abolished. The needle was inserted into the skin, and the animal's reaction to this stimulus was observed.
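The premedication doses above imply the following total drug masses for the mean body weight reported in the study; a minimal arithmetic sketch (the 10 mg/mL stock concentrations are illustrative assumptions, not values taken from the study):

```python
def dose_mg(weight_kg: float, dose_mg_per_kg: float) -> float:
    """Total drug mass (mg) for a given body weight."""
    return weight_kg * dose_mg_per_kg

def volume_ml(total_mg: float, conc_mg_per_ml: float) -> float:
    """Syringe volume (mL) for a stock solution of the given concentration."""
    return total_mg / conc_mg_per_ml

weight = 130.4                   # mean body weight reported in the study (kg)
ace_mg = dose_mg(weight, 0.02)   # acepromazine, 0.02 mg/kg IV -> 2.608 mg
det_mg = dose_mg(weight, 0.01)   # detomidine, 0.01 mg/kg IV -> 1.304 mg

# Assumed 10 mg/mL stock concentration for both drugs (illustrative only):
print(f"acepromazine: {ace_mg:.2f} mg = {volume_ml(ace_mg, 10.0):.3f} mL")
print(f"detomidine:   {det_mg:.2f} mg = {volume_ml(det_mg, 10.0):.3f} mL")
```

Scaling by each animal's actual weight rather than the group mean is of course what matters clinically; the sketch only checks the order of magnitude of the volumes involved.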
Analgesia was deemed effective when the animal accepted the skin puncture and did not react to pricking. When the animal shifted its head, neck, and/or trunk to preclude the unpleasant stimulation of the needle, analgesia was deemed ineffective. The time (in minutes) between the injection of the local anesthetic and the loss of sensation was regarded as the onset of local analgesia. Three days following injection, the donkeys were observed to check for any possible complications, including infection, hematoma, or neurological issues.
Injection criteria
The injection criterion evaluation was the responsibility of individual practitioners. A subjective grading method for the ease of accurate needle penetration, the difficulty of injection, the number of tries, and the performance time was used to evaluate and score the expert's estimated confidence at injection (Table ), according to El-Shafaey et al. . Injection criteria were applied in both the cadaveric and live animal studies, and their effectiveness was assessed and judged by skilled anatomists, sonographers, radiologists, and anesthetists. The epidural injection procedures for both the cadaveric and live animal experiments were carried out by a skilled anesthetist.
Statistical analysis
The statistical software package GraphPad Prism (GraphPad Prism for Win. Version 5.0, GraphPad Software Inc., USA) was used to conduct the statistical analysis. All scores were expressed as median (minimum–maximum). Pairwise comparisons between the two injection criterion scores (non-parametric data) were carried out using the Mann-Whitney U test. Significance was deemed to exist at P < 0.05 after applying the Bonferroni correction for multiple comparisons between variables.
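The pinprick endpoint described above maps naturally onto a small helper that converts the two-minute testing schedule into an onset time; a sketch (the function name and boolean encoding are ours, not from the study):

```python
def onset_of_analgesia(reactions, interval_min=2):
    """Minutes from injection to the first pinprick test with no reaction.

    reactions: booleans recorded at each test (True = animal reacted),
    starting `interval_min` minutes after the local anesthetic injection.
    Returns None if the animal reacted at every test (analgesia ineffective).
    """
    for i, reacted in enumerate(reactions, start=1):
        if not reacted:
            return i * interval_min
    return None

# Hypothetical record: reacted at 2 and 4 min, no reaction at 6 min.
print(onset_of_analgesia([True, True, False]))  # -> 6
```

Note that this interval-censored design only resolves onset to the nearest two minutes, which is worth keeping in mind when comparing group means.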
Cadaveric study
In the cadaveric investigation, US-guided epidural injection at the sacrococcygeal region proved to be a practical, dependable, and precise method. Cadaver dissection verified that the sacrococcygeal space was correctly identified and dyed in every case (Fig. E). On the ultrasonogram, the caudal portion of the sacrum served as the prominent landmark for transverse transducer placement with respect to the donkey vertebral column. The sacral crest appeared in the image as a thin, hyperechoic vertical structure, with two perpendicular hyperechoic lines on either side of it depicting the caudal sacral processes. Caudal to this site, the sacrococcygeal space, bounded by the body and arch of the first caudal vertebra, was seen as a circular hypoechoic region delimited by hyperechoic structures that produced distal acoustic shadowing (Fig. ). The floor of the spinal canal was indicated by a thin, straight, hyperechoic line with distal acoustic shadowing found beneath the epidural space. The needle was advanced into the epidural space through the opening between the sacrum and the dorsal lamina of the first caudal vertebra until it was seen to touch the vertebral canal floor. In all of the dissected cadavers, no blood vessel was found to be stained with the blue dye, indicating that no vascular structures had been accidentally penetrated. Overall, the US-guided epidural injection at the donkey's sacrococcygeal region yielded significantly better ( P < 0.05) injection parameters than the blind technique (Table ).
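The pairwise score comparisons behind these P-values (a Mann-Whitney U test on ordinal injection scores, as described in the statistical analysis) can be sketched in pure Python. The score vectors below are hypothetical illustrations, not the study's data; the critical value of 2 applies to n1 = n2 = 5 at two-tailed α = 0.05, before the Bonferroni correction (which divides α by the number of criteria compared):

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U statistic via rank sums, using midranks for ties."""
    combined = sorted([(v, 0) for v in a] + [(v, 1) for v in b])
    values = [v for v, _ in combined]
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j < len(values) and values[j] == values[i]:
            j += 1
        midrank = (i + 1 + j) / 2          # average of ranks i+1 .. j
        for k in range(i, j):
            ranks[k] = midrank
        i = j
    r1 = sum(r for r, (_, g) in zip(ranks, combined) if g == 0)
    n1, n2 = len(a), len(b)
    u1 = r1 - n1 * (n1 + 1) / 2
    return min(u1, n1 * n2 - u1)

# Hypothetical needle-accuracy scores, five cadavers per group:
us_guided = [3, 3, 2, 3, 3]
blind = [1, 2, 1, 1, 2]
print(mann_whitney_u(us_guided, blind))  # -> 1.0, below the critical value of 2
```

With such small groups an exact test is required; the rank-sum statistic above would normally be referred to exact U tables rather than a normal approximation.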
Analysis of injection parameters showed that the US-guided method had substantially higher needle penetration accuracy than the blind method. Injection difficulty and trial count were significantly higher with the blind technique than with the US-guided procedure: more trials were required for an effective injection with the blind technique than with the US-guided technique (2 vs. 0.5, respectively) (Table ). US-guided injection also required a shorter performance time for accurate needle placement than the blind method (3 min vs. 5 min, respectively).
US-guided approach required a short time for the onset of analgesia
Live donkeys exhibited good tolerance to both blind and ultrasound-guided methods of epidural injection at the sacrococcygeal region. The ultrasound-guided approach allowed visualization of the same structures seen in the cadavers; the ultrasonographic appearance of the epidural space and surrounding structures did not differ between living and dead animals. Visualization of the needle tip within the vertebral canal was also achievable in all cases, and negative aspiration confirmed correct placement and avoidance of vascular structures. The local anesthetic solution was injected gradually and was always visible in real time once the needle tip was inside the sacrococcygeal region (Fig. ). All animals experienced a loss of sensation in the perineal region within five to ten minutes, and no complications were detected visually or ultrasonographically during or after the procedure. With the 'blind' technique, the bony landmarks could be identified in every instance. The needle was advanced to the sacrococcygeal area; aspiration was attempted prior to each injection, and the needle was withdrawn and reinserted until aspiration proved negative.
Desensitization of the perineal region was attained in 3/5 cases, and the onset of analgesia began to take effect between 15 and 20 min later. Each donkey recovered calmly from the trials and displayed no signs of any neurological abnormalities or evidence of nerve harm. Compared to the blind procedure, the ultrasound-guided approach resulted in a shorter time for the onset of analgesia, but it was non-significant ( P < 0.09). The duration of analgesia in both groups (up to 75 min) did not differ significantly (Table ).
The field of veterinary anesthesia is persistently pursuing suitable alternatives for epidural injection that offer a higher chance of success through accurate needle placement, dependability, and safety . Finding the best imaging modality for epidural administration in veterinary medicine remains a topic of controversy. As a result, the purpose of this investigation was to compare the viability and applicability of US-guided versus blind approaches for epidural injection in the sacrococcygeal area of donkeys. To our knowledge, this is one of the first studies to compare and describe the "blind" and ultrasound-guided methods for administering epidural injections at the sacrococcygeal space in Egyptian donkeys. In the cadaveric investigation, US-guided epidural injection at the sacrococcygeal region proved to be a practical, dependable, and precise method. The sacrococcygeal epidural space was observed using ultrasonography in both cadavers and clinical donkeys; it was identified as a hypoechoic circular zone situated caudal to the sacrum, delineated by the hyperechoic bony components of the first caudal vertebra.
This appearance was comparable to that of the sacral hiatus reported in dogs , where the median sacral crest and the caudal sacral processes served as helpful starting landmarks for locating the sacrococcygeal epidural space. Furthermore, the transverse approach provided the ultrasonographer with a small dorsal window between the last sacral vertebra and the first caudal vertebra, enabling the vertebral canal to be imaged. The technical skill of puncturing the sacrococcygeal space can be difficult to teach to novice operators, as the tactile feedback of tissue layers during needle insertion can be challenging to communicate. Before carrying out a procedure in a clinical context, training on a cadaver is thought to have the benefit of lowering a novice's anxiety and offering practical, hands-on training . In this study, fresh cadavers were used and were considered an appropriate tool for training and learning needle puncture of the sacrococcygeal space before clinical practice. Nevertheless, the precise in vivo sensation of tissue layers, animal responses (such as twitching of the tail or a popping sensation on entering the sacrococcygeal space), and the potential presence of CSF and blood can only be felt and experienced on a live anesthetized donkey, as reported by Etienne et al. . Gross dissection of cadavers was useful in this study for establishing an appropriate technique for puncturing the sacrococcygeal space in donkeys: identifying anatomical landmarks, placing needles correctly, checking for methylene blue in the vertebral canal, and assessing potential damage to surrounding structures. Following ultrasound, cadaveric dissection verified that the sacrococcygeal area had been precisely located and stained in each case.
There was no evidence of vascular injury in any of the dissected cadavers, since no blood vessel was found to be stained with the blue dye. These findings were similar to those previously described in canines, donkeys, and Egyptian buffaloes . Ultrasound is generally regarded as a highly helpful instrument for collection of CSF at the atlanto-occipital site in standing horses, because it reduces the number of attempts, speeds up the process, and minimizes blood contamination and damage . To achieve the puncture using a blind technique based primarily on anatomical landmarks, the horse must be in a symmetrical and static position; ultrasonography enables accurate site localization and puncture . According to our investigation, US-guided injections had a noticeably higher accuracy rate than blind injections. This may be related to the ability of US guidance to visualize the needle tip and fluid flow during epidural injection, allowing the needle to be directed to the sacrococcygeal area while avoiding important structures. Our results aligned with those described in horses .

The objective of the live study was to eliminate the post-mortem alterations present in cadaveric specimens and to capture crucial factors, such as the attitude, pain, and behavior of the living animals throughout the injection. In live donkeys, both blind and ultrasound-guided methods of epidural injection at the sacrococcygeal region were tolerated well. There were no differences between living and dead animals in the ultrasonographic features of the epidural space and surrounding structures. Negative aspiration verified the accuracy of placement and the avoidance of vascular structures. Our findings are consistent with those described in horses and buffaloes . In this study, US-guided sacrococcygeal space injection produced superior results to the blind approach for all injection metrics, as well as increased specificity.
The optimal needle position inside the sacrococcygeal space, the positive non-invasive visualization of the spinal canal, and the high-quality ultrasound images improved accuracy and shortened the time (3 min vs. 5 min, respectively) and the number of attempts needed for epidural injection. These findings align with those previously reported . On the contrary, when US-guided techniques were used to inject the scapulohumeral joint (SHJ), bicipital bursa (BB), and infraspinatus bursa (IB) in horses, the average time was noticeably longer; this was believed to be directly linked to the operator's inexperience . The blind sacrococcygeal epidural injection technique is difficult to use because it depends on palpation of surface anatomic landmarks . In the current investigation, a shorter time to the onset of analgesia was achieved with the ultrasound-guided method, although the difference was not statistically significant. This could be because it is challenging to determine the precise anatomical site where the needle should be inserted, which could result in improper insertion of the needle and inadequate injection . Therefore, the current study offers a foundation of reference for refining US-guided sacrococcygeal epidural injection techniques in donkeys.

This study has limitations that should be noted. First, the effectiveness of an ultrasound technique depends on its operator; proficiency in ultrasound is a distinct skill that requires training and practice. Second, the study's sample size of animals was small. Third, the effects of age, sex, body condition score, and body weight of each animal on the size and contents of the epidural space were not assessed. Future investigations should take these limitations into account in order to confirm the usefulness of the procedure in a larger sample of clinical cases and to arrive at a definitive conclusion.
In conclusion, US-guided epidural injection at the sacrococcygeal space in donkeys provides a number of benefits compared with traditional blind techniques, including the ability to directly visualize the needle and the distribution of local anesthetic and to avoid unintentional vascular damage. Most US-guided sacrococcygeal space injections are straightforward techniques that are easy to learn and can be used in field conditions. Consequently, more research is required to assess this method in clinical cases.

Conclusion

US-guided injection procedures required significantly less time for correct needle placement than the blind approach. A shorter time to the onset of analgesia was achieved with the ultrasound-guided method, although the difference was not statistically significant. The ultrasound-guided epidural injection technique provided a number of benefits over the blind one, including the capacity to directly view the needle and the distribution of local anesthetic, avoid unintentional vascular damage, and quickly produce analgesia. In conclusion, the ultrasound-guided epidural injection technique provides enhanced visualization of anatomical landmarks for accurate injection placement, offering efficient and safe anesthesia for surgical approaches in the sacrococcygeal region of Egyptian donkeys.
Sex‐ and

Approximately 1 in 5 antihypertensive medication studies informing hypertension guidelines do not incorporate any sex‐ and gender‐based reporting or analysis. Fewer than 1 in 4 antihypertensive medication studies have appropriate sex‐based representation in study participants, and sex‐stratified analysis of results is not common. Sex‐stratified adverse events are rarely reported. Despite an emphasis on precision medicine and mandates from journals and funding agencies, antihypertensive medication studies informing hypertension guidelines rarely incorporate sex‐ and gender‐based reporting and analysis. Greater attention to sex‐ and gender‐based factors in research is required to optimally inform clinical practice and improve management of hypertension in all individuals. The authors declare that all supporting data are available within the article and its online supplementary files.

We systematically reviewed all literature cited in the International Society of Hypertension (2020), Latin American Society of Hypertension (2017), European Society of Cardiology/European Society of Hypertension (2018), Pan‐African Society of Cardiology (2020), American College of Cardiology/American Heart Association (2017), and Hypertension Canada (2020) guidelines (Table ). The terms sex and gender are not synonymous; however, recognizing that these terms are often used interchangeably in studies, we assumed women to mean female sex and men to mean male sex. The inclusion criteria were observational studies, randomized controlled trials, and systematic reviews involving antihypertensive medications. The exclusion criteria were single‐sex studies, guidelines, and commentaries (Table ). Two reviewers (N.G. and K.T.M.)
independently extracted data using a standardized data abstraction form, with data items including the ratio of male‐to‐female participants, analysis of baseline demographics and study outcomes by sex, reporting of adverse events (AEs), and stratification of AEs by sex. Any event reported using terminology such as "adverse effects, side effects, adverse outcomes, or safety outcomes" was defined as an "AE."

The participation‐to‐prevalence ratio (PPR) is a measure of the representation of a specific group in the study population relative to the prevalence of the condition of interest in the same group. In the context of sex and hypertension, this metric can be calculated by dividing the proportion of men or women in the study population by the proportion of men or women, respectively, with hypertension in the general population, which, given that the global prevalence of hypertension is roughly equal in men and women, , , is 0.5. As a PPR approximating 1 suggests a representative study population composition, a PPR <0.8 or >1.2 indicates underrepresentation or overrepresentation, respectively, of male or female participants. Calculation of the PPR is an important metric in the development of sex‐specific guidelines. , Institutional review board approval and informed consent were not required for this study, as all data were publicly available.

Of 1659 unique articles cited in the 6 guidelines, 331 studies met inclusion criteria (Figure ). Of these 331 studies, 267 (81%) reported the sex of participants (Table ), with only 73 (22%) reporting a male‐to‐female PPR of 0.8 to 1.2, 140 (42%) a PPR of >1.2 (overrepresentation of men), and 40 (12%) a PPR of <0.8 (overrepresentation of women), whereas the PPR could not be determined for 14 studies (4%) (Figures and ). Baseline characteristics were stratified by sex in 11 studies (3%), and 67 (20%) considered sex in analysis through statistical adjustment (n=18 [5%]) or stratification (n=49 [15%]).
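As a quick illustration, the PPR calculation and the 0.8–1.2 thresholds described above can be sketched in a few lines. This is an illustrative sketch only; the function and variable names are ours, not from the study.

```python
def participation_to_prevalence_ratio(prop_in_study, prop_with_condition=0.5):
    """PPR: proportion of one sex among study participants divided by the
    proportion of that sex among people with the condition (~0.5 for
    hypertension, whose global prevalence is roughly equal in men and women)."""
    return prop_in_study / prop_with_condition

def classify_ppr(ppr):
    """Thresholds used in the article: 0.8-1.2 is considered representative."""
    if ppr < 0.8:
        return "underrepresented"
    if ppr > 1.2:
        return "overrepresented"
    return "representative"

# Example: a hypothetical trial in which 70% of participants are men
ppr_men = participation_to_prevalence_ratio(0.70)
print(ppr_men, classify_ppr(ppr_men))  # 1.4 overrepresented
```

Applied to the figures reported here, a study whose male proportion is 0.70 would be counted among the 140 studies with a PPR >1.2 (overrepresentation of men).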
Although 105 studies (32%) reported AEs, only 2 (0.6%) stratified AEs by sex. Of the 267 studies that reported the sex or gender of participants, 87 (33%) used sex‐based terms (eg, male or female) to describe their participants, 24 (9%) used gender‐based terms (eg, men or women), and 156 (58%) used sex‐ and gender‐based terms interchangeably. No study reported how the sex or gender of participants was determined.

Our key findings were as follows: (1) approximately 1 in 5 antihypertensive medication studies informing hypertension guidelines did not incorporate any sex‐ and gender‐based reporting or analysis; (2) <1 in 4 studies had appropriate sex‐based representation in study participants; (3) approximately 15% of studies reported sex‐stratified outcomes; (4) sex‐stratified AEs were rarely reported; and (5) sex‐ and gender‐based terminology was commonly used interchangeably.

The results highlight that, despite the increasing emphasis on precision health and personalized cardiovascular care, , , , , few antihypertensive medication studies informing commonly used guidelines for hypertension management incorporated principles of sex‐ and gender‐based analysis, including targeting a study PPR of 0.8 to 1.2, reporting baseline participant demographics by sex and gender, or analyzing or reporting study outcomes and AEs stratified by sex and gender. These findings are concerning given the recognized sex and gender differences in the pathophysiology and cardiovascular and kidney risks of hypertension, , , , , , , as well as in access, , adherence, and AEs , , , related to antihypertensive agents. The National Institutes of Health launched the Precision Medicine Initiative and instituted the Sex as a Biological Variable Policy in 2015 , ; and although the guidelines included in this review were published between 2017 and 2020, the design of this study does not capture more recently published research.
However, a subanalysis of literature informing the guidelines published during or after 2015 demonstrated a similar pattern to our overall results (Figure ). Our findings are also consistent with a recent scoping review of antihypertensive medication studies published between 1964 and 2020 that showed substantial underrepresentation of female participants in clinical trials, with only 3.7% of studies stratifying results by sex. Our results are also in keeping with previous work highlighting the underrepresentation of women in cardiovascular and kidney trials. , , , , Pharmacokinetics and pharmacodynamics of drugs differ by sex, which may account for greater AEs and lower adherence in women compared with men using antihypertensive medications, , , , underscoring the importance of reporting sex‐stratified AEs. The incomplete reporting of sex and gender and an emphasis on sex rather than gender noted in this study have also been observed in other research settings. , , Similar to our findings showing most studies used the terms sex and gender interchangeably, only 35% of Canadian clinical practice guidelines published between 2013 and 2015 for noncommunicable health conditions that included “sex” and/or “gender” used the terms correctly according to the Sex and Gender Equity in Research guidelines. This may partially reflect a lack of integration of sex, as a biological attribute, and gender, as a socially constructed identity, in health research reporting guidelines. In a systematic review of 407 reporting guidelines listed on the Equator Network registry and published between 1995 and 2018, only 1 reporting guideline met the criteria of the correct use of sex and gender concepts. The fact that no study reported both the sex and the gender of participants deserves mention. 
The assumption that sex assigned at birth always aligns with gender identity does not take into account the growing global transgender, gender‐diverse, and nonbinary populations; moreover, these populations are impacted by disparities across a variety of cardiovascular risk factors compared with their cisgender peers. Most research on hypertension in transgender or nonbinary adults has focused on the impact of gender‐affirming hormone therapy on blood pressure, which to date has been overall inconclusive. , Application of frameworks , to improve the incorporation of sex and gender considerations in blood pressure research has the potential to create new knowledge in the management of hypertension.

Our study provides evidence that the literature informing guidelines for the management of hypertension poorly incorporates sex and gender considerations in study design, analysis, and reporting despite mandates from funders, , , journals, and governments. , , Structured frameworks exist to determine whether sex‐specific recommendations should be made in clinical guidelines. However, research informing guidelines first needs to systematically incorporate sex‐ and gender‐related considerations to achieve the goal of optimizing health outcomes for all.

N. Gulamhusein was supported by a Canada Graduate Scholarship–Master's through the Canadian Institutes of Health Research.
Renal sympathetic denervation 2024 in Austria: recommendations from the Austrian Society of Hypertension

The following recommendations should guide Austrian physicians in the use of renal sympathetic denervation (RDN) in patients with different scenarios of arterial hypertension. This is an update of the previous guidelines from the Austrian Society of Hypertension from 2014 , as new clinical evidence about efficacy and safety has evolved , a U.S. Federal Food and Drug Administration (FDA) premarket approval for two devices was released , and new guidance (clinical practice guidelines, position papers, consensus statements) from the European Society of Cardiology and the European Society of Hypertension has recently been published .

The concept of RDN stems from the fact that increased sympathetic drive is a well-known key driver of systemic arterial hypertension . As early as the 1930s, a surgical procedure known as thoracolumbar splanchnicectomy was developed for patients with severe forms of arterial hypertension and demonstrated the blood pressure-lowering effect of sympathectomy . The goal of RDN is the denervation of sympathetic fibers in the adventitia of the renal arterial vasculature. In the early 2000s, minimally invasive catheter-based endovascular systems were developed that facilitate the ablation of renal sympathetic nerves with high efficacy and a low complication rate.

The first hype around RDN started after the publication of the initial feasibility study in 2009 and the first controlled study, SYMPLICITY HTN-2, in 2010 . The latter had a randomized design and found a staggering reduction of office blood pressure (BP) by 32/12 mm Hg after 6 months, while there was no difference in the control group (+1/0 mm Hg). After clinical certification, the procedure was included in daily clinical care in Austria and other countries.
The Austrian RDN registry, with up to 300 patient cases, found reductions in office BP similar to SYMPLICITY HTN-2, and recommendations from the Austrian Society of Hypertension were published in 2012 and 2014 . The enthusiasm for RDN was dampened after publication of the first large sham-controlled study, SYMPLICITY HTN-3, in 2014, which found essentially no effect of RDN, i.e., similar reductions of BP in the RDN and sham groups (Table ). These results led to a stop in the use of and reimbursement for the procedure in many countries. The European Society of Cardiology (ESC)/European Society of Hypertension (ESH) guidelines for the management of hypertension published in 2018 did not recommend RDN outside of clinical trials at all (class III indication) .

Later studies identified regression to the mean, asymmetric data handling, and a motivation towards better adherence to antihypertensive drugs in the RDN group as the main drivers of the positive results of the first studies . These considerations, based on results from the SYMPLICITY HTN-3 study and other first-generation sham-controlled clinical trials, led to a complete rethinking of procedural and patient-related aspects of RDN. The following advances were made in second-generation studies:

- Better screening of study patients by using ambulatory BP monitoring and regular adherence checks before and after the intervention in both the RDN and the sham group.
- Increased efficacy of RDN by using multielectrode second-generation devices, optimization of perioperative workflows, and adequate training of operators.
- Exclusion of unintended bias by use of low-noise outcome variables (such as ambulatory BP instead of office BP) and performance of a blinded sham procedure in the control group.

Second-generation sham-controlled trials now paint a homogeneous picture regarding the efficacy of RDN in patients with arterial hypertension (Table ).
In patients with mild, moderate, and resistant hypertension, a consistent reduction of ambulatory BP compared to sham has been shown 2–36 months after the procedure. On the basis of these data, RDN is clearly a suitable option for BP lowering in selected patients with arterial hypertension, in accordance with the recently published ESH guidelines for the management of arterial hypertension .

Currently, two different physical principles are mainly used for RDN: radiofrequency (RF) ablation and ultrasound (US) ablation. The first available system on the market was based on RF . The Symplicity Renal Denervation System® (Medtronic, Minneapolis, MN, USA) was first developed to perform point-by-point ablation around the renal artery. The first generation was time-consuming and was deemed less efficacious because only the proximal parts could be ablated. The second generation (Symplicity Spyral, Medtronic) enables the simultaneous ablation of several points of the renal arterial system in a spiral configuration, as well as ablation of the branches of the main renal artery. The Paradise Ultrasound Denervation System (Recor Medical Inc, Palo Alto, CA, USA) is available for RDN using unfocused US, which enables homogeneous penetration and 360° ablation of the perivascular tissue.

Based on the favorable results of the second-generation trials outlined above, both the RF and the US devices received FDA premarket approval for clinical use in November 2023. In Austria, RDN will be reimbursed in 2025 on a preliminary basis (NUB, new investigation and treatment methods, NUB—neue Untersuchungs- und Behandlungsmethoden).

An externally delivered US device has been explored but failed to show valid data on blood pressure lowering . Other methods for RDN are under investigation. RDN using perivascular alcohol injection has been explored in early studies, which showed promising results .
A sham-controlled second-generation study has recently been published; the results were neutral . Currently, no final recommendation can be given regarding this method.

RDN is an invasive, preventive procedure that causes a clear reduction in BP as a surrogate marker, but no direct reduction of cardiovascular outcomes has yet been shown. Consequently, RDN has to be a proven low-risk procedure to be accepted as an alternative to antihypertensive medication, which serves as the gold standard for treatment of arterial hypertension and is highly effective with a low risk profile. Safety data from multiple randomized controlled trials and long-term registries are available. If performed by experienced operators, RDN can generally be considered a safe procedure.

The most common complications are access site-related and occur in around 1% of cases. They can be avoided by US-guided puncture and/or the use of a vascular closure device. The radiation dose varies, and data on long-term problems due to radiation are lacking, as are data on long-term negative effects of anesthesia. The risk of contrast-associated acute kidney injury can be prevented by balanced hydration. Major vascular complications other than at the access site (i.e., dissection, perforation, intrarenal hematoma) are conceivable in theory but exceedingly rare events. In a meta-analysis of 50 trials and 5769 patients undergoing RF RDN, 7 intraprocedural dissections resulting in stent implantations were reported . In general, these acute complications can be avoided with proper RDN techniques .

Previously feared long-term complications of RDN, such as decline in renal function and renal artery stenosis, have not been observed in large observational studies and controlled trials . In one meta-analysis, renal arterial long-term complications (i.e., new renal artery stenoses) occurred with a similar incidence as in an untreated hypertensive population .
In another meta-analysis, it was concluded that renal function did not change significantly for at least 9 months after RDN .

Current data show that, compared to sham-group patients, RDN leads to a moderate, albeit clinically significant, reduction in 24-h SBP of 4–7 mm Hg in nearly all patient groups, from hypertension grade I to resistant hypertension. A reduction in inpatient admissions due to hypertensive emergencies was also recorded after RDN . However, procedural success depends on specific patient-related and procedure-related factors, which are discussed in the following sections.

The durability of RDN remains an open question, as preclinical data suggest that reinnervation may theoretically occur 30 months after RDN in a mouse model . Long-term randomized clinical trials, however, indicate a consistent BP reduction, with a trend to lower BP over time, for at least 3 years . Recently, long-term follow-up studies 9 and 10 years after RDN became available, indicating a long-lasting BP-lowering effect of the procedure . Based on registry data, the reduction in BP is independent of the number and class of antihypertensive drugs at baseline . Furthermore, after 3 months of follow-up, more patients decreased than increased the number of antihypertensive drugs.

First-generation randomized controlled trials of RDN showed that the procedure can lead to suboptimal results and a higher risk of complications in inexperienced centers with a low case load. For example, in the SYMPLICITY HTN-3 study , low operator experience and ineffective RDN have been discussed as probable reasons for the neutral outcome . Nonstandardized patient pathways can lead to inadequate patient selection without guideline-directed medical treatment or exclusion of secondary hypertension. Patients with unsuitable anatomy have to be identified preprocedurally. Furthermore, institutional expertise to adequately treat rare complications has to be present, especially in vascular surgery.
The importance of institutional experience regarding RDN is highlighted in the ESH guidelines, which also recommend limiting the use of RDN to experienced centers . RDN centers should have a dedicated hypertension outpatient department, an inpatient ward, and departments of radiology, cardiology, nephrology, and laboratory diagnostics, as well as on-site vascular surgery and a coronary/intensive care unit. Specialization in the management of complex patients with arterial hypertension is necessary and can be evidenced by dedicated diplomas (Hochdruckspezialist Österreichische Gesellschaft für Hypertensiologie, European Specialist in Hypertension ESH, Excellence Center ESH).

A multidisciplinary hypertension team (MDT) can be formed to enable the informed discussion of patients suitable for RDN from various viewpoints. We strongly recommend that a hypertension specialist certified by the ESH, or a "Hochdruckspezialist" certified by the Austrian Society of Hypertension, take part in hypertension team meetings. In addition to RDN operators, a clinical cardiologist, a nephrologist, and a specialist experienced in sedation (e.g., anesthesiologist, intensive care specialist) should participate. This multidisciplinary approach is also endorsed by the ESH guidelines (class I recommendation ). The final decision to perform RDN should be made by a dedicated multidisciplinary hypertension team that includes at least a certified hypertension specialist, an RDN operator, a clinical cardiologist, a nephrologist, and an expert on analgosedation.

RDN operators should be experts in percutaneous cardiovascular interventions, including access site management, radioprotection, periprocedural BP management, analgesia, and the renal arterial anatomy. We recommend that operators first gain experience in vascular interventions before performing RDN. Furthermore, operators should receive hands-on training using a bench model of RDN and off-site attendance at an active RDN center.
Proctoring of the first cases should reduce the risk of complications as operators begin to work independently. RDN operators should have performed a sufficient number of RDN procedures with a proctor before performing an RDN procedure independently. To retain experience, operators should perform RDN procedures on a regular basis.

Adherence to medical treatment

Ensuring adherence to medical treatment is one of the cornerstones of managing initially asymptomatic diseases such as hypertension . Current guidelines for the treatment of hypertension recommend the use of antihypertensive polypills to increase adherence through reduction of side effects and ease of use . An informed discussion with the patient about possible side effects is essential when starting antihypertensive treatment. The time of drug intake, in the morning or in the evening, may be adapted to best fit the daily life of the patient, as the TIME study did not show any benefit of evening over morning dose administration . Good adherence leads to better outcomes, but assessment of adherence is challenging outside clinical trials and may be very difficult to measure in the clinical routine . The adherence to antihypertensive medication should be checked and discussed with the patient.

As evidenced, for instance, in all recent high-quality RDN trials, persistent and complete adherence is hard to achieve in a large number of patients. As the primary goal is BP control, patients who are repeatedly nonadherent (if this reflects the unwillingness of the patient to take drugs) or intolerant to multiple antihypertensive drugs can also be considered for RDN, after information about the potential lack of effect, the benefits, and the risks associated with the procedure. These patients may be on fewer than three drugs at the time of their selection for RDN .
Adherence to medical treatment should be ascertained before considering RDN in patients with arterial hypertension, for instance with witnessed drug intake, laboratory drug monitoring, or monitoring of prescription refills. The results of these tests should be discussed with the patient. As BP lowering is the goal to reduce cardiovascular risk, RDN could be an option under certain conditions for patients unable to be fully adherent to antihypertensive drugs, for instance due to side effects.

Screening for RDN and shared decision making

All patients considered for RDN have to undergo investigations, screening for secondary hypertension as recommended by international guidelines, and optimization of treatment at a hypertension clinic. The 24-h ambulatory BP monitoring (ABPM) is an integral part of the diagnostic work-up to exclude white-coat hypertension. Before RDN can be considered, secondary hypertension has to be excluded, antihypertensive treatment should be optimized at a hypertension clinic, and persistence of high BP has to be evaluated using ABPM .

As RDN is an invasive procedure, available and safe oral reserve antihypertensive medications as possible alternatives, potential complications, and the need in most instances to continue medical antihypertensive treatment despite the procedure should be discussed thoroughly with the patient. In addition to individual clinical expertise and available external clinical evidence from high-quality RDN studies , the patient's specific needs, as well as possible intolerances to medical treatment, should be incorporated in the final decision to perform RDN. The patient's needs and expectations should be included in the final decision to perform RDN.

Resistant hypertension

Resistant hypertension is defined as not reaching BP targets despite treatment with at least three antihypertensive medications, including one diuretic, at maximally tolerated doses .
Patients with resistant hypertension are the best-studied hypertensive population undergoing RDN and are therefore the preferred patient group. We propose the following inclusion criteria for patients undergoing RDN. RDN is a reasonable additional treatment option in patients with resistant hypertension who:

- take at least 3 different antihypertensive medications, one of which should be a diuretic;
- have an average 24-h SBP of ≥ 130 mm Hg or an average daytime SBP of ≥ 135 mm Hg in a recent 24-h BP recording;
- are at least 18 years old;
- have an estimated glomerular filtration rate (eGFR) of ≥ 40 ml/min/1.73 m² body surface area.

Mild hypertension

In patients with arterial hypertension grade I who take only few antihypertensives (usually defined as 0–1 antihypertensives), RDN may prevent the necessity of taking antihypertensives at all. Two studies found positive results of RDN in this patient population ; however, a high number of different well-tolerated and well-studied antihypertensive medications are available for this patient population. Therefore, the panellists believe that RDN may only be considered in this patient population in selected cases, after carefully evaluating benefits and harms and especially taking the individual tolerability of antihypertensive medication into account. In patients with mild hypertension, RDN may be considered in selected cases, specifically in the presence of intolerance to several antihypertensive drug classes, considering the patient's needs and shared decision making.

Uncontrolled hypertension with intolerance to antihypertensive drugs

Patients with uncontrolled hypertension have been included in several sham-controlled trials, with improvement in BP control .
The use of RDN may therefore be considered an option for patients who have uncontrolled hypertension despite attempted lifestyle modifications and antihypertensive medication but who are either intolerant to additional medication or do not wish to take additional medications, and who are willing to undergo RDN after shared decision making . RDN can be considered as a treatment option in patients with an eGFR > 40 ml/min/1.73 m² who have uncontrolled BP, if drug treatment elicits serious side effects and poor quality of life.
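The inclusion criteria proposed earlier for RDN in resistant hypertension can be expressed as a simple rule check. This is an illustrative sketch only; the function and parameter names are ours, and eligibility ultimately rests with the multidisciplinary hypertension team, not a checklist.

```python
def rdn_reasonable_for_resistant_htn(n_antihypertensives: int,
                                     includes_diuretic: bool,
                                     mean_24h_sbp: float,
                                     mean_daytime_sbp: float,
                                     age_years: int,
                                     egfr: float) -> bool:
    """Check the proposed inclusion criteria for RDN in resistant
    hypertension: >=3 drugs including a diuretic, 24-h SBP >=130 mm Hg
    or daytime SBP >=135 mm Hg, age >=18 years, and eGFR >=40 ml/min/1.73 m2."""
    return (n_antihypertensives >= 3
            and includes_diuretic
            and (mean_24h_sbp >= 130 or mean_daytime_sbp >= 135)
            and age_years >= 18
            and egfr >= 40)

# Example: 3 drugs incl. a diuretic, 24-h SBP 138 mm Hg, age 62, eGFR 55
print(rdn_reasonable_for_resistant_htn(3, True, 138, 128, 62, 55))  # True
```

Note that the screening steps that precede these criteria (exclusion of secondary hypertension, confirmation by ABPM, and adherence assessment) are prerequisites and are not captured by such a check.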
Adherence to medical treatment should be ascertained before considering RDN in patients with arterial hypertension, for instance with witnessed drug intake, laboratory drug monitoring, or monitoring of prescription refills. The results of these tests should be discussed with the patient. As BP lowering is the goal to reduce cardiovascular risk, RDN could, under certain conditions, be an option for patients unable to be fully adherent to antihypertensive drugs, for instance due to side effects. All patients considered for RDN have to undergo investigations, screening for secondary hypertension as recommended by international guidelines, and optimization of treatment at a hypertension clinic. Twenty-four-hour ambulatory BP monitoring (ABPM) is an integral part of the diagnostic work-up to exclude white-coat hypertension. Before RDN can be considered, secondary hypertension has to be excluded, antihypertensive treatment should be optimized at a hypertension clinic, and persistence of high BP has to be confirmed using ABPM. As RDN is an invasive procedure, available and safe oral reserve antihypertensive medications as possible alternatives, potential complications, and the need to continue medical antihypertensive treatment despite the procedure in most instances should be discussed thoroughly with the patient. In addition to individual clinical expertise and available external clinical evidence from high-quality RDN studies, the patient's specific needs and expectations, as well as possible intolerances to medical treatment, should be incorporated into the final decision to perform RDN. Resistant hypertension is defined as not reaching BP targets despite treatment with at least three antihypertensive medications, including one diuretic, at maximum tolerated doses. Patients with resistant hypertension are the best-studied hypertensive population undergoing RDN and are therefore the preferred patient group.
We propose the following inclusion criteria for patients undergoing RDN. RDN is a reasonable additional treatment option in patients with resistant hypertension who:
- Take at least 3 different antihypertensive medications, one of which should be a diuretic.
- Have an average 24-h SBP of ≥ 130 mm Hg or an average daytime SBP of ≥ 135 mm Hg in a recent 24-h BP recording.
- Are at least 18 years old.
- Have an estimated glomerular filtration rate of ≥ 40 ml/min/1.73 m² body surface area.

In patients with grade I arterial hypertension who take only a few antihypertensives (usually defined as 0–1 antihypertensives), RDN may obviate the need to take antihypertensives at all. Two studies found positive results of RDN in this patient population; however, a large number of different well-tolerated and well-studied antihypertensive medications are available for this population. Therefore, the panellists believe that RDN may only be considered in this patient population in selected cases after carefully weighing benefits and harms, especially taking the individual tolerability of antihypertensive medication into account. In patients with mild hypertension, RDN may be considered in selected cases, specifically in the presence of intolerance to several antihypertensive drug classes, considering the patient's needs and shared decision-making. Patients with uncontrolled hypertension have been included in several sham-controlled trials with improvement in BP control. The use of RDN may therefore be considered an option for patients with uncontrolled hypertension despite attempted lifestyle modifications and antihypertensive medication who are either intolerant to additional medication or do not wish to be on additional medications, and who are willing to undergo RDN after shared decision-making.
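The proposed inclusion criteria amount to a simple decision rule. The sketch below encodes them as a hypothetical eligibility check (the function name and parameters are illustrative, not part of the position paper; thresholds are taken verbatim from the criteria listed above):

```python
def rdn_inclusion(age, n_drugs, has_diuretic, sbp_24h, sbp_daytime, egfr):
    """Return True when all proposed inclusion criteria for RDN are met.

    age          -- years (must be >= 18)
    n_drugs      -- number of antihypertensive medications (>= 3)
    has_diuretic -- True if one of the drugs is a diuretic
    sbp_24h      -- average 24-h systolic BP in mm Hg (criterion: >= 130), or
    sbp_daytime  -- average daytime systolic BP in mm Hg (criterion: >= 135)
    egfr         -- estimated GFR in ml/min/1.73 m^2 (>= 40)
    """
    return (age >= 18
            and n_drugs >= 3
            and has_diuretic
            and (sbp_24h >= 130 or sbp_daytime >= 135)
            and egfr >= 40)
```

Note that such a check only screens the listed criteria; the contraindications and shared decision-making discussed in this paper still apply.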
RDN can be considered as a treatment option in patients with an eGFR > 40 ml/min/1.73 m² who have uncontrolled BP, if drug treatment elicits serious side effects and poor quality of life. In 2023 the European Society of Hypertension released new guidelines incorporating RDN as a treatment option in patients with an eGFR > 40 ml/min/1.73 m² with uncontrolled BP despite the use of antihypertensive drug combination treatment, or if drug treatment would lead to serious side effects and reduced quality of life (class of recommendation II, level of evidence B). Furthermore, RDN can be considered as an additional treatment option in patients with true resistant hypertension and an eGFR > 40 ml/min/1.73 m² (class of recommendation II, level of evidence B); however, RDN should only be performed in experienced specialized centers, and the selection of patients undergoing RDN must incorporate shared decision-making (class of recommendation I for both recommendations, expert opinion). Although RDN undoubtedly lowers BP in groups of patients, the effect of the intervention in individual patients is heterogeneous, resembling the situation with different antihypertensive drug classes. Predictors of the individual response are currently under investigation; potential candidates include heart rate, pulsatile hemodynamics/arterial stiffness, renin, and many others. Due to technical or individual considerations or absence of evidence, patients with the following factors should not undergo RDN (Fig. ). RDN should not be performed in patients with the following prohibitive conditions (contraindications):
- Unsuitable renal arterial anatomy.
- Presence of accessory untreatable arteries.
- Inappropriate vessel diameter.
- Advanced renal artery atherosclerosis.
- Renal artery stenosis.
- Fibromuscular dysplasia.
- Previous renal artery stenting.
- Secondary hypertension.
- Undergoing peritoneal dialysis or hemodialysis.
- Unstable clinical situations (acute coronary syndromes, acute cerebrovascular events, etc.).
- Pregnancy.
- Age < 18 years or > 85 years.

RDN should not be performed in the following situations due to insufficient clinical evidence:
- Severely impaired kidney function (eGFR < 40 ml/min).
- Single functioning kidney.
- Kidney transplant recipients.

Procedural planning and patient preparation

Adequate imaging is crucial for procedural planning and identification of potential anatomical ineligibilities. Non-invasive renal artery imaging using either computed tomography or magnetic resonance imaging should be preferred over duplex ultrasound to identify:
- The presence of accessory arteries.
- Anatomical anomalies that prohibit an RDN procedure (e.g., inappropriate vessel diameter, untreated atherosclerotic or fibromuscular dysplasia, renal artery stenosis).
- The extent of atherothrombotic disease of the abdominal aorta and iliofemoral arteries.

Selective renal angiography immediately before RDN remains the gold standard for identification of renal artery abnormalities. All measures should be undertaken to minimize the risk of complications. This includes adequate preparation of the procedure as well as sophisticated bail-out strategies. The following recommendations are adapted from the clinical consensus statement of the ESC Council on Hypertension and the European Association of Percutaneous Cardiovascular Interventions regarding renal denervation in the management of hypertension in adults. We recommend the establishment of a standard operating procedure that includes acute management in case of complications. Continuous monitoring of vital parameters should be performed to identify complications early. If applicable, antidotes to anaesthetics should be available in the catheter laboratory (e.g., naloxone and flumazenil). Patients should be hydrated to euvolemia to reduce the risk of acute kidney injury. Intraprocedural administration of unfractionated heparin (100 U/kg or a target ACT > 250 s) is advised.
Preprocedural aspirin should be administered as a loading dose, followed by 100 mg daily until 1 month postprocedure. In the case of oral anticoagulant therapy, antithrombotic therapy should be tailored according to the ESC guidelines for chronic coronary syndromes related to endovascular interventions.

Procedure

As ablation of the renal arteries is painful, patients should be sedated during the procedure by a specialist trained in sedation. Analgesia may be performed with opioids. Vital signs should be monitored, and intravenous drugs for BP control should be available in the catheter laboratory. For RDN we recommend analgosedation with low doses of opioids (e.g., fentanyl) together with sedating drugs (e.g., midazolam or propofol). Intra-arterial nitrates are recommended preprocedurally (in the absence of hypotension). BP should be monitored invasively and corrected when necessary. As a significant proportion of complications derive from the vascular access, it should be gained with maximum caution, and all available tools should be used to minimize the risk of adverse events. Radiation should be kept to a minimum. A 6 French catheter is used for RF ablation and a 7 French catheter for the US-based device. Femoral arterial access may be performed under US guidance if the operator is experienced in doing so. Vascular closure devices should be used to reduce the risk of complications. Modern monoplane or biplane angiographic systems should be used to reduce the radiation dose. At the end of the procedure, angiography of the renal artery should exclude potential renal parenchymal or arterial injuries.
Regular follow-up of patients undergoing RDN is necessary to monitor and, if necessary, react to changes in BP profiles or renal function. With regular follow-up, long-term complications can be identified and treated earlier. On suspicion of a late renal vascular complication, renal angiography by computed tomography or vascular ultrasound should be performed. Centers performing RDN are responsible for adequate follow-up at 3, 6 and 12 months after the procedure and at yearly intervals thereafter, including assessment of renal function and BP.
While RDN shows consistent BP reduction in selected patients in the setting of randomized controlled trials in experienced centers, there are still limited data on the use of second-generation devices in daily clinical practice in centers with less experience. It is therefore crucial to document the efficacy and safety of RDN outside of clinical trials. This documentation furthermore serves as quality control for centers performing RDN. A national prospective registry will be re-established to capture baseline, procedural and outcome data of all RDN procedures in Austria. Procedural and outcome data of all patients undergoing RDN should be collected and included in a prospective trial, study and/or registry. This position paper should be seen as guidance for physicians performing RDN in Austria. When specific conditions regarding the RDN center, the patient and the procedure are fulfilled, RDN can be a useful supplement to medical antihypertensive treatment in patients with arterial hypertension.
Identification of drug responsive enhancers by predicting chromatin accessibility change from perturbed gene expression profiles
For example, SNPs in the promoter region of the drug-metabolizing enzyme CYP3A4 cause changes in the expression level of the gene, which in turn alter the efficacy of the drug. Computational models have also suggested close relationships between non-coding regions and small molecules. Therefore, systematic identification of regulatory elements related to drug sensitivity is of great significance for enhancing the causal understanding of drug-associated genes and revealing genetic variations that could influence a patient's response to drug treatment. One way to identify drug responsive regulatory elements is to assess the alterations in chromatin activity resulting from drug perturbations. High-throughput sequencing technologies, such as Chromatin Immunoprecipitation Sequencing (ChIP-seq), not only effectively determine the position of a large number of regulatory elements in the genome but also improve the annotation of the function of these elements. High-throughput chromosome conformation capture (Hi-C), split-pool recognition of interactions by tag extension (SPRITE), genome architecture mapping (GAM), and chromatin interaction analysis by paired-end tag sequencing (ChIA-PET) can detect chromatin interactions in the mammalian nucleus, and pulldown methods such as HiChIP and PLAC-seq, which integrate ChIP-seq and Hi-C, reveal the interactions between regulatory elements and their targets. The Encyclopedia of DNA Elements (ENCODE) database has curated and deposited all these high-throughput sequencing data to identify functional elements within the human genome. It includes ChIP-seq data for over 2700 cell lines, DNase-seq data for over 500 cell lines, and RNA-seq data for over 200 cell lines. However, perturbed chromatin activity profiles, in which chromatin activity is measured after a drug perturbation, are seriously lacking compared with the main large-scale perturbed gene expression profile datasets.
This greatly hinders progress in directly connecting causal perturbations to their consequences for regulatory element activity. In this paper, we aim to overcome the lack of perturbation chromatin activity data by computationally predicting chromatin accessibility via paired expression and chromatin accessibility data accumulated in ENCODE and Roadmap. These valuable paired data offer machine learning a gold-standard dataset for predicting regulatory elements' activity from gene expression. For instance, Zhou et al. developed a computational framework (BIRD) to predict the chromatin accessibility of a genomic locus, measured by DNase I hypersensitivity (DH), from a biological sample's transcriptome. The application of BIRD to predicting TF-binding sites (TFBSs) turned publicly available gene expression samples in the Gene Expression Omnibus (GEO) into a regulome database containing regulatory element activities. Inspired by this application, drug-dependent enhancers could be detected by utilizing the main large-scale perturbation gene expression profile datasets, predicting regulatory elements' activity upon drug treatment, and finding enhancers that display significantly different chromatin accessibility after drug treatment. In particular, a computational framework, called PERD, was developed to Predict the Enhancers Responsive to Drug by assessing their chromatin accessibility changes after drug treatment. A regularized regression model with potential TF-enhancer and enhancer-gene interactions as constraints was constructed to predict enhancers' chromatin accessibility. The validation on paired DNase-seq and RNA-seq data from ENCODE and Roadmap indicated the feasibility of using the transcriptome of an enhancer's associated genes and binding TFs to predict its chromatin accessibility. Then, the enhancer chromatin accessibility before/after treatment with a given drug was predicted from perturbed gene expression profile datasets and compared.
The enhancers that displayed significantly different chromatin accessibility were output as the drug responsive enhancers. As a pilot study, the drug responsive enhancers were then related to TF motifs and pharmacogenetics (PGx) to identify the variants related to drug perturbations. PERD was proposed to identify drug responsive enhancers by predicting the changes in enhancers' chromatin accessibility from perturbed gene expression profiles. In particular, prior knowledge about enhancer-associated genes and TF binding regions was first collected to construct the enhancer-gene and TF-enhancer networks, respectively (Fig. ). Then, a regularized regression model was established to predict an enhancer's chromatin accessibility from the transcriptional expression of its associated genes and binding TFs (Fig. ). The model was trained on the paired DNase-seq and RNA-seq data in ENCODE. The drug responsive enhancers were revealed by identifying enhancers that display significantly changed chromatin accessibility after drug treatment (Fig. ). More details of PERD are given in Methods. Several validations were conducted to evaluate the PERD model. It was found that PERD can efficiently reveal an enhancer's chromatin accessibility on the basis of transcriptome data and identify non-coding regions closely related to drug perturbation.

Predicting enhancer's chromatin accessibility from the expression level of its associated genes and binding TFs

The PERD model (1) was designed to predict an enhancer's chromatin activity from the transcriptional expression of its downstream associated genes and upstream binding TFs. To evaluate the performance of the regression algorithms, the cross-enhancer PCC, cross-cell PCC, and squared prediction error were introduced.
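The three evaluation metrics can be computed from a matrix of predicted and a matrix of measured DH signals. The sketch below is one plausible implementation (the exact averaging used in the paper may differ): the cross-enhancer PCC is a per-cell-line correlation across enhancers, and the cross-cell PCC is a per-enhancer correlation across cell lines.

```python
import numpy as np

def pearson_rows(a, b):
    """Row-wise Pearson correlation between two equally shaped matrices.
    (Rows with zero variance would yield a division by zero; not handled here.)"""
    a_c = a - a.mean(axis=1, keepdims=True)
    b_c = b - b.mean(axis=1, keepdims=True)
    num = (a_c * b_c).sum(axis=1)
    den = np.sqrt((a_c ** 2).sum(axis=1) * (b_c ** 2).sum(axis=1))
    return num / den

def evaluate(pred, true):
    """pred/true: cell lines x enhancers matrices of DH signal.
    Returns mean cross-enhancer PCC, mean cross-cell PCC, and squared prediction error."""
    cross_enhancer = pearson_rows(pred, true)    # one PCC per cell line
    cross_cell = pearson_rows(pred.T, true.T)    # one PCC per enhancer
    sq_err = np.mean((pred - true) ** 2)
    return cross_enhancer.mean(), cross_cell.mean(), sq_err
```

With a perfect prediction, both mean PCCs are 1 and the squared error is 0, giving a quick sanity check for the pipeline.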
The leave-one-out cross-validation on 110 cell lines and the validation on 57 test cell lines indicate that, compared with elastic net and SVM, RF achieved higher cross-cell and cross-enhancer PCCs and smaller prediction errors (Supplementary Fig. ). Since RF outperformed the other two regression algorithms, we applied the RF regression algorithm to implement the regression learning in model (1), termed EopenByTFandTG, and compared it with using only the downstream target genes' expression to learn the enhancer's openness (EopenByTG). As a result, EopenByTFandTG obtained higher cross-cell and cross-enhancer PCCs and smaller prediction errors compared with EopenByTG, based on leave-one-out validation on 110 cell lines (Fig. a– ) and on 57 test cell lines (Fig. d– ). The distributions of cross-cell and cross-enhancer PCCs on leave-one-out validation on 110 cell lines (Fig. g, i) and on 57 test cell lines (Fig. h, ) also suggested the better performance of EopenByTFandTG. That is, EopenByTFandTG outperformed EopenByTG consistently. In conclusion, the RF algorithm outperforms the other two regression algorithms, and the prediction of an enhancer's chromatin activity can be improved by including both upstream TFs' and downstream genes' transcriptional expression. Thus, in the rest of the analysis, the PERD model stands for using RF as the regression model and using both upstream TFs' and downstream genes' transcriptional expression to learn the enhancer's chromatin accessibility. Previous studies have suggested the tissue/cell type specificity of enhancers. We then validated the prediction results for particular tissue/cell types based on leave-one-out validation on all 167 ENCODE cell lines. The tissue types in the validation dataset are shown in Supplementary Fig. , and the prediction results on tissue types with more than 10 cell lines are shown in Supplementary Fig. . The cross-cell PCCs, cross-enhancer PCCs, and squared prediction errors exhibited significant variation across tissue types.
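The per-enhancer regression at the core of these comparisons can be sketched with scikit-learn's RandomForestRegressor. Here `tf_idx`/`tg_idx` are hypothetical index arrays (not from the paper) picking out the enhancer's binding TFs and associated target genes from a cell-lines × genes expression matrix; EopenByTG would simply drop the TF columns.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def fit_eopen_by_tf_and_tg(expr, dh, tf_idx, tg_idx):
    """Fit one per-enhancer model (EopenByTFandTG variant, as a sketch).

    expr   -- cell lines x genes expression matrix (training cell lines)
    dh     -- DH signal of this enhancer in each training cell line
    tf_idx -- column indices of the enhancer's binding TFs
    tg_idx -- column indices of the enhancer's associated target genes
    """
    X = expr[:, np.r_[tf_idx, tg_idx]]  # TF + TG features
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X, dh)
    return model
```

In PERD one such model is trained per enhancer; applying it to a new sample's expression values yields that enhancer's predicted openness in the new sample.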
Muscle cells, the largest tissue type, achieved the highest cross-enhancer PCCs but the lowest cross-cell PCCs, resulting in the worst prediction errors. These results suggest that prediction performance depends on tissue type but is not determined by the number of cell lines in that tissue type.

Characterization of the well predicted enhancers

Both the leave-one-out cross-validation on the 110 training cell lines and the predictions on the 57 testing cell lines showed lower across-cell P-T correlation than across-enhancer P-T correlation, suggesting that the prediction model performs well only on a fraction of enhancers. For instance, based on leave-one-out cross-validation on all 110 ENCODE cell lines, only about 50% (26892/54076) of enhancers have an across-cell P-T correlation larger than 0.5. To characterize the enhancers that are predicted well, three properties derived from DNase-seq data (DH signal) were introduced: the DH spread (the number of cell lines with DH signal larger than 0), DH variation (the standard deviation of the DH signal across cell lines), and DH specificity (the number of cell types with DH signal larger than 2). Based on the predictions on the 57 testing cell lines, all three DH properties were correlated with the across-cell P-T correlation (Fig. a– ), and the predictions varied significantly across enhancer groups (Fig. ). In particular, enhancers that exhibit a broader distribution and greater variability of the DH signal are more likely to yield accurate predictions of chromatin accessibility.

Evaluation of prediction model in an independent dataset

We then applied PERD to an independent dataset (paired DNase-seq and RNA-seq data from the Roadmap project) to further evaluate its prediction performance.
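The three DH properties defined above (spread, variation, specificity) can be computed directly from an enhancers × cell-lines DH matrix. A sketch under those definitions; for specificity, a cell type is counted as active when any of its cell lines has DH > 2, which is one plausible aggregation since the text does not spell it out:

```python
import numpy as np

def dh_properties(dh, cell_type_of):
    """dh: enhancers x cell-lines matrix of DH signal.
    cell_type_of: array mapping each cell-line column to a cell-type label."""
    spread = (dh > 0).sum(axis=1)        # number of cell lines with DH > 0
    variation = dh.std(axis=1)           # SD of DH signal across cell lines
    types = np.unique(cell_type_of)
    # a cell type is "active" for an enhancer if any of its lines has DH > 2
    active = np.stack([(dh[:, cell_type_of == t] > 2).any(axis=1) for t in types],
                      axis=1)
    specificity = active.sum(axis=1)     # number of active cell types
    return spread, variation, specificity
```

These per-enhancer summaries are what the correlation analysis above relates to the across-cell P-T correlation.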
In particular, PERD was trained on the benchmark data (paired DNase-seq and RNA-seq data from the 167 paired ENCODE cell lines), and the enhancers' openness was predicted from the transcriptional expression in the Roadmap RNA-seq data. By comparing the predictions with the true openness values measured by the Roadmap DNase-seq data, PERD's prediction performance on unseen data was evaluated. We found that PERD achieved an across-enhancer P-T correlation above 0.5 and a prediction error below 0.4 (Supplementary Fig. A– ), indicating good generalization of PERD to an independent scenario. The prediction results on the tissue types in the Roadmap data (Supplementary Fig. ) also confirmed the tissue-specific prediction ability of PERD. Meanwhile, the correlation analysis between the across-cell P-T correlation and the DH characteristics indicated that enhancers with high spread and variation along with low specificity have a better chance of being predicted well (Supplementary Fig. ).

Revealing the drug responsive enhancers through predicting their chromatin accessibility changes

The evaluation of the prediction model on both the benchmark data and the independent data suggested that enhancers with high spread and variation along with low specificity have a better chance of being predicted well. Thus, enhancers active in more than 100 cell lines (DH > 0), with variation larger than the lower quantile, and active in more than 10 cell types were retained for further analysis, leaving a total of 9340 enhancers. PERD was trained on the benchmark data, and the chromatin accessibility before/after drug treatment of the 9340 enhancers was predicted from the CMAP transcriptional expression data and compared to identify drug responsive enhancers. In addition, considering the tissue specificity of PERD, only the two largest cancer types (breast and prostate) (Supplementary Fig. ) were considered here.
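The retention filter described above (> 100 active cell lines, variation above the lower quantile, > 10 active cell types) can be applied directly to the three DH summaries; "lower quantile" is read here as the 25% quantile, which is an assumption:

```python
import numpy as np

def filter_enhancers(spread, variation, specificity,
                     min_spread=100, min_types=10, q=0.25):
    """Boolean mask of enhancers retained for the drug-response analysis."""
    return ((spread > min_spread)
            & (variation > np.quantile(variation, q))
            & (specificity > min_types))
```

Applied to the benchmark enhancers, a mask like this is what reduces the candidate set to the 9340 enhancers carried forward.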
Specifically, the transcriptional expression data from the breast and prostate cancer cell lines (MCF7 and PC3) before/after drug treatment were used to learn the enhancers' openness before/after drug treatment, respectively. In general, compared with the benchmark data, there were fewer biosamples in CMAP with active enhancers (DH > 0). However, the percentage of CMAP instances with active enhancers was above 80% for all drugs (Supplementary Fig. ), meaning that most CMAP instances yielded predicted values representing enhancer chromatin activity. In addition, the variation of the predicted enhancer activities was comparable with that in the benchmark data, although the predicted values were somewhat lower than the true DH values in the training data (Supplementary Fig. ). These results indicated that the distribution of the predicted values was roughly close to the true DH signal. We then detected the drug responsive enhancers by finding the enhancers that were significantly altered after drug treatment (diffEnhancers). The count of differential enhancers (diffEnhancers) varied substantially across drugs in both breast and prostate cancer. For example, in breast cancer, the number of diffEnhancers ranged from 2 for chlorpromazine to 494 for LY-294002 (Supplementary Fig. ). In contrast, for prostate cancer, the drugs genistein and wortmannin lacked sufficient instances for differential openness analysis, and fluphenazine did not present enhancers with a logFC exceeding 0.5 and a p value below 0.001. Consequently, the count of diffEnhancers ranged from zero for the aforementioned three drugs to 235 for trichostatin A (Supplementary Fig. ). Furthermore, the quantities of diffEnhancers differed between the two cancer types.
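Given predicted openness matrices before and after treatment, diffEnhancers can be called with the thresholds stated above (|logFC| > 0.5, p < 0.001). The sketch below uses a Wilcoxon rank-sum test and a mean-based fold change; the exact statistic used in the paper may differ:

```python
import numpy as np
from scipy.stats import ranksums

def diff_enhancers(before, after, lfc_cut=0.5, p_cut=1e-3, eps=1e-6):
    """before/after: instances x enhancers matrices of predicted openness.
    Returns column indices of enhancers whose accessibility shifts after treatment."""
    hits = []
    for j in range(before.shape[1]):
        lfc = np.log2(after[:, j].mean() + eps) - np.log2(before[:, j].mean() + eps)
        p = ranksums(after[:, j], before[:, j]).pvalue
        if abs(lfc) > lfc_cut and p < p_cut:
            hits.append(j)
    return hits
```

With too few treated instances the rank-sum p value cannot drop below the cutoff, which matches the note above that some drugs lacked sufficient instances for the differential openness analysis.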
For instance, genistein showed 86 diffEnhancers in breast cancer but none in prostate cancer. With the exception of LY-294002 and trichostatin A, the remaining drugs did not share diffEnhancers between the two cancer types (Supplementary Fig. ). These findings suggest that the presence of diffEnhancers is contingent upon the specific drug and disease context.

PERD associates genetic variants with drug responsive enhancers

From PERD, the drug-dependent enhancers were revealed. That is, existing pharmacogenomics resources, such as CMAP and CDS-DB, can be expanded to form a drug-gene-enhancer mechanism network. Based on this network, various applications can be made, for instance, associating PERD predictions with existing pharmacogenetic variants to extend the annotation of genetic variants from the disease to the drug level. To this end, several validations were implemented. First, the predicted diffEnhancers were associated with TF motifs; for both cancer types, the diffEnhancers of all 13 drugs were linked with at least one TF motif, and some were even linked with two thousand TF motifs, such as the trichostatin A-related enhancer chr17:48538779–48607552 in prostate cancer (Supplementary Figs. , ), implying the potential regulatory role of these diffEnhancers. Then, the diffEnhancers were related to the drug perturbational genes; that is, the overlapping gene sets between the diffEnhancers' TGs and the drug perturbational genes were identified. Drug perturbational genes were defined as genes with significantly different expression levels after drug treatment, with a p value less than 0.05 and an absolute log-transformed fold change larger than 0.8. As a result, for breast cancer, except for thioridazine, all other 12 drugs had at least one diffEnhancer associated with drug perturbational genes, and the PI3K inhibitor LY-294002 even had 396 diffEnhancers whose TGs happened to be drug perturbational genes, accounting for about 80 percent of its total diffEnhancers (494) (Fig. ).
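The overlap between diffEnhancer target genes and drug perturbational genes (p < 0.05, |logFC| > 0.8) amounts to a set intersection; a small sketch with hypothetical inputs:

```python
def perturbed_overlap(diff_enh_tgs, logfc, pvals, lfc_cut=0.8, p_cut=0.05):
    """diff_enh_tgs: dict mapping diffEnhancer -> set of its target genes (TGs).
    logfc/pvals: dicts mapping gene -> log fold change / p value after treatment.
    Returns, per diffEnhancer, the TGs that are also drug perturbational genes."""
    perturbed = {g for g in logfc if abs(logfc[g]) > lfc_cut and pvals[g] < p_cut}
    return {e: tgs & perturbed
            for e, tgs in diff_enh_tgs.items() if tgs & perturbed}
```

Counting the enhancers in the returned dictionary gives figures like the 396/494 fraction reported above for LY-294002.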
For prostate cancer, 5 out of 11 drugs had at least one diffEnhancer associated with drug perturbational genes, and the potent histone deacetylase (HDAC) inhibitor trichostatin A had about 62% (146/235) of its diffEnhancers with TGs displaying significantly different expression levels after trichostatin A treatment (Fig. ). All these results suggested that PERD might uncover pharmacogenetic variants that result in the perturbation of the corresponding gene's expression. To further validate this assumption, the predicted diffEnhancers were linked to the GTEx portal, which deposits over 20 thousand significant variant-gene associations based on permutations. Specifically, significant variant-gene associations were obtained from GTEx_Analysis_v8_eQTL.tar whole blood genes with a q value less than 0.05, and diffEnhancers harboring variants associated with drug-perturbed genes were reported. For breast cancer, 5 out of 13 drugs had at least one diffEnhancer associated with drug PGx, and LY-294002 had the most diffEnhancers with PGx located in them (Fig. ). For prostate cancer, only LY-294002 had diffEnhancers harboring eQTLs for drug-perturbed genes (Fig. ). The details of these diffEnhancers and variants for drug-perturbed genes were investigated to find candidates worthy of further experimental validation in the two cancer types. Specifically, we checked whether the genes associated with these variants were consistent with the diffEnhancers' TGs. The variants associated with drug-consistent perturbations were then linked to the diffEnhancers and listed in Supplementary Table . For instance, LY-294002 was linked with breast cancer by a previous study. Enhancer chr20:44,640,672–44,653,156 was output as a diffEnhancer for LY-294002 (Fig. ).
The variation in eQTL ‘chr20: 44, 642, 751’ would lead to the changes in the expression of gene adenosine deaminase ( ADA ), and located in the genome region of enhancer ‘chr20: 44, 640, 672–44, 653, 156’ (Fig. ). Enhancer ‘chr20: 44, 640, 672–44, 653, 156’ exhibited significantly different activity after LY-294002 treatment (Wilcoxon rank sum test p < 0.05, Fig. ). In addition, ‘chr20: 44, 640, 672–44, 653, 156’ associated gene ADA displayed the significantly different expression level after LY-294002 treatment (Wilcoxon rank sum test p < 0.001, Fig. ). All these results indicate that the variant in enhancer ‘chr20: 44, 640, 672–44, 653, 156’ would intervene LY-294002 sensitivity in breast cancer patients through altering the regulation of LY-294002 responsive gene: ADA . Another example is LY-294002 in prostate cancer. LY-294002 was linked to prostate cancer by previous study . Enhancer ‘chr12:12713282–12727320’ was output as the diffEnhancer for LY-294002 (Fig. ), and eQTL ‘chr12: 12,726,123’ was located in the genome region of ‘chr12:12713282–12727320’ (Fig. ). Enhancer ‘chr12:12713282–12727320’ displayed significantly different activity after LY-294002 treatment (Wilcoxon rank sum test p < 0.0001, Fig. ). In addition, both enhancer ‘chr12:12713282–12727320’ and eQTL ‘chr12: 12,726,123’ associated gene apolipoprotein L domain containing 1 ( APOLD1 ) displayed the significantly different expression level after LY-294002 treatment (Wilcoxon rank sum test p < 0.0001, Fig. ). All these results provide supported evidences for the relationship between variants in ‘chr12:12713282–12727320’ and LY-294002’s sensitivity in prostate cancer patients. That is, the variant in enhancer ‘chr12:12713282–12727320’ would intervene LY-294002 sensitivity in prostate cancer patients through altering the regulation of LY-294002 perturbed gene: APOLD1 . 
The PERD model (1) was designed to predict an enhancer's chromatin activity from the transcriptional expression of its downstream associated genes and upstream binding TFs. To evaluate the performance of the regression algorithms, three statistics were introduced: the cross-enhancer PCC, the cross-cell PCC, and the squared prediction error.
Leave-one-out cross-validation on the 110 training cell lines and validation on the 57 test cell lines indicated that, compared with elastic net and SVM, RF achieved higher cross-cell and cross-enhancer PCCs and smaller prediction errors (Supplementary Fig. ). Since RF outperformed the other two regression algorithms, we used RF to implement the regression learning in model (1), termed EopenByTFandTG, and compared it with a variant that learns the enhancer's openness from the downstream target genes' expression alone (EopenByTG). EopenByTFandTG obtained higher cross-cell and cross-enhancer PCCs and smaller prediction errors than EopenByTG, both in leave-one-out validation on the 110 cell lines (Fig. a– ) and on the 57 test cell lines (Fig. d– ). The distributions of cross-cell and cross-enhancer PCCs in leave-one-out validation on the 110 cell lines (Fig. g, i) and on the 57 test cell lines (Fig. h, ) likewise favored EopenByTFandTG; that is, EopenByTFandTG outperformed EopenByTG consistently. In conclusion, the RF algorithm outperforms the other two regression algorithms, and prediction of enhancer chromatin activity is improved by including the transcriptional expression of both upstream TFs and downstream genes. Thus, in the rest of the analysis, the PERD model refers to using RF as the regression model with both upstream TF and downstream gene expression to learn the enhancer's chromatin accessibility. Previous studies have suggested the tissue/cell-type specificity of enhancers – . We therefore validated the prediction results per tissue/cell type based on leave-one-out validation on the entire set of 167 ENCODE cell lines. The tissue types in the validation dataset are shown in Supplementary Fig. , and the prediction results for tissue types with more than 10 cell lines are shown in Supplementary Fig. . The cross-cell PCCs, cross-enhancer PCCs, and squared prediction error varied significantly across tissue types.
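The leave-one-out evaluation above can be sketched for a single enhancer: hold out one cell line, fit on the remaining lines, and correlate the held-out predictions with the measured openness. A minimal NumPy illustration, using ordinary least squares as a stand-in for the RF/elastic net/SVM regressors (the function and data are illustrative, not the paper's code):

```python
import numpy as np

def loo_cross_cell_pcc(features, openness):
    """Leave-one-cell-line-out prediction for one enhancer.

    features -- (n_cells, n_tfs + n_tgs) expression of the enhancer's
                binding TFs and associated target genes
    openness -- (n_cells,) measured DH openness of the enhancer
    Returns the cross-cell PCC between held-out predictions and measurements.
    """
    n = len(openness)
    X = np.column_stack([np.ones(n), features])   # intercept + expression
    preds = np.empty(n)
    for i in range(n):                            # hold out cell line i
        train = np.arange(n) != i
        coef, *_ = np.linalg.lstsq(X[train], openness[train], rcond=None)
        preds[i] = X[i] @ coef
    return np.corrcoef(preds, openness)[0, 1]
```

In the paper's setting this loop would run over the 110 training cell lines for each retained enhancer, once per regression algorithm.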
Muscle cells, the largest tissue type, achieved the highest cross-enhancer PCCs but the lowest cross-cell PCCs, resulting in the worst prediction errors. These results suggest that the prediction performance depends on the tissue type but is not determined by the number of cell lines in that tissue type. Both leave-one-out cross-validation on the 110 training cell lines and predictions on the 57 test cell lines showed a lower across-cell P-T correlation than across-enhancer P-T correlation, suggesting that the prediction model performs well on only a fraction of enhancers. For instance, in leave-one-out cross-validation on all 110 ENCODE cell lines, only about 50% (26,892/54,076) of enhancers had an across-cell P-T correlation above 0.5. To characterize the enhancers that are predicted well, three properties derived from DNase-seq data (DH signal) were introduced: DH spread (the number of cell lines with DH signal greater than 0), DH variation (the standard deviation of DH signal across cell lines), and DH specificity (the number of cell types with DH signal greater than 2). Based on the predictions on the 57 test cell lines, all three DH properties were correlated with the across-cell P-T correlation (Fig. a– ). The predictions varied significantly across enhancer groups (Fig. ), confirming that the P-T correlation tracks the three DH properties. In particular, enhancers with a broader distribution and greater variability of DH signal are more likely to yield accurate predictions of chromatin accessibility. We then applied PERD to an independent dataset (paired DNase-seq and RNA-seq data from the Roadmap project) to further evaluate its prediction performance.
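The three DH-derived properties can be computed directly from an enhancer-by-cell-line DH signal matrix. A minimal NumPy sketch following the definitions in the text (function and variable names are ours):

```python
import numpy as np

def dh_properties(dh, cell_types, specificity_threshold=2.0):
    """Compute DH spread, variation, and specificity per enhancer.

    dh         -- (n_enhancers, n_cell_lines) matrix of DH signal
    cell_types -- sequence of length n_cell_lines mapping each line to a type
    """
    # DH spread: number of cell lines with any DH signal (> 0)
    spread = (dh > 0).sum(axis=1)
    # DH variation: standard deviation of the DH signal across cell lines
    variation = dh.std(axis=1)
    # DH specificity: number of distinct cell *types* with DH signal > 2
    cell_types = np.asarray(cell_types)
    specificity = np.array([
        len(set(cell_types[row > specificity_threshold])) for row in dh
    ])
    return spread, variation, specificity
```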
Specifically, PERD was trained on the benchmark data (paired DNase-seq and RNA-seq data from the 167 paired ENCODE cell lines), and enhancer openness was predicted from the transcriptional expression in the Roadmap RNA-seq data. By comparing the predictions with the true openness values measured by Roadmap DNase-seq data, PERD's prediction performance on new data was evaluated. We found that PERD achieved an across-enhancer P-T correlation above 0.5 and a prediction error below 0.4 (Supplementary Fig. A– ), indicating good generalization of PERD to an independent dataset. The prediction results across tissue types in the Roadmap data (Supplementary Fig. ) also confirmed the tissue-specific prediction ability of PERD. Meanwhile, the correlation analysis between the across-cell P-T correlation and the DH characteristics indicated that enhancers with high spread and variation, together with low specificity, are more likely to be predicted well (Supplementary Fig. ). The evaluations on both the benchmark data and the independent data supported the same conclusion. Thus, enhancers active in more than 100 cell lines (DH > 0), with variation above the lower quantile, and active in more than 10 cell types were retained for further analysis, leaving a total of 9,340 enhancers. PERD was trained on the benchmark data, and the chromatin accessibility of these 9,340 enhancers before and after drug treatment was predicted from the CMAP transcriptional expression data and compared to identify drug responsive enhancers. Given the tissue specificity of PERD, only the two largest cancer types in CMAP, breast and prostate (Supplementary Fig. ), were considered here.
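The retention step combines the three DH thresholds stated in the text (active in more than 100 cell lines, variation above the lower quantile, active in more than 10 cell types). A sketch, with "lower quantile" interpreted here as the 25th percentile (that interpretation and the helper name are our assumptions):

```python
import numpy as np

def retain_enhancers(spread, variation, specificity,
                     min_active_lines=100, min_active_types=10):
    """Boolean mask of enhancers kept for downstream diffEnhancer analysis."""
    variation = np.asarray(variation)
    var_cut = np.quantile(variation, 0.25)  # "lower quantile" assumed = Q1
    return ((np.asarray(spread) > min_active_lines)
            & (variation > var_cut)
            & (np.asarray(specificity) > min_active_types))
```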
Specifically, transcriptional expression data from the breast and prostate cancer cell lines (MCF7 and PC3) before and after drug treatment were used to predict the enhancers' openness before and after treatment, respectively. In general, compared with the benchmark data, fewer CMAP biosamples had active enhancers (DH > 0). However, the percentage of CMAP instances with active enhancers was above 80% for all drugs (Supplementary Fig. ), meaning that most CMAP instances yielded predicted values representing enhancer chromatin activity. In addition, the variation of the predicted enhancer activities was comparable to that in the benchmark data, although the predicted values were somewhat lower than the true DH values in the training data (Supplementary Fig. ). These results indicated that the distribution of predicted values was roughly close to that of the true DH signal. We then detected drug responsive enhancers by identifying enhancers whose openness was significantly altered after drug treatment (diffEnhancers). The number of diffEnhancers varied substantially across drugs in both breast and prostate cancer. For example, in breast cancer, the number of diffEnhancers ranged from 2 for chlorpromazine to 494 for LY-294002 (Supplementary Fig. ). In prostate cancer, genistein and wortmannin lacked sufficient instances for differential openness analysis, and fluphenazine presented no enhancers with a logFC exceeding 0.5 and a p value below 0.001. Consequently, the number of diffEnhancers ranged from zero for these three drugs to 235 for trichostatin A (Supplementary Fig. ). Furthermore, the numbers of diffEnhancers differed between the two cancer types.
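The diffEnhancer call combines a rank-sum test with the stated cutoffs (|logFC| > 0.5, p < 0.001). A minimal SciPy sketch applied to predicted openness before/after treatment (the function name and the pseudocount used for the log fold change are our assumptions):

```python
import numpy as np
from scipy.stats import ranksums

def call_diff_enhancers(before, after, logfc_cut=0.5, p_cut=1e-3, eps=1.0):
    """Return indices of enhancers with significantly altered openness.

    before, after -- (n_enhancers, n_instances) predicted openness matrices
    eps           -- pseudocount for the log2 fold change (assumption)
    """
    hits = []
    for i, (b, a) in enumerate(zip(before, after)):
        _, p = ranksums(a, b)                      # Wilcoxon rank-sum test
        logfc = np.log2((a.mean() + eps) / (b.mean() + eps))
        if p < p_cut and abs(logfc) > logfc_cut:
            hits.append(i)
    return hits
```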
For instance, genistein showed 86 diffEnhancers in breast cancer but none in prostate cancer. With the exception of LY-294002 and trichostatin A, the remaining drugs did not share diffEnhancers between the two cancer types (Supplementary Fig. ). These findings suggest that the presence of diffEnhancers is contingent upon the specific drug and disease context.
PERD associates genetic variants with drug responsive enhancers
PERD thus reveals drug-dependent enhancers. That is, existing pharmacogenomics resources, such as CMAP and CDS-DB, can be expanded into a drug-gene-enhancer mechanism network. Based on this network, various applications become possible, for instance associating PERD predictions with existing pharmacogenetic variants, to extend the annotation of genetic variants from the disease level to the drug level. To this end, several validations were implemented. First, the predicted diffEnhancers were associated with TF motifs: for both cancer types, the diffEnhancers of all 13 drugs were linked with at least one TF motif, and some harbored as many as two thousand TF motifs, such as the trichostatin-related enhancer chr17:48538779–48607552 in prostate cancer (Supplementary Figs. , ), implying a potential regulatory role for these diffEnhancers. Next, the diffEnhancers were related to drug perturbational genes, that is, the overlap between the diffEnhancers' TGs and the drug perturbational genes was computed. Drug perturbational genes were defined as genes whose expression differed significantly after drug treatment, with a p value below 0.05 and an absolute log-transformed fold change above 0.8. As a result, for breast cancer, all drugs except thioridazine had at least one diffEnhancer associated with drug perturbational genes, and the PI3K inhibitor LY-294002 had 396 diffEnhancers whose TGs were drug perturbational genes, accounting for about 80% of its 494 total diffEnhancers (Fig. ).
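Relating diffEnhancers to drug perturbational genes reduces to a set intersection between each diffEnhancer's target genes and the genes passing the perturbation cutoffs (p < 0.05, |logFC| > 0.8). A sketch with illustrative data structures (the gene/enhancer examples reuse names from the text):

```python
def perturbational_genes(stats, p_cut=0.05, logfc_cut=0.8):
    """Genes significantly perturbed by a drug.

    stats -- dict gene -> (p_value, log_fold_change) from pre/post expression
    """
    return {g for g, (p, lfc) in stats.items()
            if p < p_cut and abs(lfc) > logfc_cut}

def diff_enhancers_with_perturbed_tgs(diff_enh_tgs, perturbed):
    """diffEnhancers whose target genes overlap the drug perturbational genes."""
    return {enh: tgs & perturbed
            for enh, tgs in diff_enh_tgs.items() if tgs & perturbed}
```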
For prostate cancer, 5 of 11 drugs had at least one diffEnhancer associated with drug perturbational genes, and the potent histone deacetylase (HDAC) inhibitor trichostatin A had about 62% (146/235) of its diffEnhancers with TGs that also showed significantly different expression after trichostatin A treatment (Fig. ). All these results suggested that PERD might uncover pharmacogenetic variants that perturb the expression of the corresponding genes. To further validate this assumption, the predicted diffEnhancers were linked to the GTEx portal, which deposits over 20 thousand significant variant-gene associations based on permutations. Specifically, significant variant-gene associations were obtained from GTEx_Analysis_v8_eQTL.tar for whole-blood genes with a q value below 0.05, and diffEnhancers harboring variants associated with drug-perturbed genes were reported. For breast cancer, 5 of 13 drugs had at least one diffEnhancer associated with drug PGx, and LY-294002 had the most diffEnhancers containing PGx (Fig. ). For prostate cancer, only LY-294002 had diffEnhancers containing eQTLs for drug-perturbed genes (Fig. ). These diffEnhancers and variants for drug-perturbed genes were examined in detail to identify candidates worthy of further experimental validation in the two cancer types. Specifically, we checked whether the genes associated with these variants were consistent with the diffEnhancers' TGs. Variants associated with consistent drug perturbations were then linked to their diffEnhancers and listed in Supplementary Table . For instance, LY-294002 was linked with breast cancer by a previous study . The enhancer chr20:44,640,672–44,653,156 was output as a diffEnhancer for LY-294002 (Fig. ).
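Linking a diffEnhancer to a GTEx eQTL amounts to two checks: the variant position falls inside the enhancer's genomic interval, and the eQTL's associated gene matches one of the enhancer's target genes. A sketch of that join (function name and data layout are ours; the example coordinates come from the ADA case in the text):

```python
def eqtl_hits(enhancers, eqtls):
    """Pair diffEnhancers with eQTLs located inside them for a matching gene.

    enhancers -- iterable of (chrom, start, end, target_genes)
    eqtls     -- iterable of (chrom, pos, gene) variant-gene associations
    """
    hits = []
    for chrom, start, end, tgs in enhancers:
        for vchrom, pos, gene in eqtls:
            # variant inside the enhancer AND eQTL gene is an enhancer TG
            if vchrom == chrom and start <= pos <= end and gene in tgs:
                hits.append(((chrom, start, end), (vchrom, pos), gene))
    return hits
```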
The eQTL chr20:44,642,751, whose variation leads to changes in the expression of the adenosine deaminase gene ( ADA ), is located within the genomic region of enhancer chr20:44,640,672–44,653,156 (Fig. ). This enhancer exhibited significantly different activity after LY-294002 treatment (Wilcoxon rank-sum test, p < 0.05, Fig. ). In addition, its associated gene ADA displayed significantly different expression after LY-294002 treatment (Wilcoxon rank-sum test, p < 0.001, Fig. ). All these results indicate that the variant in enhancer chr20:44,640,672–44,653,156 could modulate LY-294002 sensitivity in breast cancer patients by altering the regulation of the LY-294002-responsive gene ADA . Another example is LY-294002 in prostate cancer, to which the drug was linked by a previous study . The enhancer chr12:12,713,282–12,727,320 was output as a diffEnhancer for LY-294002 (Fig. ), and the eQTL chr12:12,726,123 is located within its genomic region (Fig. ). This enhancer displayed significantly different activity after LY-294002 treatment (Wilcoxon rank-sum test, p < 0.0001, Fig. ). In addition, the gene associated with both the enhancer and the eQTL, apolipoprotein L domain containing 1 ( APOLD1 ), displayed significantly different expression after LY-294002 treatment (Wilcoxon rank-sum test, p < 0.0001, Fig. ). All these results provide supporting evidence for a relationship between variants in chr12:12,713,282–12,727,320 and LY-294002 sensitivity in prostate cancer patients; that is, the variant in this enhancer could modulate LY-294002 sensitivity by altering the regulation of the LY-294002-perturbed gene APOLD1 .
To further validate the usefulness of PERD in revealing genetic variations that could affect patients' response to drug treatment, the CDS-DB database, which deposits drug perturbation profiles of patient-derived cancer cells, was introduced. Applying PERD to the CDS-DB breast cancer data revealed diffEnhancers associated with drug-perturbed genes (Supplementary Table ). PGx were then identified from the PharmGKB database . As a result, the Celecoxib-associated PGx was rs4133101 (chr5:40,679,465), which according to PharmGKB alters the expression of the gene prostaglandin E receptor 4 ( PTGER4 ). In addition, the expression level of PTGER4 differed significantly after Celecoxib treatment. PERD output the enhancer chr5:40,674,520–40,690,311 as a diffEnhancer of Celecoxib, and the target gene of this enhancer was PTGER4 (Supplementary Fig. ). Similarly, Docetaxel's diffEnhancer chr11:3,958,402–3,961,399 was associated with the perturbed gene ribonucleotide reductase catalytic subunit M1 ( RRM1 ), and the variant rs9937 (chr11:4,138,227), related to the clinical anticancer drug Docetaxel in the PharmGKB database, is located in the gene body of RRM1 (Supplementary Fig. ). In summary, PERD offers a set of genetic variants related to the regulatory regions of drug-perturbed genes that may alter drug sensitivity through the regulation of those genes, providing rich information for personalized drug treatment.
In this work, we developed a computational framework, PERD, to identify regulatory elements associated with drug sensitivity. To this end, we first constructed a machine learning model to probe the chromatin accessibility of enhancers based on the transcriptional expression of their linked downstream genes and upstream binding TFs. The model was trained on paired DNase-seq and RNA-seq data curated from the ENCODE and Roadmap data resources.
The results demonstrated the model's efficacy in predicting enhancer openness through its enhancer-TF and enhancer-gene interactions. Subsequently, we applied the model to predict and compare enhancer openness before and after administration of specific drugs. Enhancers exhibiting significant differences in openness upon treatment (referred to as diffEnhancers) were identified as drug responsive enhancers. The identified diffEnhancers were further related to TF motifs and PGx resources. The variants linked with a given drug may provide a great opportunity for drug repurposing. For instance, Fulvestrant, a selective estrogen receptor degrader, is used to treat hormone receptor (HR)-positive metastatic breast cancer in postmenopausal women with disease progression . PERD reported two associated enhancers (chr1:26529190–26536400 and chr6:30506802–30512599) with GWAS SNPs (rs112750178 and rs140668832) located in their genomic regions. Both variants are associated with the trait "cervical cancer" in GWAS, suggesting the potential use of Fulvestrant in the treatment of cervical cancer. In a prior study, Fulvestrant was used as a treatment for cervical cancer in mice . Those experiments indicated that Fulvestrant could efficiently clear the cancer and its precursor lesions in both the mouse cervix and vagina . All these findings point to an opportunity for Fulvestrant in human cervical cancer, and its potential use in other gynecological cancers is being tested in a clinical trial (No. NCT03926936, started March 13, 2019, estimated to end by Dec. 31, 2025). Beyond linking to GWAS, our predictions can also be associated with individual patients' whole-genome data in the future, to reveal drug-dependent variants that could be key targets affecting the efficacy of clinical drugs. The validations on both ENCODE and Roadmap data indicated that the prediction results vary across tissue types; that is, PERD is tissue-type specific.
Here, PERD was applied only to the two largest cancer types in CMAP, breast cancer cells (MCF-7) and prostate cancer cells (PC3). The drug responsive enhancers varied considerably between these two cancer types (Supplementary Fig. ), and the number of diffEnhancers depends on the number of instances available for analysis. In the future, we plan to apply PERD to a larger pharmacogenomics resource, such as LINCS L1000 , to obtain more stable results. DNase-I hypersensitive site sequencing (DNase-seq) and Assay for Transposase-Accessible Chromatin sequencing (ATAC-seq) are two widely used protocols for genome-wide investigation of chromatin accessibility. Both are based on cleavage enzymes (DNase-I, which hydrolyzes the phosphodiester bonds of DNA molecules, and the Tn5 transposase, respectively) that recognize and cleave DNA in open chromatin regions. Compared with DNase-seq, ATAC-seq requires fewer cells and is less laborious, and the number of ATAC-seq-based studies has exceeded that of DNase-seq-based studies in recent years. The PERD model can be used with both DNase-seq and ATAC-seq data paired with RNA-seq data. In future work, we will apply ATAC-seq in follow-up research. For instance, given that enhancers are highly cell-type- or cell-state-specific, we will conduct genome-wide investigation of chromatin accessibility before and after drug treatment, based on paired single-cell ATAC-seq and DNase-seq, to identify the causality of drug perturbation and to address drug resistance caused by tumor heterogeneity.
The PERD model
Construction of enhancer-gene and TF-enhancer networks
The enhancer-gene network was extracted from GeneHancer, a database of genome-wide enhancer-to-gene and promoter-to-gene associations embedded in GeneCards . The enhancers were integrated from ENCODE, the Ensembl regulatory build, the Functional Annotation of the Mammalian Genome (FANTOM) project , the VISTA Enhancer Browser , and other resources.
Using the enhancer-to-gene associations in GeneHancer, the enhancer-gene network was constructed. The TF-enhancer network was constructed from ENCODE human ChIP-seq data: the TF binding regions were first summarized from the ChIP-seq data, and a TF whose binding site lies within a given enhancer region was associated with that enhancer. The enhancers' genomic regions were defined by the GeneHancer database .
Learning enhancer chromatin accessibility from the expression of its associated genes and binding TFs
Once the enhancer-gene and TF-enhancer networks were constructed, the enhancer's chromatin accessibility was predicted by the following regularized regression model, given paired expression and chromatin accessibility data across diverse cellular contexts:

$$\min_{\alpha,\beta}\ \left\| O-\alpha_0-\beta_0-\gamma_T\sum_{p\in E_T}\alpha_p\,TF_p-\gamma_G\sum_{q\in E_G}\beta_q\,TG_q \right\|_2^2+\lambda\left(\|\alpha\|_2^2+\|\alpha\|_1+\|\beta\|_2^2+\|\beta\|_1\right) \quad (1)$$

where $O$ is the chromatin accessibility value (openness) of a given enhancer (determined by the maximum DH signal along the enhancer region), $TF_p$ and $TG_q$ are the expression levels of the $p$-th TF and $q$-th gene associated with the enhancer in the networks, $\gamma_T$ and $\gamma_G$ are predefined weights for the TF and gene terms, $E_T$ and $E_G$ are the enhancer's binding-TF set and associated-gene set, and $\lambda$ is a predefined regularization weight.
Model implementation and evaluation
To implement model (1) more efficiently, we set $\gamma_G=\gamma_T$, which turns model (1) into an elastic net regression model , implemented with the R 'glmnet' package. The regression in model (1) can also be carried out with other algorithms, such as Random Forest (RF) and Support Vector Machine (SVM) , implemented with the R 'randomForest' and 'e1071' packages, respectively. To further simplify the procedure, only enhancers with more than two associated genes and binding TFs were retained, leaving a total of 54,076 enhancers for further investigation. Suppose we have $M$ such enhancers across $N$ cells; let $O_{\cdot m}=(O_{1m},\ldots,O_{Nm})$ denote the measured openness of the $m$-th enhancer across the $N$ cell lines, $O_{n\cdot}=(O_{n1},\ldots,O_{nM})$ the measured openness of the $M$ enhancers in the $n$-th cell line, and $\hat{O}$ the predicted openness. The following three statistics were introduced to evaluate the performance of the prediction model: cross-cell correlation, $\tau_C=\mathrm{cor}(O_{\cdot m},\hat{O}_{\cdot m})$; cross-enhancer correlation, $\tau_E=\mathrm{cor}(O_{n\cdot},\hat{O}_{n\cdot})$; and squared prediction error, $\tau=\sum_n\sum_m (O_{nm}-\hat{O}_{nm})^2 \big/ \sum_n\sum_m (O_{nm}-\bar{O})^2$, where $\bar{O}$ is the mean openness over all $M$ enhancers and $N$ cells.
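The three evaluation statistics map directly onto array operations. A NumPy sketch (here `O` is the measured openness matrix and `O_hat` the predicted one, with cells in rows and enhancers in columns; names are ours):

```python
import numpy as np

def evaluation_stats(O, O_hat):
    """Cross-cell PCCs, cross-enhancer PCCs, and squared prediction error.

    O, O_hat -- (n_cells, n_enhancers) measured and predicted openness
    """
    # cross-cell correlation: per enhancer m, correlate across the N cells
    tau_c = np.array([np.corrcoef(O[:, m], O_hat[:, m])[0, 1]
                      for m in range(O.shape[1])])
    # cross-enhancer correlation: per cell n, correlate across the M enhancers
    tau_e = np.array([np.corrcoef(O[n], O_hat[n])[0, 1]
                      for n in range(O.shape[0])])
    # squared prediction error, normalized by total variance around the mean
    tau = ((O - O_hat) ** 2).sum() / ((O - O.mean()) ** 2).sum()
    return tau_c, tau_e, tau
```

A perfect predictor yields cross-cell and cross-enhancer correlations of 1 and a squared prediction error of 0.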
Reporting summary
Further information on research design is available in the linked to this article.
Once the enhancer-gene and enhancer-TF networks were constructed, the enhancer's chromatin accessibility was predicted by the following regularized regression model, given paired expression and chromatin accessibility data across diverse cellular contexts:

$$\min_{\alpha,\beta}\ \left\Vert O-\alpha_{0}-\beta_{0}-\gamma_{T}\sum_{p\in E_{T}}\alpha_{p}TF_{p}-\gamma_{G}\sum_{q\in E_{G}}\beta_{q}TG_{q}\right\Vert_{2}^{2}+\lambda\left(\Vert\alpha\Vert_{2}^{2}+\Vert\alpha\Vert_{1}+\Vert\beta\Vert_{2}^{2}+\Vert\beta\Vert_{1}\right)\qquad(1)$$

where $O$ is the chromatin accessibility value (openness) of a given enhancer (determined by the maximum DH signal along the enhancer region), $TF_{p}$ and $TG_{q}$ are the expression levels of the $p$-th TF and the $q$-th gene associated with the given enhancer in the network, respectively, $\gamma_{T}$ and $\gamma_{G}$ are pre-defined weights for the TF and gene terms in the prediction, respectively, $E_{T}$ and $E_{G}$ are the enhancer's sets of binding TFs and associated genes, respectively, and $\lambda$ is a pre-defined regularization weight.
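The paper fits model (1) with existing R packages; as a language-agnostic illustration of the objective itself, here is a minimal proximal-gradient (ISTA) solver for the same elastic-net penalty on synthetic data. The weights `gamma_T`, `gamma_G`, the penalty `lam`, and the data are all illustrative, and the two intercepts are absorbed by centering:

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def fit_elastic_net(X, y, lam=0.1, n_iter=3000):
    """Minimize ||y - X w||_2^2 + lam * (||w||_2^2 + ||w||_1) by proximal gradient."""
    w = np.zeros(X.shape[1])
    step = 1.0 / (2.0 * np.linalg.norm(X, 2) ** 2 + 2.0 * lam)  # 1 / Lipschitz constant
    for _ in range(n_iter):
        grad = 2.0 * X.T @ (X @ w - y) + 2.0 * lam * w  # gradient of the smooth part
        w = soft_threshold(w - step * grad, step * lam)
    return w

# Toy data standing in for one enhancer's TF and target-gene expression across cells.
rng = np.random.default_rng(0)
TF = rng.normal(size=(200, 4))   # binding-TF expression across 200 "cells"
TG = rng.normal(size=(200, 3))   # associated-gene expression
gamma_T = gamma_G = 1.0          # with gamma_G = gamma_T, model (1) is an elastic net
X = np.hstack([gamma_T * TF, gamma_G * TG])
O = X @ np.array([1.5, 0.0, -2.0, 0.0, 0.8, 0.0, 0.4]) + 0.1 * rng.normal(size=200)
Xc, Oc = X - X.mean(axis=0), O - O.mean()  # centering absorbs the intercepts
w = fit_elastic_net(Xc, Oc)
O_hat = Xc @ w + O.mean()
```

Sparse coefficients in `w` correspond to TFs or genes that contribute little to predicting the enhancer's openness.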
To implement model (1) more efficiently, we set $\gamma_{G}=\gamma_{T}$, so that model (1) becomes an Elastic Net regression model, implemented with the R "glmnet" package. The regularized linear model in (1) can also be replaced by other regression models, such as Random Forest (RF) and Support Vector Machine (SVM), implemented with the R "randomForest" and "e1071" packages, respectively. To further simplify the implementation, only enhancers with at least two associated genes and at least two binding TFs were retained, leaving a total of 54,076 enhancers for further investigation. Suppose we have a total of M such enhancers across N cells; let $O_{.m}=(O_{1m},\ldots,O_{Nm})$ denote the measured chromatin accessibility value (openness) of the $m$-th enhancer across the N cell lines, $O_{n.}=(O_{n1},\ldots,O_{nM})$ denote the measured openness of the M enhancers in the $n$-th cell line, and $\hat{O}$ denote the predicted openness values. The following three statistics were introduced to evaluate the performance of the prediction model: cross-cell correlation, $\tau_{C}=\mathrm{cor}(O_{.m},\hat{O}_{.m})$; cross-enhancer correlation, $\tau_{E}=\mathrm{cor}(O_{n.},\hat{O}_{n.})$; and squared prediction error, $\tau=\sum_{n}\sum_{m}(O_{nm}-\hat{O}_{nm})^{2}/\sum_{n}\sum_{m}(O_{nm}-\bar{O})^{2}$, where $\bar{O}$ is the mean openness over all M enhancers across the N cells.
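The three statistics can be computed directly from the N × M matrices of observed and predicted openness; a plain NumPy sketch (toy matrices, not real DH data):

```python
import numpy as np

def evaluation_stats(O, O_hat):
    """O, O_hat: N-cells x M-enhancers matrices of observed / predicted openness."""
    # cross-cell correlation tau_C: per enhancer (column), correlate across cells
    tau_C = np.array([np.corrcoef(O[:, m], O_hat[:, m])[0, 1] for m in range(O.shape[1])])
    # cross-enhancer correlation tau_E: per cell (row), correlate across enhancers
    tau_E = np.array([np.corrcoef(O[n, :], O_hat[n, :])[0, 1] for n in range(O.shape[0])])
    # squared prediction error tau: residual sum of squares over total variance
    tau = np.sum((O - O_hat) ** 2) / np.sum((O - O.mean()) ** 2)
    return tau_C, tau_E, tau
```

A perfect prediction gives $\tau = 0$ and correlations of 1; $\tau \geq 1$ means the model predicts no better than the grand mean.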
South-to-south collaboration to strengthen the health workforce: the case of paediatric cardiac surgery in Rwanda | 9040b500-49b4-4174-bdcb-9edb12481b09 | 11499810 | Pediatrics[mh] | Despite 80% of deaths from cardiovascular diseases occurring in low-income and middle-income countries (LMICs), these countries have the biggest gap in terms of availability of cardiac surgical care, with Africa having only one cardiothoracic surgeon per four million people. The persistence of inequities in access to paediatric cardiovascular care is a fundamental injustice. For example, while 97% of rheumatic heart diseases occur in LMICs, only 11% of the population can access surgical care. The same applies to congenital heart disease (CHD), where one-third of untreated children die within a month. In LMICs, the lack of availability of echocardiography and CHD surgery results in children dying from CHD, whereas those in higher-resourced settings have ready access to care. Building a sustainable, qualified cardiac workforce in Sub-Saharan Africa is critical to addressing the significant inequities across the region. There are 78 cardiac centres or units on the continent, with 22 located in Sub-Saharan Africa. However, most of these centres, including the one in Rwanda, have relied heavily on visiting surgical teams coming for a few weeks per year. Historically, surgical care across the continent has undergone a gradual shift away from colonial legacies, where surgical care was largely managed by a surgeon from the colonising country, with local surgeons being required to have their training validated in Western countries for legitimacy. Few countries have made the transition to having fully locally run programmes. When globally distributed, Africa still only has 2.7% of the world’s cardiothoracic surgeons. 
The workforce gap is further exacerbated in paediatric cardiac surgery, with only 1.99 paediatric cardiac surgeons per million in the region when adjusted for the paediatric population. The only way these inequities can be addressed is for countries and institutions in the region to implement rigorous health workforce development plans with expert partners that engage international experts while transitioning to a locally managed and sustainable training model. This is further strengthened through strong government support and mobilisation of resources to build the programme. Additionally, this must consider all cadres of professionals required to sustainably run the programme, including medical doctors, nurses, allied health professionals, biomedical engineers and administration. The Government of Rwanda significantly invests into building its health workforce in alignment with the national ‘four by four’ strategy, aiming to quadruple the health workforce over an average of 4 years. Specifically, this strategy aims to move Rwanda closer to the WHO’s health workforce targets by establishing and strengthening training programmes, recapturing local faculty and leveraging partnerships, and improving the existing capacity of training institutions. One way this will be achieved is through increasing residency and fellowship intake capacity, including in general surgery and cardiothoracic surgery, from an overall average of 72 per year in 2020–2022 to a minimum of 208 per year. This adjusted enrolment rate will lead to over 1000 enrollees by 2028. In terms of cardiovascular care, which is one of Rwanda’s national priorities, there are currently two Rwandan paediatric cardiologists and one adult cardiothoracic surgeon practising in the country. The country currently does not have any Rwandan paediatric cardiac surgeon. 
Workforce projections over the next 4 years were developed with the aim of having one complete local team trained to run the programme, including 5 paediatric cardiologists, 1 paediatric cardiac surgeon, 4 adult cardiac surgeons, 5 paediatric critical care specialists and 10 perfusionists. While this is not sufficient to address the burden of disease, the long-term plan is to continue scaling up the workforce to meet this burden over the coming 10 years. Substantial investments were dedicated to the recruitment of expatriate faculty, ensuring the inclusion of at least one full-time specialist in each focal domain, including specialised nurses, perfusionists, paediatric intensivists, paediatric cardiologists and a paediatric cardiac surgeon. In 2021, the University of Rwanda inaugurated subspecialty fellowship programmes, marking a noteworthy milestone. These programmes, encompassing adult cardiology and paediatric cardiology, attracted two cohorts of fellows each, currently immersed in a comprehensive training at tertiary-level teaching hospitals within Rwanda and further enriched by external rotations abroad in India, Vietnam, France and the USA. King Faisal Hospital Rwanda (KFH), situated in Kigali, Rwanda, is a quaternary-level teaching hospital currently expanding its capacity from 160 to over 600 beds. The hospital is also in the process of establishing its own medical college. Notably, KFH houses Rwanda’s sole catheterisation laboratory and operates the exclusive paediatric cardiac surgery programme in the country. In the past, the hospital depended on international surgical teams from Australia, Belgium, Canada, Israel and the USA for paediatric cardiac surgical care, operating on a mission basis and without a clear training model for the local team. However, there is a need to transition from this reliance to a fully sustainable and locally run programme that engages these partnerships while prioritising training. 
The transition emphasises south-to-south collaboration and a shift in how the international visiting teams are engaged. These efforts are done in collaboration with the KFH Foundation, whose mandate is to work together with KFH to mobilise resources to support initiatives in specialised clinical care, education and research. To build a locally run and sustainable programme in the coming 5 years, KFH leadership is prioritising south-to-south partnerships and reimagining the scope of international visiting teams to further increase access to paediatric cardiac care in Rwanda. Therefore, KFH, in partnership with the KFH Foundation, onboarded a cardiac surgery team from the Children's Heart Center and St Paul's Hospital Millennium Medical College, both in Addis Ababa, Ethiopia, which rotates every 6 months. The Ethiopian centre has two fully trained teams. The institution benefits from the partnership because, while the team rotating in Rwanda gains access to a high case load and continuous exposure in a familiar setting, the programme continues to operate uninterrupted in Ethiopia with the second trained team. This is made possible by close collaboration with the Ethiopian institution, specifically by recruiting from a pool of trained cardiac surgery professionals in a way that does not compromise clinical service delivery in Ethiopia, drawing on professionals who are willing to contribute to health workforce development in another country before returning home. There are also mutual financial benefits: the hosting institution in Rwanda pays the salaries of the visiting Ethiopian team. This interdisciplinary team includes a cardiac intensivist, a cardiac anaesthesiologist, a paediatric cardiologist, perfusionists, cardiac critical care nurses and cardiac operating theatre nurses. Furthermore, a full-time paediatric cardiac surgeon was hired in June 2022 to lead the programme.
Other south-to-south partnerships are also being leveraged with high-volume training sites, primarily in Kenya, Tanzania, India and Ethiopia. This allows for trainees to have hands-on exposure in settings with similar disease profiles, making them better prepared to return to Rwanda and further develop the local programme. In the case of KFH, the benefits of south-to-south collaboration have resulted in a win–win situation for the institutions involved. Four overarching benefits of this model that are evident at KFH include a contextual understanding, mentorship with local interest, strengthening equitable surgical access and moving towards programmatic sustainability. outlines the benefits of the partnership and the way forward as the team in Rwanda transitions to being fully trained and self-reliant. South-to-south partnerships bring teams with a strong understanding of the local environment, making integration into the system a smoother process. The rotating Ethiopian team understands the context, including disease demography and disease presentations typical for African patients. For example, they come with ample experience in managing late-presenting CHD cases. They are also experienced in the management of congenital and acquired heart disease in the absence of advanced diagnostic and therapeutic tools. They also have exposure to rheumatic heart disease, which is the most common acquired heart disease seen in developing countries and rarely seen in the West. This model also allows for strengthened mentorship, addressing the challenge of limited training and hands-on patient access for African health professionals in Western hospitals, which only allow for direct patient contact after incountry training or certification. This model allows local professionals to function as trainees in their own contexts and hospitals. 
With paediatric cardiac care in the country, junior members of the team or prospective residents or fellows can observe and be inspired by activities to which they previously had no exposure. Regarding equitable access to care, the programme provides an alternative to overseas surgery, which is less convenient and more costly. Prior to the establishment of this programme, Rwanda referred paediatric cardiac patients abroad through the national medical referral board if patient care could not be coordinated with the visiting surgical team schedules. This posed logistical and financial challenges both to the Government of Rwanda and to the patients. Having the programme hosted in Rwanda allows for increased and more equitable access to care, as well as significant cost savings to both the families and the government. Finally, this partnership model promotes longer-term programme sustainability in Rwanda. The nature of the programme raises the level of all clinical services, including the blood bank, intensive care unit (ICU), resuscitation team, supply chain and procurement systems, and biomedical engineering, among others. Building a cardiac team that is committed to sustaining a programme ultimately creates a team that continues to serve the local and regional community. Having a strong cardiovascular service in the hospital raises the quality of care at the hospital. Since its inception in October 2022 and over the first year of the programme, the team has performed over 170 paediatric cardiac surgeries. Of these, 77% were performed by the local team without reliance on international collaborators, with two deaths, or a 1.5% mortality rate. This is a significant milestone compared with the 18 paediatric cardiac procedures that were performed the year before relying on visiting teams, with the rest being referred abroad. 
When compared with the expected mortality rate of 2% per the Society of Thoracic Surgeons' database, the mortality rate of 1.5% is well within the acceptable range. While mortality is a robust indicator for evaluating the quality of the programme, morbidities and complications will serve as additional indicators going forward. Under this south-to-south collaboration, the team established a progression of clinical case complexity to ensure that local professionals are trained and strengthened in a progressive way. This is outlined in , which highlights both the increasing complexity (eg, procedures and patient age) and frequency of procedures to build the team's capacity over time and further reduce reliance on visiting teams. The inverted triangle demonstrates the greatest volume of straightforward procedures in stage 1, building to more complex and rare procedures through stage 3. The team started with simple cases, such as patent ductus arteriosus ligations, atrial septal defect repairs and ventricular septal defect repairs in children weighing over 15 kg. After 6 months, they advanced to more complex cases, such as ventricular septal defects in younger and smaller patients, coarctation repairs and tetralogy of Fallot repair, with surgical days two to three times per week. Notable procedures include three cases of arterial switch performed for the first time in Rwanda, as well as other neonatal procedures for aortopulmonary window, pulmonary atresia with intact ventricular septum requiring surgical creation of forward flow at a neonatal age, and four neonatal coarctation repairs, with the youngest patient being a 4-day-old neonate. A substantial cohort of 36 health professionals is undergoing training, while the Ethiopian team continues to collaborate with the Rwandan team at KFH.
These 36 trainees include 3 individuals specialising in cardiac surgery, 16 critical care nurses, 4 operating theatre nurses, 5 perfusionists, 5 intensivists and 3 cardiac anaesthesiologists. Locally, there are two paediatric cardiology fellows and six adult cardiology fellows expected to complete their training within the next 2–3 years. Additionally, various other health professionals, such as biomedical engineers, nutritionists, physiotherapists and administrators, have received specialised cardiac exposure training, primarily in Israel, Tanzania, Kenya, Ethiopia and India. Alongside the strengthening of south-to-south partnerships, KFH still engages regularly with visiting surgical teams. However, the objective of their engagement is shifting towards capacity and systems building. Maintaining its long-term cooperation with these teams, KFH continues to host open heart and catheterisation trips to treat the sickest patients under the leadership of the local team. The goal of this model is to broaden the case variety and complexity of patients treated in Rwanda and for the programme to be led by the local team. International engagements are designed to promote self-sustainability, with clear long-term and short-term objectives. Significant progress has been made towards strengthening the hospital's infrastructure to accommodate this programme, including establishing a dedicated cardiac ICU and strengthening the operating theatre infrastructure. However, there is still no dedicated paediatric cardiac ICU or perioperative care environment. Supply chain and stock availability are also a challenge, with local suppliers not having the required surgical supplies, leaving the programme to rely on the visiting teams to bring them. Furthermore, ensuring that screenings, referrals and procedures are covered by insurance schemes is a challenge, and KFH is in ongoing discussions with insurance companies to advocate for increased access to the service.
KFH aims to have a sustainable paediatric cardiac surgery programme run entirely by Rwandan professionals within the coming 5 years. As the programme matures, the expectation will be twofold: to deliver consistently high-quality clinical services and to teach the local staff to be independent over the same period. Once the trained Rwandan workforce returns or completes their fellowships, the visiting team plans to support them for a year and then transition out. The reciprocal benefit for Ethiopia is that these professionals are exposed to consistent cardiac surgical care in another setting as mentors, without compromising the service in Ethiopia. Priorities for the way forward include scaling up training and the scope of clinical care, establishing and implementing locally run training programmes, and strengthening research output. In addition to strengthening clinical and research activities, emphasis is placed on safety and quality infrastructure. Specifically, this includes establishing a national registry and strengthening the culture of meaningful quality improvement activities in the hospital and across the health system. Following the same model, efforts are also underway to strengthen and scale up the adult cardiothoracic programme alongside the paediatric programme to address the full spectrum of cardiovascular clinical care. As general surgeons and other health professionals undergo subspecialty surgical training abroad in both adult and paediatric care, the team based in Rwanda is working with international partners to develop and implement specialised cardiac training programmes that are facilitated and accredited locally. This includes fellowships in adult and paediatric critical care and cardiothoracic surgery. Ultimately, the aim is for the Rwandan professionals undergoing training to become the future faculty of these locally run programmes to ensure programme sustainability.
Finally, efforts are underway to develop an accompanying research programme to conduct and publish more research on paediatric cardiac surgical care in Rwanda and the region. One study found that while 53% of African countries, including Rwanda, contribute to cardiac surgery publications, only 3% of these surgical papers are specifically about cardiac surgery. This will allow for the programme to continue to grow through evidence-based interventions. Through this south-to-south collaboration and training model, and as the first paediatric cardiac surgery programme of its kind in the region, Rwanda is well positioned to have a sustainable and fully locally run cardiac surgery programme within the coming 5 years. Once the programme is run locally, efforts will be underway to establish a wider network of paediatric cardiac surgery service and training, with the aim that the Rwandan workforce can become the future mentors to programmes across the region to further expand on this south-to-south model. |
Effectiveness of a Cloud-Based Telepathology System in China: Large-Sample Observational Study | 2b5ad018-b737-4e65-8597-818ea60b747b | 8367172 | Pathology[mh] | Pathology diagnoses have been widely recognized as a gold standard for confirming diseases. A precise and timely diagnosis is an indispensable precondition for further therapies. However, there is a critical shortage and misdistribution of senior pathologists in resource-limited countries; China faces this challenge. According to statistics from the National Ministry of Health, there are 9841 licensed pathologists in China, but nearly 70% work in tertiary hospitals located in large cities. Pathologists, especially senior and professional ones, are urgently needed in rural and remote areas. To obtain a confirmed diagnosis and key guidance for subsequent therapies, undiagnosed pathology sections in county-level hospitals are usually mailed or personally transported to senior pathologists in tertiary hospitals. The procedure is complicated and costly. Furthermore, the valuable pathology sections are also at risk of being destroyed or lost. Telepathology is a powerful tool that can be used to address this challenge by transmitting pathology images through telecommunication. The first use of telepathology can be traced back to the 1960s in the United States, when real-time black-and-white images were sent for interpretation. After half a century of development, many uses of telepathology have emerged, with powerful features that can promptly transmit static, dynamic, and whole-slide images. The whole-slide imaging system is the most advanced means to view scanned and digitized slides in their entirety, with high-resolution digital images and superior zoom capability. Whole-slide imaging has therefore been considered an ideal method for telepathology.
Despite considerable advancements, whole-slide imaging has several drawbacks , such as the need for large local storage space, network bandwidth constraints, cumbersome operation, occupied computing resources and large idle space, insufficient utilization rate, and difficulty in managing large digital files , which limits the application of whole-slide imaging. To compensate for the shortcomings of whole-slide imaging, we established a Chinese National Cloud-Based Telepathology system (CNCTPS) based on an existing, mature telemedicine system of the National Telemedicine Center of China, with dual video and data drives, which solved the difficulty of telemedicine data interaction . The CNCTPS was equipped with a deeply optimized storage model and analytical algorithm, which solved the problems of archiving classification and integration of large amounts of pathology data. This novel system can facilitate the prompt extraction and utilization of pathology data by doctors. The CNCTPS was deployed in December 2015; 83 hospitals were connected in total, making it the largest remote pathology network in China. Previous studies have mainly focused on the construction and optimization of telepathology systems or analysis of the effect of system use, with a limited sample , and to date, there are no unified criteria to evaluate a telepathology system. Perron et al evaluated diagnostic concordance and the turnaround time of a telepathology system. Chong et al showed that telepathology shortened turnaround time and provided significant financial savings. Similarly, Zhou et al reported service volume, turnaround time, and the concordance rate of a telepathology consultation service. However, these studies were mainly focused on one or some small and isolated aspects and did not comprehensively evaluate the service effect of the telepathology system. 
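As a rough illustration of the storage constraint behind these design choices: at a whole-slide scan resolution of about 0.5 µm/pixel, even a modest tissue area produces multi-gigabyte raw images. The 15 mm × 15 mm tissue area below is an assumed value for illustration, not a CNCTPS figure:

```python
# Back-of-envelope size of one uncompressed whole-slide image.
# Assumptions (illustrative): 15 mm x 15 mm tissue area, 0.5 um/pixel, 24-bit RGB.
um_per_pixel = 0.5
side_px = 15_000 / um_per_pixel       # 15 mm = 15,000 um -> 30,000 pixels per side
raw_bytes = side_px * side_px * 3     # 3 bytes per pixel (RGB)
print(f"~{raw_bytes / 1e9:.1f} GB per slide before compression")  # ~2.7 GB
```

Even after compression, per-slide files remain large, which helps explain why centralized cloud storage and a dedicated network link were design priorities for the CNCTPS.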
Thus, the aim of this study was to comprehensively evaluate the CNCTPS by evaluating 4 aspects—service volume, turnaround time, diagnosis accuracy, and economic benefits—which we chose after reviewing the literature on telepathology systems evaluation. The Cloud-Based Telepathology System Digitization of Pathology Sections Participating hospitals were equipped with digital slide scanners and matched computer workstations (KF-PRO-005, Konfoong Biotech International Co Ltd), for converting traditional glass slices into whole-slide imaging. Whole-slide imaging of a slide could be completed within 40 seconds under a ×20 objective (0.47 µm/pixel) and within 100 seconds under a ×40 objective (0.5 µm/pixel). Scanning control software (K-Scanner 1.6.0.14, Konfoong Biotech International Co Ltd) and image browsing and management software (K-Viewer 1.5.3.1, Konfoong Biotech International Co Ltd) were used to control scanning and viewing in whole-slide imaging. Data Storage and Transport Whole-slide imaging and all other telepathology data were stored in dedicated servers located at the National Telemedicine Center of China to ensure the safety and speed of data storage, as well as the efficiency of data access by users. The overall design was based on a cloud-computing infrastructure service system, which was characterized by elastic expansion, high availability, and high stability. The system is equipped with wide-area and multilayer architecture, including access, application service, and data center layers . A private network with a bandwidth of up to 20 MB was used for data transmission. Telepathology Management A web-based telepathology consultation system and mobile app were developed. Each has different functions for applicants, coordinators, and specialists. 
The web version was embedded in the telemedicine collaborative service platform of the National Telemedicine Center of China ; the app was independently developed and adapted for Android and iOS mobile phones and tablets . Telepathology Consultation There are 17 specialists from the Department of Pathology of the First Affiliated Hospital of Zhengzhou University who currently participate in telepathology consultation, including 8 professors and 9 associate professors specializing in different fields. The consultation is a voluntary activity with no charge. Pathologists from participating hospitals scanned and uploaded the slides to be diagnosed with patient information to the cloud platform. Coordinators from the National Telemedicine Center of China then assigned these cases to specialists (based on their specialties and fields), who are very likely to be able to provide confirmed diagnoses and valuable suggestions for corresponding therapies . CNCTPS Implementation Stages The system was implemented in 3 stages. First, the participating hospitals were selected, starting in August 2015, based on medical service quality, readiness of their pathology departments and telemedicine services, and their willingness to use telepathology. Second, system hardware and software were deployed. Starting in January 2016, our technicians installed and debugged the equipment in participating hospitals. Third, personnel training and system maintenance were conducted. This included intensive training at the National Telemedicine Center of China and on-site training in their hospitals. In addition, to ensure the normal operation of the system, technicians provide regular maintenance of the hardware and software in participating hospitals. System operation guides were also provided to the participating hospitals . 
Data Collection
To analyze the service volume, turnaround time, and economic benefits of the CNCTPS, we collected all case data submitted from January 2016 to December 2019, including demographic and clinical data, the submitting hospital, case submission time, report issuance time, the telepathology diagnosis, and the specialist who made the diagnosis. After removing test cases, 23,167 cases remained. Specimens had been taken from multiple organs, which were divided into 26 groups. To analyze the diagnostic accuracy of the CNCTPS, we followed up the final diagnoses of all 23,167 cases through the hospital information system of the First Affiliated Hospital of Zhengzhou University and found that 564 cases had also been diagnosed directly in that hospital. The diagnostic accuracy of telepathology was calculated using the final diagnosis at the First Affiliated Hospital of Zhengzhou University as the reference.

Statistical Analysis
Descriptive statistics were used to characterize the case data, including the demographic characteristics of the patients from whom samples were taken, diagnosis, histopathology type, and turnaround time. The median and interquartile range are reported for continuous data, and percentages are reported for categorical data. The Kruskal–Wallis H test was used to compare turnaround time across years, and the Nemenyi test was used for further multiple comparisons. The concordance between CNCTPS and final diagnoses was analyzed (complete concordance or variance with no clinical significance), and consistency was determined with the McNemar test and a consistency check. All statistical analyses were performed using R software (version 4.0.0; R Foundation for Statistical Computing). All tests were 2-tailed, and P<.05 was considered statistically significant.
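The Kruskal–Wallis H statistic used to compare turnaround times across years can be sketched in a few lines of pure Python. This minimal version omits the tie correction that R's `kruskal.test` applies, and the three sample groups below are hypothetical turnaround times, not study data:

```python
from itertools import chain

def kruskal_h(*groups):
    """Kruskal-Wallis H statistic (no tie correction): rank all observations
    jointly, then compare the per-group rank sums."""
    data = sorted(chain.from_iterable(groups))
    # Midrank for each value (ties get the average of their rank positions).
    ranks = {}
    i = 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1
        ranks[data[i]] = (i + 1 + j) / 2  # average of ranks i+1 .. j
        i = j
    n = len(data)
    return 12 / (n * (n + 1)) * sum(
        sum(ranks[x] for x in g) ** 2 / len(g) for g in groups
    ) - 3 * (n + 1)

# Hypothetical turnaround times (hours) for three years:
print(round(kruskal_h([30, 28, 25], [20, 18, 16], [10, 9, 8]), 2))  # 7.2
```

In practice, a significant H (as reported here, H=1433.62) only says that at least one year differs, which is why the Nemenyi post hoc test is then needed for the pairwise year-to-year comparisons.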
CNCTPS Service Volume
During the 4-year study period from 2016 to 2019, 23,167 cases were submitted to the CNCTPS for consultation. The service volume of the CNCTPS was n=2335 in 2016; n=4330 in 2017; n=7262 in 2018; and n=9240 in 2019, with an average annual growth rate of 41.04%. A total of 83 hospitals participated in the telepathology consultation service, and the number of participating hospitals grew from n=60 in 2016 to n=74 in 2019. Hospitals of different levels joined the CNCTPS, including 17 city-level and 66 county-level hospitals. Between 2016 and 2019, city-level and county-level hospitals applied for 2880 (2880/23,167, 12.43%) and 20,287 (20,287/23,167, 87.57%) consultations, respectively. The number of county-level hospitals applying for consultation increased from n=49 in 2016 to n=63 in 2019, and their service volume increased from n=2095 in 2016 to n=8317 in 2019. Among city-level hospitals, the number applying for consultations did not change, while the service volume showed an overall increasing trend.

Characteristics of Cases Submitted to the CNCTPS
The locations from which specimens had been taken were divided into 26 groups. Of the 23,167 patients represented by case data, 9519 (41.09%) were male and 13,648 (58.91%) were female.
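The reported 41.04% average annual growth rate is consistent with a compound rate computed over the four calendar years; the paper does not state its formula, so the exponent convention below is our reconstruction:

```python
# Reconstructing the reported 41.04% average annual growth rate.
# Assumption: compound growth over the 4 study years,
# i.e. (final / initial) ** (1/4) - 1, rather than the 3-interval CAGR.
volume_2016, volume_2019 = 2335, 9240
rate = (volume_2019 / volume_2016) ** (1 / 4) - 1
print(f"{rate:.2%}")  # 41.04%
```

The conventional CAGR over the three year-to-year intervals would instead be about 58%, so the study appears to use the number of years, rather than the number of intervals, as the exponent.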
The median age of the patients from whom specimens were taken was 53 years (mean 52.86 years, range 1 day to 98 years). There were 17,495 of 23,167 cases (75.52%) with confirmed diagnoses; 4779 of 23,167 cases (20.63%) needed further examination, and most of these (4007/4779, 83.85%) required immunohistochemical examination. The remaining 893 cases (893/23,167, 3.85%) could not be diagnosed, mainly because of poor slice quality and incomplete sampling. Among the 17,495 confirmed cases, 12,088 were benign lesions, 5217 were malignant lesions, and 190 were borderline lesions. In total, benign lesions were confirmed in 52.18% (12,088/23,167) of cases and malignant lesions in 22.52% (5217/23,167). The proportion of malignant lesions in the esophagus, lung/mediastinum, urinary tract, and thoracic cavity/pleura was higher than that of benign lesions. In the other 22 tissue types, the proportion of benign lesions was higher than that of malignant lesions.

CNCTPS Turnaround Time
The turnaround time, from transmission of whole-slide images to issuance of the diagnostic report, had a median of 16.93 hours (IQR 32.59; mean 24.93 hours, range 100 seconds to 167.97 hours). Experts' opinion reports were released within 12 hours in 10,244 of the 23,167 cases (44.05%) and within 72 hours in 21,286 cases (91.88%). The difference in the distribution of turnaround time across years was statistically significant (H=1433.62, P<.001). The median turnaround time decreased year by year, from 29.36 hours in 2016 to 9.75 hours in 2019, and the differences between subsequent years were statistically significant in pairwise comparisons (adjusted P=.01 between 2018 and 2019; adjusted P<.001 for the other comparisons).

CNCTPS Diagnostic Accuracy
Of the 564 cases diagnosed both by the CNCTPS and by pathologists at the hospital, 553 CNCTPS diagnoses were consistent with the final diagnosis made by the hospital pathologists; that is, the accuracy rate was 98.05%.
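The reported accuracy figures can be cross-checked from a 2×2 confusion matrix. The cell counts below are our reconstruction inferred from the published rates together with the reported 4 false-positive and 7 false-negative cases; the true-positive and true-negative counts (292 and 261) do not appear in the source:

```python
from math import comb

# Reconstructed 2x2 confusion matrix (an inference from the published rates,
# not raw study data): FP=4 and FN=7 are reported; TP=292 and TN=261 are the
# counts that reproduce every published metric for the 564 overlapping cases.
tp, fn, fp, tn = 292, 7, 4, 261
n = tp + fn + fp + tn                        # 564 cases

accuracy    = (tp + tn) / n                  # 0.9805
sensitivity = tp / (tp + fn)                 # 0.9766
specificity = tn / (tn + fp)                 # 0.9849
youden      = sensitivity + specificity - 1  # ~0.96
ppv         = tp / (tp + fp)                 # 0.9865
npv         = tn / (tn + fn)                 # 0.9739

# Cohen kappa between telepathology and final diagnosis.
p_o = accuracy
p_e = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / n**2
kappa = (p_o - p_e) / (1 - p_e)              # ~0.96

# Exact McNemar test on the 11 discordant pairs (two-sided binomial).
b, c = fp, fn
p_mcnemar = min(1.0, 2 * sum(comb(b + c, k)
                             for k in range(min(b, c) + 1)) / 2 ** (b + c))
print(round(accuracy, 4), round(kappa, 2), round(p_mcnemar, 2))
```

Running this reproduces the published accuracy (0.9805), kappa (0.96), and McNemar P value (.55), which supports the internal consistency of the reported results.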
The other 11 cases comprised 4 false positives and 7 false negatives; 5 of the 11 occurred in the uterus. The sensitivity and specificity were 97.66% and 98.49%, respectively, and the Youden index was 0.96. The positive and negative predictive values were 98.65% and 97.39%, respectively. No statistically significant difference was observed between the telepathology diagnosis and the final diagnosis (P=.55), which showed good consistency (κ=0.96, P<.001).

Economic Benefits of the CNCTPS
Telepathology consultation is free of charge and spares patients the need to travel to higher-level hospitals, saving consultation and travel costs; food costs are also lower locally. Thus, compared with traditional pathology consultation, diagnosis via the CNCTPS saves 378.5 RMB (approximately US $50) per patient. Given the annual number of telepathology consultation cases, the total saving is substantial: approximately US $300,000 per year.
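The approximately US $300,000 annual figure can be reproduced from the per-patient saving and the average annual case volume. The division of the 4-year total by 4 years, and the exchange rate implied by "378.5 RMB ≈ US $50", are our assumptions about how the paper averaged:

```python
# Reconstruct the reported ~US $300,000 annual saving for patients.
# Assumption: annual volume is the 4-year total averaged per year.
total_cases = 23_167
years       = 4
saving_rmb  = 378.5               # reported per-patient saving
rmb_per_usd = 378.5 / 50          # implied exchange rate (~7.57 RMB/USD)

annual_cases = total_cases / years             # ~5792 cases per year
annual_usd = annual_cases * saving_rmb / rmb_per_usd
print(f"~US ${annual_usd:,.0f} per year")      # roughly US $290,000
```

This reconstruction gives about US $290,000 per year, so the paper's "approximately $300,000" appears to be a rounded-up figure.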
Principal Results
The cloud-based system can quickly process data with large memory requirements, thereby overcoming the difficulties of managing large whole-slide imaging files. This study reported on one of the largest cloud-based telepathology systems in China and evaluated its operational results. The large sample size provides an in-depth practical understanding of cloud-based telepathology in China and suggests directions for further evaluation and improvement of such systems.
This system served 23,167 cases from 2016 to 2019. The median turnaround time was 16.93 hours, decreasing from 29.36 hours in 2016 to 9.75 hours in 2019. The diagnostic accuracy was 98.05%, and approximately US $300,000 was collectively saved by patients each year. The CNCTPS has proven to be highly reliable and plays an important role in facilitating the distribution of limited senior pathologist resources in China. A total of 83 hospitals are covered by the CNCTPS, making it the largest telepathology network in China. Compared with the 6, 24, and 60 workstations in other reported telepathology networks, the CNCTPS covers more medical institutions. More than 20,000 cases were diagnosed by the CNCTPS in 4 years. To the best of our knowledge, this is the largest sample size in a study of telepathology system use and operation, much larger than those in similar studies. The number of cases reviewed and the number of participating hospitals increased each year, consistent with findings reported by Chen et al and Zhou et al. Most cases (20,287/23,167, 87.57%) had been submitted by county-level hospitals because the shortage of pathologists in China's county-level hospitals is more severe than in city-level hospitals. A total of 893 cases could not be diagnosed by the CNCTPS, of which only 26 were complicated enough to require consultation at a higher-level hospital; the rest failed because of incomplete sampling and poor slice quality. Standard materials and good slice preparation are the main factors affecting telepathology diagnosis and require experienced pathology technicians. Although we conducted theoretical and practical operation training for pathology technicians in the early stage of CNCTPS construction, incomplete sampling and poor slice quality were still the main reasons for failed diagnoses. Strengthening the training of telepathology staff in the later stages of system operation is still needed.
In terms of histopathology type, a 10-year analysis of telepathology cases in Tanzania reported a higher proportion of benign cases (56.1% benign vs 40.8% malignant diseases). We reached a similar conclusion: the proportion of benign cases (12,088/23,167, 52.18%) was higher than that of malignant cases (5217/23,167, 22.52%). The average turnaround time of the CNCTPS was 24.93 hours, shorter than the 38 hours reported by Zhou et al and the 66 hours reported by Völker et al, but slightly longer than the 0.7 days (ie, 16.8 hours) reported by Chong et al. The majority of cases (14,835/23,167, 64.04%) were diagnosed within 24 hours, which is higher than the 61.5% reported by Chen et al and slightly lower than the 64% reported by Perron et al, while the proportions within 48 hours (82.88% vs 70.00%) and 72 hours (91.88% vs 80.00%) were higher than those reported by Perron et al. Nonetheless, the median turnaround time decreased annually during the 4 years, indicating that the CNCTPS operates well. Compared with the static images of early telepathology, whole-slide imaging allows the entire slide to be viewed in a manner that simulates microscopy. A recent meta-analysis showed that the weighted mean concordance rate between telepathology and conventional microscopy was 91.1% up to 2000 and 97.2% from 2000 onward. The increased concordance in recent years has been attributed to the growing use of whole-slide imaging. Reported diagnostic concordance rates between whole-slide imaging and conventional microscopy range from 89% to 100%, with an average of 96.9%. Our study demonstrated similar results (98.05%).
Moreover, no statistically significant difference was found (P=.55) between whole-slide imaging and traditional pathology diagnosis, and the consistency of diagnostic results was excellent, further confirming the accuracy of whole-slide imaging. Some cost-effectiveness studies have demonstrated that telemedicine can reduce costs, but not all have. Cost-utility and cost-effectiveness studies of telepathology are rare. Meléndez-Álvarez et al evaluated only the cost of their telepathology system, which saved US $410. Vosoughi et al evaluated the cost-efficiency of their real-time nonrobotic telepathology system, which saved US $10,767.10 per year. In our study, the cost savings for patients were estimated: during the 4 years of telepathology system operation, approximately US $300,000 per year was saved by patients.

Limitations
To the best of our knowledge, this is the first study to comprehensively evaluate the operation of a telepathology system based on a large sample. The CNCTPS showed fast responsiveness and high accuracy. However, owing to the limited information collected by the CNCTPS, this study did not analyze the reasons for cases with long turnaround times or for false-positive and false-negative diagnoses. In addition, only the costs saved for patients were evaluated in the economic analysis of the CNCTPS. The economic impact of telemedicine is a collaborative and complex process involving different economic, social, and political actors, and the construction of our system is a public welfare project initiated by the government and a leading hospital. Most of the digital slide scanners were donated to the participating hospitals, and the private network was free.

Future Work
Turnaround time and diagnostic accuracy are the main criteria used to evaluate a telepathology system, and further work is required to explore the factors that influence them.
First, it is necessary to analyze the causes of long turnaround times through a survey of pathologists, especially for cases with turnaround times longer than 72 hours. Second, further investigation of incorrectly diagnosed cases is needed. In addition, adding a follow-up module to the CNCTPS is necessary so that the final diagnosis of each case can be easily followed up. Finally, a user satisfaction survey, with thorough questionnaires or in-depth interviews, should be conducted in a subsequent study to improve the system.

Conclusions
The CNCTPS has proven to be highly reliable. It can provide participating hospitals with rapid telepathology diagnoses that are consistent with the final diagnosis. The application of this system reduces financial costs and time for patients and facilitates the distribution of limited senior pathologist resources in China. Therefore, we believe telepathology services will become more widespread in more regions worldwide, especially those with insufficient medical resources.
It can provide participating hospitals with rapid telepathology diagnoses that are consistent with the final diagnosis. The application of this system reduces financial costs and time for patients, facilitating the distribution of limited senior pathologist resources in China. Therefore, we believe telepathology services will become more widespread in regions worldwide, especially those with insufficient medical resources.
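The turnaround-time comparisons above involve a unit conversion (0.7 days = 16.8 hours) and proportions of cases diagnosed within 24/48/72-hour thresholds. A minimal sketch of how such summaries are computed — the helper names and the sample times are illustrative, not the CNCTPS code or records:

```python
# Sketch of the turnaround-time summaries discussed above. Helper names and
# the sample list are illustrative; the real figures come from 23,167 CNCTPS cases.

def days_to_hours(days):
    """Convert a turnaround time in days to hours (0.7 days -> 16.8 hours)."""
    return days * 24

def share_within(turnaround_hours, threshold_hours):
    """Proportion of cases diagnosed within the given threshold."""
    within = sum(1 for t in turnaround_hours if t <= threshold_hours)
    return within / len(turnaround_hours)

# The reported 64.04% within 24 hours is simply 14,835 / 23,167:
print(round(14835 / 23167 * 100, 2))  # 64.04

# With illustrative turnaround times (hours):
times = [6, 20, 30, 47, 70, 90]
print(round(share_within(times, 24) * 100, 2))  # 33.33
print(round(share_within(times, 72) * 100, 2))  # 83.33
```

The same `share_within` computation, applied to the full case list, yields the 82.88% (48 h) and 91.88% (72 h) figures quoted in the comparison with Perron et al.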
Advancing patient‐centered care: Recent developments in UEG's patient relations

Prevention was identified as a core priority for the digestive health community. Among the challenges identified were low health literacy at societal levels, exacerbated by inadequate awareness and/or tools for early diagnosis for some conditions, and the persistent lack of incentives at the societal level for addressing addictions and/or implementing measures to improve digestive health. Furthermore, the erosion of trust in medical information and healthcare professionals following the COVID pandemic underscored the urgent need for intervention. It was recognized that current measures targeting risk factors often place undue emphasis on individual responsibility, neglecting systemic changes and lack of basic living conditions (like unemployment, poor housing, unsafe neighborhoods, pollution etc.). In response to these challenges, the group rallied around a series of proactive measures aimed at effecting positive change. These included a commitment to investing more in education across all social groups to enhance health literacy and community awareness, particularly regarding early signs of diseases, healthy habits, and the critical role of healthy nutrition. Additionally, there was consensus on the necessity of enhancing healthcare professionals' training, particularly in areas such as nutrition. The group also advocated for addressing physical inactivity in the workplace through the implementation of designated exercise time and the provision of on‐site facilities to support employees. Furthermore, there was a resounding call for increased governmental and international investment in combating misinformation and disseminating credible, well‐targeted information to rebuild trust in scientific evidence.
This will improve the uptake of national vaccination and screening programs, and thereby improve health outcomes. Underlining the need for collaboration across various sectors, the group identified key stakeholders for engagement. These include actors of change such as parents and educators, who play pivotal roles in shaping health behaviors and attitudes from an early age. Additionally, actors of power such as local policymakers, the WHO, and EU institutions were highlighted as crucial collaborators in driving policy changes and implementing systemic interventions. Furthermore, the group underscored the importance of engaging with the food and agriculture industry, particularly in advising on topics of mutual interest, such as dietary requirements for coeliac patients. However, it was also noted that aligning with industries contributing to health harms, such as alcohol and tobacco, would be contrary to the group's mission, emphasizing the need for ethical partnerships in pursuit of improved digestive health outcomes. The group that discussed challenges related to diagnosis identified persistent barriers to accessing primary care, characterized by a lack of specific knowledge among healthcare providers and the burden of repeated examinations. Additionally, the group emphasized the widespread difficulty in accessing timely diagnoses across many countries, highlighting the detrimental impact of misdiagnosis on patients' quality of life and mental well‐being. Furthermore, concerns were raised regarding the risk of overdiagnosis, particularly in cases where treatments may impact patients' quality of life, as observed in certain cancer diagnoses. Knowledge of age‐related signs and symptoms affects the efficiency of timely diagnosis, especially in younger patients. In response to these challenges, the group presented a series of best practice examples aimed at improving healthcare delivery and patient outcomes.
These included prioritizing prevention as the most cost‐effective investment, increasing awareness of early disease signs within healthcare settings, and championing patient‐centered care by involving patients as partners in their illness experiences. The group also underscored the importance of anti‐stigma training for healthcare providers and the prioritization of transitional care services. Looking ahead, the group emphasized the critical importance of collaboration between healthcare professionals and patient representatives in developing evidence‐based guidelines, which should be translated at national levels and made accessible to non‐specialists. Furthermore, joint advocacy projects involving key stakeholders such as patient representatives, healthcare professionals, and policymakers were deemed essential. Collaboration with educational institutions and representatives from the school system was also highlighted as a vital avenue for promoting health literacy and early intervention initiatives within communities. When discussing quality of care, the biggest challenge reported by the patient community was the prevalent stigma experienced by patients, which takes a heavy toll on their well‐being, particularly among those diagnosed with liver diseases and inflammatory bowel diseases. The group also highlighted the detrimental effects of a lack of knowledge, which often manifests in patients experiencing feelings of guilt and shame. Additionally, the strain placed on healthcare systems was identified as a significant factor impacting the quality of life of healthcare professionals, their ability to deliver personalized and compassionate care, and the well‐being of family members caring for patients. In response to these pressing issues, the group formulated a set of recommendations aimed at tackling these challenges head‐on.
These recommendations included prioritizing self‐management education for patients, fostering effective communication between healthcare professionals and patients, and ensuring the active involvement of patients in setting standards of care. Furthermore, the group advocated for the implementation of holistic care approaches and the reduction of logistical and bureaucratic barriers within healthcare systems. Moreover, the identification and implementation of transitions of care interventions were deemed crucial steps toward improving the quality of life among patients and alleviating the burden on healthcare systems and caregivers alike. Through concerted action on these recommendations, the group endeavors to address the multifaceted challenges faced by individuals within the digestive health community, fostering improved outcomes and well‐being for all stakeholders involved. In conclusion, the collaborative efforts initiated by the first Digestive Health Roundtable mark a significant step forward in enhancing patient relations within the digestive healthcare landscape. As we navigate the complexities of modern healthcare, it is imperative that we continue to prioritize the voices and experiences of patients, recognizing them as invaluable partners in the pursuit of improved health outcomes and quality of care. Moving forward, we remain committed to nurturing these relationships, advocating for the integration of joint recommendations into policy‐making and clinical practice, and ensuring that every individual receives optimal care. The authors have no conflicts of interest to declare.
Aberrant p53 immunostaining patterns in breast carcinoma of no special type strongly correlate with presence and type of TP53 mutations

Mutations in the tumor suppressor gene TP53 can be identified in approximately 20–40% of all breast carcinomas (BCs), with different frequencies in the established molecular subtypes . TP53 mutations have been predominantly associated with basal-like tumors, but also occur in HER2 + and in luminal-like (HR + HER2 −) BCs . Mostly because of its dominant association with basal-like carcinoma, this genetic alteration has been associated with decreased survival in metastatic BC . The TP53 -encoded protein p53 is known to play an important role in cell cycle arrest, regulation of apoptosis, the response to genotoxic stress and DNA alterations . TP53 mutations result in genetic instability with increased somatic mutations, unbalanced DNA copy number variations and multiple chromosomal alterations . Clinical studies have demonstrated that TP53 mutations are associated with poor prognosis also in HR + HER2 −BC . This can in part be attributed to the significant correlation between TP53 mutations in HR + HER2 −BC and resistance to endocrine therapies, including antiestrogens and aromatase inhibitors . Recently, the analysis of a cohort of primary tumors treated with preoperative short-term endocrine therapy showed a reduced response rate to endocrine therapy in cases with TP53 mutations, as measured by an insufficient decrease of the proliferation marker Ki-67 in the resection specimen compared to the diagnostic core biopsy . More recently, TP53 mutations have also been identified as an independent factor contributing to a high 21-gene recurrence score . In conclusion, patients carrying TP53 mutations in early HR + HER2 −BC are more likely to experience recurrence and distant metastasis and to have shorter overall survival under adjuvant endocrine therapy .
Based on these findings, evaluation of the TP53 status might influence therapeutic decisions in HR + HER2 −BC in the future. However, TP53 mutation analysis is currently not part of the standard diagnostic workup outside of clinical studies, and risk profiling beyond standard clinical-pathological parameters relies primarily on commercial gene expression profiling. Given the potential future role of TP53 status and the cost and technical effort of mutational profiling, a fast and broadly available screening tool for TP53 alterations is desirable . In the last decade, p53 IHC has become an accepted surrogate marker for TP53 status in other tumor entities such as endometrial or ovarian cancer . Major progress in the molecular classification of these tumors in the clinical setting was enabled by a robust and reproducible algorithm to interpret patterns of p53 staining rather than relying on a cutoff for overexpression of p53 alone . These aberrant patterns include overexpression (OE), defined as a continuous, band-like pattern of strongly stained tumor cell nuclei (mostly associated with missense mutations), complete absence (CA, resulting from truncating mutations) and cytoplasmic (CY) expression (reflecting truncating mutations that disturb the nuclear localization signal), whereas the wild-type pattern shows variable levels of nuclear expression in the tumor cell population. This approach has enabled the integration of IHC as a standard diagnostic marker for the “p53 abnormal” molecular subtype in endometrial cancer, which is associated with high-risk disease . However, the clinical significance of immunohistochemical pattern analysis of p53 expression in BC remains uncertain. A previous study demonstrated that p53 staining patterns have an adverse effect on survival when a bimodal distribution with extreme negative and extreme positive staining is considered .
In our study, we evaluated whether a more comprehensive evaluation of p53 staining patterns can predict the mutational status of TP53 in BC, including the consideration of the CY pattern as an additional form of aberrant expression. We aimed to correlate these patterns with specific types of mutations and to explore the characteristics of TP53 -mutated cases by correlating TP53 mutational status with histomorphological and molecular features including tumor grade, therapy-relevant subtype and PIK3CA mutational status.

Patients, samples and clinical data

This study retrospectively analyzed formalin-fixed paraffin-embedded (FFPE) tumor tissue samples from the Institute of Pathology and Neuropathology (Tübingen University Hospital). A total of 131 consecutive cases of female patients diagnosed with early-stage BC of non-special type (NST) (pT1-3 unifocal, N0M0) at our institution between 2010 and 2012 with available core biopsies and resection specimens without neoadjuvant treatment were collected. As part of routine workup, cases had been immunophenotyped for hormone receptors, proliferation rate (Ki-67) and HER2 status. Fluorescence in situ hybridization (FISH) had been performed according to the ASCO/CAP guidelines on cases with HER2 Score 2 + . All biopsies and resection specimens were re-evaluated histologically on hematoxylin and eosin (H&E)–stained slides by two experienced breast pathologists (A. S. and I. A. M-M) for Elston and Ellis score. For assessment of biological subtype, immunostains were re-evaluated and proliferation rate was calculated according to the recommendations of the Ki-67 Working Group . Medical records were retrieved, including age, previous medical history and follow-up. This study was performed according to the Declaration of Helsinki and was approved by the Ethics Committee of the Medical Faculty of the University of Tübingen (547/2021BO2).
p53 immunohistochemistry

IHC for p53 (DO-7, Dilution 1:400, Novocastra, Leica Biosystems, Wetzlar, Germany) was performed using an automated stainer (Ventana Medical Systems, Tucson, Arizona, USA) in accordance with the manufacturer's protocol. Assessment of p53 staining was carried out both on resection specimens and on core biopsies, independently by two pathologists following the recent recommendations for gynecological neoplasms, with aberrant staining defined as OE, CA and CY . In case of disagreement, stainings were re-evaluated by another senior pathologist (F.F.) to reach consensus.

DNA isolation

To enrich tumor cell content, tumors of the resection specimens were macroscopically dissected, and tumor cell content was estimated proportionally. Genomic DNA was extracted from macrodissected 5 µm paraffin sections using the Maxwell® RSC DNA FFPE Kit and the Maxwell® RSC Instrument (Promega, Madison, WI, USA) and quantified with the Qubit Fluorometer employing the Qubit dsDNA HS Assay Kit (Thermo Fisher Scientific, Waltham, MA, USA), according to the manufacturer's protocol. Quality control polymerase chain reaction (PCR) was performed to determine the amplifiable DNA length . Only cases with at least 100 base pairs (bp) amplifiable DNA were included for NGS analysis. Core biopsies were used instead, in cases where resection specimens had poor DNA quality (< 100 bp).

Targeted NGS analysis

Targeted sequencing was performed using the Ion GeneStudio™S5 system (Thermo Fisher Scientific, Waltham, MA, USA). NGS analysis was performed using two panels − the Ion AmpliSeq TP53 Community Panel and an Ion AmpliSeq™ custom PIK3CA panel from Thermo Fisher Scientific covering the entire coding regions of TP53 (NM_000546.6) and PIK3CA (NM_006218.4), respectively (summarized in the Supplemental Table and ).
Amplicon library preparation and semiconductor sequencing were performed according to the manufacturer’s manuals using the Ion AmpliSeq Library Kit version 2.0, the Ion Library TaqMan Quantitation Kit on the LightCycler 480 (Roche, Basel, Switzerland), the Ion 540 Kit–Chef on the Ion Chef and the Ion 540 Chip Kit (Thermo Fisher Scientific). Output files were generated by Torrent Suite (version 5.16.1). Variant calling was performed using the Ion Reporter Software (version 5.20.2.0; Thermo Fisher Scientific). Variants were visualized using the Integrative Genomics Viewer (IGV, version 2.16.2; Broad Institute, Cambridge, MA) to exclude panel-specific artifacts. For variant calling, standard settings were used (no allelic frequency detection limit threshold). Variants were considered at a variant allele frequency (VAF) of > 10% and a coverage of at least 91%. The National Center for Biotechnology Information single-nucleotide polymorphism database (dbSNP; including GnomAD, ExAC and TOPMED) was used to exclude SNPs.

Statistical analysis

Statistical analysis was performed using JMP SAS 15.1.0 (SAS, Cary, NC, USA) and R v. 4.0.5 (RStudio Team (2022); RStudio: Integrated Development Environment for R. RStudio, PBC, Boston, MA URL http://www.rstudio.com/ ). Categorical variables were described using frequencies and percentages. Numerical variables were expressed as either mean and standard deviation (± SD) or median and interquartile range (IQR), according to the distribution of the data. Normality of distribution was assessed by testing kurtosis and skewness, as well as by QQ plots. Chi-square test was used to assess categorical variables, and kappa tests were performed to measure agreement. For survival analysis, ten cases were excluded due to a previous diagnosis of cancer. Disease-free survival (DFS) was defined as the time from diagnosis to the date of any disease recurrence (local, regional or distant), excluding death.
Overall survival (OS) was defined as the time from diagnosis to the date of death from any cause or to the date of censoring at the last time the subject was known to be alive. DFS and OS curves were illustrated by Kaplan–Meier regression. A log-rank test was used for time-to-event outcomes. Multivariate Cox regression analysis was performed to assess the clinical value of the TP53 status on DFS and OS; hazard ratio (HR) and their 95% confidence intervals (95% CI) were calculated. Multivariable survival analyses were conducted using Cox proportional hazards regression modelling to assess the magnitude of impact while adjusting for well-known clinicopathological risk parameters (age, tumor stage, molecular subtypes). All p -values were two-sided, and p < 0.05 was considered statistically significant.
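The variant-filtering thresholds described in the Methods (VAF > 10%; known polymorphisms excluded via dbSNP) can be sketched as a simple post-filter. The field names and example calls below are hypothetical and do not reflect Ion Reporter's actual output schema:

```python
# Hypothetical post-filter mirroring the reported thresholds: keep variants
# with VAF > 10% that are not known database SNPs. Illustrative only.

def keep_variant(variant, known_snps, min_vaf=0.10):
    """True if a called variant passes the VAF and SNP filters."""
    if variant["vaf"] <= min_vaf:      # variant allele frequency must exceed 10%
        return False
    if variant["id"] in known_snps:    # exclude dbSNP-listed polymorphisms
        return False
    return True

calls = [
    {"id": "TP53:c.524G>A", "vaf": 0.42},  # somatic candidate
    {"id": "rs1042522",     "vaf": 0.48},  # known polymorphism
    {"id": "TP53:c.916C>T", "vaf": 0.04},  # below the VAF threshold
]
known_snps = {"rs1042522"}
passed = [v["id"] for v in calls if keep_variant(v, known_snps)]
print(passed)  # ['TP53:c.524G>A']
```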
Patient characteristics

Cases were categorized as follows: HR + HER2 − (85 cases; G1, 21; G2, 42; G3, 22), HER2 + (21 cases) and TN (25 cases). Five TNBC cases exhibited the morphology of carcinoma with apocrine differentiation. Based on their morphological features, the remaining 20 TNBC cases were classified in accordance with Weisman et al. as TNBC with prominent tumor infiltrating lymphocytes (TNBC TIL, 9 cases), TNBC with large central acellular zone (TNBC LCAZ, 6 cases) and TNBC not otherwise specified (TNBC NOS, 5 cases) . The median age of the patients was 59 years (range, 26–89 years). The tumor median size was 18 mm (range, 3–103 mm). The T stage ranged from pT1-3. The numbers of different grading and nuclear grade per subgroup are shown in Table .

p53 immunohistochemistry

The majority of cases exhibited a wild-type staining pattern (79/131, 60.3%) with variable numbers of positive tumor cells with heterogenous nuclear staining intensity. Aberrant staining patterns were detected in 52/131 cases (39.7%), interpreted as indicating the potential presence of a TP53 mutation. The aberrant staining patterns were classified as OE, CA and CY (Fig. ).
OE refers to strong, homogeneous and band-like staining of all well-fixed tumor cell nuclei, whereas CA denotes a total loss of staining in tumor cells, with preserved variable staining in non-tumor cell nuclei as an internal control, and CY describes granular cytoplasmic staining with variable or missing nuclear staining. The highest prevalence of aberrant p53 staining was observed in TNBC (23/25, 92%, Fig. A left panel), while only 18.8% (16/85) were detected in HR + HER2 − ( p < 0.001, Table ). Strikingly, no aberrant staining was observed in the HR + HER2 − G1 subset. The most common aberrant staining pattern observed was OE (34/52, 65.4%), followed by CA (16/52, 30.7%) and CY (2/52, 3.8%). OE was the most common aberrant staining pattern in all subgroups, and only two TN cases showed CY staining (Fig. A right panel). The staining patterns between matched resection specimens and core biopsies showed complete agreement (124/124, 100%). In three cases, either the core biopsy or the resection specimen could not be evaluated due to an insufficient quantity of invasive tumor cells on the slide. In another four cases, no corresponding core biopsy was available. Therefore, in a total of seven cases, the staining pattern was analyzed in a single sample, either the core biopsy or the resection specimen.

Next-generation sequencing of TP53 and PIK3CA

In total, TP53 mutations were found in 53/126 amplifiable cases (42.1%). DNA extracted from the resection specimens was primarily used for NGS. If the DNA quality was insufficient, the corresponding core biopsy was used instead. In five samples, mutation status could not be determined due to poor DNA quality, both in resection specimens and in core biopsies. The TP53 mutation rates observed in the different subtypes, namely HR + HER2 − , HER2 + and TNBC, were 21.3%, 61.9% and 92.0%, respectively.
The two TNBC cases lacking a TP53 mutation exhibited the morphological characteristics of carcinoma with apocrine differentiation. Consequently, the five TNBC cases with apocrine differentiation demonstrated a lower mutation rate (3/5, 60%) than the remaining 20 TNBC cases subclassified as TNBC TIL, TNBC LCAZ and TNBC NOS, each of which had a 100% mutation rate. Most of the TP53 mutations (41/53, 77.4%) were found in the DNA-binding domain (Fig. B). The majority of the TP53 mutations were classified as missense mutations (30/53, 56.6%), followed by truncating mutations (22/53, 41.5%) and one inframe mutation (1/53, 1.9%, Supplemental Table ). Truncating mutations included nine splice site mutations, eight nonsense mutations, three frameshift deletions and two frameshift insertions. The TN subtype had the highest proportion of truncating mutations (Fig. C). Interestingly, missense mutations were more common in the HR + HER2 − and the HER2 + subgroups (Table ). In addition to the NGS analysis of TP53 , we also sequenced PIK3CA . In total, PIK3CA mutations were found in 54/126 amplifiable cases (42.9%). Almost half of these mutations were found in the hotspot p.His1047Arg/Leu (26/54, Supplemental Fig. ). The majority of mutations identified, 50/54 (92.6%), were classified as missense mutations and only four were classified as inframe mutations (7.4%).

Concordance between p53 immunostaining and TP53 mutational status

Overall, the comparison of the protein and genotype level showed a sensitivity of 96.2% and a specificity of 100% for the immunohistochemical detection of TP53 mutations. Furthermore, there was a significant level of agreement between missense mutations and OE, as well as between truncating mutations and CA (Cohen’s κ 73% and 76%, respectively).
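The sensitivity and specificity figures follow directly from the counts reported in this study: of 53 TP53-mutated cases, two were missed by IHC, and none of the 73 wild-type cases (126 amplifiable minus 53 mutated) showed an aberrant pattern. A minimal check using the standard definitions (a sketch, not the authors' statistics code):

```python
# Sensitivity/specificity of aberrant p53 staining as a surrogate for TP53
# mutation status, from the counts reported above (standard formulas).

def sensitivity(tp, fn):
    """True positives / all truly positive cases."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True negatives / all truly negative cases."""
    return tn / (tn + fp)

mutated, false_negatives = 53, 2       # two missense cases without aberrant IHC
wild_type, false_positives = 73, 0     # no wild-type case with aberrant IHC

sens = sensitivity(mutated - false_negatives, false_negatives)   # 51 / 53
spec = specificity(wild_type - false_positives, false_positives) # 73 / 73

print(round(sens * 100, 1))  # 96.2
print(round(spec * 100, 1))  # 100.0
```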
NGS analysis revealed that the two cases exhibiting CY staining in IHC both had truncating mutations within the nuclear localization signal domain of p53 (p.Arg306Ter and p.X306_splice), likely resulting in mislocalization of the protein. Only two HR + HER2 − cases of grade 2 were false negatives by IHC, both carrying missense mutations (p.Ser116Cys and p.Arg175His, Fig. ).

Histopathological features of TP53 -mutated cases and association with PIK3CA mutational status

TP53 -mutated cases exhibited higher nuclear pleomorphism ( p < 0.001) and grade (G3, 73.6%; G2, 26.4%; G1, 0%) compared to wild-type cases (G3, 21.9%; G2, 52.1%; G1, 26.0%) ( p < 0.001, Fig. ). Similar trends were evident in HR + HER2 − ( p < 0.001 and p < 0.04). TP53 mutations were also linked to a high Ki-67 proliferation index ( p < 0.001). TP53 and PIK3CA mutations showed an inverse correlation: in contrast to the TP53 mutation rates, the subgroups of HR + HER2 − , HER2 + and TNBC displayed PIK3CA mutations in 55.0%, 38.1% and 8.0% of cases, respectively. Notably, there was a strong association between mutant TP53 and wild-type PIK3CA ( p < 0.001).

Prognostic impact of TP53 mutational status and aberrant immunostaining

The statistical analysis of DFS and mutation status is summarized in Supplemental Fig. . By Kaplan–Meier analysis of all patients, p53 mutation/expression status was not statistically associated with DFS, showing comparable DFS between TP53 -mutated and wild-type cases, as well as between p53 aberrant and wild-type staining (Supplemental Fig. A). By univariate analysis, p53 mutation and aberrant staining were associated with HR values of 2.6 and 2.9, respectively, but this was not statistically significant (Supplemental Fig. B). A slight disparity in DFS was observed when comparing only the p53 mutation/expression status of cases classified as HR + HER2 − ; however, this difference was not statistically significant (Supplemental Fig. C).
There was no significant difference in OS between TP53 -mutated and wild-type cases (Supplemental Fig. ).
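The DFS and OS curves referenced above are product-limit (Kaplan–Meier) estimates. A minimal stdlib sketch of the estimator on toy follow-up data (illustrative only, not the JMP/R code used in the study):

```python
# Kaplan–Meier product-limit estimator: at each event time t, survival is
# multiplied by (1 - events_at_t / number_at_risk). Sketch with toy data.

def kaplan_meier(times, events):
    """times: follow-up times; events: 1 = event (recurrence/death), 0 = censored.
    Returns a list of (event_time, survival_probability) steps."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    survival = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        j, d = i, 0
        while j < len(data) and data[j][0] == t:
            d += data[j][1]          # count events (not censorings) at time t
            j += 1
        if d > 0:
            survival *= 1 - d / at_risk
            curve.append((t, survival))
        at_risk -= j - i             # events and censorings both leave the risk set
        i = j
    return curve

# Toy follow-up data: events at t=1, 2, 3 and one censoring at t=2.
steps = kaplan_meier([1, 2, 2, 3], [1, 1, 0, 1])
print([(t, round(s, 3)) for t, s in steps])  # [(1, 0.75), (2, 0.5), (3, 0.0)]
```

The log-rank test used in the study then compares two such curves by contrasting observed and expected event counts at each event time across groups.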
OE was the most common aberrant staining pattern in all subgroups, and only two TN cases showed CY staining (Fig. A right panel). The staining patterns of matched resection specimens and core biopsies showed complete agreement (124/124, 100%). In three cases, either the core biopsy or the resection specimen could not be evaluated due to an insufficient quantity of invasive tumor cells on the slide. In another four cases, no corresponding core biopsy was available. Therefore, in a total of seven cases, the staining pattern was analyzed in a single sample, either the core biopsy or the resection specimen.

TP53 and PIK3CA

In total, TP53 mutations were found in 53/126 amplifiable cases (42.1%). DNA extracted from the resection specimens was primarily used for NGS. If the DNA quality was insufficient, the corresponding core biopsy was used instead. In five samples, mutation status could not be determined due to poor DNA quality in both resection specimens and core biopsies. The TP53 mutation rates observed in the different subtypes, namely HR+HER2−, HER2+ and TNBC, were 21.3%, 61.9% and 92.0%, respectively. The two TNBC cases lacking a TP53 mutation exhibited the morphological characteristics of carcinoma with apocrine differentiation. Consequently, the five TNBC cases with apocrine differentiation demonstrated a lower mutation rate (3/5, 60%) than the remaining 20 TNBC cases subclassified as TNBC TIL, TNBC LCAZ and TNBC NOS, each of which had a 100% mutation rate. Most of the TP53 mutations (41/53, 77.4%) were found in the DNA-binding domain (Fig. B). The majority of the TP53 mutations were classified as missense mutations (30/53, 56.6%), followed by truncating mutations (22/53, 41.5%) and one inframe mutation (1/53, 1.9%, Supplemental Table ). Truncating mutations included nine splice site mutations, eight nonsense mutations, three frameshift deletions and two frameshift insertions.
The TN subtype had the highest proportion of truncating mutations (Fig. C). Interestingly, missense mutations were more common in the HR+HER2− and HER2+ subgroups (Table ). In addition to the NGS analysis of TP53, we also sequenced PIK3CA. In total, PIK3CA mutations were found in 54/126 amplifiable cases (42.9%). Almost half of these mutations were located in the hotspot p.His1047Arg/Leu (26/54, Supplemental Fig. ). The majority of the mutations identified, 50/54 (92.6%), were classified as missense mutations, and only four were classified as inframe mutations (7.4%).

TP53 mutational status

Overall, the comparison of the protein and genotype level showed a sensitivity of 96.2% and a specificity of 100% for the immunohistochemical detection of TP53 mutations. Furthermore, there was a significant level of agreement between missense mutations and OE, as well as between truncating mutations and CA (Cohen's κ 73% and 76%, respectively). NGS analysis revealed that the two cases exhibiting CY staining in IHC had truncating mutations, both within the nuclear localization signalling domain of p53 (p.Arg306Ter and p.X306_splice), likely resulting in mislocalization of the protein. Only two HR+HER2− cases of grade 2 were false negatives by IHC, both carrying missense mutations (p.Ser116Cys and p.Arg175His, Fig. ).

Histopathological features of TP53-mutated cases and association with PIK3CA mutational status

TP53-mutated cases exhibited higher nuclear pleomorphism (p < 0.001) and grade (G3, 73.6%; G2, 26.4%; G1, 0%) compared to wild-type cases (G3, 21.9%; G2, 52.1%; G1, 26.0%) (p < 0.001, Fig. ). Similar trends were evident in HR+HER2− (p < 0.001 and p < 0.04). TP53 mutations were also linked to a high Ki-67 proliferation index (p < 0.001). TP53 and PIK3CA mutations showed an inverse correlation. In contrast, the subgroups of HR+HER2−, HER2+ and TNBC displayed PIK3CA mutations in 55.0%, 38.1% and 8.0% of cases, respectively.
Notably, there was a strong association between mutant TP53 and wild-type PIK3CA (p < 0.001).

Prognostic impact of TP53 mutational status and aberrant immunostaining

The statistical analysis of DFS and mutation status is summarized in Supplemental Fig. . By Kaplan–Meier analysis of all patients, p53 mutation/expression status was not statistically associated with DFS, which was comparable between TP53-mutated and wild-type cases, as well as between cases with aberrant and wild-type p53 staining (Supplemental Fig. A). By univariate analysis, p53 mutation and aberrant staining corresponded to hazard ratios of 2.6 and 2.9, respectively, but this was not statistically significant (Supplemental Fig. B). A slight disparity in DFS was observed when comparing only the p53 mutation/expression status of cases classified as HR+HER2−; however, this difference was not statistically significant (Supplemental Fig. C). There was no significant difference in OS between TP53-mutated and wild-type cases (Supplemental Fig. ).

Our study demonstrates that a comprehensive evaluation of p53 staining patterns, rather than an arbitrary cutoff, shows a high sensitivity (96%) and specificity (100%) in predicting the presence and type of TP53 mutations in BC of NST. These encouraging results indicate that p53 IHC could serve as a reliable surrogate marker to identify patients at higher risk for resistance to endocrine therapy and may be used as a cost-effective screening tool. Furthermore, we observed a significant association of TP53 mutations with high tumor grade, high nuclear grade, proliferative activity and TN status. Within the HR+HER2− group, we observed mutant TP53 in 21% of cases, with a clear preference for high-grade carcinomas. Moreover, we identified a negative association with PIK3CA mutations, suggesting distinct tumor clusters with activation of alternative pathways.
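As a hedged sketch, the headline concordance figures can be reproduced from the 2×2 IHC-versus-NGS counts implied by the text: 51 true positives and 2 false negatives among the 53 TP53-mutated cases and, given the reported 100% specificity, no false positives among the 73 amplifiable wild-type cases. The function name is ours, and the overall kappa computed here describes whole-table agreement, which is distinct from the per-pattern kappas (OE/missense, CA/truncating) reported above:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity/specificity of p53 IHC against the NGS reference standard."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    # Cohen's kappa: observed vs. chance-expected agreement on the 2x2 table
    n = tp + fp + tn + fn
    p_obs = (tp + tn) / n
    p_exp = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / n**2
    kappa = (p_obs - p_exp) / (1 - p_exp)
    return sensitivity, specificity, kappa

# Counts reconstructed from the text: 53 mutated cases, 2 missed by IHC,
# 73 wild-type cases with no aberrant staining.
sens, spec, kappa = diagnostic_metrics(tp=51, fp=0, tn=73, fn=2)
print(f"sensitivity={sens:.1%} specificity={spec:.1%} kappa={kappa:.2f}")
# prints: sensitivity=96.2% specificity=100.0% kappa=0.97
```

In practice such metrics would come from a statistics package (e.g., scikit-learn's `cohen_kappa_score`); the point here is only to make the arithmetic behind the reported 96.2%/100% explicit.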
Previous clinical studies attempting to define p53 staining as a surrogate to predict mutations in BC have relied on the OE and/or CA staining pattern, which has yielded different cutoff values, ranging from 10 to 50% positive cells, or CA only. In a previous study, a cutoff of 35% was described when assessing the OE pattern, resulting in a sensitivity of 65% and a specificity of 95%. However, cases with a CA pattern were not included, as this pattern was considered insignificant in HR+HER2− cases. Nevertheless, in our study, almost one-third of HR+HER2− cases exhibited CA staining, emphasizing the importance of employing a combination of staining patterns during the IHC assessment. In the present study, we propose the implementation of an IHC algorithm analogous to that previously described for ovarian and endometrial cancer. We validated aberrant pattern interpretation of p53 IHC by targeted sequencing of the entire TP53 gene in a well-defined series of BC of NST representing different therapy-relevant subtypes. This resulted in a strong agreement between the staining pattern and the type of mutation. Most cases carrying a TP53 mutation showed OE staining, predominantly associated with missense mutations. CA was observed in cases with truncating mutations, while CY staining was attributed to alterations in the nuclear localization signal. Specifically, both truncating mutations corresponding to CY staining (p.Arg306Ter and p.X306_splice), observed in TN cases, were found within the nuclear localization signalling domain (aa 305–322) of p53, which might lead to cytoplasmic accumulation of the protein, as previously described by Köbel et al. To the best of our knowledge, this is the first study to describe the CY pattern of p53 IHC in BC and to match this pattern with the underlying genetic alterations in the nuclear localization signalling domain of the TP53 gene.
In 2/53 cases with TP53 mutations, an aberrant IHC staining pattern could not be identified. Of these two false-negative cases, one contained the missense mutation p.Ser116Cys, which is considered a variant of unknown significance (VUS) according to the most recent updates in the respective databases. Presumably, this mutation still maintains partial wild-type function, which may lead to a wild-type-like staining pattern and might not have an impact on the cellular biology of the tumor cells. The second case showed the missense mutation p.Arg175His, a well-documented hotspot mutation that was detected by aberrant OE staining in another case within our cohort. The IHC staining was re-evaluated using both specimens, the core biopsy and the resection specimen; in this case, the discrepant result could not be attributed to fixation artefacts. Given that the cases in this cohort did not undergo neoadjuvant therapy, the interval between the removal of the core biopsy and the resection specimen was relatively brief. Consequently, the tissue of both samples can be considered biologically highly similar, which allows for their comparison and satisfactorily explains the 100% agreement of the staining patterns between core biopsies and resection specimens. This approach allowed us to demonstrate that both core biopsies and resection specimens can be employed for the evaluation of p53 status. The frequencies of aberrant p53 expression in our study, affecting 19% of HR+HER2−, 62% of HER2+ and 92% of TNBC cases, were consistent with the documented mutation rates for the different subtypes of BC. Additionally, in line with recent findings, TNBC cases with apocrine differentiation demonstrated a lower TP53 mutation rate than the overall TNBC cohort.
Evaluation of TP53 status in HR+HER2− cases may be of particular interest for future treatment decisions, especially in cases with favourable or intermediate pathological features. In recent studies, in addition to tumor stage, grading and hormone-receptor status, a variety of additional markers have been evaluated, including the 21-gene recurrence score, the PAM50 risk of recurrence score and changes in Ki-67 after short-term endocrine therapy. In this context, a rapid evaluation of TP53 status may assist in the establishment of an additional risk category in HR+HER2− cases. The reliable and widely available IHC pattern-analysis strategy of this study might facilitate larger clinical studies, which could provide the necessary clinical confirmation through larger retrospective analyses or inform future prospective trials that include the affordable p53 protein status as a secondary parameter in addition to gene expression assays. Moreover, approaches targeting TP53 mutations that were formerly deemed undruggable are now being subjected to rigorous investigation, with some already undergoing clinical trials. Consequently, TP53 mutation status may emerge as a pivotal predictive marker for therapies tailored to TP53-mutated BC. The prognostic impact of aberrant p53 IHC with a bimodal pattern, similar to our strategy but lacking the CY pattern, has been previously analyzed. Boyle et al. demonstrated that aberrant p53 staining is significantly associated with shorter OS and DFS. Interestingly, the strongest effect on survival was observed within the HR+HER2− group, independently of the results in the TNBC cases. In our study, no differences in DFS or OS were identified for aberrant p53 status, either in the complete cohort or in HR+HER2− cases.
This may be due to the limited number of cases in the survival analysis and the exclusion of patients with neoadjuvant treatment, a relevant segment of high-risk HR+HER2− cases that was not part of the study. Beyond the relatively small sample size of our cohort and the focus on BC of NST, further limitations of our study include the potential for tissue degradation due to the age of our samples, which exceeded 10 years. To ensure the integrity of the DNA during sequencing analysis, 5/131 samples yielding less than 100 bp in the quality control PCR were excluded. Additionally, IHC evaluation was limited to areas on the slides exhibiting adequately fixed tissue. In previous studies, TP53 mutations were associated with poor prognosis of primary BC and predicted potential endocrine resistance of the HR+HER2− subtype, making a suitable screening tool desirable. In our study, IHC for p53 with interpretation of specific aberrant staining patterns could reliably identify patients with mutant TP53 in a simple and affordable manner. The staining patterns matched the expected types of mutations in NGS, confirming the validity of this approach. Therefore, this strategy might facilitate future studies evaluating the impact of TP53 mutations on the benefit of specific therapeutic strategies. Eventually, as in endometrial carcinoma, p53 IHC might become part of the routine diagnostic panel in BC. Below is the link to the electronic supplementary material. Supplementary file1 (DOCX 1679 KB)
Gut microbes modulate the effects of the flavonoid quercetin on atherosclerosis
Gut bacterial metabolism of quercetin and other flavonoids results in several phenolic acids that can exert beneficial effects on the host. Consumption of quercetin is also known to impact gut microbiome composition. Despite the evidence suggesting that quercetin supplementation has protective effects on atherosclerosis and modifies gut microbiota composition, it is still unknown whether the gut microbiota contributes to the beneficial effects of this flavonoid on vascular disease. In addition, the bioavailability of flavonoids varies depending on the food matrix. Quercetin is lipophilic, and diets with high lipid content enhance its absorption. However, it is not known whether other food components, such as plant polysaccharides, which largely co-occur with flavonoids and modulate the gut microbiome, affect the beneficial effects of this flavonoid on atherosclerosis. Interestingly, a meta-analysis of human trials showed considerable inter-individual variability of cardiometabolic biomarkers in response to flavonoid supplementation, which may be associated with inter-individual differences in the gut microbiome and diet. In this study, we hypothesized that (i) the gut microbiota contributes to the beneficial effects of quercetin on atherosclerosis and that (ii) these effects are dependent on dietary plant polysaccharides. Our results suggest that bacterial metabolism and complex plant polysaccharides modulate the protective effect of quercetin on atherosclerosis.

The beneficial effects of quercetin on atherosclerosis are microbiome-dependent

We tested the effect of quercetin on atherosclerosis progression in conventionally raised (ConvR) and germ-free (GF) ApoE KO mice fed a low-fat, high-microbiota-accessible-carbohydrate (MAC) diet supplemented with 0.1% w/w quercetin (Supplementary Table ), starting at 6 weeks of age and maintained on the diet for 16 weeks (Fig. ). Atherosclerosis burden was analyzed in tissue collected from 22-week-old animals.
Unexpectedly, quercetin did not affect plasma lipid profiles in these mice (Fig. ). However, we observed an interaction between quercetin supplementation and the presence of microbes that resulted in reduced atherosclerotic lesions and reduced accumulation of macrophages in ConvR animals but not in GF mice (Fig. ). There was also a strong trend toward increased collagen accumulation in ConvR mice in the presence of quercetin that was not detected in GF mice (Fig. ); however, this did not reach significance.

Quercetin consumption modulates gut microbiome composition in ConvR mice fed a high-MAC diet

To investigate whether the effect of quercetin on atherosclerosis is associated with changes in gut microbiota composition, we characterized the cecal microbiomes of the ConvR ApoE KO mice discussed above using 16S rRNA gene sequencing. We found that mice consuming the high-MAC diet supplemented with quercetin showed significantly increased gut microbiota richness, as determined by the Chao1 index, relative to control (high-MAC, no quercetin) mice (Fig. ). Quercetin-fed animals also harbored more diverse microbiomes, as determined by the Shannon index (Fig. ). Non-metric multidimensional scaling (NMDS) of weighted UniFrac distances revealed a significant influence of quercetin (PERMANOVA; P = 0.017) on microbial community composition (Fig. B). Furthermore, linear discriminant analysis (LDA) effect size (LEfSe, Galaxy version 1.0) was performed to identify taxonomic differences in microbiota composition between the two groups of mice. Figure illustrates the differential phylogenetic distributions of microbial communities in these two groups.
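For reference, the two alpha-diversity indices used above can be computed directly from a vector of per-taxon read counts. The sketch below is our own minimal illustration with a toy community; the study itself would have used a dedicated pipeline (e.g., QIIME 2 or scikit-bio), and the function names are ours:

```python
import math

def chao1(counts):
    """Bias-corrected Chao1 richness estimate from per-taxon read counts."""
    s_obs = sum(1 for c in counts if c > 0)   # observed taxa
    f1 = sum(1 for c in counts if c == 1)     # singletons
    f2 = sum(1 for c in counts if c == 2)     # doubletons
    return s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))

def shannon(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over observed taxa."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

community = [120, 40, 7, 2, 1, 1, 1, 0]   # toy per-taxon counts
print(round(chao1(community), 2), round(shannon(community), 2))
# prints: 8.5 0.86
```

Higher Chao1 reflects greater estimated richness (rare singletons/doubletons imply unseen taxa), while Shannon additionally weights evenness, which is why the two indices are reported together.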
Taxa belonging to the Eggerthellaceae, Ruminococcaceae, and Desulfovibrionaceae families and the Parvibacter, Dorea, and Ruminiclostridium genera were increased in the quercetin-fed mice relative to control mice, whereas members of the Lactobacillaceae family were detected at lower levels in the presence of the flavonoid (Fig. , Supplementary Fig. ). Furthermore, atherosclerotic plaque areas were negatively associated with the Eggerthellaceae and Erysipelotrichaceae families and positively associated with the Lactobacillaceae family (Fig. , Supplementary Fig. ). Collectively, these results suggest that dietary quercetin increased bacterial richness and modified the abundance of several microbial taxa associated with atherosclerosis.

Microbial phenolic metabolites in blood are associated with atheroprotection

Bacterial fermentation of carbohydrates that reach the distal gut results in the production of SCFAs, including acetate, propionate, and butyrate, which have previously been associated with atheroprotection. Previous work suggests that flavonoids may influence the production of SCFAs. To start exploring potential mechanisms by which quercetin inhibits the development of atherosclerosis, we measured levels of SCFAs in cecal contents. Quercetin did not change cecal levels of acetate, propionate, and butyrate in ConvR mice (Supplementary Fig. ). We next analyzed phenolic metabolites in plasma samples using ultra-performance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS). A sparse partial least squares discriminant analysis (sPLS-DA) plot showed significant separation between ConvR and GF mice, with modest separation between control and quercetin-supplemented diets. Interestingly, there was no separation between GF control and GF quercetin animals (Fig. ). We also determined levels of quercetin and its derivatives (quercetin 3-O-glucuronide, quercetin 3-O-sulfate, isorhamnetin glucuronide) in the circulation.
Unexpectedly, there was little to no change in those metabolites (Supplementary Fig. ), suggesting that quercetin was further metabolized by gut microbes. Comparison of phenolic metabolites in ConvR mice consuming the control vs. quercetin-supplemented diet showed that several metabolites, such as benzoylglutamic acid, 3,4-dihydroxybenzoic acid (3,4-DHBA) and its sulfate-conjugated form, trans-4-hydroxy-3-methoxycinnamic acid (ferulic acid), and 3-methoxybenzoic acid methyl ester, were significantly increased by quercetin supplementation (Fig. ). This was also confirmed by Variable Importance in Projection (VIP) scores (Supplementary Fig. ) and correlation coefficients (Supplementary Fig. ). Moreover, atherosclerotic plaque areas from the ConvR mice consuming high-MAC diets (with or without quercetin) were negatively associated with hydroxyhippuric acid, benzoylglutamic acid, and 3,4-DHBA sulfate (Fig. ). Plasma levels of these metabolites were also negatively associated with macrophage area but did not correlate with collagen area (data not shown). Collectively, these results suggest that dietary quercetin increased several plasma phenolic metabolites derived from bacterial metabolism, including 3,4-DHBA, when provided in concert with dietary plant polysaccharides.

Quercetin does not impact atherosclerosis progression in mice consuming a low-MAC diet

We next tested the effect of quercetin on atherosclerosis progression in mice fed a low-MAC diet. ApoE KO mice were fed a low-MAC diet or a low-MAC diet supplemented with 0.1% w/w quercetin (Supplementary Table ), starting at 6 weeks of age and maintained on the diet for 16 weeks (Fig. ). It is important to note that this is a synthetic diet consisting of pure/semi-pure ingredients (e.g., casein), as opposed to the high-MAC diet, which consists of whole ingredients (e.g., ground wheat, ground corn middlings, dehulled soybean meal, etc.).
Interestingly, quercetin did not affect the plasma lipid profile, atherosclerotic lesion size, or macrophage or collagen levels in the aortic sinus of these mice (Fig. ). While quercetin supplementation resulted in microbiome changes on this diet (Supplementary Fig. ), it did not impact plasma levels of phenylacetic acid, benzoylglutamic acid, 3,4-DHBA, or its sulfated form (Supplementary Fig. ). Altogether, these results support the notion that MAC may support bacterial metabolism of quercetin and that bacterial metabolites derived from the flavonoid may contribute to its atheroprotective effects.

3,4-DHBA reduces the detrimental effect of LPS on human aortic endothelial cell (HAoEC) monolayer integrity

We examined whether 3,4-DHBA modulates inflammation by testing its effects on bone marrow-derived macrophages (BMDM). We focused on this metabolite because it is a well-known bacterial metabolite of quercetin, because we found it elevated in ConvR mice consuming quercetin, and because its sulfated form was negatively associated with disease (Fig. ). The effects of 3,4-DHBA on inflammatory cytokine production were tested: BMDM were initially stimulated with LPS and subsequently treated with 3,4-DHBA, followed by the addition of ATP. We found that 3,4-DHBA treatment did not impact the levels of IL-1β or IL-6 secreted by BMDM (Fig. ). We also examined the effects of 3,4-DHBA on vascular permeability, as increased permeability facilitates the entry of lipoproteins, inflammatory cells, and other macromolecules into the arterial wall, initiating and propagating the atherosclerotic process. Primary HAoEC were grown to confluence on transwell inserts and exposed to LPS with two different 3,4-DHBA concentrations for 24 h, and endothelial monolayer integrity was evaluated as transendothelial electrical resistance (TEER) using a voltohmmeter. We found that LPS significantly lowered TEER, whereas 3,4-DHBA supplementation attenuated the effects of LPS (Fig. ).
These results suggest that the microbial metabolite 3,4-DHBA may protect endothelial barrier integrity.

A large body of literature supports the notion that consumption of flavonoids decreases the risk of CVD. More recent studies have established that flavonoids impact the gut microbiome and have suggested that microbes impact their efficacy.
Our study provides causal evidence linking quercetin consumption, atherosclerosis, and the gut microbiome. Flavonoids are metabolized by phase I and phase II metabolism in the intestine and liver. In the colon, resident gut bacteria can convert unabsorbed flavonoids into small phenolic acids and aromatic metabolites , . The effects these metabolites have on the host are poorly described. Feeding studies with tracing of metabolic conversion suggest that flavonoid catabolites are readily absorbed in the colon, often possess longer half-lives, and reach substantially higher systemic concentrations than their parent compounds . These observations have increased interest in microbiota-generated metabolites, which might mediate the cardiometabolic effects of flavonoids. Degradation of quercetin by the gut microbiota involves C-ring fission, formation of 3-(4-hydroxyphenyl)propionic acid, and subsequent transformation to 3,4-dihydroxyphenylacetic acid . Further modification leads to 3,4-DHBA and 4-hydroxybenzoic acid. 3,4-Dihydroxyphenylacetic acid can also be dehydroxylated to 3-hydroxyphenylacetic acid or 4-hydroxyphenylacetic acid and phenylacetic acid, further degrading into various smaller products . Our semi-quantitative targeted phenol metabolomic analysis identified several microbiota-generated metabolites of quercetin, such as 3,4-DHBA, its sulfated form 3,4-DHBA sulfate, and trans-4-hydroxy-3-methoxycinnamic acid (ferulic acid), that were elevated in plasma from conventional animals consuming the diet supplemented with quercetin. These results are consistent with previous findings showing that 3,4-DHBA and ferulic acid are protective against atherosclerosis development in animal models , , whereas the effect of benzoylglutamic acid (also increased by quercetin in colonized mice fed the high-MAC diet) on atherogenesis has not been explored. 
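The degradation routes just described can be viewed as a small directed graph. The sketch below encodes only the conversions named in the text (simplified; real microbial metabolism is strain- and community-dependent, and stoichiometry and enzymes are omitted) and lists the catabolites reachable from quercetin:

```python
# Conversions as described in the text (simplified illustration only)
DEGRADATION = {
    "quercetin": ["3-(4-hydroxyphenyl)propionic acid"],  # via C-ring fission
    "3-(4-hydroxyphenyl)propionic acid": ["3,4-dihydroxyphenylacetic acid"],
    "3,4-dihydroxyphenylacetic acid": [
        "3,4-DHBA",                    # further modification
        "4-hydroxybenzoic acid",
        "3-hydroxyphenylacetic acid",  # dehydroxylation
        "4-hydroxyphenylacetic acid",
        "phenylacetic acid",
    ],
}

def downstream(start, graph):
    """Breadth-first traversal: every catabolite reachable from `start`."""
    seen, queue = set(), [start]
    while queue:
        for product in graph.get(queue.pop(0), []):
            if product not in seen:
                seen.add(product)
                queue.append(product)
    return seen

print(sorted(downstream("quercetin", DEGRADATION)))
```

Querying the graph recovers the full set of quercetin catabolites named above, including 3,4-DHBA and phenylacetic acid.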
Importantly, these metabolites were not increased by quercetin consumption in GF mice or in mice consuming the low-MAC diet, emphasizing the role of the gut microbiota and MAC in the generation of these metabolites. Furthermore, dysfunction of the endothelial lining of lesion-prone areas of the vasculature contributes to atherosclerosis lesion initiation and progression . We found that 3,4-DHBA lowered the detrimental effects of LPS on HAoEC barrier integrity. Altogether, these results suggest that microbial products of quercetin may contribute to the beneficial effects of the flavonoid observed in ConvR mice.

Dietary quercetin may alter gut microbial composition in part by stimulating the growth of specific bacteria . Similarly, our 16S rRNA sequencing data showed that quercetin increased microbiota richness and alpha diversity. Eggerthellaceae, Ruminococcaceae, and Desulfovibrionaceae families were highly enriched in the quercetin-fed mice. However, whether these taxa have the capacity to degrade quercetin is still unknown. Interestingly, Ellagibacter isourolithinifaciens, a recently isolated bacterium from human feces belonging to the Eggerthellaceae family, can metabolize ellagic acid into isourolithin A, suggesting that taxa in this family may similarly metabolize quercetin in the gut . Future studies using gnotobiotic mice colonized with a defined consortium of microbes will help clarify the role of flavonoid-metabolizing bacteria in host physiology and disease. Flavonoids are commonly mixed with different macromolecules, including carbohydrates, lipids, and proteins, that affect their bioaccessibility (i.e., the amount of an ingested nutrient available for absorption in the gut after digestion) and bioavailability (i.e., the proportion that is digested, absorbed, and used) . 
While the protective effects of quercetin on atherosclerosis have been previously described in mice , , , , in most cases western-type diets (i.e., high-fat, high-cholesterol diets) were used to exacerbate the disease. Quercetin is lipophilic, and the high lipid content of these diets enhances the efficiency of quercetin absorption . This may explain why we did not observe a reduction in atherosclerosis in mice fed the low-fat, low-MAC diet. Furthermore, our results suggest that quercetin’s effect on atherosclerosis is influenced by dietary plant polysaccharides. Although we did not define the mechanisms by which dietary plant polysaccharides enable quercetin to exert its action, it has been shown that they prolong gastric emptying time and delay the absorption of flavonoids. In addition, dietary fiber may reduce rates of flavonoid absorption, mainly by physically trapping the flavonoids within the fiber matrix in the chyme .

The current study has some limitations that should be addressed. First, this study used only male mice, precluding us from testing the effect of sex. Additionally, the low-MAC and high-MAC diets used in this study differ dramatically beyond their plant polysaccharide content, making it impossible to conclude that the differences in the effects of quercetin are due to carbohydrate accessibility to gut microbes. Likewise, the baseline lesion size was slightly higher in mice consuming the high-MAC diet, but the diets differ in many respects and the experiments using the two diets were done at different times, preventing direct comparisons. Furthermore, we tested the effect of only one metabolite derived from quercetin (i.e., 3,4-DHBA) in vitro; it is not clear that this metabolite reaches the plasma levels needed to elicit its effects in vivo or whether other bacterial metabolites made from quercetin contribute to the flavonoid’s effects. 
Lastly, we observed that 3,4-DHBA (but not 3,4-DHBA sulfate or other microbial-derived metabolites) was increased in GF mice consuming the control diet. This was unexpected, as this diet was not supplemented with quercetin. It is possible that the diet contains low levels of this compound that are detected in some of the samples. Despite these limitations, the work presented here shows that the protective effect of quercetin on atherosclerosis depends on the gut microbiota, and that atheroprotection by this flavonoid is associated with increased accumulation of phenolic acids in the blood. Further studies are warranted to clarify the metabolic processes underlying the generation of specific bioavailable, bioactive phenolic acid metabolites and to identify bacterial consortia that optimize the generation of these phenolic acids. These studies will facilitate the development of synbiotic approaches for preventing CVD.

Gnotobiotic husbandry

All GF ApoE KO mice were maintained in a controlled environment in plastic flexible-film gnotobiotic isolators under a strict 12 h light/dark cycle and received sterilized water and standard chow (LabDiet 5021; LabDiet, St Louis, MO) ad libitum until 6 weeks of age. Using traditional microbiology methods, the sterility of GF animals was assessed by incubating freshly collected fecal samples under aerobic and anaerobic conditions.

Animals and experimental design

Experiments: i) Six-week-old group-housed male ConvR or GF C57BL/6 ApoE KO mice were fed a standard grain-based chow diet composed of 18.6% (w/w) protein, 58.9% total carbohydrates including 14.7% neutral detergent fiber, and 6.2% fat (i.e., high-MAC diet, 3.1 kcal/g, TD.2018; Envigo, Supplementary Table ) or the high-MAC diet supplemented with 0.1% (w/w) quercetin (TD.150883; Envigo, Supplementary Table ) for 16 weeks. 
Dietary fiber in the high-MAC diet is derived from various plants, including ground wheat, ground corn, wheat middlings, dehulled soybean meal, and corn gluten meal. The experimental diets were sterilized by irradiation. ii) Six-week-old group-housed male ConvR ApoE KO mice were fed a defined diet composed of 17.7% (w/w) protein, 60.1% carbohydrate, and 7.2% fat (i.e., low-MAC diet, 3.8 kcal/g, TD.97184; Envigo, Supplementary Table ) or the low-MAC diet supplemented with 0.1% (w/w) quercetin (TD.150881; Envigo, Supplementary Table ) for 16 weeks. Littermates from multiple mating pairs were used in this study, and they were randomly assigned to groups at weaning. Due to the different diets used in the experiment, blinding was not feasible for the duration of the study. Blinding was implemented for the measurement of atherosclerosis. After 4 h of fasting, mice were euthanized at 22 weeks of age between Zeitgeber time 6–8. Mice were placed into a chamber filled with vapor of the anaesthetic isoflurane to induce unconsciousness, and blood samples were drawn by cardiac puncture, followed by cervical dislocation for euthanasia. All animals in the current study were handled and maintained in accordance with University of Wisconsin–Madison standards for animal welfare, and all protocols were approved by the university’s Animal Care and Use Committee.

Atherosclerotic lesion assessments

Atherosclerotic lesions were assessed as previously described . Briefly, mice were anesthetized, and the aorta was perfused with PBS. To determine the atherosclerotic lesion size at the aortic sinus, the samples were cut in the ascending aorta, and the proximal samples containing the aortic sinus were embedded in OCT compound (Tissue-Tek; Sakura Finetek, Tokyo, Japan). Five consecutive sections (10 μm thickness) taken at 100 μm intervals (i.e., 50, 150, 250, 350, and 450 μm from the bottom of the aortic sinus) were collected from each mouse and stained with Oil Red O. 
The atherosclerotic lesion size in the aortic sinus was expressed as the mean size of the 5 sections for each mouse. Immunohistochemistry was performed on formalin-fixed cryosections of mouse aortic roots using antibodies to identify macrophages (MOMA-2, 1:50; ab33451, Abcam, Cambridge, MA), followed by detection with biotinylated secondary antibodies (1:400; ab6733, Abcam) and streptavidin-horseradish peroxidase (1:500; P0397, Dako, Carpinteria, CA). Negative controls were prepared by substitution with an isotype control antibody. Staining with Masson’s trichrome was used to delineate the fibrous area according to the manufacturer’s instructions (ab150686, Abcam). Stained sections were digitally captured, and the stained area was calculated. Plaque area, Oil Red O-positive area, macrophage area, and fibrous area were measured using ImageJ software (National Institutes of Health, Bethesda, MD).

DNA extraction from cecal contents

DNA was isolated from cecal contents using a bead-beating extraction protocol . Mouse cecal samples were re-suspended in a solution containing 500 μl of extraction buffer [200 mM Tris (pH 8.0), 200 mM NaCl, 20 mM EDTA], 210 μl of 20% SDS, 500 μl phenol:chloroform:isoamyl alcohol (pH 7.9, 25:24:1), and 500 μl of 0.1-mm diameter zirconia/silica beads. Cells were mechanically disrupted using a bead beater (BioSpec Products, Bartlesville, OK; maximum setting for 3 min at room temperature) and centrifuged to separate phases, then the nucleic acids in the aqueous phase were precipitated by the addition of isopropanol. Following solubilization in 10 mM Tris/HCl (pH 8.0) + 1 mM EDTA, contaminants were removed using the QIAquick 96-well PCR Purification Kit (Qiagen, Germantown, MD, USA). Isolated DNA was eluted in 5 mM Tris/HCl (pH 8.5) and stored at −80 °C until further use.

16S rRNA gene sequencing

PCR was performed using universal primers flanking the variable 4 (V4) region of the bacterial 16S rRNA gene . 
Genomic DNA samples were amplified in duplicate. Each reaction contained 25 ng genomic DNA, 10 μM of each uniquely barcoded primer, 12.5 μl 2x HiFi HotStart ReadyMix (KAPA Biosystems, Wilmington, MA, USA), and water to a final reaction volume of 25 μl. PCR was carried out under the following conditions: initial denaturation for 3 min at 95 °C, followed by 20 cycles of denaturation for 30 s at 95 °C, annealing for 30 s at 55 °C, and elongation for 30 s at 72 °C, and a final elongation step for 5 min at 72 °C. PCR products were purified with the QIAquick 96-well PCR Purification Kit and quantified using the Qubit dsDNA HS Assay kit (Invitrogen, Oregon, USA). Samples were pooled at equimolar concentrations and sequenced by the University of Wisconsin–Madison Biotechnology Center with the MiSeq 2×250 v2 kit (Illumina, San Diego, CA, USA) using custom sequencing primers.

Microbiota analysis in QIIME2

Demultiplexed paired-end fastq files generated by CASAVA (Illumina) and a sample mapping file were used as input files. Sequences were processed, quality filtered, and analyzed with QIIME2 (version 2019.10) ( https://qiime2.org ), a plugin-based microbiome analysis platform . DADA2 was used to denoise sequencing reads with the q2-dada2 plugin for quality filtering and identification of ASVs (i.e., 100% exact sequence match). This resulted in 3,580,038 total sequences, with an average of 81,364 sequences per sample. Sequence variants were aligned with mafft using the q2-alignment plugin. The q2-phylogeny plugin was used for phylogenetic reconstruction via FastTree . Taxonomic classification was assigned using classify-sklearn against the SILVA 132 reference sequences . Alpha- and beta-diversity (weighted and unweighted UniFrac ) analyses were performed using the q2-diversity plugin at a rarefaction depth of 30,000 sequences per sample. Subsequent processing and analysis were performed in R (v.3.6.2), and data generated in QIIME2 were imported into R using Phyloseq . 
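Alpha diversity above was computed in QIIME2. As a toy illustration of one common alpha-diversity metric, the sketch below computes the Shannon index from a per-sample ASV count vector (natural log is used here; QIIME2's exact metric set and log-base conventions may differ):

```python
import math

def shannon_index(counts):
    """Shannon diversity H = -sum(p_i * ln p_i) over nonzero ASV proportions."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

even = shannon_index([25, 25, 25, 25])   # evenly distributed ASVs
skewed = shannon_index([97, 1, 1, 1])    # one dominant ASV
print(even > skewed)  # greater evenness gives higher diversity → True
```

With equal richness, the evenly distributed community scores higher, which is the sense in which quercetin "increased alpha diversity" in the 16S data.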
LefSe analysis was performed using the following parameters: P < 0.05 and LDA score > 3.0 .

Plasma biochemical analysis

Plasma was acquired by centrifugation and stored at −80 °C until measurement. Triglyceride, total cholesterol, and high-density lipoprotein cholesterol levels were measured with commercially available kits from Wako Chemicals (Richmond, VA).

Gas chromatography-mass spectrometry (GC-MS) for SCFA measurement

Sample preparation was based on a previously described procedure , with some modifications. Cecal contents were weighed in 4 mL vials, then 10 μL of a mixture of internal standards (20 mM each; acetic acid-D4, Sigma-Aldrich #233315; propionic acid-D6, Sigma-Aldrich #490644; and butyric acid-D7, CDN Isotopes #D-171) was added, followed by 20 μL of 33% HCl and 1 mL diethyl ether, and the vials were sealed with polytetrafluoroethylene-lined screw caps. For plasma samples, 50 μL of each sample, 1.25 μL of the internal standard mix, 5 μL of 33% HCl, and 0.75 mL of diethyl ether were mixed. The mixture was vortexed vigorously for 3 min and then centrifuged (4000 g , 10 min). The upper organic layer was transferred to another vial, and a second diethyl ether extraction was performed. After combining the two ether extracts, a 60 μL aliquot was removed, combined with 2 μL N-tert -butyldimethylsilyl- N -methyltrifluoroacetamide (MTBSTFA, Sigma-Aldrich #394882) in a GC auto-sampler vial with a 200 μL glass insert, and incubated for 2 h at room temperature. Derivatized samples (1 μL) were injected onto an Agilent 7890B/5977A GC/MSD instrument with an Agilent DB1-ms 0.25 mm × 60 m column with a 0.25 μm bonded phase. A discontinuous oven program was used, starting at 40 °C for 2.25 min, then ramping at 20 °C/min to 200 °C, then ramping at 100 °C/min to 300 °C and holding for 7 min. The total run time was 18.25 min. Linear column flow was maintained at 1.26 mL/min. The inlet temperature was set to 250 °C with an injection split ratio of 15:1. 
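SCFAs in the preparation above are quantified against the spiked deuterated internal standards (10 μL of a 20 mM mix, i.e., 200 nmol of each standard per cecal sample). The paper does not give the calibration model, so the sketch below assumes simple single-point internal-standard arithmetic with a response factor of 1; all peak areas are hypothetical:

```python
def scfa_nmol_per_mg(analyte_area, is_area, is_amount_nmol, sample_mg):
    """Single-point internal-standard quantitation (assumed model):
    amount = (analyte peak area / IS peak area) * spiked IS amount,
    then normalized to the weighed cecal sample mass."""
    return (analyte_area / is_area) * is_amount_nmol / sample_mg

# Hypothetical: acetic acid vs. acetic acid-D4 (m/z 117 vs. 120),
# 200 nmol IS spiked into 50 mg of cecal contents
print(scfa_nmol_per_mg(analyte_area=1.5e6, is_area=1.0e6,
                       is_amount_nmol=200.0, sample_mg=50.0))  # 6.0 nmol/mg
```

This mirrors the normalization to mg of cecal contents described in the text, though the actual quantitation was performed in Agilent MassHunter.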
Quantitation was performed using selected ion monitoring (SIM) acquisition mode, and metabolites were compared to the relevant labeled internal standards using Agilent MassHunter Acquisition (v. B.07.02.1938). The m/z values of the monitored ions are as follows: 117 (acetic acid), 120 (acetic acid-D4), 131 (propionic acid), 136 (propionic acid-D6), 145 (butyric acid), and 151 (butyric acid-D7). Concentrations were normalized to mg of cecal contents.

Targeted phenol metabolome for plasma samples

The UPLC-MS/MS advanced scheduled multiple-reaction monitoring (ADsMRM) scanning workflow was utilized to identify metabolites of quercetin, along with other phytochemical and host metabolites which may be impacted by treatment with quercetin. The metabolites were purified from 100 μl plasma by 96-well plate solid phase extraction (SPE; Strata TM -X Polymeric Reversed Phase, microelution 2 mg/well). The SPE-treated samples were chromatographically separated and quantified using an Exion high-performance liquid chromatography system coupled to a tandem hybrid triple quadrupole-linear ion trap mass spectrometer (SCIEX QTRAP 6500+; UHPLC-ESI-MS/MS) with an electrospray IonDrive Turbo-V source. The samples were injected onto a Kinetex PFP UPLC column (1.7 μm particle size, 100 Å pore size, 100 mm length, 2.1 mm internal diameter; Phenomenex) with the oven temperature maintained at 37 °C. Mobile phase A and mobile phase B consisted of 0.1% v/v formic acid in water and 0.1% v/v formic acid in LC-MS grade acetonitrile, respectively, with a binary gradient ranging from 2% B to 90% B over 30 min and a flow rate gradient from 0.55 mL/min to 0.75 mL/min. MS/MS scanning was accomplished by ADsMRM using polarity switching between positive and negative ionization modes in Analyst (v.1.6.3, SCIEX), with peak area and intensity recorded using SCIEX OS (v.2.0.0.45330, SCIEX). 
Internal standards included l -tyrosine-13C9,15N, resveratrol-13C6, hippuric acid-13C, 13C6 4-hydroxybenzoic acid propyl ester, and phlorizin dihydrate (Sigma). Peaks matching retention time and fragmentation patterns and having an intensity greater than 1e4, an area greater than 2e4, and more than 5 data points across the baseline were annotated, and peak area, height, and area:height ratio were returned for statistical analysis.

Metabolome analysis

Metabolites and their respective normalized peak areas were analyzed with the MetaboAnalystR package . sPLS-DA was used to determine the separation between groups of the metabolite variables through rotation of the principal components obtained by PCA. Volcano plots were used to compare the size of the fold change to statistical significance. Significantly changing metabolites in the volcano plots were determined using a two-sample Student’s t-test with a probability threshold of P < 0.05, corrected for multiple comparisons using the false discovery rate for type-1 error control.

TEER measurements

HAoEC (passages 4–6) were grown in 25 cm 2 flasks until confluency (80–90%). Using trypsin, cells were released, collected, and centrifuged. The supernatant was removed, and the pellet was resuspended in 7 ml of V2 medium. Initially, 1.5 ml of medium alone was added to the outer transwell compartment, and then 0.5 ml of the cell suspension was added to the inner compartment. Cells were maintained at 37 °C and 5% CO 2 . In a separate 12-well plate (no inserts), the remaining 1 ml of medium was put in a well. This cell monolayer on the plastic surface served as a control to visualize the confluency of the cells, since this cannot be established in the transwell. When the cells were confluent, they were left for 5–7 additional days to obtain a homogeneous monolayer (which can be verified in the control well plate). At this point, cells were treated as follows: 1. Control cells (no treatment); 2. LPS 100 ng/ml; 3. LPS + 10 nM 3,4-DHBA; 4. 
LPS + 100 nM 3,4-DHBA. TEER was measured after 5 and 10 min.

Bone marrow-derived macrophages (BMDM)

To prepare murine BMDM, tibias and femurs from C57BL/6 mice were collected and flushed with RPMI 1640 media supplemented with 10% FBS, non-essential amino acids, sodium pyruvate, penicillin/streptomycin, and GlutaMAX before plating with 20% L-cell conditioned media. Cells were cultured in petri plates for six days at 37 °C and 5% CO 2 before use. For IL-1β detection, BMDM were plated at 4 × 10 5 cells per well in a 24-well plate and stimulated with 50 ng/mL LPS for 4 h before treatment with 3,4-DHBA at either 10 nM, 100 nM, 1 µM, or 10 µM for 3 h, followed by 5 mM ATP for 1 h. For IL-6 detection, BMDM were plated at the same density but first treated with 3,4-DHBA at either 10 nM, 100 nM, 1 µM, or 10 µM for 1 h, then stimulated with 50 ng/mL LPS for 4 h. Cell supernatants were collected and stored at −80 °C until ELISA analysis. IL-1β and IL-6 were detected in cell supernatants by ELISA. Antibodies for the IL-1β ELISA (MAB401 and BAF401) were obtained from R&D Systems and used according to the manufacturer’s instructions. The IL-6 ELISA was done using the Mouse IL-6 DuoSet ELISA kit (R&D Systems, Cat. No. DY406) according to the manufacturer’s instructions.

Statistical analysis

The data were expressed as individual dots with mean ± SEM or as box-and-whisker plots in which the center line is the median, boxes extend to the 25th and 75th percentiles, and whiskers extend to the minimum and maximum values, and were analyzed using R (3.6.2). For the high-MAC diet, significance was calculated by two-way ANOVA with Bonferroni post-tests. The correlation between two variables was calculated by the Pearson correlation coefficient. For the low-MAC diet, significant differences between the two groups were evaluated by two-tailed unpaired Student’s t tests. 
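The metabolome volcano-plot significance calls described above pair a t-test with false-discovery-rate control. The exact implementation belongs to MetaboAnalystR; the sketch below shows the standard Benjamini–Hochberg step-up adjustment, which is the usual FDR procedure in this setting (assumed here, not confirmed by the source):

```python
def benjamini_hochberg(pvals):
    """BH step-up FDR: adjusted p_i = min over j >= rank(i) of p_(j) * m / j."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices by ascending p
    adjusted = [0.0] * m
    running_min = 1.0
    for k, i in enumerate(reversed(order)):           # from largest p down
        rank = m - k                                  # 1-based rank of pvals[i]
        running_min = min(running_min, pvals[i] * m / rank)
        adjusted[i] = running_min
    return adjusted

raw = [0.01, 0.04, 0.03, 0.005]  # hypothetical per-metabolite t-test p-values
print([round(q, 3) for q in benjamini_hochberg(raw)])  # [0.02, 0.04, 0.04, 0.02]
```

Metabolites with an adjusted p below the chosen threshold (0.05 in the text) would be flagged as significantly changing in the volcano plot.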
LDA effect size (LefSe) used a nonparametric Wilcoxon sum-rank test followed by LDA analysis to measure the effect size of each abundant taxon, and two filters ( P < 0.05 and LDA score of >3) were applied to the present features. All GF ApoE KO mice were maintained in a controlled environment in plastic flexible film gnotobiotic isolators under a strict 12 h light/dark cycle and received sterilized water and standard chow (LabDiet 5021; LabDiet, St Louis, MO) ad libitum until 6 weeks of age. Using traditional microbiology methods, the sterility of GF animals was assessed by incubating freshly collected fecal samples under aerobic and anaerobic conditions. Experiments: i) Six-week-old group-housed male ConvR or GF C57BL/6 ApoE KO mice were fed a standard grain-based chow diet composed of 18.6% (w/w) protein, 58.9% total carbohydrates including 14.7% neutral detergent fiber, and 6.2% fat (i.e., high-MAC diet, 3.1 kcal/g, TD.2018; Envigo, Supplementary Table ) or the high-MAC diet supplemented with 0.1% (w/w) quercetin (TD.150883; Envigo, Supplementary Table ) for 16 weeks. Dietary fiber in the high-MAC diet is derived from various plants, including ground wheat, ground corn, wheat middling, dehulled soybean meal, and corn gluten meal. The experimental diets were sterilized by irradiation. ii) Six-week-old group-housed male ConvR ApoE KO mice were fed a defined diet composed of 17.7% (w/w) protein, 60.1% carbohydrate, and 7.2% fat (i.e., low-MAC diet, 3.8 kcal/g, TD.97184; Envigo, Supplementary Table ) or the low-MAC diet supplemented with 0.1% (w/w) quercetin (TD.150881; Envigo, Supplementary Table ) for 16 weeks. Littermates from multiple mating pairs were used in this study, and they were randomly assigned to groups at weaning. Due to the different diets used in the experiment, blinding was not feasible for the duration of the study. Blinding was implemented for the measurement of atherosclerosis. 
After 4 h fasting, mice were then euthanized at 22 weeks of age between Zeitgeber time 6-8. Mice were placed into a chamber filled with vapor of the anaesthetic isoflurane to induce unconsciousness, and blood samples were drawn by cardiac puncture, followed by cervical dislocation for euthanasia. All animals in the current study were handled and maintained in accordance with the University of Wisconsin–Madison, standards for animal welfare and all protocols were approved by the university’s Animal Care and Use Committee. Atherosclerotic lesions were assessed as previously described . Briefly, mice were anesthetized, and the aorta was perfused with PBS. To determine the atherosclerotic lesion size at the aortic sinus, the samples were cut in the ascending aorta, and the proximal samples containing the aortic sinus were embedded in OCT compound (Tissue-Tek; Sakura Finetek, Tokyo, Japan). Five consecutive sections (10 μm thickness) taken at 100 μm intervals (i.e. 50, 150, 250, 350, and 450 μm from the bottom of the aortic sinus) were collected from each mouse and stained with Oil Red O. The atherosclerosis volume in the aortic sinus was expressed as the mean size of the 5 sections for each mouse. Immunohistochemistry was performed on formalin-fixed cryosections of mouse aortic roots using antibodies to identify macrophages (MOMA-2, 1:50; ab33451, Abcam, Cambridge, MA), followed by detection with biotinylated secondary antibodies (1:400; ab6733, Abcam) and streptavidin-horseradish peroxidase (1:500; P0397, Dako, Carpinteria, CA). Negative controls were prepared with substitution with an isotype control antibody. Staining with Masson’s trichrome was used to delineate the fibrous area according to the manufacturer’s instructions (ab150686, Abcam). Stained sections were digitally captured, and the stained area was calculated. 
Plaque area, Oil Red O-positive area, macrophage area, and fibrous area were measured using Image J software (National Institutes of Health, Bethesda, MD). DNA was isolated from cecal contents by extraction using a bead-beating protocol . Mouse cecal samples were re-suspended in a solution containing 500 μl of extraction buffer [200 mM Tris (pH 8.0), 200 mM NaCl, 20 mM EDTA], 210 μl of 20% SDS, 500 μl phenol:chloroform:isoamyl alcohol (pH 7.9, 25:24:1) and 500 μl of 0.1-mm diameter zirconia/silica beads. Cells were mechanically disrupted using a bead beater (BioSpec Products, Barlesville, OK; maximum setting for 3 min at room temperature), centrifuged to separate phases, then the nucleic acids in the aqueous phase were precipitated by the addition of isopropanol. Following solubilization in 10 mM Tris/HCl (pH 8.0) + 1 mM EDTA, contaminants were removed using QIAquick 96-well PCR Purification Kit (Qiagen, Germantown, MD, USA). Isolated DNA was eluted in 5 mM Tris/HCL (pH 8.5) and was stored at −80 °C until further use. PCR was performed using universal primers flanking the variable 4 (V4) region of the bacterial 16S rRNA gene . Genomic DNA samples were amplified in duplicate. Each reaction contained 25 ng genomic DNA, 10 μM of each uniquely barcoded primer, 12.5 μl 2x HiFi HotStart ReadyMix (KAPA Biosystems, Wilmington, MA, USA), and water to a final reaction volume of 25 μl. PCR was carried out under the following conditions: initial denaturation for 3 min at 95 °C, followed by 20 cycles of denaturation for 30 s at 95 °C, annealing for 30 s at 55 °C and elongation for 30 s at 72 °C, and a final elongation step for 5 min at 72 °C. PCR products were purified with the QIAquick 96-well PCR Purification Kit and quantified using the Qubit dsDNA HS Assay kit (Invitrogen, Oregon, USA). 
Samples were equimolar pooled and sequenced by the University of Wisconsin–Madison Biotechnology Center with the MiSeq 2×250 v2 kit (Illumina, San Diego, CA, USA) using custom sequencing primers. Demultiplexed paired-end fastq files were generated by CASAVA (Illumina), and a sample mapping file were used as input files. Sequences were processed, quality filtered and analyzed with QIIME2 (version 2019.10) ( https://qiime2.org ), a plugin-based microbiome analysis platform . DADA2 was used to denoise sequencing reads with the q2-dada2 plugin for quality filtering and identification of ASV (i.e. 100% exact sequence match). This resulted in 3,580,038 total sequences, with an average of 81,364 sequences per sample. Sequence variants were aligned with mafft with the q2-alignment plugin. The q2-phylogeny plugin was used for phylogenetic reconstruction via FastTree . Taxonomic classification was assigned using classify-sklearn against the SILVA 132 reference sequences . Alpha- and beta-diversity (weighted and unweighted UniFrac ) analyses were performed using the q2-diversity plugin at a rarefaction depth of 30000 sequences per sample. Subsequent processing and analysis were performed in R (v.3.6.2), and data generated in QIIME2 was imported into R using Phyloseq . LefSe analysis was performed using parameters as follows (p < 0.05 and LDA score 3.0 ;). Plasma was acquired by centrifugation and stored at −80 °C until measurement. The triglycerides, total cholesterol, and high-density lipoprotein cholesterol levels were measured with commercially available kits from Wako Chemicals (Richmond, VA). Sample preparation was based on a previously described procedure , with some modifications. 
Cecal contents were weighed in 4 mL vials, then 10 μL of a mixture of internal standards (20 mM each; acetic acid-D4, Sigma-Aldrich #233315; propionic acid-D6, Sigma-Aldrich #490644; and butyric acid-D7, CDN isotopes #D-171) was subsequently added, followed by 20 μL of 33% HCl and 1 mL diethyl ether and the vials were sealed with polytetrafluoroethylene-lined screw caps. For plasma samples, 50 μL of each sample, 1.25 μL of the internal standard mix, 5 μL of 33% HCl, and 0.75 mL of diethyl ether were mixed. The mixture was vortexed vigorously for 3 min and then centrifuged (4000 g , 10 min). The upper organic layer was transferred to another vial, and a second diethyl ether extraction was performed. After combining the two ether extracts, a 60 μL aliquot was removed, combined with 2 μL N-tert -butyldimethylsilyl- N -methyltrifluoroacetamide (MTBSTFA, Sigma-Aldrich #394882) in a GC auto-sampler vial with a 200 μL glass insert, and incubated for 2 h at room temperature. Derivatized samples (1 μL) were injected onto an Agilent 7890B/5977 A GC/MSD instrument with Agilent DB1-ms 0.25 mm×60 m column with a 0.25 μm bonded phase. A discontinuous oven program was used, starting at 40 °C for 2.25 min, then ramping at 20 °C/min to 200 °C, then ramping at 100 °C/min to 300 °C and holding for 7 min. The total run time was 18.25 min. Linear column flow was maintained at 1.26 mL/min. The inlet temperature was set to 250 °C with an injection split ratio of 15:1. Quantitation was performed using selected ion monitoring (SIM) acquisition mode, and metabolites were compared to relevant labeled internal standards using Agilent Mass Hunter v. Acquisition B.07.02.1938. The m/z of monitored ions are as follows: 117 (acetic acid), 120 (acetic acid-D4), 131 (propionic acid), 136 (propionic acid-D6), 145 (butyric acid), and 151 (butyric acid-D7). Concentrations were normalized to mg of cecal contents. 
The UPLC-MS/MS advanced scheduled multiple-reaction monitoring (ADsMRM) scanning methodological workflow was utilized to identify metabolites of quercetin, along with other phytochemical and host metabolites which may be impacted by treatment with quercetin. The metabolites were purified from 100 μl plasma by 96-well plate solid phase extraction (SPE; Strata TM -X Polymeric Reversed Phase, microelution 2 mg/well). The solid phase extraction treated samples were chromatographically separated and quantified using Exion high-performance liquid chromatography-tandem hybrid triple quadrupole-linear ion trap mass spectrometer (SCIEX QTRAP 6500+; UHPLC-ESI-MS/MS) with electrospray IonDrive Turbo-V Source. The samples were injected into a Kinetex PFP UPLC column (1.7 μm particle size, 100 Å pore size, 100 mm length, 2.1 mm internal diameter; Phenomenex) with oven temperature maintained at 37 °C. Mobile phase A and mobile phase B consisted of 0.1% v.v. formic acid in water and 0.1% v.v. formic acid in LC-MS grade acetonitrile, with a binary gradient ranging from 2% B to 90% B over 30 min and a flow rate gradient from 0.55 mL/min to 0.75 mL/min. MS/MS scanning was accomplished by ADsMRM using polarity switching between positive and negative ionization mode in Analyst (v.1.6.3, SCIEX) and with peak area and intensity recorded using SCIEX OS (v.2.0.0.45330, SCIEX). Internal standards included l -tyrosine-13C9,15N, resveratrol-13C6, hippuric acid 13C, 13C6 4-hydroxybenzoic acid propyl ester, and phlorizin dehydrate (Sigma). Peaks matching retention time, fragmentation patterns, and having intensity greater than 1e4, area greater than 2e4, and number of data points across baseline greater than 5 were annotated, and peak area, height, and area:height ratio were returned for statistical analysis. Metabolites and their respective normalized peak areas were analyzed by the MetaboAnalystR package . 
Sparse partial least squares discriminant analysis (sPLS-DA) was used to determine the separation between groups of metabolite variables through rotation of the principal components obtained by PCA. Volcano plots were used to compare the size of the fold change with statistical significance; significantly changing metabolites were identified using a two-sample Student's t test with a probability threshold of P < 0.05, corrected for multiple comparisons using the false discovery rate for type-1 error control.

HAoEC (passages 4–6) were grown in 25 cm² flasks until 80–90% confluency. Cells were released with trypsin, collected, and centrifuged. The supernatant was removed, and the pellet was resuspended in 7 mL of V2 medium. First, 1.5 mL of medium alone was added to the outer compartment of the transwell system; then 0.5 mL of the cell suspension was added to the inner compartment. Cells were maintained at 37 °C and 5% CO₂. In a separate 12-well plate (without inserts), the remaining 1 mL of the cell suspension was plated in a well. This cell monolayer on the plastic surface served as a control to visualize the confluency of the cells, since this cannot be established in the transwell. Once the cells were confluent, they were left for 5–7 additional days to obtain a homogeneous monolayer (verified in the control well plate). At this point, cells were treated as follows: (1) control (no treatment); (2) 100 ng/mL LPS; (3) LPS + 10 nM 3,4-DHBA; (4) LPS + 100 nM 3,4-DHBA. TEER was measured after 5 and 10 min.

To prepare murine BMDM, tibias and femurs from C57BL/6 mice were collected and flushed with RPMI 1640 medium supplemented with 10% FBS, non-essential amino acids, sodium pyruvate, penicillin/streptomycin, and Glutamax before plating with 20% L-cell conditioned medium. Cells were cultured in Petri plates for six days at 37 °C and 5% CO₂ before use.
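For the multiple-comparison step described earlier, the Benjamini-Hochberg procedure is the usual way to control the false discovery rate (the specific FDR method is an assumption here; the text only states that FDR control was applied). A plain-Python sketch:

```python
# Benjamini-Hochberg adjustment in plain Python. The study reports
# FDR-corrected t-test p-values; BH is assumed as the FDR procedure.

def bh_adjust(pvals):
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_min = 1.0
    # Walk from the largest p-value down, enforcing monotonicity.
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * m / rank)
        adjusted[i] = running_min
    return adjusted

adj = bh_adjust([0.01, 0.04, 0.03, 0.50])
significant = [p < 0.05 for p in adj]
```

Metabolites whose adjusted p-value stays below 0.05 are the ones flagged as significantly changing in the volcano plot.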
For IL-1β detection, BMDMs were plated at 4 × 10⁵ cells per well in a 24-well plate and stimulated with 50 ng/mL LPS for 4 h before treatment with 3,4-DHBA at 10 nM, 100 nM, 1 µM, or 10 µM for 3 h, followed by 5 mM ATP for 1 h. For IL-6 detection, BMDMs were plated at the same density but first treated with 3,4-DHBA at 10 nM, 100 nM, 1 µM, or 10 µM for 1 h and then stimulated with 50 ng/mL LPS for 4 h. Cell supernatants were collected and stored at −80 °C until ELISA analysis. IL-1β and IL-6 were detected in cell supernatants by ELISA. Antibodies for the IL-1β ELISA (MAB401 and BAF401) were obtained from R&D Systems and used according to the manufacturer's instructions. The IL-6 ELISA was done using the Mouse IL-6 DuoSet ELISA kit (R&D Systems, Cat. No. DY406) according to the manufacturer's instructions.

Data were expressed as individual dots with mean ± SEM or as box-and-whisker plots in which the center line is the median, the boxes extend to the 25th and 75th percentiles, and the whiskers extend to the minimum and maximum values, and were analyzed using R (3.6.2). For the high-MAC diet, significance was calculated by two-way ANOVA with Bonferroni post-tests. The correlation between two variables was calculated by the Pearson correlation coefficient. For the low-MAC diet, significant differences between the two groups were evaluated by two-tailed unpaired Student's t tests. Linear discriminant analysis (LDA) effect size (LEfSe) used a nonparametric Wilcoxon rank-sum test followed by LDA to measure the effect size of each abundant taxon, and two filters (P < 0.05 and an LDA score >3) were applied to the reported features.
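The Pearson correlation coefficient used for the high-MAC correlation analyses is a standard formula; a self-contained re-implementation (illustrative only, the authors used R) is:

```python
# Plain-Python Pearson correlation coefficient, as used for the
# high-MAC diet correlation analyses (illustrative re-implementation,
# not the authors' code).
from math import sqrt

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson_r([1, 2, 3, 4], [2, 4, 6, 8])  # perfectly linear data; r is ~1.0
```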
FORTA Score and Negative Outcomes in Older Adults: Insights from Italian Internal Medicine Wards

The study aimed to associate the Fit fOR The Aged (FORTA) score with negative outcomes in older patients. No significant association was found between the FORTA score and negative outcomes such as impaired cognition, adverse events, or mortality. The FORTA score did not predict negative outcomes; more research is needed to define specific cut-offs for better evaluation.

The older population is growing worldwide, and the progressive increase in the age diversity of the population implies a rise in the number of the most fragile individuals, who are more likely to need hospitalization. Older people frequently have multiple chronic conditions and are likely to receive multiple drug treatments: polypharmacy may precipitate adverse drug reactions (ADRs), which may lead to prescribing cascades, drug-drug or drug-disease interactions, dosing and medication errors, and even death. Strategies for prescribing medications more safely in older adults, and pharmaco-epidemiological studies assessing the relation between potentially inappropriate medication (PIM) and adverse outcomes, are mainly based on negative lists such as the Beers criteria or the Screening Tool of Older Persons' Prescriptions (STOPP) criteria. In 2008, Wehling et al. introduced the Fit fOR The Aged (FORTA) classification system, containing negative and positive labelling of treatments or drugs and thus supporting the screening for unnecessary, inappropriate, or harmful medications and for the omission of individual drugs. The FORTA system has undergone subsequent updates, with the latest revision dating from 2021. Pilot intervention studies have shown improvement of medication quality and reduction of clinical endpoints (such as falls) with FORTA compared to standard treatment.
To further validate the FORTA concept regarding its impact on medication quality and relevant clinical endpoints, and on its practical teachability and implementation, the VALFORTA study, a randomized controlled clinical trial, compared FORTA-guided versus standard care in older patients recruited in two geriatric clinics, using the FORTA score. In the VALFORTA trial, using the FORTA score improved medication quality and reduced clinical endpoints such as falls. The study also established a relationship between the FORTA score and mortality, as well as cognitive and physical function outcomes, among geriatric in-hospital patients. However, only a few studies have been conducted to validate the association between the FORTA score and mortality and other adverse outcomes. This study aimed to assess the relation between the FORTA score and negative outcomes (impaired cognitive performance and functional status, ADRs, and all-cause mortality 3, 6, and 12 months after hospital discharge) in a sample of older adult patients discharged from Italian internal medicine and geriatric wards.

Study Design and Population

This retrospective study was conducted on the ELICADHE cohort, a cluster randomized, single-blind controlled trial run in 20 Italian internal medicine and geriatric wards from July 2014 to July 2015, including all patients aged 75 years or over consecutively admitted to the participating wards. The ELICADHE cohort was chosen for this analysis because of its comprehensive data collection, which included detailed information on drug prescriptions, comorbidities, cognitive and physical functions, and follow-up outcomes. This made it particularly suitable for evaluating the impact of the FORTA score on various clinical endpoints; moreover, the multicenter nature of the cohort, covering a diverse range of Italian hospitals, reduces the risk of bias. Exclusion criteria were refusal of consent or an estimated life expectancy of <6 months.
In addition to the criteria applied by the ELICADHE cohort, we excluded patients who died during hospitalization and those with incomplete follow-up data.

Data Collection

Sociodemographic details were recorded (age, sex, alcohol consumption, smoking), together with drug therapy (at discharge and during follow-ups), comorbidities (acute and chronic), the Cumulative Illness Rating Scale (CIRS) index, the Barthel Index (BI), the Mini Mental State Examination (MMSE), and the date of death (when recorded). Data were collected via face-to-face structured interviews conducted by the study monitors in the wards. Patients who agreed to participate gave written consent to take part in the study. The attending physicians were asked to provide all necessary information regarding the patients' chronic diseases and prescribed medications by accessing the patients' electronic and nonelectronic medical records. Patients with a missing value for a specific variable were excluded only from the analyses involving that variable.

FORTA Score

The FORTA classification, developed in Germany, is a patient-centered approach for evaluating the appropriateness of medications in older adults. It incorporates both negative and positive labeling for individual drugs or drug classes. The FORTA criteria classify medications with regard to their overall age-appropriateness in four categories from A to D and facilitate the detection of (i) therapeutic gaps (undertreatment); (ii) nonoptimal therapy; and (iii) treatment without indication (overtreatment). FORTA A (A-bsolutely) drugs have clear-cut benefits in terms of the efficacy/safety ratio; an example is β-blockers for atrial fibrillation. FORTA B (B-eneficial) drugs are beneficial but have limitations with regard to safety and efficacy; an example is sertraline prescribed for depression. FORTA C (C-areful) drugs have a questionable safety/efficacy profile, require monitoring, and should be avoided; an example is amantadine for Parkinson's disease.
FORTA D (D-on't) drugs should generally be avoided; every long-acting benzodiazepine falls into this category.

The FORTA score is obtained by summing over-, under-, and mistreatments. For each drug, one point is assigned if a symptom or condition is left untreated although beneficial pharmacological options exist (undertreatment, i.e., the absence of drugs in FORTA categories A and B despite medical necessity) or if a drug is prescribed without an appropriate medical indication (overtreatment). Two points are assigned if a symptom or condition is treated with a drug that does not belong to the best available FORTA category (mistreatment), as this represents both overtreatment and undertreatment. The total score is the sum of all single scores. Some examples are available in online supplementary Appendix S1 (for all online suppl. material, see https://doi.org/10.1159/000542109 ). The FORTA score was calculated at discharge and at each follow-up. Drugs not included in the FORTA score calculation were: (i) medications that were not assigned a classification by the FORTA criteria; (ii) medications prescribed for a medical condition not included in FORTA; (iii) drugs used for conditions not covered by the FORTA criteria. The alignments between the FORTA diagnoses and the patients' diagnoses are reported in online supplementary Appendix S2. To identify patients at risk of an unfavorable outcome, we considered a FORTA score cut-off of 3 in relation to the MMSE and BI scores, since a FORTA score greater than 3 has been associated with an increased risk of dementia, and a cut-off value of 5 for the relationship between the score and adverse clinical events, readmission, and all-cause mortality at follow-up, since a FORTA score higher than 6 is associated with increased mortality. Because the cut-offs reported in the cited studies were chosen on the basis of the median FORTA score, the analyses were also repeated using the median value of our sample.
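The scoring rules above translate directly into a small routine. The sketch below is schematic: the data structures, condition names, and the mapping from conditions to their best available FORTA class are hypothetical simplifications of the clinical assessment, not the validated algorithm.

```python
# Schematic FORTA scoring (hypothetical data structures; the real
# assessment requires clinical judgement and the full FORTA lists).
# 1 point: undertreatment (condition with A/B options left untreated)
#          or overtreatment (drug without an indication).
# 2 points: mistreatment (condition treated, but not with a drug from
#           the best available FORTA category).

def forta_score(conditions, prescriptions):
    """conditions: {condition: best available FORTA class, e.g. "A"}
    prescriptions: {condition or None: FORTA class of the drug used}."""
    score = 0
    for condition, best in conditions.items():
        given = prescriptions.get(condition)
        if given is None and best in ("A", "B"):
            score += 1   # undertreatment
        elif given is not None and given != best:
            score += 2   # mistreatment
    # any prescription not matching a recorded indication
    score += sum(1 for c in prescriptions if c not in conditions)  # overtreatment
    return score

s = forta_score(
    conditions={"atrial fibrillation": "A", "depression": "B"},
    prescriptions={"depression": "C", None: "D"},  # None = no indication
)
```

In this toy example the patient scores 1 point for untreated atrial fibrillation (undertreatment), 2 points for depression treated with a class C drug instead of the best class B option (mistreatment), and 1 point for a drug with no recorded indication (overtreatment), for a total of 4.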
A Sankey plot was employed to depict changes in the proportion of patients across distinct FORTA score categories (0–3, 4–5, 6+) at discharge and throughout the three follow-ups; individuals who were lost to follow-up or for whom information was unattainable were classified as "censored".

Assessment of Cognitive and Daily Activities Performances

Patients' cognitive status was evaluated with the MMSE. It includes 30 items and assesses temporal and spatial orientation, working memory, recall, attention, arithmetic capacity, and linguistic and visual motor skills. The maximum score is 30 points (one point per correct item). A score ≥24 indicates normal cognition; lower scores can indicate severe (≤8 points), moderate (9–18 points), or mild (19–23 points) cognitive impairment. The BI evaluates functional ability in 10 activities of daily living. The BI total score spans from 0 to 100 points and indicates the person's degree of dependence as follows: a score below 24 indicates total dependence, 25–49 a high level of dependence, 50–74 partial dependence, 75–90 minimal dependence, and 91–100 the ability to live independently.

Outcomes

The primary outcomes of this study were the association of the FORTA score with impaired cognitive performance, functional status, ADRs, and all-cause mortality at 3, 6, and 12 months after hospital discharge. The relation between the FORTA score and cognitive and physical functions was evaluated using the MMSE and BI scores. Cognitive performance and physical function were evaluated at baseline, as well as at 3, 6, and 12 months post-discharge. A decline was defined as a decrease in points on the assessment tool compared to baseline. Adverse clinical events were defined as any new hospitalization or acute clinical problem, including ADRs, occurring from discharge to the follow-up date.
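The MMSE and BI bands quoted above map directly onto simple classification helpers. The code below is a direct encoding of the thresholds in the text; note that the source leaves a BI score of exactly 24 unassigned, which this sketch folds into total dependence.

```python
# Band edges follow the thresholds quoted in the text; a BI score of 24,
# left unassigned by the source, falls into "total dependence" here.

def mmse_category(score):
    if score >= 24:
        return "normal cognition"
    if score >= 19:
        return "mild impairment"
    if score >= 9:
        return "moderate impairment"
    return "severe impairment"

def barthel_category(score):
    if score >= 91:
        return "independent"
    if score >= 75:
        return "minimal dependence"
    if score >= 50:
        return "partial dependence"
    if score >= 25:
        return "high dependence"
    return "total dependence"
```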
For these purposes, we considered eligible for this analysis only patients with at least one complete follow-up.

Statistical Analysis

Sociodemographic characteristics of patients were described using standard descriptive statistics. Percentages were tabulated for discrete variables, and differences were evaluated with Pearson's chi-squared test. Differences between groups were analyzed with a t test or Wilcoxon test, according to their distribution. A survival analysis was performed to assess the relationship between increasing FORTA score and 1-year mortality. First, we estimated the survival function using Kaplan-Meier curves; then, having checked that the proportional hazards assumption was not violated using zph tests (based on Schoenfeld residuals), we used a Cox regression model to estimate the risk of death in the first year after discharge. This model was fitted first univariately and then adjusted for age, sex, and comorbidity burden (CIRS). For the detection of events, we used the date of death reported in our database; subjects not reached during the follow-up calls after discharge were right-censored at the first missing time-point, according to survival analysis theory. The risk of death was assessed from the resulting hazard ratios (HRs) with two-sided p values and 95% confidence intervals. We then examined the relationship between the FORTA score and adverse clinical events arising in the first year after discharge. Because dates were unavailable for most of these events, we employed logistic (logit) regression instead of a time-to-event model. For this model, too, we first fitted a univariate version and then a version adjusted for age, sex, and the CIRS comorbidity index. The risk of occurrence of these events according to the FORTA score was evaluated using odds ratios (ORs) with two-sided p values and 95% confidence intervals.
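The survival machinery described above (right-censoring at the first missing time-point, Kaplan-Meier estimation) can be illustrated with a minimal estimator. The data and function are illustrative; the actual analyses were run in SAS and R.

```python
# Minimal Kaplan-Meier estimator mirroring the survival analysis
# described above; subjects lost to follow-up are right-censored.
# (Illustrative data, not the study's.)

def kaplan_meier(times, events):
    """times: follow-up in days; events: 1 = death, 0 = censored.
    Returns [(time, survival probability)] at each death time."""
    at_risk = len(times)
    surv, curve = 1.0, []
    for t in sorted(set(times)):
        deaths = sum(1 for ti, e in zip(times, events) if ti == t and e == 1)
        if deaths:
            surv *= 1 - deaths / at_risk
            curve.append((t, surv))
        at_risk -= sum(1 for ti in times if ti == t)
    return curve

# Five subjects: deaths at days 90 and 180, censorings at 180 and 365.
curve = kaplan_meier(times=[90, 180, 180, 365, 365], events=[1, 0, 1, 0, 0])
```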
To check the relationship between the FORTA criteria and the cognitive or physical functions of our sample, we treated the MMSE and BI scores as counts of points scored by patients in these tests. Following generalized linear model theory, we conducted negative-binomial regressions, first univariately and then adjusted for age, sex, and CIRS, to assess how the results of these tests were related to increasing FORTA scores, using rate ratios (RRs) and estimated mean scores with two-sided p values and 95% confidence intervals. For all these analyses, we considered the FORTA score at discharge, using a continuous model or different cut-offs (0–3 vs. 4+; 0–5 vs. 6+; a three-level variable 0–3 vs. 4–5 vs. 6+), as well as our sample's median FORTA score (0–4 vs. 5+). We computed ROC curves to detect optimal cut-offs for mortality and for the combined outcome, using Youden's J to identify the best threshold in our sample; results are shown in online supplementary Appendix S3. The significance criterion (alpha) was set at 0.05 for all tests. Analyses were performed using SAS 9.4 (SAS Institute Inc., Cary, NC, USA) and RStudio 12.1 (RStudio Inc., Boston, MA, USA).

Results

Of the 700 in-patients initially considered for this study, 194 were excluded: 71 patients (10.1%) because they died during hospitalization and 123 patients (17.6%) because of incomplete follow-up data. A total of 506 patients were included in the study. Of these, 171 (33.8%) patients had prescribed drugs fully meeting the FORTA criteria and were therefore completely assessable and included in the following analysis. A supplementary analysis, also including patients not fully assessable according to the FORTA criteria, is reported in online supplementary Appendix S4. The main sociodemographic details and characteristics of these patients, such as the mean FORTA score and adverse outcomes, are provided in .
Over 85% of the patients had at least one adverse clinical event, and about 10% had died by the 1-year follow-up. The characteristics of patients at discharge and at follow-ups are provided in . Most of the patients were in the group with the lowest FORTA scores (0–3), while the mean number of drugs in the group with the highest FORTA scores (6+) was markedly larger than in the other groups, both at baseline and at each follow-up. The rates of mortality, rehospitalization, and adverse clinical outcomes did not differ among the three groups. The Sankey plot for patients with FORTA scores 0–3, 4–5, and 6+ shows that most patients' FORTA scores did not change during follow-up.

Cognitive and Physical Function Outcomes

No significant relation was found between impaired cognitive performance on the MMSE or impaired physical function on the BI and a higher FORTA score; we used negative-binomial regressions to analyze the scores in both unadjusted and adjusted models (RR [95% CI]: 0.98 [0.88–1.09], p = 0.051 for the MMSE; 0.97 [0.79–1.20], p = 0.784 for the BI; FORTA class 6+). The analysis of the cohort of 506 patients revealed a modest association between the FORTA score and cognitive performance in the univariate model (RR [95% CI]: 0.94 [0.98–0.99], p = 0.04), with higher FORTA scores (6+) corresponding to lower MMSE scores. However, this association lost significance after adjustment in the multivariate model (RR [95% CI]: 0.96 [0.91–1.01], p = 0.15) (online suppl. Appendix S4; Table S3).

Adverse Clinical Events

Of the 171 patients, 146 (85.4%) were readmitted to hospital in the 12 months after discharge or had at least one adverse clinical event.
New acute clinical problems, including ADRs and hospital readmissions, did not appear to be related to a higher FORTA score in a logit regression for the occurrence of clinical events during the follow-up period, in either the univariate or the multivariate models (OR [95% CI]: 4.99 [0.99–25.2], p = 0.627 for FORTA class 4–5; 1.50 [0.57–3.91], p = 0.441 for FORTA class 6+). The odds ratios are reported in . The analysis was repeated using the median FORTA score of our sample, yielding comparable results: OR (95% CI) 1.98 (0.81–4.85), p = 0.134 for the univariate analysis; 1.48 (0.57–3.87), p = 0.644 for the multivariate analysis. Age was significantly associated with an increased risk of adverse clinical events (OR [95% CI]: 1.12 [1.04–1.21], p = 0.002). Similar results emerged from the analyses conducted on the cohort of 506 patients (online suppl. Appendix S4; Table S3), and no relation was found between the FORTA score and adverse clinical events.

Mortality

In total, 20 (11.7%) of the 171 patients died during the observation period. No relationship was found between a higher FORTA score at discharge and mortality in a proportional hazards regression for 1-year mortality, in either the univariate analysis or a model adjusted for age, sex, and CIRS. The Cox model showed that age was significantly associated with a higher risk of mortality (HR [95% CI]: 1.12 [1.04–1.21], p = 0.002). The analysis, repeated using the median FORTA score of our sample, yielded similar results (HR [95% CI]: 1.98 [0.81–4.85], p = 0.134 for the univariate analysis; 1.48 [0.57–3.87], p = 0.644 for the multivariate analysis), and only age was significantly associated with an increased risk of mortality (HR [95% CI]: 1.12 [1.04–1.21], p = 0.002). Similar results emerged from the analyses conducted on the cohort of 506 patients (online suppl. Appendix S4; Table S3), and no relation was found between the FORTA score and mortality.
No significant relation was found between impaired cognitive performance at MMSE or impaired physical function at BI and a higher FORTA score. We used a negative-binomial regression to analyze scores in either unadjusted or adjusted models (RR (95% CI) 0.98 (0.88–1.09) p = 0.051, 0.97 (0.79–1.20) p = 0.784, FORTA class 6+ for MMSE or BI . The analysis of the cohort of 506 patients revealed a modest association between the FORTA score and cognitive performance in the univariate model (RR [95% CI]: 0.94 [0.98–0.99)] p = 0.04), with higher FORTA scores (6+) corresponding to lower MMSE scores. However, this association lost significance after adjustment in the multivariate model (RR [95% CI]: 0.96 [0.91–1.01], p = 0.15) (online suppl. Appendix S4; Table S3). Of the 171 patients, 146 (85.4%) were readmitted to hospital in the 12 months after discharge or had at least one adverse clinical event. New acute clinical problems, including ADRs and hospital readmissions, do not appear to be related to a higher FORTA score, in a logit regression for the occurrence of clinical events during the follow-up period, neither in univariate nor in the multivariate models (OR [95% CI] 4.99 [0.99–25.2] p = 0.627 class FORTA 4–5; 1.50 [0.57–3.91], p = 0.441 class FORTA 6+). The odds ratios are reported in . The analysis was repeated utilizing the median FORTA score of our sample, yielding comparable outcomes OR (95% CI) 1.98 (0.81–4.85), p = 0.134 for univariate analysis; 1.48 (0.57–3.87), p = 0.644 for multivariate analysis. Age was significantly related with an increased risk of adverse clinical events (OR [95% CI] 1.12 [1.04–1.21], p = 0.002). Similar results emerged from the analyses conducted on the cohort of 506 patients (online suppl. Appendix S4; Table S3), and no relation was found between FORTA score and adverse clinical events. In total, 20 (11.7%) of the 171 patients died during the observation period. 
No relationship was found between a higher FORTA score at discharge and mortality in a proportional hazards regression for 1-year mortality, in either univariate analysis or in a model adjusted for age, sex, and CIRS . The Cox model showed that age was significantly associated with a higher risk of mortality (HR [95% CI] 1.12 [1.04–1.21], p = 0.002). The analysis, repeated using the median FORTA score of our sample, yielded similar results (HR [95% CI] 1.98 [0.81–4.85], p = 0.134 for univariate analysis; 1.48 [0.57–3.87], p = 0.644 for multivariate analysis), and only age was significantly associated with an increased risk of mortality (HR [95% CI] 1.12 [1.04–1.21], p = 0.002). Similar results emerged from the analyses conducted on the cohort of 506 patients (online suppl. Appendix S4; Table S3), and no relation was found between the FORTA score and mortality. This study found no significant relation between the FORTA score and any negative outcome (impaired cognitive performance, functional status, adverse clinical events, or all-cause mortality) among older adult patients discharged from Italian internal medicine and geriatric wards. Furthermore, no correlation was found when applying specific cut-offs to distinguish between high and low FORTA scores, whether based on previous literature or on the median FORTA score of our sample, following the method employed by the authors of the FORTA criteria to define cut-offs. The analysis including patients receiving medications not covered by the FORTA criteria confirmed the main findings of the study, showing no significant relationship between FORTA scores and clinical outcomes. Importantly, this further analysis underscores the limitations of the FORTA tool in covering the full spectrum of medications used in clinical practice, which could partly explain the lack of association with adverse outcomes.
This investigation builds on the foundation laid by the VALFORTA study, which demonstrated the efficacy of FORTA-guided care in improving medication quality and clinical endpoints, and found relationships between the FORTA score and mortality, cognitive, and physical function outcomes among geriatric in-hospital patients (mean age 81.5 years) . However, we found no relation with clinical outcomes despite a sample of patients with characteristics similar to those included in the VALFORTA study. This discrepancy may be attributed to difficulties in the FORTA assessment and to differences in follow-up. First, about 67% of the patients were excluded from the analysis for lack of a complete FORTA score, either because some medications are not considered by the FORTA criteria (e.g., paroxetine and promazine) or because numerous chronic conditions with their corresponding medications, such as benign prostatic hyperplasia or gout, are not included. This issue has been extensively discussed in other studies where, owing to the limited number of drugs and conditions classified, FORTA identified a smaller percentage of potentially inappropriate psychotropic medications in nursing home residents than the Beers and STOPP criteria, and than the EU(7)-PIM list in eight different study centers in Germany . Second, the different observation periods considered might explain our findings. While Pazan et al.’s study involved six follow-up assessments at an average interval of one and a half years, our observation period was only 1 year, with follow-up assessments at 3, 6, and 12 months. In a prior association study by one of the creators of the FORTA criteria, higher FORTA scores, indicating more frequent medication errors, were linked to impaired performance on cognitive and physical function tests in older hospital patients . The VALFORTA trial demonstrated a significant improvement in activities of daily living through the application of the FORTA intervention .
Thus, in the smaller number of patients evaluated at 1 year, the relation between inappropriate prescribing measured with the FORTA score and certain adverse clinical outcomes, such as mortality, may not be detectable. Differences in results may also be due to the unclear definition of cut-offs for the FORTA score. Because of the small size of our sample, our ROC analysis may suffer from overfitting bias, so we used the authors’ cut-offs. Few studies propose specific cut-offs, mainly established by the criteria authors, and the figures vary. One aspect that warrants discussion is the source of our data: the ELICADHE study, a randomized controlled trial aimed at optimizing prescribing quality. The intervention’s focus on optimizing prescribing quality might have mitigated some of the risks associated with polypharmacy and PIM use, potentially blurring the relationship between FORTA scores and adverse outcomes. However, the ELICADHE study failed to improve clinicians’ drug prescribing for hospitalized older patients and found no significant differences in the prevalence of PIMs, drug-disease interactions, or mortality between the intervention and control groups . One of the primary strengths of this observational study lies in its assessment of the FORTA score in a sample of older adults and its correlation with functional status, adverse clinical events, and all-cause mortality. This study stands out as one of the few, beyond VALFORTA and those conducted by the authors of the criteria, to assess the applicability of the FORTA score in a setting where patient data are largely comprehensive, both pharmacologically and clinically: the data used in this study were in fact collected during the ELICADHE study, approved and financially supported by the Italian Medicines Agency (AIFA), ensuring their accuracy and comprehensiveness. Our study has several potential biases that may affect the interpretation of the results.
First, selection bias could have been introduced by including only patients with a complete FORTA score assessment, potentially limiting the representativeness of the sample, reducing the statistical power of the study, and increasing the uncertainty of the findings, as reflected in the wide confidence intervals. The exclusion of several patients, due to the strict application of the FORTA criteria, further contributed to the small sample size. Although our study focused on a precise and strict application of these criteria, this selection process affected the generalizability of our findings: the study sample may not fully reflect the broader population of older adults in hospital settings. Attrition bias is also a concern because of the loss of patients during follow-up, particularly at the 12-month follow-up, which may have resulted in an unbalanced analysis. The short observation period might not have been sufficient to capture the full impact of PIMs, especially long-term effects. Furthermore, we did not evaluate in-hospital mortality, which may have introduced additional limitations in assessing the overall impact of medication appropriateness on patient outcomes. The large number of exclusions due to incomplete FORTA assessments, coupled with the relatively short duration of patient observation and significant follow-up losses, introduces uncertainty in our results, and broader generalizations should be made with caution. The lack of statistical significance and the broad confidence intervals further underscore the need for careful interpretation of our findings. Further research is necessary to clarify the relationship between the FORTA score, with a specific cut-off, and different outcomes. We hope that future versions of the FORTA criteria will include more pathologies, for a fuller evaluation of patients, and a list of ICD codes for FORTA diagnoses to reduce the risk of incorrect matching.
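For context on how score cut-offs of the kind discussed above are often derived, one common approach is to pick the threshold that maximises Youden's J on a ROC analysis. The Python sketch below uses hypothetical scores and outcomes; it also illustrates why the chosen cut-off is unstable in small samples, since a single relabeled patient can move it:

```python
def youden_cutoff(scores, outcomes):
    """Choose the cutoff maximising Youden's J = sensitivity + specificity - 1.
    scores: numeric risk scores; outcomes: 1 = event occurred, 0 = no event.
    With few patients this choice is unstable, illustrating the overfitting
    risk of deriving cut-offs by ROC analysis in small samples."""
    pos = sum(outcomes)              # patients with the event
    neg = len(outcomes) - pos        # patients without the event
    best_cut, best_j = None, -1.0
    for cut in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, outcomes) if s >= cut and y == 1)
        tn = sum(1 for s, y in zip(scores, outcomes) if s < cut and y == 0)
        j = tp / pos + tn / neg - 1  # Youden's J at this cutoff
        if j > best_j:
            best_cut, best_j = cut, j
    return best_cut, best_j

# Hypothetical scores and outcomes (illustrative only, not study data).
scores = [2, 3, 4, 5, 6, 7, 8, 9]
events = [0, 0, 1, 0, 1, 1, 0, 1]
print(youden_cutoff(scores, events))  # -> (4, 0.5)
```

Using a pre-specified cut-off from the criteria authors, as done in this study, avoids re-fitting this threshold to one's own small sample.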
Our findings suggest that the FORTA score may not have a clear or consistent relationship with impaired cognitive and physical function, adverse clinical events, or mortality among older adults discharged from internal medicine and geriatric wards. However, these results are uncertain and should be interpreted with caution because of potential biases and relevant limitations, such as the small sample size, the exclusion of patients with incomplete FORTA scores, and the relatively short observation period. Further research is needed to define specific cut-off values for different clinical outcomes, and the number of drugs and clinical conditions considered in the FORTA criteria should be expanded to improve the accuracy and comprehensiveness of the FORTA score assessment. The authors are very grateful to the investigators for data collection and to J.D. Baggott for language editing. See online supplementary material Appendix S5 for a list of investigators and co-authors. This was a retrospective study, and data collection complied fully with Italian law on personal data protection. All data were anonymous, and informed consent was not required for the purpose of the study. The study was first approved by the Ethical Committee of the coordinating clinical unit (IRCCS Cà Granda Maggiore Hospital Foundation, Milan, Italy). The ELICADHE study was approved and financially supported by the Italian Medicines Agency (AIFA) under the 2008 Italian Program for Independent Research (Project no. FARM87SA2B). The authors have no conflicts of interest to declare. This study was not supported by any sponsor or funder. All authors participated in drafting or critical revision for important intellectual content. Individual contributions are as follows: Marina Azab and Luca Pasina designed the study, interpreted the data, and wrote the manuscript; Alessio Novella performed and interpreted the statistical analyses.
All authors read and approved the final version of the paper and agreed to be accountable for the work.
Composition of soil fungal communities and microbial activity along an elevational gradient in Mt. Jiri, Republic of Korea

Approximately 64% of the Republic of Korea consists of mountainous regions, which can be divided into subalpine areas below the tree line and alpine areas above the tree line, starting at the 1,400 m mark . Mountain ecosystems in general, and the alpine and subalpine ecosystems of Mt. Jiri in particular, are sensitive to climate change as cold and high-altitude regions . Microorganisms play crucial roles in subalpine ecosystems, contributing significantly to processes such as nutrient cycling, organic matter decomposition, and soil development. Specifically, soil microorganisms regulate the flow of essential nutrients, such as phosphorus, sulfur, potassium, iron, manganese, and zinc, and contribute to nitrogen (N) cycling through transformations such as nitrification, denitrification, and ammonification . Through enzymatic activity, they decompose complex compounds, such as lignin and cellulose, contributing to the overall carbon cycle . Finally, they form mutualistic, commensalistic, or parasitic relationships that influence vegetation development and health . However, despite their potential role in regulating climate change, especially in colder and higher-altitude ecosystems such as alpine and subalpine areas, the role of microorganisms in climate feedback loops is rarely the focus of related studies . The variation in temperature and precipitation induced by increasing altitude serves as an appropriate approximation of a climate gradient, along which the microbiome plays an important role in biogeochemical cycling. Large-scale climatic shifts modify local vegetation and edaphic conditions, such as soil pH and soil moisture content, which in turn influence microbial communities .
For instance, warmer temperatures increase microbial activity, leading to generally higher rates of N mineralization and nitrification. Higher N availability makes plants less inclined to form mutualistic relationships with mycorrhizal fungi, resulting in a shift in the mycobiome towards more generalized species with lower diversity . The soil microbiome may thus be a valuable indicator of the direct and indirect effects of climate change on ecosystems, and inspection of soil microbial communities can offer insights into how changes in microbial diversity and activity patterns contribute to alterations in soil nutrient dynamics, plant-microbe interactions, and ecosystem health. Hence, this study focused on a key microbial group, the soil-inhabiting fungal communities of Mt. Jiri, an area particularly susceptible to climate alterations as a cold and high-altitude region. Factors influencing soil microbial communities in alpine and subalpine ecosystems have been studied for decades . Biotic and abiotic components, such as vegetation type, temperature, pH, moisture content, and soil type, are key determinants of the functions and diversity of microbial communities. Additionally, elevation has been reported to significantly impact microbial diversity, activity, and community composition, with the relative abundances of major functional groups varying along the elevational gradient. For example, a decreasing fungi-to-bacteria ratio with increasing elevation was found in the Austrian Limestone Alps (900–1,900 m) , but the opposite trend was reported in the Austrian Central Alps . Fungal communities generally exhibited contrasting patterns, such as a decrease in diversity with increasing altitude or a hump-shaped trend with the highest alpha diversity reported at mid-altitudes . The within-community responses to elevation were likewise not uniform. In an East African Mt.
Kilimanjaro study (767–4,190 m), the major phylum Ascomycota decreased with elevation, Glomeromycota followed a hump-shaped curve, while Chytridiomycota showed a U-shaped trend . Broadly, these contradictory findings may be attributable to the confounding effects of regional-scale environmental factors, such as geography, rock parent material, and seasonality , making it difficult to establish consensus in the literature regarding general diversity patterns or community composition. However, among different functional groups, fungal communities were shown to respond more strongly to regional-scale factors, such as mean annual temperature and precipitation, than to local-scale factors, such as soil pH and total carbon , highlighting their importance in climate change impact studies. To deepen our understanding of the factors shaping soil microbial communities in alpine and subalpine ecosystems under climate change, we analyzed soil microbial functions and fungal community composition along an elevational gradient on Mt. Jiri. Mt. Jiri is a significant natural and cultural resource in the Republic of Korea, renowned for its rich biodiversity and representative subalpine and alpine landscapes. However, it has undergone significant destruction and changes in land use, particularly in the lower regions, due to human activities including post-Korean War logging and slash-and-burn agriculture. In 1967, Mt. Jiri was designated as the Republic of Korea’s first National Park, and since then, substantial conservation efforts have been made to restore and protect the mountain’s ecosystem . Thus, understanding its internal working mechanisms and the effects of climate change on it is of great national relevance. We aimed to address the following exploratory questions: (i) How is microbial activity on Mt. Jiri influenced by altitude? (ii) What are the dominant fungal phyla in Mt. Jiri soils and how do their relative abundances vary with elevation?
(iii) How does the overall fungal diversity vary along the altitudinal gradient on Mt. Jiri? (iv) What edaphic factors predict soil fungal community composition on Mt. Jiri? To establish an elevational gradient on Mt. Jiri (35°17′23.64″–35°19′26.76″N, 127°29′36.6″–127°34′11.64″E), four sampling altitudes (600, 1,000, 1,200, and 1,400 m) along the western slope were selected . The designated elevational sites belong to the permanent research station of Seoul National University. Soil samples were collected from each site in September 2021, October 2021, April 2022, and September 2022. For the 2021 samplings, we collected samples from two sites per altitude level, and for the 2022 samplings, we increased the number of sites to three per altitude level. All sites were 20 × 20 m in size and set up within a 500-m area, but the distances between them and their arrangement varied at each altitude owing to the constraining topographic features at each level. The dominant species were Pinus densiflora and Acer pseudosieboldianum at the 600-m site, Fraxinus rhynchophylla and Acer pictum Thunb. var. mono at the 1,000-m site, Quercus mongolica and Fraxinus sieboldiana at the 1,200-m site, and Rhododendron schlippenbachii and Quercus mongolica at the 1,400-m site. As understory vegetation, Sasamorpha borealis was present across all sites. Each sample was collected in two replicates, and the same sampling method was used throughout. After removal of the organic layer, approximately 700 g of soil was collected from a depth of 0–25 cm, stored in sealed bags, and transported on ice to the laboratory. Soils were passed through a 2-mm sieve, and any roots, debris, or residues were removed, leaving approximately 500 g of soil. The resulting soils were stored at −20 °C before the subsequent analyses.
Due to restricted access, we were unable to collect samples from certain sites in September 2021 (1,200 m), October 2021 (1,000 and 1,400 m), and April 2022 (1,200 m). The sand, silt, and clay percentages were 41.8%, 40.4%, and 17.8% at the 600-m sites and 36.7%, 50.2%, and 13.1% at the 1,200-m sites, respectively, with only the silt content showing a significant difference . The δ13C of soil organic carbon (SOC) was about −25‰, which is expected for forested areas with no known history of agricultural C4 plants . The Δ14C-SOC at a depth of 0–15 cm was similar between the 600- and 1,200-m sites, but at a depth of 15–30 cm it was approximately −12‰ at 600 m and −80‰ at 1,200 m, suggesting that the SOC at the 1,200-m site is significantly older than that at the 600-m site . Soil analyses Soil pH was determined using a pH meter (Mettler Toledo, Greifensee, Switzerland) after shaking 10 g of soil in distilled water at a ratio of 1:5 (w/v) for 30 min. Water content was measured gravimetrically for 20 g of soil. Organic matter content, total N and its inorganic fractions, total carbon, and cation exchange capacity were measured at the National Instrumentation Center for Environmental Management (NICEM, Seoul, Republic of Korea). Soil microbial characteristics Soil enzyme activity To determine microbial community metabolism, fluorometric assays were performed using 4-methylumbelliferone (MUB)-linked substrates, as previously described in . In short, the activities of β-1,4-glucosidase (BG), cellobiohydrolase (CBH), N-acetylglucosaminidase (NAG), acid phosphatase (AP), and β-1,4-xylosidase (BX) were measured; these are extracellular enzymes involved in nutrient cycling that increase nutrient availability or degrade cellulose, hemicellulose, and chitin in the soil . Two grams of soil were suspended in 125 mL of sodium acetate buffer, and the slurry was transferred to a 96-well microplate that included eight analytical replicates of each enzyme assay.
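The conversion from raw plate-reader fluorescence to the reported activity units (nmol 4-MUB g−1 h−1) can be sketched as below. This is a simplified illustration in Python with a hypothetical standard-curve slope and per-well dry-soil mass; published protocols additionally subtract substrate controls and correct for fluorescence quenching by the soil slurry:

```python
def enzyme_activity_nmol_per_g_h(sample_fluor, blank_fluor, std_slope,
                                 soil_dry_g, incubation_h):
    """Convert raw plate-reader fluorescence to enzyme activity in
    nmol 4-MUB g^-1 h^-1. std_slope is the fluorescence per nmol 4-MUB
    from a standard curve; soil_dry_g is the dry-soil mass represented
    in the assayed well. Real assays additionally correct for substrate
    controls and quenching by the soil slurry."""
    net_fluor = sample_fluor - blank_fluor   # background-corrected signal
    nmol_mub = net_fluor / std_slope         # fluorescence -> nmol product
    return nmol_mub / (soil_dry_g * incubation_h)

# Illustrative numbers only (hypothetical slope and per-well soil mass).
print(enzyme_activity_nmol_per_g_h(5200, 200, 250.0, 0.016, 2))
```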
Plates containing all five enzymes were incubated at 24 °C for 2 h. Fluorescence was measured using a Synergy HT multi-mode microplate reader (BioTek Instruments Inc., Winooski, VT, USA), with the excitation wavelength set at 360 nm and emission measured at 460 nm. Enzyme activity was expressed as nmol 4-MUB g−1 h−1. Soil microbial biomass Soil microbial biomass carbon was measured using the chloroform fumigation-extraction method . Ten grams of non-fumigated and chloroform-fumigated soils were extracted using 0.5 M potassium persulfate (K2S2O8). The C concentration was determined using a SIEVERS 900 TOC analyzer (GE Analytical Instruments, Boulder, CO, USA), and a conversion factor of 0.45 was applied to estimate the biomass C content from the carbon concentrations recorded by the analyzer. Soil fungal communities DNA extraction and MiSeq sequencing Soil genomic DNA was extracted using a DNeasy PowerSoil Pro Kit (Qiagen, Hilden, Germany) according to the manufacturer’s instructions. The extracted soil DNA was quantified with a Nanodrop spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA) and stored at −20 °C until further use. PCR amplification was performed using the universal internal transcribed spacer (ITS) region-targeting primers ITS3F (5′-GCATCGATGAAGAACGCAGC-3′; ) and ITS4R (5′-TCCTCCGCTTATTGATATGC-3′; ). The amplification was performed similarly to , under the following conditions: initial denaturation at 95 °C for 5 min, followed by 30 cycles of denaturation at 95 °C for 30 s, primer annealing at 55 °C for 30 s, and extension at 72 °C for 30 s, with a final elongation at 72 °C for 5 min. PCR products were confirmed using 2% agarose gel electrophoresis and visualized using a Gel Doc system (BioRad, Hercules, CA, USA). The amplified products, which were 250 bp in size, were purified using a QIAquick PCR purification kit (Qiagen, Hilden, Germany).
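The fumigation-extraction biomass calculation described above can be sketched as follows. Here the 0.45 factor is treated as the extraction-efficiency divisor (k_EC) of the fumigation-extraction method, which is the usual convention; whether the study multiplied or divided by it is an assumption, and the input values are purely illustrative:

```python
def microbial_biomass_c(c_fumigated, c_unfumigated, k_ec=0.45):
    """Chloroform fumigation-extraction: microbial biomass C is the
    extractable C flush (fumigated minus non-fumigated extract C)
    divided by the extraction-efficiency factor k_EC = 0.45 (an
    assumed convention here). Units follow the inputs, e.g., ug C
    per g dry soil."""
    flush = c_fumigated - c_unfumigated
    return flush / k_ec

# Illustrative extract concentrations (ug C g^-1 soil), not study data.
print(microbial_biomass_c(310.0, 130.0))  # -> about 400
```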
DNA sequencing was performed by CJ Bioscience (Seoul, Republic of Korea) using the MiSeq platform (Illumina, San Diego, CA, USA) according to the manufacturer’s instructions. Sequencing data are available at NCBI SRA under the project accession code PRJNA1144666 . Fungal community analyses The demultiplexed FASTQ files received from CJ Bioscience were imported into the bioinformatics platform QIIME2 (version 2022.11.1) and prepared for downstream analysis. Forward and reverse reads were merged using the DADA2 plugin, which also performed quality control by trimming and truncating the sequences, denoising, and removing chimeras. Taxonomy was assigned to the obtained amplicon sequence variants (ASVs) using the q2-feature-classifier plugin with a Naive-Bayes classifier pre-trained on the UNITE database version 9.0 . The ASV tables and associated taxonomies were then imported into R version 4.3.0, where alpha and beta diversities were computed using the R package vegan (version 2.6-4) . We used the open annotation tool FunGuild (version 1.1) to match ASVs to potentially corresponding functional guilds, along with the confidence level of each match and further subdivisions, such as trophic mode and growth morphology. The input libraries had a median of 69,151 reads per sample, and an average of 56.83% of the original reads was retained after quality filtering. The 24 samples yielded a total of 6,105 observed features, of which 2,613 were assigned to a functional guild by FunGuild. Statistical analyses Statistical variance and correlation analyses were performed using SPSS Statistics for Windows (version 25.0; IBM Corp., Armonk, NY, USA). Data are presented as arithmetic means with standard errors. Because achieving uniform sampling across all elevation levels was challenging, differences across altitudes were explored with the non-parametric Kruskal–Wallis test, which relies on ranks rather than means, followed by Dunn’s post-hoc test.
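The alpha diversity computed with vegan has a simple closed form; as an illustration, a minimal Python sketch of the Shannon index on toy ASV count vectors (the count data are hypothetical):

```python
import math

def shannon(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) on an ASV count vector,
    matching vegan's diversity(x, index = "shannon")."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Toy ASV count vectors: a perfectly even vs. a dominated community.
print(round(shannon([25, 25, 25, 25]), 3))  # even community -> ln(4) = 1.386
print(round(shannon([97, 1, 1, 1]), 3))     # dominated community -> 0.168
```

Evenness and richness both raise H′, which is why a shift toward a few dominant taxa lowers the index even when the number of ASVs is unchanged.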
Dunn’s test has been previously reported to reduce the effects of uneven sample sizes , and hence was the preferred test in this study. Spearman’s rank correlation was performed to test for correlations between elevation and both environmental and microbial properties. For analyses concerning variables independent of elevation level, as most violated the assumption of normality, we likewise opted for Spearman’s rank correlation . All analyses were performed at a significance level of α = 0.05. Multivariate analyses of the fungal community were performed using R version 4.3.0. Principal coordinates analysis (PCoA), based on the Bray–Curtis distance, was used to assess beta diversity across the four elevational levels. Additionally, Kruskal–Wallis variance testing, followed by the Wilcoxon rank-sum test as the post-hoc method, was employed to identify significantly different phyla and genera between the elevation levels. To account for multiple comparisons, the Benjamini–Hochberg correction was applied, as it is reported to be well-suited for noisy data such as microbial datasets . Redundancy analysis (RDA) was performed to determine the environmental factors that best correlate with the dominant fungal phyla across all sites. Only variables that were relevant to the model and free from collinearity were included in the final analysis. Seasonal influences Sampling was performed four times during two seasons, spring and autumn. To ensure that the observed changes in edaphic properties and microbial quantity and activity were an effect of altitude rather than seasonality, we tested for differences between seasons and found no statistically significant differences, except for soil pH at the 600-m site. DNA extraction and sequencing for fungal community analysis were performed only on soils sampled in September 2021 and 2022, and were therefore not subject to seasonal influences.
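The Bray–Curtis dissimilarity underlying the PCoA described above also has a simple closed form. A minimal Python sketch on toy abundance vectors (illustrative data only):

```python
def bray_curtis(x, y):
    """Bray-Curtis dissimilarity between two abundance vectors:
    BC = sum(|x_i - y_i|) / sum(x_i + y_i), ranging from 0 (identical
    communities) to 1 (no shared taxa). This is the distance fed into
    the PCoA ordination."""
    num = sum(abs(a - b) for a, b in zip(x, y))
    den = sum(a + b for a, b in zip(x, y))
    return num / den

# Toy ASV abundance profiles for two samples.
print(bray_curtis([10, 0, 5], [6, 4, 5]))   # -> 8/30 ≈ 0.267
print(bray_curtis([10, 0, 5], [10, 0, 5]))  # identical -> 0.0
```

Because it is abundance-weighted and ignores joint absences, Bray–Curtis is a common default for community data, as in vegan's `vegdist`.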
Soil pH was determined using a pH meter (Mettler Toledo, Greifensee, Switzerland) after shaking 10 grams of soil in distilled water at a ratio of 1:5 (w/v) for 30 min. Water content was measured gravimetrically for 20 g of soil. Organic matter content, total N and its inorganic fractions, total carbon, and cation exchange capacity were measured at the National Instrumentation Center for Environmental Management (NICEM, Seoul, Republic of Korea). Soil enzyme activity To determine microbial community metabolism, fluorometric assays were performed using methylumbelliferone (MUB)-linked substrates, as previously described in . In short, the activities of β-1,4-glucosidase (BG), cellobiohydrolase (CBH), N-acetylglucosaminidase (NAG), acid phosphatase (AP), and β-1,4-xylosidase (BX) were measured; these are extracellular enzymes involved in nutrient cycling that increase their availability or degrade cellulose, hemicellulose, and chitin in the soil . Two grams of soils were placed in 125 mL of sodium acetate buffer, and the slurry was transferred to a 96-well microplate that included eight analytical replicates of each enzyme assay. Plates containing all five enzymes were incubated at 24 °C for 2 h. Fluorescence was measured using a Synergy HT multi-mode microplate reader (BioTek Instruments Inc., Winooski, VT, USA), in which the excitation energy was set at 360 nm, and emission was measured at 460 nm. The enzyme activity was expressed as nmol 4-MUB g −1 h −1 . Soil microbial biomass Soil microbial biomass carbon was measured using the chloroform fumigation-extraction method . Ten grams of non-fumigated and chloroform-fumigated soils were extracted using 0.5 M potassium persulfate (K 2 S 2 O 8 ). The C concentration was determined using a SIEVERS 900 TOC analyzer (GE Analytical Instruments, Boulder, CO, USA), and a conversion factor of 0.45 was applied to estimate the biomass C content from the carbon concentrations recorded by the analyzer. 
To determine microbial community metabolism, fluorometric assays were performed using methylumbelliferone (MUB)-linked substrates, as previously described in . In short, the activities of β-1,4-glucosidase (BG), cellobiohydrolase (CBH), N-acetylglucosaminidase (NAG), acid phosphatase (AP), and β-1,4-xylosidase (BX) were measured; these are extracellular enzymes involved in nutrient cycling that increase their availability or degrade cellulose, hemicellulose, and chitin in the soil . Two grams of soils were placed in 125 mL of sodium acetate buffer, and the slurry was transferred to a 96-well microplate that included eight analytical replicates of each enzyme assay. Plates containing all five enzymes were incubated at 24 °C for 2 h. Fluorescence was measured using a Synergy HT multi-mode microplate reader (BioTek Instruments Inc., Winooski, VT, USA), in which the excitation energy was set at 360 nm, and emission was measured at 460 nm. The enzyme activity was expressed as nmol 4-MUB g −1 h −1 . Soil microbial biomass carbon was measured using the chloroform fumigation-extraction method . Ten grams of non-fumigated and chloroform-fumigated soils were extracted using 0.5 M potassium persulfate (K 2 S 2 O 8 ). The C concentration was determined using a SIEVERS 900 TOC analyzer (GE Analytical Instruments, Boulder, CO, USA), and a conversion factor of 0.45 was applied to estimate the biomass C content from the carbon concentrations recorded by the analyzer. DNA extraction and MiSeq sequencing Soil genomic DNA was extracted using a DNeasy PowerSoil Pro Kit (Qiagen, Hilden, Germany), according to the manufacturer’s instructions. The extracted soil DNA was quantified by a Nanodrop spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA) and stored at −20 °C until further use. PCR amplification was performed using the universal internal transcribed spacer (ITS) region targeting primers ITS3F (5′-GCATCGATGAAGAACGCAGC-3′; ) and ITS4R (5′-TCCTCCGCTTATTGATATGC-3′; ). 
The amplification was performed similarly to , under the following conditions: initial denaturation at 95 °C for 5 min, followed by 30 cycles of denaturation at 95 °C for 30 s, primer annealing at 55 °C for 30 s, and extension at 72 °C for 30 s, with a final elongation at 72 °C for 5 min. PCR products were confirmed using 2% agarose gel electrophoresis and visualized using a Gel Doc system (BioRad, Hercules, CA, USA). The amplified products, which were 250 bp in size, were purified using a QIAquick PCR purification kit (Qiagen, Hilden, Germany). DNA sequencing was performed by CJ Bioscience (Seoul, Republic of Korea) using the MiSeq platform (Illumina, San Diego, CA, USA) according to the manufacturer’s instructions. Sequencing data are available at NCBI SRA under the project accession code PRJNA1144666 . Fungal community analyses The demultiplexed FASTQ files received from CJ Bioscience were inputted into the bioinformatics platform QIIME2 (version 2022.11.1) and prepared for further downstream analysis. Front and reverse reads were merged using the DADA2 plugin while performing quality control by trimming and truncating the sequences, denoising, and removing existing chimeras. Taxonomy was assigned to the obtained amplicon sequence variants (ASV) using the q2-feature-classifier plugin employing a pre-trained Naive-Bayes classifier on the UNITE database version 9.0 . The OTU tables and associated taxonomies were further imported into R version 4.3.0, where alpha and beta diversities were computed using the R package vegan (version 2.6-4) . We used the open annotation tool FunGuild (version 1.1) to match ASVs to potentially corresponding functional guilds, with a mention of the confidence level of the match and more subdivisions, such as the trophic mode and growth morphology. The input sequences had a median value of 69,151 reads per sequence, and an average of 56.83% of the original reads was retained after quality filtering. 
The 24 samples had a total of 6,105 observed features, of which 2,613 were assigned to a functional guild by FunGuild. Soil genomic DNA was extracted using a DNeasy PowerSoil Pro Kit (Qiagen, Hilden, Germany), according to the manufacturer’s instructions. The extracted soil DNA was quantified by a Nanodrop spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA) and stored at −20 °C until further use. PCR amplification was performed using the universal internal transcribed spacer (ITS) region targeting primers ITS3F (5′-GCATCGATGAAGAACGCAGC-3′; ) and ITS4R (5′-TCCTCCGCTTATTGATATGC-3′; ). The amplification was performed similarly to , under the following conditions: initial denaturation at 95 °C for 5 min, followed by 30 cycles of denaturation at 95 °C for 30 s, primer annealing at 55 °C for 30 s, and extension at 72 °C for 30 s, with a final elongation at 72 °C for 5 min. PCR products were confirmed using 2% agarose gel electrophoresis and visualized using a Gel Doc system (BioRad, Hercules, CA, USA). The amplified products, which were 250 bp in size, were purified using a QIAquick PCR purification kit (Qiagen, Hilden, Germany). DNA sequencing was performed by CJ Bioscience (Seoul, Republic of Korea) using the MiSeq platform (Illumina, San Diego, CA, USA) according to the manufacturer’s instructions. Sequencing data are available at NCBI SRA under the project accession code PRJNA1144666 . The demultiplexed FASTQ files received from CJ Bioscience were inputted into the bioinformatics platform QIIME2 (version 2022.11.1) and prepared for further downstream analysis. Front and reverse reads were merged using the DADA2 plugin while performing quality control by trimming and truncating the sequences, denoising, and removing existing chimeras. Taxonomy was assigned to the obtained amplicon sequence variants (ASV) using the q2-feature-classifier plugin employing a pre-trained Naive-Bayes classifier on the UNITE database version 9.0 . 
Statistical variance and correlation analyses were performed using SPSS Statistics for Windows (version 25.0; IBM Corp., Armonk, NY, USA). Data are presented as arithmetic means with standard errors. For exploring variance across altitudes, since achieving uniform sampling across all elevation levels was challenging, the non-parametric Kruskal–Wallis test, which relies on ranks rather than means, and Dunn's post-hoc test were employed. Dunn's test has previously been reported to reduce the effects of uneven sample size , and hence was the preferred test in this study. Spearman's rank correlation was performed to test for correlations between elevation and both environmental and microbial properties. For analyses concerning variables independent of elevation level, as most violated the assumption of normality, we likewise opted for Spearman's rank correlation . All analyses were performed at a significance level of α = 0.05. Multivariate analyses of the fungal community were performed using R version 4.3.0. Principal coordinates analysis (PCoA), based on the Bray–Curtis distance, was used to assess beta diversity across the four elevational levels.
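Beta diversity was computed in R (vegan's avgdist over Bray–Curtis distances); the dissimilarity itself reduces to a simple formula. A minimal sketch with invented abundance vectors:

```python
def bray_curtis(a, b):
    """Bray-Curtis dissimilarity between two abundance vectors:
    sum(|a_i - b_i|) / (sum(a) + sum(b)); 0 = identical, 1 = no shared taxa."""
    if len(a) != len(b):
        raise ValueError("vectors must cover the same taxa")
    return sum(abs(x - y) for x, y in zip(a, b)) / (sum(a) + sum(b))

# Invented ASV counts for two samples (illustrative only)
low_site = [40, 30, 20, 10]
high_site = [10, 10, 10, 70]
d = bray_curtis(low_site, high_site)  # 0.6 for these counts
```

A full PCoA would then embed the matrix of all pairwise dissimilarities via classical multidimensional scaling, which is what the R tooling performs on top of this distance.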
Additionally, Kruskal–Wallis variance testing, followed by the Wilcoxon rank-sum test as the post-hoc method, was employed to identify significantly different phyla and genera between the elevation levels. To account for multiple comparisons, the Benjamini–Hochberg correction was applied, as it is reported to be well suited for noisy data such as microbial datasets . Redundancy analysis (RDA) was performed to determine the environmental factors that best correlate with the dominant fungal phyla across all sites. Only variables that were relevant to the model and free from collinearity were included in the final analysis.

Seasonal influences

Sampling was performed four times during two seasons, spring and autumn. To ensure that the observed changes in edaphic properties, microbial quantity, and activity were an effect of altitude rather than seasonality, we tested for differences between seasons and found no statistically significant differences, except for soil pH at the 600-m site. DNA extraction and sequencing for fungal community analysis were performed only on soils sampled in September 2021 and 2022, and were therefore not subject to seasonal influences.

Soil physicochemical characteristics

Soil temperature and pH in Mt.
Jiri decreased significantly with increasing elevation (Spearman's correlation: R temp = −0.923, p < 0.001; R pH = −0.445, p < 0.001), whereas soil water content was positively correlated with elevation (Spearman's correlation: R = 0.702, p < 0.001). The soil was acidic, with pH of 4.23–6.08, and soil pH was negatively correlated with soil water content (Spearman's correlation: R = −0.689, p < 0.001), with more acidic soils retaining more moisture. Significant elevation differences were identified, with pH values being higher at the 600- and 1,000-m sites and lower at the 1,200- and 1,400-m sites. The opposite trend was visible for soil water content, supporting the observed inverse correlation between the two properties ( and ). Soil water content was positively correlated with cation exchange capacity (CEC) (Spearman's correlation: R = 0.622, p = 0.031), and TN and TC were likewise positively correlated (Spearman's correlation: R = 0.829, p < 0.001). Soil organic matter, CEC, total nitrogen (TN, ) and total carbon (TC, ) showed no statistically significant elevational trends.

Microbial biomass and enzyme activity

To identify the soil microbial characteristics on Mt. Jiri, we investigated five extracellular enzymes and microbial biomass carbon. Among the activities of the five soil extracellular enzymes, cellobiohydrolase, β-1,4-glucosidase, and β-1,4-xylosidase activity exhibited differences among the elevation levels (Kruskal–Wallis: H CBH = 21.81, p < 0.001; H BG = 14.75, p = 0.002; H BX = 18.93, p < 0.001) , with pairwise comparisons indicating lower activity at 600 m than at 1,400 m ( p < 0.001). Soil microbial biomass was significantly lower at the 600- and 1,000-m sites than at the 1,200- and 1,400-m sites .
Moreover, microbial biomass was positively correlated with elevation and soil water content (Spearman's correlation: R elevation = 0.421, p < 0.001; R SWC = 0.735, p < 0.001), whereas the opposite was observed for soil pH (Spearman's correlation: R = −0.590; p < 0.001). To identify how microbial biomass carbon affects the trend of enzyme activity along the elevation slope, we repeated the altitude variance test for enzyme activity expressed per gram of microbial biomass carbon. The results for the three aforementioned enzymes remained within the same ranges. However, the activity of the β-1,4-N-acetylglucosaminidase enzyme normalized by microbial biomass carbon decreased with increasing elevation .

Soil fungal communities

Alpha and beta diversity

For this experiment, we clustered 517,442 quality sequences classified into 14,277 OTUs at the ≥97% similarity level, distributed across all samples. The number of observed features and the Shannon, Simpson, and inverse Simpson indices from the vegan package were used as metrics to assess fungal alpha diversity against the total feature count. An almost even distribution was observed among all the sites, with a high degree of diversity and heterogeneity. Two samples, one from the 600-m and one from the 1,000-m site, pulled the curve down, but they could be considered natural outliers . Additionally, the altitude variance test revealed no difference in alpha diversity among the elevation levels, and no correlation was identified with soil properties ( p > 0.05). Using vegan's avgdist algorithm at a subsampling depth of 21,900, we computed the beta diversity and observed three clusters, which may be supported by similar groupings found in the soil pH and soil moisture properties (for soils of the 600–1,000 and 1,200–1,400 m elevations) .

Community composition

The relative abundances of the five most common phyla in Mt.
Jiri were plotted, with Basidiomycota being the most abundant phylum, accounting for approximately 45.5% of the total sequences obtained, followed by Ascomycota and Mortierellomycota at 20.6% and 14.4% of the total sequences, respectively . presents the nine most abundant OTUs at each altitude site, regardless of taxonomic level, which included either families, such as Mortierellaceae , or genera, such as Amanita . The fungal communities at the 600-, 1,200-, and 1,400-m sites showed similar distributions of the five dominant phyla , but they also clustered with the 1,000 m-plot3 samples. This was supported by the PCoA beta diversity results, which grouped the 1,000 m-plot3 samples together with those of the 600-m site . The remaining 1,000-m site samples presented relatively fewer symbiotic fungi but a higher abundance of pathogenic fungi, the latter more often belonging to the phylum Ascomycota . Significance testing was performed to identify statistically different phyla and initially found Ascomycota , Basidiomycota , and Olpidiomycota to show differences. However, after applying the Benjamini–Hochberg correction, these differences were no longer statistically significant. Despite this, the elevational trend for Ascomycota and Basidiomycota can still be visualized in .

Relation between main fungal phyla and environmental variables

The environmental factors that were identified as relevant to the model and that best correlated with the dominant fungal phyla in Mt. Jiri are graphically represented in . reflects the influence of pH, moisture, and elevation on the fungal communities present in both years of sampling, 2021 and 2022, with pH showing the strongest relation ( F = 40.26, p < 0.001), followed by soil moisture ( F = 4.56, p = 0.039).
Additionally, we confirmed the influence of organic matter, temperature, TN, and CEC on the 2021 samples, with CEC ( F = 13.14, p = 0.006), TN ( F = 8.16, p = 0.015), and organic matter ( F = 5.08, p = 0.052) exhibiting statistically significant relations to fungal community composition .
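The Benjamini–Hochberg adjustment applied to the phylum comparisons above can be sketched in a few lines; the p-values below are invented to mirror the situation reported here, where raw significance disappears after correction:

```python
def benjamini_hochberg(pvals):
    """BH step-up adjusted p-values (monotone, capped at 1)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_min = 1.0
    # Walk from the largest p-value down, enforcing monotonicity
    for rank in range(m, 0, -1):
        idx = order[rank - 1]
        running_min = min(running_min, pvals[idx] * m / rank)
        adjusted[idx] = running_min
    return adjusted

# Invented raw p-values for three phylum comparisons (illustrative only):
# two fall below 0.05 before adjustment but not after
raw = [0.02, 0.04, 0.90]
adj = benjamini_hochberg(raw)  # the first two rise above 0.05
```

The procedure scales each sorted p-value by m/rank, so borderline raw p-values near α can easily lose significance once several comparisons are made, which matches the pattern described for Ascomycota, Basidiomycota, and Olpidiomycota.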
In the case of Mt. Jiri, soil pH decreased with altitude, likely in response to changes in vegetation cover and increased soil moisture retention at higher elevations. An independent investigation of soil pH at 600- and 1,200-m altitudes also demonstrated a decrease of soil pH with altitude .
In studies that have analyzed soils at higher altitudes (3,100–5,200 m), the soil pH value decreased with an increase in elevation, possibly due to the decline in vegetation cover and increase in precipitation rates, causing leaching of basic cations . In contrast, soil pH values increased with elevation over medium-altitude ranges (1,000–3,700 m) , whereas in other studies, no specific pattern could be observed . Although fewer data are available on soil moisture, the negative correlation between soil pH and soil moisture, which was also apparent in our study, has previously been reported , and moisture levels increased linearly at altitudes above the tree line . Differences in the results reported throughout the literature may be attributed to edaphic, climatic, or region-specific differences in the study areas. Among the five soil enzymes investigated, cellobiohydrolase, β-1,4-glucosidase, and β-1,4-xylosidase activities increased with elevation. According to , β-glucosidase and acid phosphatase activities were enhanced with elevation, showing the most significant correlations with C, N, and soil microbial biomass. However, in our study, no influence of microbial biomass carbon could be observed on the activity of these three enzymes, signifying that the increase in soil enzyme activity along the elevation gradient was not a result of larger microbial biomass. Possible influencing factors include soil properties, such as pH and moisture; nutrient availability, such as TN and, indirectly, organic matter; and other environmental factors, such as vegetation. In the case of β-1,4-N-acetylglucosaminidase, its activity showed no clear trend with altitude before normalization to microbial biomass. However, when normalized, its activity decreased with increasing elevation.
Since overall microbial biomass was greatest at higher altitudes, and β-1,4-N-acetylglucosaminidase is often an indicator of fungal activity and biomass in soils , this suggests that either fungal activity decreased or the bacteria-to-fungi biomass ratio shifted in favor of bacteria at higher elevations. With regard to fungal community composition, in the present study the 600-, 1,200-, and 1,400-m soils presented similar community distributions at the phylum and lower taxonomic levels, with the phylum Basidiomycota and its subordinate taxa, the genera Russula , Amanita , and Sebacina and the families Thelephoraceae and Inocybaceae , dominating communities at the three altitudes. Russula is an ectomycorrhizal symbiont that plays an important role in global forest ecosystems and typically thrives in neutral or acidic soils, as is the case for the Mt. Jiri soils . Additionally, Russula is closely linked to tree community composition and has been found to associate with the Pinaceae and Fagaceae families , which are the primary tree families reported on Mt. Jiri . In this study, Pinus densiflora , belonging to the Pinaceae family, at the 600-m sites, and Quercus mongolica , belonging to the Fagaceae family, at the 1,200- and 1,400-m sites, were identified as dominant species, hence explaining the prevalence of the genus at the three altitudes. In both the Northern Limestone and the Central Austrian Alps, the class Agaricomycetes decreased with elevation . In this study, the trend was reflected by the mycorrhizal symbiont genus Sebacina , which decreased gradually across the 600-, 1,200-, and 1,400-m sites, unlike the other genus belonging to the class Agaricomycetes , Russula . The higher abundance of the ectomycorrhiza-rich Thelephoraceae and Amanitaceae families likewise contributed to the dominance of the phylum Basidiomycota .
In the Northern Limestone and the Central Austrian Alps, symbiotrophs were most abundant at lower elevation sites (900 and 1,300 m, respectively) and were gradually replaced by saprotrophic fungi at middle and high elevations (1,300–1,900 and 1,600–2,100 m, respectively) . The same elevational trend was not observed in our research, except at the 1,000-m site, which was dominated by the phylum Ascomycota , including the saprotrophic genera Ciboria and Mycoarthris that were not present at other altitudes. Moreover, this study marks one of the rare records of Mycoarthris in the Republic of Korea, the genus previously having been recorded in fresh waters . Another abundant saprotrophic taxon was the genus Mortierella , which remained constant at all altitudes along the gradient. Additionally, the beta diversity analysis revealed clustering between the 1,000 m-plot3 and the 600-m sites, with dominant taxa shared between them. While the exact cause of this clustering is unclear, it is worth noting that, unlike 1,000 m-plot1 and -plot2, which are situated close to one another, plot3 is located in a more remote area at a slightly lower altitude. Generally, environmental factors such as pH, soil moisture, soil organic carbon, and various nutrients have been shown to shape fungal communities along elevational gradients . For example, in the Eastern Andes, Peru, fungal alpha diversity in the mineral horizon decreased linearly with elevation, with mean annual temperature as the deterministic factor, whereas fungal alpha diversity in the organic horizon followed a concave shape with its lowest point at mid-altitude . On Mount Norikura, Japan, overall diversity showed a dip along the elevation gradient, with the lowest value in the middle, around 1,700 m, the two most influential factors being the elevation gradient and mean annual temperature.
However, the main phyla present ( Ascomycota , Basidiomycota , Chytridiomycota , and Zygomycota ) showed a linear increase in abundance with higher elevation . Additionally, the fungal co-occurrence network, which depicts species as nodes and relationships for matter, energy, or information exchange as links, indicated decreased connectivity, with fewer links observed with increasing altitude. It also showed fewer keystone taxa, marked by fewer network nodes, compared with those at lower elevations . This indicates a less compact fungal network structure at higher altitudes, potentially because of decreased vegetation diversity and enhanced environmental stress, which manifests through soil physical properties. In this study, we observed similar components, such as soil moisture, organic matter, total N, temperature, and CEC, to be influential on Mt. Jiri . However, we found pH to be the leading driver of community changes, a finding supported by other research . Additionally, pH exhibited a close relation with the phylum Ascomycota , and the two main phyla, Ascomycota and Basidiomycota , had opposing responses to the environmental factors, as reported in previous studies . In a study that investigated fungal community differences based on the health of Korean fir trees on Mt. Halla in the Republic of Korea, the same three main phyla present in our research were identified; however, Ascomycota had the highest percentage, followed by Basidiomycota and Mortierellomycota . Similarly, a higher abundance of Ascomycota was associated with an increased presence of pathogenic fungi in the bulk soil and rhizosphere of dead Korean fir trees, as is the case for the 1,000-m soils in our study . Overall, on Mt. Jiri an increasing trend in fungal activity with elevation was observed, and Basidiomycota , Ascomycota , and Mortierellomycota were identified as the predominant phyla.
However, their relative abundance did not show any statistically significant elevational trend, and neither did the alpha diversity at the four altitude levels. Both local factors, such as soil pH, total N, organic matter content, and CEC, and regionally influenced factors, such as soil water content and temperature, were found to influence soil fungal communities. Hence, our study adds to the understanding that the diversity, structure, and driving mechanisms of fungal communities in alpine and subalpine ecosystems may be influenced by a vast number of contributing factors, leading to no universal pattern along the elevation gradient. In this study, we investigated the elevation gradient from 600 to 1,400 m on the second-tallest mountain in the Republic of Korea, Mt. Jiri, and analyzed soil properties and microbial community trends with elevation. Elevation was negatively correlated with soil pH, with soils becoming more acidic at higher altitudes. In addition, we confirmed the negative correlation between soil pH and soil moisture, the latter of which increased with elevation. These trends may be attributed to meteorological conditions, such as higher precipitation rates at higher altitudes leading to increased moisture, or to changes in vegetation cover. Microbial biomass also increased with elevation, and cellobiohydrolase, β-1,4-glucosidase, and β-1,4-xylosidase showed increased activity at higher elevations. However, no correlation was found between microbial biomass and enzyme activities, signifying that the increase in microbial biomass did not correspond to higher soil enzyme activity. Instead, the latter can be inferred to be a byproduct of the effects of pH, soil moisture, CEC, and TN, the environmental factors designated through RDA analysis as impacting community composition. Fungal alpha diversity showed no elevational trend, but did indicate a stable, rich fungal community throughout Mt.
Jiri, which had a different community composition, with diversification observed at mid-altitudes (two clusters at the 600–1,000 and 1,200–1,400 m elevations). Long-term monitoring and further comprehensive analyses of vegetation and soil biogeochemical properties are recommended to reveal the main factors controlling soil microbial community composition in the subalpine areas of Mt. Jiri.

10.7717/peerj.18762/supp-1 Supplemental Information 1: Raw data of soil and microbial properties measurements and the associated diversity metrics for the sequenced samples.

10.7717/peerj.18762/supp-2 Supplemental Information 2: A list of the matched OTUs and the assigned functional guilds by FunGuild.
Primary Care Support Tools for Digestive Health Care: A Mixed Method Study

Digestive disorders and related conditions impact the health and quality of life of many Canadians. A recent survey of global prevalence and burden of functional gastrointestinal (GI) conditions showed 41.3% (95% CI: 39.1–43.4) of Canadians surveyed ( N = 2029) reported having a functional GI disorder, slightly higher than the worldwide prevalence of 40.3% (95% CI: 39.3–40.7) . Colorectal cancer is currently the 3rd most diagnosed type of cancer among Canadians, affecting 1 in 14 men and 1 in 18 women . Furthermore, the burden of digestive disorders and diseases on both Canadians and provincial health systems is expected to increase as a large proportion of the population ages . The best-available Canada-wide data on wait times for gastroenterology and related care come from the now dated 2012 SAGE Survey of Access from the Canadian Association of Gastroenterology (CAG), estimating a median wait time for consultation of 92 days (95% CI: 85–100). The median wait time for procedures was 55 days (95% CI: 50–61), resulting in a total median wait time of 155 days (95% CI: 142–175). Average and median wait times vary significantly between provinces due to factors specific to the provincial health systems (macro) as well as individual patient and cultural differences (micro) . Similarly, provision of gastroenterology care is not uniform throughout the country. Telford et al. report the most common method of managing referrals to gastroenterology clinics is a "first-in/first-out, often with an ad hoc prioritization for urgent cases," while some provinces have implemented other methods of triaging patients using various "prioritization tools." Moreover, Switzer et al. note that general surgeons perform nearly half of all colonoscopies in Canada, particularly in Prince Edward Island and Manitoba.
While gastroenterologists are reported to perform a significant proportion of endoscopies in Alberta, other providers, such as general internists, primary care physicians (PCPs), and surgeons, perform endoscopic procedures as part of their practice. Across Canada, the COVID-19 pandemic reduced capacity in both primary and specialty care practices, increased referral backlog, and adversely impacted health outcomes of patients facing longer wait times for health services. For example, one Ontario-based study found a 38% decrease in endoscopic procedures for patients with inflammatory bowel disease (IBD) between 2019 and 2020 . The COVID-19 pandemic also limited training opportunities for gastroenterology trainees at a time when the shortage of gastroenterologists in Canada is a recognized barrier for patients with digestive health concerns. Khan et al. reported a statistically significant decrease in procedures performed between the pre-COVID period and the pandemic, contributing to longer training requirements and reduced clinical competency. The province of Alberta, situated in the western part of Canada, boasts a diverse and dynamic healthcare landscape. As one of the fastest-growing provinces in the country, Alberta has faced unique challenges in maintaining and improving healthcare access for its residents. In Alberta, patients with digestive health symptoms and conditions often face difficulty accessing specialty care within an appropriate timeframe. Long wait times exist for both GI specialist consultations and endoscopy services due to a high volume of referrals and limited endoscopy suite capacity . For patients categorized as "routine," wait times are a minimum of 12–24 months, which may result in lowered quality of life and adverse health outcomes.
Alberta has focused the restructuring of primary health care on building the patient-centered medical (PCM) home, intended to provide significant benefit to patients with chronic conditions through collaborative multidisciplinary care guided by their family physician. This collaborative, team-based, integrative care occurs within a primary care network (PCN) . Codeveloped clinical care pathways (specialty and primary care) have been implemented for a number of conditions to support care within primary care and optimize the appropriateness of referrals to specialty care . In addition, telephone support from specialist physicians is available through the Specialist Link and ConnectMD programs, as well as through electronic advice. Clinical care pathways are used in Canada and throughout the world as a tool to implement best-evidence guidelines into practice . Care pathways function as “recommendations for optimal management plans” utilizing concrete “sequences” of evaluation and timing for specific testing and treatment . Care pathways exist in various sectors of clinical care and for myriad conditions and can be an effective means of translating evidence-based practice into clinical care . Reported benefits of pathways include reductions in adverse events and mortality rates and decreased wait times for healthcare services and specialty care, among others . One Canadian study evaluating the impact of an acute care surgery pathway for appendicitis reported decreases in wait time from emergency department (ED) triage to surgery, sustained at 12-month follow-up . An Alberta-based study evaluating a perioperative glycemic management quality improvement pathway found use of the pathway increased screening rates as well as A1C testing for patients . There are nine clinical pathways for GI conditions in use in Alberta, including chronic abdominal pain, chronic constipation, chronic diarrhea, dyspepsia, gastroesophageal reflux disease (GERD), H. pylori, hepatitis C, irritable bowel syndrome (IBS), and nonalcoholic fatty liver disease (NAFLD) . Alongside the GI pathways, gastroenterology care in the urban centers of Alberta is provided through a centralized access and triage (CAT) system. The CAT system was founded in Calgary, Alberta, in 2005, within a community of academic gastroenterologists . This innovation contrasted starkly with the previous system, in which practitioners managed their own referrals and rosters on an individual basis. The proposed benefits of the CAT system include increased access and reduced wait times, as well as enhanced system knowledge that can better respond to areas of demand, gaps in provision of services or access, and other challenges. Use of the CAT system spread to Edmonton's University of Alberta Hospital in 2009 and throughout the province in the following years (Novak et al. 2013). This paper will demonstrate the following:
PCPs' and GI specialists' perceptions of primary care supports
Barriers and facilitators to implementing primary care supports, from both PCPs' and GI specialists' perspectives
A mixed method approach was used to explore PCPs' and GI specialists' experiences and satisfaction with the primary care support tools. Data collection took place between March and September 2022. This study was approved by the University of Calgary Conjoint Health Research Ethics Board (REB19-2106). All participants were provided with information on the project and how the data would be used.
2.1. Surveys
The survey was developed by the authors in consultation with key stakeholders, including PCPs and GI specialists. It was pilot tested on a small sample before being formally rolled out, with no changes required after pilot testing. The survey was developed and implemented using Select Survey. A different version was used for PCPs and GI specialists.
PCP participants were recruited through two main avenues: first, survey invitation letters were distributed during a primary care conference, and second, survey invitation letters were attached to referral closure letters from CAT centers in Calgary and Edmonton. GI specialists were recruited through dissemination of survey invitation letters through the Digestive Health Strategic Clinical Network (DHSCN) and through the project lead. The survey ran from March to June 2022 for both GI specialists and PCPs. A draw for a $500 gift card was also included at the end of the survey. The sampling frame included 112 Albertan GI specialists who received the online survey. The number of PCPs reached is unknown due to the methods of PCP recruitment. To increase the response rate, we included QR codes on the invitation that linked directly to the online survey. The invitation letter included a brief description of the project and a link/QR code to the survey. Participation was voluntary, and all information provided was anonymous and confidential. Only participants who expressed interest in participating in a qualitative interview and provided their contact information were approached. Survey data were analyzed in IBM SPSS version 25. Descriptive and inferential statistics were used to provide data summaries and associations. T-tests and chi-square tests were applied at the 95% confidence level; a p value <0.05 was considered significant. Responses to open-ended survey questions are provided under each section of the PCP and GI specialist survey findings.
2.2. Interviews
Survey responders who expressed interest in participating in a telephone interview were approached by the evaluation team. Seven GI specialists and 20 PCPs agreed to be contacted for the interview, with 19 PCP and 7 GI specialist interviews completed. Interviews were conducted from July 2022 to September 2022, each lasting 30 minutes. Participants were offered a $50 gift card as a token of appreciation.
Semistructured interview guides were used, with slightly different versions for PCPs and GI specialists. The interview guide was developed along with the survey. Due to restrictions from the Ethics Board, the survey data were not linked to the qualitative interviews, as the identifiers were deleted from the survey. Interview guide questions for GI specialists included participation and perceptions about CAT, awareness of primary care pathways, and challenges with wait times for GI referrals and endoscopies. The interview guide for PCPs included awareness and utilization of primary care pathways, perceptions about the benefits of pathways and central triage services, and the challenges with wait times for GI specialist referrals and endoscopies. Most interviews were conducted over MS Teams; a few were conducted by telephone. With permission from the participants, we also recorded all interviews to ensure accuracy during write-up and analysis. After removing identifiable information, each set of verbatim interview transcripts was cleaned of time stamps, repeated words, and inaccurate words inserted by transcription recording software. An evaluation team member compared the audio file with the written transcript to confirm accuracy of content. Two authors (MAA and RM) followed qualitative analysis processes, coded transcripts, and then compared the themes that emerged. During this process, overarching main themes were named and described. The main themes were agreed upon between team members. Once all members had completed theming, one team member merged all working copies into a master file. All files were stored on a secure server. Due to the voluntary nature of participation in both the survey and interviews, the PCP and GI samples consisted of self-selected individuals.
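The chi-square testing described in the survey analysis (run in IBM SPSS at p < 0.05) can be sketched in plain Python. The counts below are hypothetical, invented purely to show the mechanics of a 2x2 comparison such as ARP versus FFS specialists' agreement with a survey statement; they are not the study's data.

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table (sketch)."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    # Expected count for each cell under independence: row total * column total / n.
    expected = [
        [row1 * col1 / n, row1 * col2 / n],
        [row2 * col1 / n, row2 * col2 / n],
    ]
    # Sum of (observed - expected)^2 / expected over all four cells.
    return sum(
        (obs - exp) ** 2 / exp
        for obs_row, exp_row in zip(table, expected)
        for obs, exp in zip(obs_row, exp_row)
    )

# Hypothetical agree/disagree counts, NOT study data.
table = [[12, 3],   # ARP: agree, disagree
         [4, 8]]    # FFS: agree, disagree
stat = chi_square_2x2(table)
# With df = 1, the critical value at p = 0.05 is 3.841.
print(f"chi-square = {stat:.2f}, significant: {stat > 3.841}")
# prints: chi-square = 6.01, significant: True
```

In practice a statistics package (SPSS, or scipy.stats.chi2_contingency in Python) would also report the exact p value and apply continuity or exact-test corrections where cell counts are small.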
3.1. Survey Findings
3.1.1. Primary Care Provider Survey
A total of 36 PCPs responded to the survey. The majority of respondents were from Calgary (61.1%, n = 22) and Edmonton (25.0%, n = 9) . Most reported being a member of a PCN (86%, n = 31), most reported being in practice for 6–10 years (38.2%, n = 13), and most worked full-time (73.5%, n = 25). The majority of PCPs surveyed were female (73.5%, n = 25). Around 42% ( n = 14) of participants said they were using the pathways regularly. Roughly 48% ( n = 16) used them occasionally, while very few (9.1%, n = 3) were aware of the pathways but had not used them. Around 36.4% ( n = 12) of respondents always found primary care pathways useful, while 48.5% ( n = 16) only sometimes found them useful . Few (6.1%, n = 2) reported never finding the pathways useful, while 9.1% ( n = 3) rarely found the pathways useful. Most respondents strongly agreed (42.4%) or agreed (42.4%) that pathways were useful and felt they supported PCPs in caring for patients in the PCM home (39.4% strong agreement and 42.4% agreement). The largest proportion of respondents strongly agreed (45.5%) or agreed (39.4%) that pathways were used because GI specialists would not otherwise accept their referral. The largest proportion of disagreement from respondents was with the statement “Using a pathway saves time,” with 15.2% disagreeing and 21.2% strongly disagreeing. Respondents strongly agreed that patients preferred to see a GI specialist for assessment of their GI symptoms (34.4%) and that the lack of GI specialists outside of major urban centers (50.0%) contributed to wait times for access . The largest proportion of disagreement was with the statement, “Gastroenterologists follow up with certain patients longer than necessary, rather than transitioning their care to the medical home with recommendations” (28.1% disagreed and 21.9% strongly disagreed).
In addition, most respondents were neutral as to whether gastroenterology triage results in patients being seen who may not benefit from either GI specialist consultation and/or endoscopy (40.6% neither agreed nor disagreed).
3.1.2. GI Specialist Survey
There were 28 out of 112 GI specialists who completed the survey (response rate of 25%). Most (60.7%, n = 17) practiced in Calgary, with a nearly equal gender split: males (50.0%, n = 14) and females (39.2%, n = 11) . Most had been in practice for 6–10 years (28.6%, n = 8), and the majority had a full-time practice (96.4%, n = 27) and were participating in a CAT (73.9%, n = 17). The most cited remuneration model was the alternative relationship plan (ARP) (53.6%, n = 15), followed by fee-for-service (FFS) (42.9%, n = 12) . The majority of respondents believed CAT was an efficient use of healthcare resources (26.9% strongly agreed and 42.3% agreed) and that CAT ensures those with the greatest need are given high priority (15.4% strongly agreed and 57.7% agreed) . In addition, respondents reported that CAT reduced physician workload (11.5% strong agreement and 46.2% agreement) and provided a standardized means of measuring and reporting on referrals and wait times (26.9% strong agreement and 46.2% agreement). However, respondents were not convinced that CAT improved the quality of referrals (30.8% neither agreed nor disagreed) or access for patients (42.3% neither agreed nor disagreed). A large proportion of respondents believed participation in CAT reduced their autonomy (11.1% strongly agreed and 55.6% agreed) . Two contributing factors to this sense of lost autonomy were lack of agreement with the clinical criteria used in CAT (22.2% strongly agreed and 51.9% agreed) and inflexibility of the CAT criteria (22.2% strongly agreed and 48.1% agreed). However, most respondents disagreed that CAT programs increased their workload (40.7% disagreed and 14.8% strongly disagreed) or that CAT rendered their workload too high (37.0% disagreed and 11.1% strongly disagreed).
(1) Comparison between GI Specialists on FFS and ARP Models. Significant variations were seen between FFS and ARP GI specialists regarding their views about the benefits of CAT services and primary care pathways. FFS model specialists were significantly less likely to be involved in CAT , while ARP specialists were significantly more likely to strongly agree/agree with the statements that “CAT is an efficient use of health system resources” and “CAT improves access for patients” . The perceptions of ARP and FFS specialists also differed regarding pathway development and utilization. ARP specialists were significantly more likely to agree with the statement “pathways were codeveloped between primary and specialty care providers” , and they had significantly more positive responses towards pathways, such as “all specialties use a common format for pathways” and “pathways across all specialties are available in a single location” .
3.2. Qualitative Interviews
3.2.1. Primary Care Provider Interviews
We conducted 19 interviews with PCPs. Three PCPs had <5 years of PCP experience, while all others had 5 or more years of experience. There were 13 PCPs working full time. The following are key themes that emerged from the PCPs' interviews. (1) Awareness of Primary Care Pathways. All interviewed PCPs were aware of primary care pathways, and most regularly used them. PCPs mentioned the use of pathways not only helped to provide the best care to patients in the medical home but also helped ensure appropriate referrals were sent to the GI specialists. I think pathways are good because they follow the evidence-based practice. They give some things that you should consider, so maybe it will slow down the number of referrals needed for things that can be managed in primary care. [PCP, Edmonton] However, some PCPs were aware of the pathways but were not utilizing them fully in their practice.
I think that they [Pathways] are potentially very helpful, but I probably haven't implemented them fully yet in my practice. [PCP, Edmonton] It was mentioned that higher awareness of the GI pathways would be helpful. I think just raising more awareness of the breadth of the pathway. So I think you know I've looked at the pathways before, I'm familiar with the Specialist Link, but even for me, I didn't know that there were so many different pathways as existed for GI. [PCP, Calgary] One participant appreciated the educational sessions conducted to increase awareness of primary care pathways. Probably people who haven't done their due diligence and gone through the pathways when they get that back, probably get very frustrated. But again, I think that's like a medical education thing and hopefully, like the series that Dr. XX and her team did, if you know if it reaches more people, more people will be aware and then we will be sending less referrals and that will help. [PCP, Calgary] (2) Perceived Impact of Primary Care Tools on Reducing Inappropriate Referrals . Participants perceived a significant positive impact with the implementation of primary care supports such as pathways and telephone advice. During the interviews, one Calgary PCP provided an example where the patient was managed in a medical home. I use the pathway for chronic diarrhea. He was a patient that came to me and I had worked him up a little bit, but I thought that he was a, you know, in the age group where with his problem he might need a colonoscopy. His referral was rejected at the beginning. I did follow the pathway and determined he had the celiac disease after that. So, it prevented him from needing a colonoscopy. [PCP, Calgary] Similarly, a PCP from Edmonton also perceived a positive impact of ConnectMD on reducing inappropriate referrals. 
I think having our phone access to connect MD and I think it's maybe Specialist Link in Calgary, which is like the phone consult, I think that has been great. Probably helps to alleviate because sometimes referrals are just…. You're not really sure what the next step is or what you should do, or if it needs a referral or needs to be seen. So, a lot of those kinds of more ambiguous referrals can be taken care of effectively through the connect MD. [PCP, Edmonton] (3) Perceptions about the Benefits of Primary Care Pathways. Participants spoke of the benefits of primary care pathways. I find them [Pathways] very useful. I could just even use dyspepsia as an example. You know there sometimes with dyspepsia, we might not, initially because there's so much you're thinking about all at once, and you've only got 10 minutes for an appointment. So, your brain is trying to think of all the different things that we should be thinking of all at once. And sometimes we tend to miss something simple. [PCP, Edmonton] I think Specialist LINK integrating with them [Pathways] is a really good idea. I think that, you know, needs to be there so that you can kind of say oh, maybe they don't need to phone Specialist LINK. I'll just look at the pathways. [PCP, Calgary] One PCP pointed to the IBS pathway as an example they found very useful in patient management. Irritable bowel syndrome has also been a good tool. I find there's often a lot of anxiety for people with that diagnosis and fear that it's something else so going through the pathway can be very helpful to reassure the patient that we're managing their symptoms properly. [PCP, Calgary] (4) Challenges with Primary Care Pathways. One of the PCPs from Calgary explained that the number of diagnostic tests required to complete the pathways sometimes goes against the Choosing Wisely Canada guidelines.
If there's one piece of information missing, the whole referral is rejected, and then you have to get that and send it back the other challenge I have with that is that a lot of the, I'm quite interested in choosing Wisely Canada and some of their guidelines and a lot of the things that are recommended were insisted on by triage are actually contrary to what Choosing Wisely recommends, so we're maybe doing unnecessary tests and then we're responsible for those for the 2 1/2 years while the patients waiting to see the specialist [laughs] is very, [laughs] very, very challenging. [PCP, Calgary] Another PCP from Edmonton mentioned a very similar concern about ordering too many diagnostic tests to complete the pathways. I think they [pathways] are good. They're the fatty liver ones, pretty comprehensive, and kind of the workup that's needed for every patient. And you know, sometimes I feel like oh, this is way more than I would have ordered on most patients with just normal ALT for example. So that sometimes I wondered, am I like costing the system a lot more by doing every test on these patients? [PCP, Edmonton] Participants were concerned about completing the pathways for walk-in patients which they found very challenging. I found walk-in patients even more difficult and I cannot imagine having to do pathways. If I was doing a walk-in clinic. I mean, walk-in clinics are so rapid-paced and just so like there's no time to be doing extra pointing and clicking. [PCP, Edmonton] (5) CAT Streamlining the Referral Process . The implementation of CAT streamlines the referral process. The process has been more streamlined with it [CAT] and then also with the presence of Specialist Link. It is supposed to centralize things and make them more equal and standardized across the city, so it feels like it's fair. So you don't feel like some patients are getting into seeing certain specialists and others are not… it's all shared. [PCP, Calgary] I really like central access systems. 
I feel like that's a better way of triaging and getting to the right person faster rather than not being aware of all the different specialists and having to pick and choose one from a group of gastroenterologists. I like central access. [PCP, Calgary] (6) Challenges with CAT . PCPs also expressed frustration with CAT and believed that there could be a way for PCPs to get faster access when a PCP strongly feels a patient needs to be seen by a GI specialist quickly. If we are sending a really urgent referral, there are red flags. It's a legitimate reason for referral and you know our office of calling your central triage and trying to sort this out then to be told, Oh well, this is going to take several months, and then in several months being told, well, it'll still take several months, things like that shouldn't really be happening. [PCP, Calgary] One PCP mentioned that she started using CAT but after seeing a number of inappropriate refusals (i.e., a discrepancy between the referring PCP and CAT resulting in refusal of the referral), she stopped using the central triage in Calgary. Particularly for referrals requiring urgent attention, PCPs requested better referral processes to expedite urgent cases. I don't know if there's a better way to sort of have a, Is this quote urgent, and why on the form? Or is this routine or is this semi-urgent and why so that it puts the onus on the referring physician to explain why they feel it's urgent and maybe all those urgent check marks? Urgent referrals could be looked at first. [PCP, Calgary] Another PCP emphasized the need for better standardized processes for the referral. I think that a standardized referral form would be much better than a pathway so that the pathway means that have you done these tests and if you have written information on your referral letter which corresponds to that pathway, the referral gets accepted. [PCP, Calgary] (7) Education and Training . 
Participants agreed that additional education and training would make PCPs more comfortable and confident in caring for GI-related conditions and processes. Some education for family doctors and teams will be helpful to see what's happening within the gastroenterology world because some of us did our training quite a long time ago, so having those, and I mean the pathways are part of that, but having some more education to enable us to do a little bit more within primary care would be good. [PCP, Calgary] Also, more communication in the form of feedback from GI specialists regarding the referral quality would be helpful for PCPs. We just don't have enough communication between the two groups where they say to us, look, we're doing our best, we are so overwhelmed, we have 200 referrals a day; there's no way we can look at them all and you know it makes us then think am I over referring or am I providing enough information or am I providing too much information. [PCP, Calgary] A periodic clinical audit of referrals could be useful to use as an educational tool for improving the quality of referrals. I think there's a huge area for some education back to the referrers and the referring people could provide us with a little bit of [feedback], you know and hopefully, I think that could go a long way in improving the quality of referrals. [PCP, Calgary] 3.2.2. GI Specialist Interviews A total of seven GI specialists participated in the interviews. Three had <5 years of experience in GI, while the others had 6 or more years of experience. Four GI specialists were remunerated through FFS and 3 ARP/AMHSP. There were 5 GIs who regularly participated in CAT services, while two had never participated. There was significant response variation between the GI specialists working on FFS versus ARP/AMHSP models regarding the benefits of central triage, primary care pathways, and other tools. The following are key themes that emerged from the seven interviews. 
(1) Pathways and Appropriate GI Specialist Referrals . Participants agreed that there are many benefits to the pathways. According to one GI specialist, pathways provide a guideline for PCPs to consider when thinking about making a GI referral. There are many patients that I think could be realistically managed in their medical home with their family practitioner. The pathway would help identify those patients by giving the general practitioners a path to follow, to order the appropriate tests, excluding other diagnoses, and for assistance with management. [GI specialist, Fee for Service] I've been working in Calgary for about three years. So, I think the uptake [of using the pathways] has increased. Maybe 75% of my patients have completed pathways when I see them. [GI specialist, Fee for Service] All GI specialists included in the interviews were aware of primary care pathways; however, it appeared that GI specialists believed that more awareness is needed among PCPs. I think creating more awareness would be helpful. I don't know, like how they're reaching out to different PCNs about these, but there are definitely people like GPs and some good GPs out there who aren't aware that these pathways exist and that they're really good resources. [GI specialist, ARP] (2) Benefits of CAT . GI specialists noticed a reduction in their waitlist because of implementing primary care support tools. Like I'm a big fan and proponent of, you know, central triage. I think it creates more equity and just much more standardization. And I think having the pathways and having central triage, I think those are good ways to kind of streamline and consults cut down on unnecessary wait times, and just overall improve the efficiency within the system. [GI specialist, ARP] GI specialists perceived that access has been improved as a result of implementing CAT. I have seen that these patients sometimes have been jerked around as different doctors have passed them. 
By using central triage, it makes a patient have to be seen by a GI specialist eventually. So I've seen that as actually beneficial. [GI specialist, Fee for Service] GI specialists reported that CAT not only provided fair access to GI specialists for patients but also ensured steady referrals for GI specialists. You're part of a system where there's like a flood of referrals, right? So I know in COVID-19 and Calgary, some of the doctors who are getting referrals from family doctors, had no referrals, right? They had no work, right? Whereas if they're part of the central triage program, you get work no matter what happens, right? So that would ensure that you have a steady referral base to get your patient seen. [GI specialist, Fee for Service] (3) Challenges with CAT . There are perceived biases within CAT: The issue with it is a bit of the autonomy goes away. So, some of these referrals I have no interest in seeing, right, somebody who's had abdominal pain for 30 years. But the central triage system will basically force you to have to take those referrals. [GI specialist, Fee for Service] I think there has always been some concern from the fee-for-service physicians that the academic people are going to kind of cherry-pick off interesting cases or rare cases or organic cases. And then the community fee for service physicians will be left dealing with nothing but IBS and things like that. [GI specialist, ARP] (4) Checklist to Standardize CAT Process . Participants mentioned CAT processes were subjective: There's no standard checklist that's been shared with me as yet. There are some criteria for the prioritization of urgency levels. [GI specialist, Fee for Service] (5) Clinical Audits to Promote Best Practices . Participants suggested an audit of endoscopic procedures may help to examine and then reduce unnecessary procedures: I think in gastroscopy, 80% of the time we don't find anything anyways, right? 
So that would be the key if you audit them all and you find out that a lot of people are doing procedures that aren't necessary then that would be a reason to get rid of the waiting list. [GI specialist, Fee for Service] Similarly, an audit of PCP referrals may help reduce unnecessary referrals. This inquiry could be used for education with PCPs. Understanding referrals may shed light on drivers of higher referral rates. And if you find that Dr. X is the person who's always referring that well, then maybe actually supporting them. It might be they have a really complex patient population that they serve. Like the low socioeconomic status or English language resources, or they're from a particular community where actually the pathway just isn't appropriate for them. [GI specialist, Fee for Service] 3.1.1. Primary Care Provider Survey A total of 36 PCPs responded to the survey. The majority of respondents were from Calgary (61.1%, n = 22) and Edmonton (25.0%, n = 9) . Most reported (86%, n = 31) being a member of a PCN, most reported being in practice for 6–10 years (38.2%, n = 13), and most worked full-time (73.5%, n = 25). The majority of PCPs surveyed were female (73.5%, n = 25). Around 42% ( n = 14) of participants said they were using the pathways regularly. Roughly 48% ( n = 16) used them occasionally while very few (9.1%, n = 3) were aware of the pathways but had not used them. Around 36.4% ( n = 12) of respondents always found primary care pathways useful, while 48.5% ( n = 16) only sometimes found them useful . Few (6.1%, n = 2) reported never finding the pathways useful, while 9.1% ( n = 3) rarely found the pathways useful. Most respondents strongly agreed (42.4%) or agreed (42.4%) pathways were useful and felt they supported PCPs to care for patients in the PCM (39.4% strong agreement and 42.4% agreement). 
The largest proportion of respondents strongly agreed (45.5%) or agreed (39.4%) that pathways were used because GI specialists would not accept their referral. The strongest disagreement was with the statement “Using a pathway saves time” (15.2% disagreed and 21.2% strongly disagreed). Respondents strongly agreed that patients preferred to see a GI specialist for assessment of their GI symptoms (34.4%) and that a lack of GI specialists outside of major urban centers (50.0%) contributed to wait times for access. The largest proportion of disagreement was for the statement, “Gastroenterologists follow up with certain patients longer than necessary, rather than transitioning their care to the medical home with recommendations” (28.1% disagreed and 21.9% strongly disagreed). In addition, most respondents were neutral as to whether gastroenterology triage results in patients being seen who may not benefit from either GI specialist consultation and/or endoscopy (40.6% neither agreed nor disagreed).

3.1.2. GI Specialist Survey

There were 28 out of 112 GI specialists who completed the survey (response rate of 25%). Most (60.7%, n = 17) practiced in Calgary, with nearly equal numbers of males (50.0%, n = 14) and females (39.2%, n = 11). The most common length of practice was 6–10 years (28.6%, n = 8), and the majority had a full-time practice (96.4%, n = 27) and were participating in a CAT (73.9%, n = 17). The most cited remuneration model was the ARP (53.6%, n = 15), followed by fee-for-service (42.9%, n = 12). The majority of respondents believed CAT was an efficient use of healthcare resources (26.9% strongly agreed and 42.3% agreed) and that CAT ensured those with the greatest need were given high priority (15.4% strongly agreed and 57.7% agreed).
In addition, respondents reported CAT reduced physician workload (11.5% strong agreement and 46.2% agreement) and provided standardized means of measuring and reporting on referrals and wait times (26.9% strong agreement and 46.2% agreement). However, respondents were not convinced that CAT improved the quality of referrals (30.8% neither agreed nor disagreed) or access for patients (42.3% neither agreed nor disagreed). A large proportion of respondents believed participation in CAT reduced their autonomy (11.1% strongly agreed and 55.6% agreed). Two contributing factors to this sense of loss of autonomy were lack of agreement with the clinical criteria used in CAT (22.2% strongly agreed and 51.9% agreed) and inflexibility of the CAT criteria (22.2% strongly agreed and 48.1% agreed). However, most respondents disagreed that CAT programs increased their workload (40.7% disagreed and 14.8% strongly disagreed) or that CAT rendered their workload too high (37.0% disagreed and 11.1% strongly disagreed).

(1) Comparison between GI Specialists on FFS and ARP Models. Significant variations were seen between FFS and ARP GI specialists regarding their views about the benefits of CAT services and primary care pathways. FFS model specialists were significantly less likely to be involved in CAT, while ARP specialists were significantly more likely to strongly agree/agree with the statements that “CAT is an efficient use of health system resources” and “CAT improves access for patients”. compares the perceptions of ARP and FFS specialists regarding views about pathway development and utilization. ARP specialists were significantly more likely to agree with the statement “pathways were codeveloped between primary and specialty care providers”, and they had significantly more positive responses towards pathways such as “all specialties use a common format for pathways” and “pathways across all specialties are available in a single location”.

3.2.1. Primary Care Provider Interviews

We conducted 19 interviews with PCPs. Three PCPs had <5 years of PCP experience, while all others had 5 or more years of experience. There were 13 PCPs working full time. The following are key themes that emerged from the PCPs' interviews.

(1) Awareness of Primary Care Pathways. All interviewed PCPs were aware of primary care pathways and most regularly used them. PCPs mentioned the use of pathways not only helped to provide the best care to patients in the medical home, but also to ensure appropriate referrals were sent to the GI specialists. I think pathways are good because they follow the evidence-based practice. They give some things that you should consider, so maybe it will slow down the number of referrals needed for things that can be managed in primary care. [PCP, Edmonton] However, some PCPs were aware of the pathways but not utilizing them fully in their practice. I think that they [Pathways] are potentially very helpful, but I probably haven't implemented them fully yet in my practice. [PCP, Edmonton] It was mentioned that higher awareness of the GI pathways would be helpful. I think just raising more awareness of the breadth of the pathway.
So I think you know I've looked at the pathways before, I'm familiar with the Specialist Link, but even for me, I didn't know that there were so many different pathways as existed for GI. [PCP, Calgary] One participant appreciated the educational sessions conducted to increase awareness of primary care pathways. Probably people who haven't done their due diligence and gone through the pathways when they get that back, probably get very frustrated. But again, I think that's like a medical education thing and hopefully, like the series that Dr. XX and her team did, if you know if it reaches more people, more people will be aware and then we will be sending less referrals and that will help. [PCP, Calgary] (2) Perceived Impact of Primary Care Tools on Reducing Inappropriate Referrals . Participants perceived a significant positive impact with the implementation of primary care supports such as pathways and telephone advice. During the interviews, one Calgary PCP provided an example where the patient was managed in a medical home. I use the pathway for chronic diarrhea. He was a patient that came to me and I had worked him up a little bit, but I thought that he was a, you know, in the age group where with his problem he might need a colonoscopy. His referral was rejected at the beginning. I did follow the pathway and determined he had the celiac disease after that. So, it prevented him from needing a colonoscopy. [PCP, Calgary] Similarly, a PCP from Edmonton also perceived a positive impact of ConnectMD on reducing inappropriate referrals. I think having our phone access to connect MD and I think it's maybe Specialist Link in Calgary, which is like the phone consult, I think that has been great. Probably helps to alleviate because sometimes referrals are just…. You're not really sure what the next step is or what you should do, or if it needs a referral or needs to be seen. 
So, a lot of those kinds of more ambiguous referrals can be taken care of effectively through the connect MD. [PCP, Edmonton] (3) Perceptions about the Benefits of Primary Care Pathways. Participants spoke of the benefits of primary care pathways. I find them [Pathways] very useful. I could just even use dyspepsia as an example. You know there sometimes with dyspepsia, we might not, initially because there's so much you're thinking about all at once, and you've only got 10 minutes for an appointment. So, your brain is trying to think of all the different things that we should be thinking of all at once. And sometimes we tend to miss something simple. [PCP, Edmonton] I think Specialist LINK integrating with them [Pathways] is a really good idea. I think that, you know, needs to be there so that you can kind of say oh, maybe they don't need to phone Specialist LINK. I'll just look at the pathways. [PCP, Calgary] One PCP provided an example of the IBS pathway, which they found very useful in patient management. Irritable bowel syndrome has also been a good tool. I find there's often a lot of anxiety for people with that diagnosis and fear that it's something else so going through the pathway can be very helpful to reassure the patient that we're managing their symptoms properly. [PCP, Calgary] (4) Challenges with Primary Care Pathways. One of the PCPs from Calgary explained that the number of diagnostic tests required to complete the pathways sometimes goes against the Choosing Wisely Canada guidelines.
If there's one piece of information missing, the whole referral is rejected, and then you have to get that and send it back the other challenge I have with that is that a lot of the, I'm quite interested in choosing Wisely Canada and some of their guidelines and a lot of the things that are recommended were insisted on by triage are actually contrary to what Choosing Wisely recommends, so we're maybe doing unnecessary tests and then we're responsible for those for the 2 1/2 years while the patients waiting to see the specialist [laughs] is very, [laughs] very, very challenging. [PCP, Calgary] Another PCP from Edmonton mentioned a very similar concern about ordering too many diagnostic tests to complete the pathways. I think they [pathways] are good. They're the fatty liver ones, pretty comprehensive, and kind of the workup that's needed for every patient. And you know, sometimes I feel like oh, this is way more than I would have ordered on most patients with just normal ALT for example. So that sometimes I wondered, am I like costing the system a lot more by doing every test on these patients? [PCP, Edmonton] Participants were concerned about completing the pathways for walk-in patients which they found very challenging. I found walk-in patients even more difficult and I cannot imagine having to do pathways. If I was doing a walk-in clinic. I mean, walk-in clinics are so rapid-paced and just so like there's no time to be doing extra pointing and clicking. [PCP, Edmonton] (5) CAT Streamlining the Referral Process . The implementation of CAT streamlines the referral process. The process has been more streamlined with it [CAT] and then also with the presence of Specialist Link. It is supposed to centralize things and make them more equal and standardized across the city, so it feels like it's fair. So you don't feel like some patients are getting into seeing certain specialists and others are not… it's all shared. [PCP, Calgary] I really like central access systems. 
I feel like that's a better way of triaging and getting to the right person faster rather than not being aware of all the different specialists and having to pick and choose one from a group of gastroenterologists. I like central access. [PCP, Calgary] (6) Challenges with CAT. PCPs also expressed frustration with CAT and believed there should be a way to obtain faster access when they strongly feel a patient needs to be seen by a GI specialist quickly. If we are sending a really urgent referral, there are red flags. It's a legitimate reason for referral and you know our office of calling your central triage and trying to sort this out then to be told, Oh well, this is going to take several months, and then in several months being told, well, it'll still take several months, things like that shouldn't really be happening. [PCP, Calgary] One PCP mentioned that she started using CAT but after seeing a number of inappropriate refusals (i.e., a discrepancy between the referring PCP and CAT resulting in refusal of the referral), she stopped using the central triage in Calgary. PCPs requested better referral processes to expedite cases requiring urgent attention. I don't know if there's a better way to sort of have a, Is this quote urgent, and why on the form? Or is this routine or is this semi-urgent and why so that it puts the onus on the referring physician to explain why they feel it's urgent and maybe all those urgent check marks? Urgent referrals could be looked at first. [PCP, Calgary] Another PCP emphasized the need for a more standardized referral process. I think that a standardized referral form would be much better than a pathway so that the pathway means that have you done these tests and if you have written information on your referral letter which corresponds to that pathway, the referral gets accepted. [PCP, Calgary] (7) Education and Training.
Participants agreed that additional education and training would make PCPs more comfortable and confident in managing GI-related conditions and processes. Some education for family doctors and teams will be helpful to see what's happening within the gastroenterology world because some of us did our training quite a long time ago, so having those, and I mean the pathways are part of that, but having some more education to enable us to do a little bit more within primary care would be good. [PCP, Calgary] Also, more communication in the form of feedback from GI specialists regarding the referral quality would be helpful for PCPs. We just don't have enough communication between the two groups where they say to us, look, we're doing our best, we are so overwhelmed, we have 200 referrals a day; there's no way we can look at them all and you know it makes us then think am I over referring or am I providing enough information or am I providing too much information. [PCP, Calgary] A periodic clinical audit of referrals could serve as an educational tool for improving the quality of referrals. I think there's a huge area for some education back to the referrers and the referring people could provide us with a little bit of [feedback], you know and hopefully, I think that could go a long way in improving the quality of referrals. [PCP, Calgary]

3.2.2. GI Specialist Interviews

A total of seven GI specialists participated in the interviews. Three had <5 years of experience in GI, while the others had 6 or more years of experience. Four GI specialists were remunerated through FFS and three through ARP/AMHSP. Five regularly participated in CAT services, while two had never participated. There was significant response variation between the GI specialists working on FFS versus ARP/AMHSP models regarding the benefits of central triage, primary care pathways, and other tools. The following are key themes that emerged from the seven interviews.
(1) Pathways and Appropriate GI Specialist Referrals. Participants agreed that there are many benefits to the pathways. According to one GI specialist, pathways provide a guideline for PCPs to consider when thinking about making a GI referral. There are many patients that I think could be realistically managed in their medical home with their family practitioner. The pathway would help identify those patients by giving the general practitioners a path to follow, to order the appropriate tests, excluding other diagnoses, and for assistance with management. [GI specialist, Fee for Service] I've been working in Calgary for about three years. So, I think the uptake [of using the pathways] has increased. Maybe 75% of my patients have completed pathways when I see them. [GI specialist, Fee for Service] All GI specialists included in the interviews were aware of primary care pathways; however, they believed that more awareness was needed among PCPs. I think creating more awareness would be helpful. I don't know, like how they're reaching out to different PCNs about these, but there are definitely people like GPs and some good GPs out there who aren't aware that these pathways exist and that they're really good resources. [GI specialist, ARP] (2) Benefits of CAT. GI specialists noticed a reduction in their waitlists following the implementation of primary care support tools. Like I'm a big fan and proponent of, you know, central triage. I think it creates more equity and just much more standardization. And I think having the pathways and having central triage, I think those are good ways to kind of streamline and consults cut down on unnecessary wait times, and just overall improve the efficiency within the system. [GI specialist, ARP] GI specialists perceived that access had improved as a result of implementing CAT. I have seen that these patients sometimes have been jerked around as different doctors have passed them.
By using central triage, it makes a patient have to be seen by a GI specialist eventually. So I've seen that as actually beneficial. [GI specialist, Fee for Service] GI specialists reported that CAT not only provided fair access to GI specialists for patients but also ensured steady referrals for GI specialists. You're part of a system where there's like a flood of referrals, right? So I know in COVID-19 and Calgary, some of the doctors who are getting referrals from family doctors, had no referrals, right? They had no work, right? Whereas if they're part of the central triage program, you get work no matter what happens, right? So that would ensure that you have a steady referral base to get your patient seen. [GI specialist, Fee for Service] (3) Challenges with CAT . There are perceived biases within CAT: The issue with it is a bit of the autonomy goes away. So, some of these referrals I have no interest in seeing, right, somebody who's had abdominal pain for 30 years. But the central triage system will basically force you to have to take those referrals. [GI specialist, Fee for Service] I think there has always been some concern from the fee-for-service physicians that the academic people are going to kind of cherry-pick off interesting cases or rare cases or organic cases. And then the community fee for service physicians will be left dealing with nothing but IBS and things like that. [GI specialist, ARP] (4) Checklist to Standardize CAT Process . Participants mentioned CAT processes were subjective: There's no standard checklist that's been shared with me as yet. There are some criteria for the prioritization of urgency levels. [GI specialist, Fee for Service] (5) Clinical Audits to Promote Best Practices . Participants suggested an audit of endoscopic procedures may help to examine and then reduce unnecessary procedures: I think in gastroscopy, 80% of the time we don't find anything anyways, right? 
So that would be the key if you audit them all and you find out that a lot of people are doing procedures that aren't necessary then that would be a reason to get rid of the waiting list. [GI specialist, Fee for Service] Similarly, an audit of PCP referrals may help reduce unnecessary referrals. This inquiry could be used for education with PCPs. Understanding referrals may shed light on drivers of higher referral rates. And if you find that Dr. X is the person who's always referring that well, then maybe actually supporting them. It might be they have a really complex patient population that they serve. Like the low socioeconomic status or English language resources, or they're from a particular community where actually the pathway just isn't appropriate for them. [GI specialist, Fee for Service] We conducted 19 interviews with PCPs. Three PCPs had <5 years of PCP experience, while all others had 5 or more years of experience. There were 13 PCPs working full time. The following are key themes that emerged from the PCPs' interviews. (1) Awareness of Primary Care Pathways . All interviewed PCPs were aware of primary care pathways and most regularly used them. PCPs mentioned the use of pathways not only helped to provide the best care to patients in the medical home, but also to ensure appropriate referrals were sent to the GI specialists. I think pathways are good because they follow the evidence-based practice. They give some things that you should consider, so maybe it will slow down the number of referrals needed for things that can be managed in primary care. [PCP, Edmonton] However, some PCPs were aware of the pathways but not utilizing them fully in their practice. I think that they [Pathways] are potentially very helpful, but I probably haven't implemented them fully yet in my practice. [PCP, Edmonton] It was mentioned that higher awareness of the GI pathways would be helpful. I think just raising more awareness of the breadth of the pathway. 
So I think you know I've looked at the pathways before, I'm familiar with the Specialist Link, but even for me, I didn't know that there were so many different pathways as existed for GI. [PCP, Calgary] One participant appreciated the educational sessions conducted to increase awareness of primary care pathways. Probably people who haven't done their due diligence and gone through the pathways when they get that back, probably get very frustrated. But again, I think that's like a medical education thing and hopefully, like the series that Dr. XX and her team did, if you know if it reaches more people, more people will be aware and then we will be sending less referrals and that will help. [PCP, Calgary] (2) Perceived Impact of Primary Care Tools on Reducing Inappropriate Referrals . Participants perceived a significant positive impact with the implementation of primary care supports such as pathways and telephone advice. During the interviews, one Calgary PCP provided an example where the patient was managed in a medical home. I use the pathway for chronic diarrhea. He was a patient that came to me and I had worked him up a little bit, but I thought that he was a, you know, in the age group where with his problem he might need a colonoscopy. His referral was rejected at the beginning. I did follow the pathway and determined he had the celiac disease after that. So, it prevented him from needing a colonoscopy. [PCP, Calgary] Similarly, a PCP from Edmonton also perceived a positive impact of ConnectMD on reducing inappropriate referrals. I think having our phone access to connect MD and I think it's maybe Specialist Link in Calgary, which is like the phone consult, I think that has been great. Probably helps to alleviate because sometimes referrals are just…. You're not really sure what the next step is or what you should do, or if it needs a referral or needs to be seen. 
So, a lot of those kinds of more ambiguous referrals can be taken care of effectively through the connect MD. [PCP, Edmonton] (3) Perceptions about the Benefits of Primary Care Pathways . Participants spoke of the benefits of primary care pathways. I find them [Pathways] very useful. I could just even use dyspepsia as an example. You know there sometimes with dyspepsia, we might not, initially because there's so much you're thinking about all at once, and you've only got 10 minutes for an appointment. So, your brain is trying to think of all the different things that we should be thinking of all at once. And sometimes we tend to miss something simple. [PCP, Edmonton] I think Specialist LINK integrating with them [Pathways] is a really good idea. I think that, you know, needs to be there so that you can kind of say oh, maybe they don't need to phone Specialist LINK. I'll just look at the pathways. [PCP, Calgary] There was an example provided by IBS pathways which the PCP found very useful in patient management. Irritable bowel syndrome has also been a good tool. I find there's often a lot of anxiety for people with that diagnosis and fear that it's something else so going through the pathway can be very helpful to reassure the patient that we're managing their symptoms properly. [PCP, Calgary] (4) Challenges with Primary Care Pathways . One of the PCPs from Calgary explained the number of diagnostic tests that are required to complete for pathways sometimes goes against the Choosing Wisely Canada guidelines. 
If there's one piece of information missing, the whole referral is rejected, and then you have to get that and send it back the other challenge I have with that is that a lot of the, I'm quite interested in choosing Wisely Canada and some of their guidelines and a lot of the things that are recommended were insisted on by triage are actually contrary to what Choosing Wisely recommends, so we're maybe doing unnecessary tests and then we're responsible for those for the 2 1/2 years while the patients waiting to see the specialist [laughs] is very, [laughs] very, very challenging. [PCP, Calgary] Another PCP from Edmonton mentioned a very similar concern about ordering too many diagnostic tests to complete the pathways. I think they [pathways] are good. They're the fatty liver ones, pretty comprehensive, and kind of the workup that's needed for every patient. And you know, sometimes I feel like oh, this is way more than I would have ordered on most patients with just normal ALT for example. So that sometimes I wondered, am I like costing the system a lot more by doing every test on these patients? [PCP, Edmonton] Participants were concerned about completing the pathways for walk-in patients which they found very challenging. I found walk-in patients even more difficult and I cannot imagine having to do pathways. If I was doing a walk-in clinic. I mean, walk-in clinics are so rapid-paced and just so like there's no time to be doing extra pointing and clicking. [PCP, Edmonton] (5) CAT Streamlining the Referral Process . The implementation of CAT streamlines the referral process. The process has been more streamlined with it [CAT] and then also with the presence of Specialist Link. It is supposed to centralize things and make them more equal and standardized across the city, so it feels like it's fair. So you don't feel like some patients are getting into seeing certain specialists and others are not… it's all shared. [PCP, Calgary] I really like central access systems. 
I feel like that's a better way of triaging and getting to the right person faster rather than not being aware of all the different specialists and having to pick and choose one from a group of gastroenterologists. I like central access. [PCP, Calgary] (6) Challenges with CAT . PCPs also expressed frustration with CAT and believed that there could be a way for PCPs to get faster access when a PCP strongly feels a patient needs to be seen by a GI specialist quickly. If we are sending a really urgent referral, there are red flags. It's a legitimate reason for referral and you know our office of calling your central triage and trying to sort this out then to be told, Oh well, this is going to take several months, and then in several months being told, well, it'll still take several months, things like that shouldn't really be happening. [PCP, Calgary] One PCP mentioned that she started using CAT but after seeing a number of inappropriate refusals (i.e., a discrepancy between the referring PCP and CAT resulting in refusal of the referral), she stopped using the central triage in Calgary. Particularly for referrals requiring urgent attention, PCPs requested better referral processes to expedite urgent cases. I don't know if there's a better way to sort of have a, Is this quote urgent, and why on the form? Or is this routine or is this semi-urgent and why so that it puts the onus on the referring physician to explain why they feel it's urgent and maybe all those urgent check marks? Urgent referrals could be looked at first. [PCP, Calgary] Another PCP emphasized the need for better standardized processes for the referral. I think that a standardized referral form would be much better than a pathway so that the pathway means that have you done these tests and if you have written information on your referral letter which corresponds to that pathway, the referral gets accepted. [PCP, Calgary] (7) Education and Training . 
Participants agreed that additional education and training would make PCPs more comfortable and confident in caring for GI-related conditions and processes. Some education for family doctors and teams will be helpful to see what's happening within the gastroenterology world because some of us did our training quite a long time ago, so having those, and I mean the pathways are part of that, but having some more education to enable us to do a little bit more within primary care would be good. [PCP, Calgary] Also, more communication in the form of feedback from GI specialists regarding the referral quality would be helpful for PCPs. We just don't have enough communication between the two groups where they say to us, look, we're doing our best, we are so overwhelmed, we have 200 referrals a day; there's no way we can look at them all and you know it makes us then think am I over referring or am I providing enough information or am I providing too much information. [PCP, Calgary] A periodic clinical audit of referrals could be useful as an educational tool for improving the quality of referrals. I think there's a huge area for some education back to the referrers and the referring people could provide us with a little bit of [feedback], you know, and hopefully, I think that could go a long way in improving the quality of referrals. [PCP, Calgary] A total of seven GI specialists participated in the interviews. Three had <5 years of experience in GI, while the others had 6 or more years of experience. Four GI specialists were remunerated through FFS and three through ARP/AMHSP. Five GI specialists regularly participated in CAT services, while two had never participated. There was significant response variation between the GI specialists working on FFS versus ARP/AMHSP models regarding the benefits of central triage, primary care pathways, and other tools. The following are key themes that emerged from the seven interviews. (1) Pathways and Appropriate GI Specialist Referrals.
Participants agreed that there are many benefits to the pathways. According to one GI specialist, pathways provide a guideline for PCPs to consider when thinking about making a GI referral. There are many patients that I think could be realistically managed in their medical home with their family practitioner. The pathway would help identify those patients by giving the general practitioners a path to follow, to order the appropriate tests, excluding other diagnoses, and for assistance with management. [GI specialist, Fee for Service] I've been working in Calgary for about three years. So, I think the uptake [of using the pathways] has increased. Maybe 75% of my patients have completed pathways when I see them. [GI specialist, Fee for Service] All GI specialists included in the interviews were aware of primary care pathways; however, it appeared that GI specialists believed that more awareness is needed among PCPs. I think creating more awareness would be helpful. I don't know, like how they're reaching out to different PCNs about these, but there are definitely people like GPs and some good GPs out there who aren't aware that these pathways exist and that they're really good resources. [GI specialist, ARP] (2) Benefits of CAT. GI specialists noticed a reduction in their waitlist because of implementing primary care support tools. Like I'm a big fan and proponent of, you know, central triage. I think it creates more equity and just much more standardization. And I think having the pathways and having central triage, I think those are good ways to kind of streamline consults, cut down on unnecessary wait times, and just overall improve the efficiency within the system. [GI specialist, ARP] GI specialists perceived that access has been improved as a result of implementing CAT. I have seen that these patients sometimes have been jerked around as different doctors have passed them.
By using central triage, it makes a patient have to be seen by a GI specialist eventually. So I've seen that as actually beneficial. [GI specialist, Fee for Service] GI specialists reported that CAT not only provided fair access to GI specialists for patients but also ensured steady referrals for GI specialists. You're part of a system where there's like a flood of referrals, right? So I know in COVID-19, in Calgary, some of the doctors who were getting referrals from family doctors had no referrals, right? They had no work, right? Whereas if they're part of the central triage program, you get work no matter what happens, right? So that would ensure that you have a steady referral base to get your patient seen. [GI specialist, Fee for Service] (3) Challenges with CAT. There are perceived biases within CAT: The issue with it is a bit of the autonomy goes away. So, some of these referrals I have no interest in seeing, right, somebody who's had abdominal pain for 30 years. But the central triage system will basically force you to have to take those referrals. [GI specialist, Fee for Service] I think there has always been some concern from the fee-for-service physicians that the academic people are going to kind of cherry-pick off interesting cases or rare cases or organic cases. And then the community fee-for-service physicians will be left dealing with nothing but IBS and things like that. [GI specialist, ARP] (4) Checklist to Standardize CAT Process. Participants mentioned CAT processes were subjective: There's no standard checklist that's been shared with me as yet. There are some criteria for the prioritization of urgency levels. [GI specialist, Fee for Service] (5) Clinical Audits to Promote Best Practices. Participants suggested an audit of endoscopic procedures may help to examine and then reduce unnecessary procedures: I think in gastroscopy, 80% of the time we don't find anything anyways, right?
So that would be the key: if you audit them all and you find out that a lot of people are doing procedures that aren't necessary, then that would be a reason to get rid of the waiting list. [GI specialist, Fee for Service] Similarly, an audit of PCP referrals may help reduce unnecessary referrals. This inquiry could be used for education with PCPs. Understanding referrals may shed light on drivers of higher referral rates. And if you find that Dr. X is the person who's always referring that, well, then maybe actually supporting them. It might be they have a really complex patient population that they serve. Like the low socioeconomic status or English language resources, or they're from a particular community where actually the pathway just isn't appropriate for them. [GI specialist, Fee for Service] Overall, this mixed-methods study demonstrates some important insights into mechanisms built in Alberta to improve access to GI care and highlights important strengths associated with the implementation of CAT and primary care pathways from both PCPs and GI specialists. Some of the most important contributions from CAT include an effective means of standardizing care, with assurance of transparency and equity in accessing specialty care. For primary care pathways, marked advantages include the empowerment of PCPs to manage patients in their medical home using best-evidence methods, and practical algorithms with increased appropriateness and timeliness of investigations and patient management. Despite significant existing complexities within the health system, CAT and primary care supports may facilitate improved system integration, standardize communication, and enhance collaboration, aiming to improve health outcomes for patients. The inclusion of perceptions from both PCPs and GI specialists about CAT and primary care supports such as care pathways and telephone advice provides substantive and balanced insight into the various supports built to improve system integration.
PCPs reported their use of pathways was mainly determined by the increased likelihood of having their referral accepted by the specialist with CAT (39.4% strong agreement and 42.4% agreement). Nevertheless, PCPs also recognized that pathways offer evidence-based guidance to evaluate patients with digestive complaints and reduce the number of patients for whom referral is necessary. This is consistent with extant published research demonstrating reduced wait times and improved patient outcomes with pathway implementation (or similar clinical algorithms). Srivastava et al. reported that the use of a nonalcoholic fatty liver disease pathway increased the detection of advanced fibrosis and cirrhosis (OR 4.23, 95% CI: 1.52–12.25, p = 0.006), with a decrease in unnecessary referrals (OR 0.23; 95% CI: 0.66–0.82, p = 0.006). PCPs expressed the usefulness of the pathways in conjunction with CAT as a means of ensuring fairness and equitable access to tertiary care and endoscopy. PCPs also reported difficulty in managing patient expectations of a referral. There are limitations, however: several PCPs expressed caution about the use of pathways for patients accessing walk-in clinics, given the lack of certain return to care or follow-up. Similarly, vulnerable patients, such as those experiencing homelessness or addiction, cannot easily undergo the investigations and follow-up needed to complete a pathway, limiting safe use in this context. Thus, a strong theme emerged from the PCP surveys and interviews that pathways could not effectively function in a "one-size-fits-all" capacity; a more nuanced approach is needed to ensure appropriate follow-through, facilitated by ongoing communication and collaboration to optimize use.
In addition, GI specialists were hopeful that standardized approaches to triage would be implemented with CAT, consistent with a previously published scoping review on referral criteria for gastroenterology recommending the "development of a prioritization referral tool…" and noting that "for primary care providers, (a) tool (which) would help to standardize the referral process, reducing the frustration of multiple forms and referral requirements". GI specialists included in this study reported satisfaction with both the primary care pathways and the implementation of CAT in reducing unnecessary referrals. In the survey and qualitative interviews, GI specialists believed pathways would strengthen the medical home model and facilitate best-evidence management to expedite care, thus reducing wait times and resource expenditure for patients referred to their clinics for specialty care. However, variability was observed among specialists in different remuneration models: FFS physicians were less likely to participate in CAT than those in an ARP model. There was little agreement between FFS and ARP GI specialists with the statements "CAT is an efficient use of health system resources" and "CAT improves access for patients". The remuneration model itself is unlikely to be the only explanation for variable perceptions about CAT and primary care tools between these groups. FFS specialists, for example, are independent contractors, guided largely by the principle or value of autonomy. Skepticism regarding the benefits of CAT is therefore not uncommon, and perceived loss of control over the referral triage process may contribute to reservations around fairness or consistency. In contrast, ARP-model specialists' professional responsibilities are less tied to referral volume. Both groups, however, identified measurement of referral demand and wait times as paramount. This study had three main limitations.
Firstly, the response rate to the survey was low and we had a small survey sample from both PCPs and GI specialists. Although the PCP survey sample was small, a number of in-depth qualitative interviews provided rich data regarding their views on primary care tools' implementation. The GI specialist sample was small in both surveys and interviews, so findings should be interpreted with caution; however, their insights were highly valuable for understanding and improving triage processes and their limitations. Secondly, physician participants (both PCPs and GI specialists) hailed mainly from urban areas and thus may have a more "urban-centric" view of pathway implementation and impacts. Rural patients face significant challenges accessing healthcare and may experience worse health outcomes. Training rural full-scope family physicians in endoscopic procedures may help address these needs. Family physicians (PCPs) in our study also noted the potential benefits of training primary care providers in endoscopic procedures, yet observed that access to endoscopy suites remains limited. Future studies are required to focus on further optimizing referral appropriateness, standardizing CAT services across urban and rural sites, and increasing capacity to improve patient access to GI-related health services. Primary care pathways are valued and widely used by PCPs in Alberta; however, their implementation continues to face a number of challenges. Further support, including education and training for PCPs in pathway use, may ease these barriers. CAT services play an important role in ensuring fair and equitable access to GI specialists; however, the system is not perfect and challenges exist for both PCPs and GI specialists. Improved communication and collaboration between primary and specialty care is core to better system integration. Significant variation exists between FFS and ARP GI specialists regarding their perceptions of the benefits of CAT and primary care supports.
ARP specialists have more positive views regarding the benefits of CAT and primary care supports. Overall, improvement in CAT processes and increased awareness of primary care supports may significantly reduce the number of low-yield referrals and endoscopies. Summary recommendations for successful implementation of primary care support tools for improving access to specialists include the following: (1) clear, evidence-based triage processes should be transparently implemented to ensure the sickest patients are consistently prioritized; (2) data collection and measurement to quantify, characterize, and inform providers regarding low-yield and avoidable referrals are important to improve the referral process for primary care; and (3) engagement and collaboration among all GI specialists, including FFS and ARP physicians among others, in leading and guiding CAT are important for the success and sustainability of CAT and primary care support tools.
Uncovering Forensic Evidence: A Path to Age Estimation through DNA Methylation
1.1. Epigenetics
Epigenetics, derived from the Greek "epi" (ἐπί) meaning "upon" or "above," refers to the chemical alterations of DNA that regulate gene expression without modifying the DNA sequence itself. Epigenetic modifications include DNA methylation, histone modification, chromatin remodeling, and non-coding RNAs. In particular, DNA methylation has been related to different processes such as embryonic development, cellular reprogramming, transcriptional regulation, genomic imprinting, chromosomal stability, and X-chromosome inactivation (reviewed elsewhere). Nowadays, understanding this epigenetic modification has become crucial across fields such as forensic science and research related to aging and disease, prompting extensive investigation. DNA methylation (DNAm) is a chemical modification where methyltransferases add a methyl group (-CH3) to the 5′ carbon of cytosines followed by guanines in a 5′-3′ direction in the DNA. In mammals, 60–90% of CpGs are methylated, while unmethylated regions are clustered into "CpG islands". These islands are predominantly located at the promoters of housekeeping genes and consist of high-density CG content (>55%), typically ranging from 300 to 3000 base pairs in length. DNA methylation is considered one of the most promising biomarkers for age estimation studies: as individuals age, specific CpGs become hypermethylated (gain of methylation) or hypomethylated (loss of methylation). These modifications can influence the activation or deactivation of the gene at certain sites and times, regulating protein production and affecting the individual's observable features (phenotype). Epigenetic alterations such as DNA methylation are part of the "hallmarks of aging", representing shared processes among different organisms related to aging.
So far, twelve hallmarks of aging have been identified, which are closely interconnected and have been considered to enhance age estimation. Research on telomere length correlation, deletions in the mitochondrial genome, aspartic acid racemization, rearrangement in T-cells, accumulation of advanced glycation end products (AGEs), and mRNA profile analysis has been extensively conducted, showing different accuracies. However, each of these methods has shown specific limitations when used independently. In overcoming these challenges, DNA methylation has emerged as a promising hallmark for age estimation. Specific CpGs can be chosen to gauge the rate at which an individual ages using what are known as epigenetic clocks. Horvath defined an epigenetic clock as "the age estimate in years resulting from a mathematical algorithm based on the methylation state of specific CpGs in the genome". Thanks to epigenetic clocks, it is now known that DNA methylation patterns undergo significant changes during childhood, characterized by rapid accumulation followed by a stabilization phase in adulthood. Moreover, the distinction between biological and chronological age was of special relevance to these studies. Chronological age refers to the calendar time that has passed since birth, while biological age is a more ambiguous concept that depends on the aging process, mainly related to the relationship between the environment and the individual and how it impacts the phenotype. Hence, it is also referred to as physiological age, organismal age, or phenotypic age. It is commonly observed that an individual's biological age may not correspond to their chronological age due to various factors such as environmental exposures, lifestyle habits, and diseases. Moreover, factors like ancestry and biological sex could further contribute to this discrepancy.
1.2. DNA Methylation for Forensic Science
DNA methylation is of particular interest to the forensic sciences both in the context of criminalistics and forensic anthropology. In the former case, estimating an individual's characteristics from a biological sample found at a crime scene when there is no reference sample to compare with would help narrow down the pool of suspects. Furthermore, in forensic anthropology, age estimation is part of the biological profile for the identification of human remains. Age estimation is a component of Forensic DNA Phenotyping (FDP), which involves predicting a person's externally visible characteristics (EVCs) alongside appearance and biogeographical ancestry. Examples include the extensively researched study of single nucleotide polymorphisms (SNPs) predictive for eye, hair, and skin color and for biogeographical ancestry, which has become the most accurate and widely used approach in Forensic DNA Phenotyping. However, common SNPs proved inadequate for estimating certain individual characteristics that epigenetics could potentially address. Hence, in recent years, DNA methylation has been studied in forensic sciences for predicting chronological age, differentiating monozygotic twins, and identifying tissues and cell types. In the realm of distinguishing monozygotic twins, who share identical genetic bases, conventional forensic DNA profiling falls short in providing differentiation. Nevertheless, they may display distinct DNA methylation patterns influenced by lifestyle factors, offering a new avenue for their individual identification. Although definitive markers for differentiation have still to be established, several studies employing various technologies present promising findings. Another objective of DNA methylation in forensics is to identify the type of body fluid. In crime scenes, it is possible to find samples of unknown origin or complex mixtures, such as those in cases of sexual assault.
In recent years, several research groups have explored both approaches, as the differentiation between tissues could be immensely helpful in reconstructing a criminal case. DNA methylation holds significant promise for the development of reliable, accurate, and practical methods for age estimation in the future. Identifying the most informative and sensitive markers, while considering the impact of various factors, is essential for integrating age prediction into routine forensic workflows. Over the years, the epigenetic age predictor most widely studied in relation to DNA methylation is the gene ELOVL2 (Elongation of Very Long Chain Fatty Acids Protein 2). Beginning with the studies of Garagnani et al. in 2012, it remains a marker of great utility in forensic epigenetic age prediction. This is not only due to its high correlation with age across numerous tissues, but also, as emphasized by Aliferi et al., because of its large methylation changes over the human lifespan. ELOVL2 has been extensively investigated in blood, teeth, bones, and buccal swabs, consistently demonstrating strong positive correlations with age. Age estimation models based solely on ELOVL2 CpGs have been developed. Nevertheless, currently, the use of a single marker is not sufficient to achieve the accuracy required for age prediction models. The challenges encountered in DNA methylation analysis are diverse and varied. As emphasized by Montesanto et al., an ideal age prediction system within the forensic field should exhibit specific properties, including applicability to different tissues, replication across diverse populations, coverage of the entire age spectrum, and reproducibility across various technology platforms. The validation of these systems by multiple researchers will be critical for driving future advancements. This review delves into the recent progress in utilizing DNA methylation for age prediction within the field of forensic science.
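To make the modeling concrete: an epigenetic clock of the kind defined above is, at its simplest, a linear combination of methylation beta values (the fraction of methylated molecules at a site, 0 to 1) across selected CpGs. The sketch below is illustrative only; the CpG labels, intercept, and weights are invented for demonstration and are not taken from any published clock.

```python
# Minimal sketch of a linear "epigenetic clock": the age estimate is a
# linear combination of methylation beta values (0..1) at chosen CpGs.
# The intercept and weights below are invented for illustration only.

INTERCEPT = 12.0
WEIGHTS = {            # hypothetical CpG labels -> weight in years
    "cg_A": 55.0,      # a site that hypermethylates with age -> positive weight
    "cg_B": -30.0,     # a site that hypomethylates with age -> negative weight
}

def predict_age(betas):
    """Return an age estimate in years from per-CpG beta values."""
    for cpg, beta in betas.items():
        if not 0.0 <= beta <= 1.0:
            raise ValueError(f"beta value for {cpg} must lie in [0, 1]")
    return INTERCEPT + sum(WEIGHTS[cpg] * betas[cpg] for cpg in WEIGHTS)

# A profile with low cg_A and high cg_B methylation scores young,
# while the reverse pattern scores older:
young = predict_age({"cg_A": 0.20, "cg_B": 0.60})  # about 5 years
older = predict_age({"cg_A": 0.70, "cg_B": 0.30})  # about 41.5 years
```

Published clocks follow this same shape but fit the weights by penalized regression over large training cohorts, which is how multi-CpG models can outperform any single marker such as ELOVL2.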
Over time, researchers have investigated various technologies to identify the most accurate DNA methylation markers for age prediction.
These methods have targeted tissues commonly employed in forensic analyses, yielding heterogeneous outcomes. The choice of techniques hinges on specific objectives. When screening hundreds of loci, DNA microarray technology is required. However, for routine age prediction in forensic scenarios, the primary goal is to utilize a minimal number of DNA markers that enable analysis with low DNA input while maintaining sensitivity and accuracy. Bisulfite conversion stands as the gold standard for DNA methylation analysis, with most of the current methods relying on it as the initial step for identifying CpG sites. In this treatment, during the process of deamination, unmethylated cytosines are transformed into uracil, while methylated cytosines remain unaltered. As a result, the distinction between methylated and unmethylated cytosines can be observed as a change in the DNA sequence. The limitations of bisulfite conversion are well-documented, primarily its tendency to cause significant DNA degradation, thereby reducing the amount of DNA available for subsequent analysis and lowering sequence information. Moreover, incomplete conversion of the sample and reannealing during conversion are important considerations. Additionally, the quality of the DNA plays a crucial role in successful bisulfite conversion, which can be a limitation, particularly in forensic scenarios (reviewed elsewhere). Despite its challenges, bisulfite conversion has been fundamental in advancing the field of DNA methylation analysis. Multiple commercial kits are available for conducting DNA bisulfite conversion, and their performances have been assessed by different authors.
2.1. Bisulfite Sequencing by Sanger
Bisulfite sequencing is one of the oldest techniques for DNA methylation analysis. First, the DNA is treated by bisulfite conversion, followed by PCR amplification. Then, the product is sequenced by the Sanger reaction.
By comparing the treated DNA sample with the untreated one, it is possible to infer if a CpG site was methylated or not. One of the advantages is that this method is easy and simple to perform in any forensic laboratory. However, it can become challenging when studying multiple sites simultaneously within the sequence, particularly in noisy sequencing scenarios. Additionally, careful primer design is essential as they must effectively bind to both the methylated and unmethylated strands . The method’s application can be observed in the work of Correia Dias et al., which encompasses analysis involving blood, bones, and teeth . 2.2. Methylation-Specific PCR (MSP) MSP is a PCR-based method used to assess methylation status. Initially, it was based on two distinct PCR reactions (one for the methylated strand and one for the unmethylated strand). Subsequently, the PCR products, treated previously with bisulfite, were subjected to electrophoresis analysis . Recently, the technique has been adapted for potential semi-quantification using real-time PCR. The percentage of methylated reference (PMR) is derived from the threshold values of each sample. By utilizing a single primer pair for the target sequence and constructing a standard curve with diluted samples of known methylation levels, it is possible to determine the proportion of methylation in the region of interest. The method’s advantages include its simplicity, minimal requirement of genomic DNA, and utilizing equipment commonly found in forensic laboratories . Limitations are the inability to simultaneously analyze multiple CpG sites , non-quantitative nature, low specificity, and lack of methylation information at the CpG resolution level . Current studies conducted using this method include the works of Kondo et al. and Ogata et al. on teeth. 2.3. 
2.3. Methylation-Sensitive High-Resolution Melting (MS-HRM)
MS-HRM is based on the comparison between methylated and non-methylated sites generating differential melting profiles . Following bisulfite conversion and PCR amplification, the product is subjected to increasing temperatures in the presence of an intercalating agent that emits fluorescence upon DNA binding. At a specific temperature, the double DNA strands dissociate, leading to the release of the dye from the DNA and a subsequent decrease in fluorescence. The dissociation temperature depends on the complementary bonds between the strands: GC pairs form three hydrogen bonds, whereas AT pairs form only two; consequently, the latter dissociate at lower temperatures. After bisulfite treatment, methylated strands will exhibit a higher GC content, while non-methylated ones will show a higher AT content. Controls with known methylation levels allow comparison with the study sample in a plot of fluorescence signal versus temperature. Finally, the derivative curve will reveal melting peaks, with the peak for non-methylated sites appearing to the left (at lower melting temperatures) compared to methylated sites (at higher melting temperatures). Primers must be designed to amplify both types of strands; however, there is often differential amplification favoring the non-methylated strand, a phenomenon known as PCR bias. Several strategies to mitigate this issue, such as optimizing primer design and PCR conditions, have been described in the literature . MS-HRM is considered both cost-effective and time-efficient for assessing methylation at a single locus . For this method, obtaining a pure PCR product is crucial. However, one drawback is that although it is possible to determine the general methylation status of the region of interest, providing the status of a specific CpG is not feasible.
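The derivative-curve readout can be illustrated with a toy model; the sigmoid melt curve and Tm values below are invented, not instrument data.

```python
# Illustrative sketch (not an instrument algorithm): melt curves modeled as
# sigmoids whose midpoint (Tm) rises with GC content; the peak of the
# negative derivative -dF/dT recovers each amplicon's melting temperature.
import math

def fluorescence(temp, tm, steepness=1.0):
    """Fraction of double-stranded (dye-bound) DNA remaining at a temperature."""
    return 1.0 / (1.0 + math.exp((temp - tm) / steepness))

def melting_peak(tm, t_start=70.0, t_end=95.0, step=0.1):
    """Temperature at which the negative derivative -dF/dT is maximal."""
    n = int((t_end - t_start) / step)
    temps = [t_start + i * step for i in range(n)]
    f = [fluorescence(t, tm) for t in temps]
    deriv = [f[i] - f[i + 1] for i in range(len(f) - 1)]  # -dF/dT per step
    return round(temps[deriv.index(max(deriv))], 1)

# After bisulfite treatment, the methylated amplicon keeps its GC pairs and
# melts later than the AT-richer unmethylated amplicon.
print(melting_peak(tm=84.0))   # unmethylated-like amplicon
print(melting_peak(tm=87.5))   # methylated-like amplicon: peak at higher T
```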
Furthermore, conducting a multiplex with several primers targeting different regions would hinder the clear visualization of distinct peaks and increase the complexity of results interpretation. This presents another disadvantage, particularly considering that age estimation currently achieves greater accuracy with the use of multiple markers. This review includes several examples of this method in blood samples , saliva , and semen .
2.4. MassArray (EpiTYPER)
EpiTYPER ® utilizes mass spectrometry for quantitative analysis of DNA methylation on a large scale. Following PCR amplification of bisulfite-converted DNA, in vitro transcription of the reverse strand is performed, and the resulting RNA product is subjected to cleavage after each U base. The resulting fragments are then analyzed using a mass spectrometer (MALDI-TOF), yielding different masses according to the composition of the DNA strand. Analysis of the peaks in the spectrum allows differentiation by weight of methylated versus unmethylated cytosines, generating a methylation ratio at each location in the sequence . The method offers advantages in terms of speed, accuracy, and ability to analyze a large number of samples. However, it faces limitations related to cost-effectiveness in high-throughput analysis, potential misinterpretations of methylation levels due to polymorphisms, and the presence of contaminant peaks in the analysis . Several studies using EpiTYPER are mentioned in the following sections .
2.5. Multiplex Minisequencing Reaction (SNaPshot)
The SNaPshot assays present a semi-quantitative technique for evaluating DNA methylation, employing dideoxy single-base extension (SBE) with capillary electrophoresis. After the PCR amplification of bisulfite-converted DNA, an SBE reaction is performed to analyze a nucleotide change occurring at a specific CpG within the region of interest.
The nucleotide to be extended following the SBE primer is fluorescently labeled, and depending on which nucleotide binds to the complementary strand, the color of the peak detected by the capillary electrophoresis instrument will differ. The peak’s intensity is correlated with the level of methylation. For each SBE primer, only one CpG can be analyzed. A major advantage is its ability to conduct a multiplex reaction targeting various regions, thereby expanding the analysis scope. Additionally, the instrumentation employed is capillary electrophoresis, commonly utilized for STR analysis in forensic laboratories . A potential disadvantage of this method is that the development of a multiplex system might be time-consuming and less straightforward. Furthermore, due to its semi-quantitative nature, this method might not offer the level of precision necessary for in-depth DNA methylation analysis. Numerous SNaPshot multiplex assays have been designed for age prediction using forensically significant tissues such as blood , saliva , semen , and bones and teeth .
2.6. Pyrosequencing
Pyrosequencing is considered the gold standard technique for the identification of allele-specific methylation patterns . This method relies on chemiluminescence detection to determine the sequence of interest, after prior bisulfite conversion and PCR amplification. It operates as a sequencing-by-synthesis technique, where deoxynucleotide triphosphates (dNTPs) are each dispensed into a chamber containing the DNA template. When the correct complementary dNTP is added by the polymerase, inorganic pyrophosphate (PPi) is released. Through enzymatic reactions, the PPi generates light, observed as sequential peaks in a pyrogram. The height of these peaks correlates with the proportion of pyrophosphate released, indicating the number of nucleotides added.
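Whether read from SNaPshot electropherogram peaks or a pyrogram, the methylation level at a single CpG ultimately reduces to the ratio of the C signal to the total C + T signal. A toy sketch with invented intensities:

```python
# Toy sketch (invented intensities): methylation level at one CpG from the
# ratio of the C signal to the total signal, as read from SNaPshot peaks
# or pyrogram peak heights.

def methylation_fraction(c_signal, t_signal):
    """Fraction methylated: C survives bisulfite; unmethylated C reads as T."""
    total = c_signal + t_signal
    if total == 0:
        raise ValueError("no signal at this CpG")
    return c_signal / total

peaks = {"cg_site_1": (820.0, 180.0),   # (C peak, T peak) - hypothetical values
         "cg_site_2": (150.0, 850.0)}
for site, (c, t) in peaks.items():
    print(site, round(methylation_fraction(c, t), 2))
```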
Thus, the quantity of cytosines and thymines at a particular position can be determined by comparing the peaks, thereby revealing the level of methylation . It is important to highlight that the light signal in pyrosequencing is produced by nucleotide incorporation during strand synthesis, whereas in Sanger sequencing, fluorescence is read at nucleotide chain termination . Pyrosequencing is a quantitative technique, which makes it very useful in forensics. Its primary advantages include its relative simplicity, high reproducibility, capability to discern differences of less than 5% in methylation levels, and its applicability for heterogeneous samples . Compared to massively parallel sequencing (discussed in the following section), this method has restricted multiplexing capability . Furthermore, even with its excellent quality-to-price ratio , its practical use in routine forensic cases remains limited. However, for research purposes, pyrosequencing is widely used, and examples can be observed in multiple sources: blood and bloodstains , saliva and buccal swabs , semen , and teeth .
2.7. Next Generation Sequencing (NGS)
NGS, also known as massively parallel sequencing (MPS), is a high-throughput DNA sequencing method where billions of short reads are sequenced per instrument run. NGS has significant advantages for analyzing a wide range of specific methylation sites within a single reaction, enabling extensive exploration of genetic information. There are different NGS platforms, each with its own technology and distinctive characteristics. In epigenetics, for example, it is common to find platforms based on sequencing by synthesis, such as Illumina ® instruments (San Diego, CA, USA), as well as Illumina BeadChip methylation arrays. Different approaches can be identified, either using a large number of markers across the entire genome (whole-genome sequencing) or focusing on exons (whole-exome sequencing), as well as the analysis of a small number of CpG markers (targeted sequencing) .
In the literature, there are both examples of whole-genome sequencing (WGS) studies and targeted NGS approaches . Many of these studies belong to the VISAGE Consortium (VISible Attributes through GEnomics), which has emerged in recent years as an initiative that employs NGS to create and validate models for predicting appearance, ancestry, and age. NGS enables thorough screening to identify potential new DNA methylation markers, which can then be used by the same method to develop prediction models with a smaller subset of candidates. The advantages of this method include the simultaneous analysis of a large number of DNA markers in a very short period of time and the generation of high-resolution data. Moreover, it possesses the capability to process low quality/quantity DNA, a crucial advantage given the prior bisulfite conversion step and forensic contexts . The disadvantages include the elevated equipment/infrastructure costs and the complexity of the data analysis, necessitating thorough training for laboratory personnel in NGS data processing .
2.8. Exploring New Approaches in DNA Methylation Analysis
Droplet Digital PCR (ddPCR) is an innovative quantitative method. In the first steps, the sample is fractionated into thousands of microdroplets of bisulfite-converted DNA, followed by PCR amplification and analysis of each droplet. This enables parallel digital quantification of single target molecules. The method is highly sensitive and rapid . Furthermore, in comparison to traditional qPCR, ddPCR is less dependent on PCR inhibition or high PCR efficiency and could be a more efficient procedure due to its simplicity in a single PCR amplification . However, ddPCR requires specialized instrumentation and primer design can be labor-intensive . In forensics, there are examples of this method in blood samples and saliva samples . Enzymatic-based non-chemical conversion techniques are being investigated as alternatives to bisulfite conversion. Vaisvila et al.
introduced an enzymatic methylation sequencing (EM-seq) method capable of detecting methylated and non-methylated cytosines using sets of enzymatic reactions . However, further studies are needed for its extensive use, and bisulfite conversion remains the method of choice for DNA methylation analysis in forensic research. In summary, a range of methods have been explored for age prediction based on DNA methylation . The outcomes vary depending on their specific strengths and limitations. Bisulfite conversion is commonly employed as the initial step in the analysis and so, it is critically important to monitor conversion efficiency and variations in performance across different kits . Technical errors in DNAm analysis can vary across analysis technologies, emphasizing the importance of conducting training, testing, and validation of models using the same technology to integrate them into routine forensic workflows . Furthermore, there are instances where reference data produced from a particular DNAm microarray technology is subsequently utilized in forensic analyses employing a different technology, leading to variations in the results . Correction models, like the one introduced by Feng et al. , which utilizes Z-score transformation to address differences between reference model data generated from EpiTYPER microarrays and actual casework data produced with pyrosequencing, are pivotal in managing these variations.
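A generic sketch of such a Z-score transformation follows; the per-CpG reference statistics are invented, and this is not the exact Feng et al. procedure.

```python
# Generic sketch of cross-platform Z-score correction (statistics invented):
# a methylation beta value measured on one platform is standardized against
# that platform's reference statistics, then rescaled to the scale of the
# platform used to train the prediction model.

def zscore_transform(value, source_mean, source_sd, target_mean, target_sd):
    """Map a value from the source platform's scale onto the target's."""
    z = (value - source_mean) / source_sd
    return target_mean + z * target_sd

# Hypothetical per-CpG statistics: pyrosequencing casework vs. EpiTYPER
# reference data used to build the model.
pyro_mean, pyro_sd = 0.62, 0.08
epityper_mean, epityper_sd = 0.58, 0.10

corrected = zscore_transform(0.70, pyro_mean, pyro_sd, epityper_mean, epityper_sd)
print(round(corrected, 2))
```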
Epigenetic clocks originated from genome-wide studies that analyzed DNA methylation patterns at specific genomic positions . These investigations explored how methylation changes with age and its association with factors such as biological sex, disease status, and lifestyle choices (including smoking, diet, exercise, and alcohol consumption). As previously mentioned, epigenetic clocks introduced the concepts of chronological age (measured in calendar time since birth) and biological age (dependent on an individual’s biological state) . In forensic science, the primary objective is to estimate an individual’s chronological age, especially in criminal investigations or for anthropological purposes. However, when the focus shifts to understanding how environmental factors influence phenotypes and contribute to human aging, researchers delve into biological age (reviewed ). Chronological clocks focus exclusively on CpG sites that correlate with chronological age, whereas biological clocks encompass a broader range of CpG sites associated with factors such as lifestyle and lifelong environmental influences. The term age acceleration residual (AAR) refers to the discrepancy between predicted age and actual chronological age.
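Conceptually, most of these clocks are penalized linear models: a weighted sum of CpG beta values plus an intercept, with the AAR as the residual. A minimal sketch with invented coefficients (real clocks such as Horvath's combine hundreds of CpGs and apply a calibrated age transformation):

```python
# Minimal sketch of an epigenetic clock as a linear model over CpG beta
# values. Weights, intercept, and beta values are invented for illustration.

def predict_age(betas, weights, intercept):
    """Predicted age = intercept + sum of weight_i * beta_i over the CpG panel."""
    return intercept + sum(weights[cpg] * betas[cpg] for cpg in weights)

weights = {"cg1": 40.0, "cg2": -25.0, "cg3": 15.0}    # hypothetical coefficients
betas = {"cg1": 0.80, "cg2": 0.30, "cg3": 0.60}       # measured methylation levels
intercept = 20.0

predicted = predict_age(betas, weights, intercept)
chronological = 50.0
aar = predicted - chronological    # age acceleration residual
print(predicted, aar)
```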
Elevated AAR has been associated with a higher risk of mortality . Ideally, a perfect chronological clock would yield zero AAR. However, distinguishing between chronological and biological components of aging remains a significant challenge (reviewed ). Chronological clocks are considered first-generation, exemplified by Hannum et al. and Horvath clocks, which have achieved remarkable accuracy with correlation coefficients exceeding 0.9. Hannum et al. examined over 450,000 CpG markers in whole-blood human samples (ranging from 19 to 101 years old) and identified 71 methylation markers strongly associated with age. However, Hannum’s clock loses accuracy when applied to non-blood tissues and samples from children . Horvath developed a multi-tissue clock composed of 353 CpGs, using Illumina DNA methylation array datasets and samples from 51 healthy tissues and cells of young children and adults . A total of 193 CpGs were identified as positively correlated with age, while the remaining 160 CpGs showed a negative correlation. The pace of these DNA methylation changes accelerated during growth and development. While the methylation status of individual CpGs showed weak associations with age, combining the 353 CpGs yielded a robust biomarker of biological aging. Thus, a higher number of CpGs enhanced accuracy and robustness. Moreover, within the same study, the analysis of cancer samples showed significant age acceleration. The second generation of epigenetic clocks consists of biological clocks, designed to improve assessments related to factors like time to death and healthspan . Zhang et al. were the first to integrate mortality-associated CpGs to create an overall mortality risk score . However, a more robust predictor emerged later: PhenoAge . 
A total of 42 clinical biomarkers were assessed using blood samples in a cohort of 9926 individuals, considering factors such as creatinine, C-reactive protein, white blood cell count (WBC), and other indicators to develop an age prediction model composed of 513 CpGs. Out of these, only 41 markers overlapped with Horvath’s clock, with five being shared among Hannum et al., Horvath, and Levine et al.’s epigenetic clocks . The CpGs shared among these clocks demonstrated a stronger correlation with chronological age, whereas the non-shared CpGs were more indicative of biological age. This observation supports the notion that the initial generation of DNAm age estimators was primarily linked to chronological age and exhibited fewer associations with clinical measures of biological age, as seen in PhenoAge . This biological clock surpassed the initial generation of DNAm clocks in predicting various health outcomes, including all-cause mortality, cancers, healthspan, cardiovascular disease, and Alzheimer’s disease . In addition to PhenoAge, scientists have subsequently introduced another biological clock known as GrimAge . This epigenetic clock was developed using DNAm-based surrogate biomarkers for 12 plasma proteins, together with chronological age, biological sex, and smoking (measured in smoking pack-years), to predict the time to death . Using large-scale validation data from three ancestry groups, the age acceleration measure (AgeAccelGrim) surpassed former epigenetic clocks in predicting time-to-death, time-to-coronary heart disease, and time-to-cancer, and was linked to computed tomography data for fatty liver/excess fat and early age at menopause. It also strongly correlated with comorbidities, exhibiting associations with lifestyle factors like a healthy diet and educational attainment. Notably, GrimAge has been used to study many conditions including COVID , autism , major depressive disorder , and post-traumatic stress disorder (PTSD) .
The second version of GrimAge (GrimAge 2) used two additional DNAm-based estimators: high-sensitivity C-reactive protein (logCRP) and hemoglobin A1C (logA1C) . GrimAge2 was assessed in 13,399 blood samples from nine study cohorts, which included individuals of Hispanic, European, and African populations (aged 40 to 92 years). This second version outperformed GrimAge in predicting mortality and exhibited stronger associations with age-related conditions, including kidney and lung dysfunction, metabolism, cognitive behavior, lipid profiles, vital signs, and CT-derived measures of adiposity across multiple racial and ethnic groups. Regarding DNAm markers of metabolic syndrome, DNAm logCRP was positively correlated with morbidity count, and DNAm logA1C was highly associated with type 2 diabetes. GrimAge version 2 was also studied in younger individuals and saliva samples, extending the analysis beyond the initial version. While Hannum’s and Horvath’s epigenetic clocks used different CpG sets to predict chronological age and its relation to all-cause mortality, PhenoAge and GrimAge used CpG methylation to improve the previously proposed age-related mortality and phenotypic indicators, adjusted for chronological age . Both first-generation and second-generation epigenetic clocks mentioned earlier offered a cross-sectional view, capturing an individual’s methylome at a specific point in time . Emerging longitudinal methylation studies, such as DunedinPoAm and DunedinPACE, enabled exploration of methylation changes over an extended period, showing how epigenetic modifications evolve in individuals over time. DunedinPoAm (Dunedin Pace of Aging methylation) evaluated the rate of biological aging using whole-genome blood DNA methylation data and elastic-net regression . This epigenetic clock analyzed differences in biological aging rates among 954 individuals who shared the same birth year and followed changes in 18 biomarkers indicating organ-system integrity over 12 years.
Higher DunedinPoAm scores correlated with midlife cognitive and physical decline, accelerated facial aging, and increased risk of disease and mortality in older adults. Among young individuals, experiences of early-life adversity were also associated with a faster DunedinPoAm. Furthermore, the study included validation analysis conducted within cohort studies and the CALERIE trial. The same research team subsequently developed DunedinPACE (Dunedin Pace of Aging Calculated from the Epigenome) as a DNA-methylation biomarker to quantify the pace of aging through a blood test . Using data from a cohort of people of the same chronological age, it tracked the within-individual decline in 19 indicators of organ system integrity over a two-decade period. It had three distinguishing features: it analyzed a single-year birth cohort, conducted follow-ups in young adults to separate aging effects from disease effects and avoid survival bias, and focused on changes in multi-organ system integrity during adulthood to distinguish ongoing aging processes from early developmental deficits. DunedinPACE demonstrated correlations with morbidity, disability, and mortality, and identified accelerated aging among young adults with a history of childhood adversity, providing information on how behavioral and environmental modifications may influence the rate of aging. One limitation of DunedinPACE is that it was established in a small cohort from a single country and did not consider individual diseases or causes of death. Assessing it in larger datasets would enable researchers to explore further the impacts of specific diseases or causes of death. The continuous evolution and improvement of epigenetic clocks can be beneficial both in forensic contexts and for enhancing human health. Considering the features observed thus far, selecting the appropriate epigenetic clock based on the need to estimate chronological or biological age is essential.
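The "pace of aging" idea behind these longitudinal clocks can be sketched as a within-individual slope: how fast a standardized biomarker declines across repeated visits. The data below are invented, and this is a conceptual simplification of the published approach.

```python
# Sketch of a pace-of-aging summary (invented data): the within-individual
# slope of a standardized biomarker over repeated visits, fit by ordinary
# least squares.

def pace_of_aging(visit_ages, biomarker_z):
    """Least-squares slope: z-score units of change per year of age."""
    n = len(visit_ages)
    mx = sum(visit_ages) / n
    my = sum(biomarker_z) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(visit_ages, biomarker_z))
    var = sum((x - mx) ** 2 for x in visit_ages)
    return cov / var

ages = [26, 32, 38, 45]                # ages at repeated assessments
z = [0.10, -0.05, -0.30, -0.55]        # standardized organ-system biomarker
print(round(pace_of_aging(ages, z), 3))  # negative slope = decline with age
```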
Precise age estimation plays a critical role in anthropological investigations, especially when reconstructing the biological profiles of skeletal remains in scenarios like mass disasters, genocides, and bioarchaeology/paleodemography. Bones and teeth, the hardest structures in the human body and the most resistant to decomposition, are crucial for postmortem examinations and play a significant role in forensic anthropology. While accurately estimating age in children is feasible through anthropological markers of growth and development, the process becomes significantly more challenging in adults. Estimating age-at-death in adult individuals relies on assessing the degeneration of skeletal and dental structures, which involves examining macroscopic characteristics and evaluating anatomical features (reviewed). Still, the difference between chronological and estimated age can be ±10 years, which could hamper the proper identification of the victim. As a result, researchers have investigated a range of techniques based on the biochemical mechanisms of aging to improve age-at-death estimation in adult individuals. Among these techniques, aspartic acid racemization stands out as the oldest, followed by protein glycosylation, telomere length measurement, mitochondrial mutations, DNA damage response, and T-cell DNA rearrangement, among others. Nonetheless, their widespread use in forensic sciences is hindered by limitations in accuracy, applicability, or technical complexity. External factors, such as pathological conditions or the extreme environments of mass disasters, also significantly affect the material under study. For instance, although aspartic acid racemization shows good accuracy, with a mean absolute error (MAE) of 5 years, it is not suitable for analyzing burnt human remains due to temperature constraints. Consequently, with the growing understanding of DNA methylation, it emerges as a promising biochemical biomarker for age prediction in anthropology.
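The accuracy figures quoted throughout this review (MAE and MAD) are, as used in this literature, both averages of absolute differences between predicted and chronological age; papers mostly differ in terminology. A minimal sketch with illustrative toy numbers, not data from any cited study:

```python
# Mean absolute error between predicted and chronological ages.
# MAE and MAD as reported in the age-estimation literature are
# typically computed the same way.

def mean_absolute_error(predicted, chronological):
    errors = [abs(p - c) for p, c in zip(predicted, chronological)]
    return sum(errors) / len(errors)

# Toy example: three test individuals (illustrative values only)
predicted_ages = [34.2, 51.0, 27.8]
true_ages = [30.0, 55.0, 25.0]
print(mean_absolute_error(predicted_ages, true_ages))  # ≈ 3.67 years
```

A model quoted with "MAE = 5 years", as for aspartic acid racemization above, therefore means its predictions miss the true age by five years on average.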
However, research on DNA methylation in bone and tooth tissues has been relatively limited. Bekaert et al. pioneered the analysis of DNA methylation in teeth for age estimation. Their research focused on four genes (ASPA, PDE4C, ELOVL2, and EDARADD), using DNA extracted from 29 dentin samples (third molars) obtained from individuals aged between 19 and 70 years. Through pyrosequencing, they established an age estimation model that yielded a mean absolute deviation (MAD) of 4.86 years. Giuliani et al. were the first to estimate age from combinations of different tooth tissues, employing EpiTYPER. They presented unique age prediction models for each tooth tissue (dental pulp, cementum, and dentin) individually, as well as combined models. Each model incorporated 5–13 CpGs from the ELOVL2, FHL2, and PENK genes, showing MADs ranging from 1.2 to 7.1 years, depending on the tissue analyzed. Notably, the most accurate predictions were obtained when combining pulp and cementum (MAD = 1.2) and from dental pulp alone (MAD = 2.25). Márquez-Ruiz et al. employed pyrosequencing to evaluate methylation levels at specific CpG sites in the ELOVL2, ASPA, and PDE4C genes for age prediction. Using 65 whole-tooth samples from individuals aged 15 to 85 years, they achieved MAEs ranging from 4.8 to 6.9 years with the three genes. The study further explored the correlation between methylation data and relative telomere length measurements to develop age prediction quantile regression models for both biomarkers, together and separately. Results indicated that DNA methylation was more informative than telomere length when evaluated independently, and the combined analysis suggested limited utility for telomere length as a supplementary marker alongside DNA methylation markers for age estimation. They found no significant impact of tooth type or biological sex on age prediction.
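Quantile regression models like those above minimize the asymmetric "pinball" loss rather than squared error; fitting at, say, tau = 0.05 and tau = 0.95 yields age-dependent prediction intervals instead of a single point estimate. A minimal sketch of that loss, with toy numbers only:

```python
# Pinball (quantile) loss: the objective minimized by quantile regression.
# At tau = 0.9, under-predictions are penalized nine times more than
# over-predictions, so the fitted curve tracks the 90th percentile of age.

def pinball_loss(y_true, y_pred, tau):
    diff = y_true - y_pred
    return tau * diff if diff >= 0 else (tau - 1) * diff

# Under-predicting a 50-year-old by 10 years at tau = 0.9 ...
print(pinball_loss(50, 40, 0.9))
# ... costs far more than over-predicting by the same 10 years
print(pinball_loss(50, 60, 0.9))
```

Minimizing this loss at a high and a low tau gives the upper and lower bounds of an interval that can widen for older donors, matching the error growth with age reported across these studies.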
The final estimation model was based on nine CpGs in two genes (ELOVL2 and PDE4C), resulting in a MAE of 5.04 years. Zapico et al. were the first to use DNA methylation to estimate age in pulp tissues by applying pyrosequencing. Employing a set of 20 healthy erupted third molars (age 22–70), the researchers integrated established DNA markers from the ELOVL2 and FHL2 genes, alongside three newly identified ones (NPTX2, KLF14, and SCGN), subjecting them to analysis within four distinct multivariate regression models. The outcome demonstrated high accuracy, with MAEs ranging from 1.5 to 2.13 years when comparing predicted age to chronological age in adult individuals. Two important factors should be considered: firstly, this study exclusively utilized third molars, which are protected within the jaw, potentially influencing the outcomes; secondly, the use of pulp may also significantly impact the accuracy of age estimation, as its location and properties render it more resistant to environmental stresses. Correia Dias et al. developed two multi-tissue models for age estimation, employing 31 bone and 31 whole-tooth samples and using both Sanger sequencing and SNaPshot techniques to analyze DNA methylation levels. For Sanger sequencing, the optimal model for bones included six CpGs located in the genes ELOVL2, EDARADD, and MIR29B2C, obtaining a MAD of 2.56. In the case of teeth, the marker FHL2 CpG 4 had the best performance, with a MAD of 11.35. When genes were evaluated with SNaPshot, the best model for bones included the genes FHL2 and KLF14, producing a MAD of 7.2. For teeth, the optimal model included CpGs at ELOVL2 and KLF14, yielding a MAE of 7.1. In a follow-up study, Correia Dias et al. developed the first multi-tissue age prediction models spanning bones and teeth. They built two such models utilizing Sanger sequencing and SNaPshot, incorporating blood, bone, and tooth samples from both living and deceased individuals. Kondo et al.
developed the first age estimation method for teeth using real-time methylation-specific PCR (RT-MSP), focusing on the validated biomarker ELOVL2. They built a single-gene age prediction model using 29 whole-tooth samples spanning ages 20 to 79 years, achieving a MAD of 8.94. It is important to consider that only one marker was utilized, which may have limited the accuracy. Moreover, they observed that methylation levels were not influenced by biological sex. Following this, Ogata et al. extended the application of RT-MSP by incorporating a CpG site in the EDARADD gene alongside the previous marker in the ELOVL2 gene. In this instance, the samples were also whole teeth (n = 59), and they developed a multiple regression prediction model, achieving a MAE of 6.69 years. Validation of the age estimation model using an additional 40 teeth resulted in a MAE of 8.28 years. Similar to the previous study, they highlighted the importance of further exploration of this method, especially given its affordability and accessibility in forensic laboratories. In recent years, genome-wide DNA methylation data from bone samples have been generated in various publications. The age-predictive CpG sites identified across these different datasets further contribute to the knowledge needed for the future development of a more accurate bone clock. Among newer technologies such as NGS, the VISAGE consortium developed age prediction models for various tissues, including bones. The bone-specific model (n = 161) utilized six CpGs from four genes (ELOVL2, KLF14, PDE4C, and ASPA), achieving a MAE of 3.4. The development of prediction models for bones and teeth is not as extensive as for blood or saliva samples, which is why more studies and additional validations are needed. In anthropology, samples are often found in highly diverse conditions, so the effects of taphonomy and the environment, as well as the choice of tissue and markers, are crucial considerations.
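Single-marker models such as the ELOVL2-only RT-MSP predictor above reduce to ordinary least squares on one methylation value. A minimal sketch with made-up beta values, not data from the cited studies:

```python
# Closed-form simple linear regression: age ≈ intercept + slope * beta,
# where beta is the methylation fraction (0-1) at a single CpG site.

def fit_line(betas, ages):
    n = len(betas)
    mean_b = sum(betas) / n
    mean_a = sum(ages) / n
    slope = (sum((b - mean_b) * (a - mean_a) for b, a in zip(betas, ages))
             / sum((b - mean_b) ** 2 for b in betas))
    intercept = mean_a - slope * mean_b
    return intercept, slope

# Hypothetical training data: ELOVL2-like methylation rising with age
betas = [0.20, 0.40, 0.60]
ages = [20.0, 40.0, 60.0]
intercept, slope = fit_line(betas, ages)
print(intercept + slope * 0.50)  # predicted age for an unseen beta of 0.50
```

Multi-marker models such as Ogata et al.'s simply extend this to several beta values per donor via multiple regression; the single-predictor case shows why one weak marker caps the achievable accuracy.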
Furthermore, the development of new epigenetic clocks specific to teeth and bone would also be important for future advancements in the field.
Age Estimation in Children
In childhood, age estimation through forensic anthropological assessment of growth and development can be achieved with high accuracy (reviewed); however, there are situations where these techniques are limited. In cases of child abduction, missing children, and unaccompanied minors in migration, estimating age can be critical, and DNA methylation could become highly useful. Among recent discoveries, Freire-Aradas et al. conducted a search aimed at identifying potential DNA methylation markers correlated with age among blood donors aged 2 to 18 years (n = 209). First, public datasets from Illumina BeadChip arrays were analyzed, followed by analysis with the EpiTYPER DNA methylation system to identify six loci (KCNAB3, EDARADD, ELOVL2, CCDC102B, MIR29B2CHG, CR_23_CpG_3) associated with age. Their novel finding was the strong correlation between age and the KCNAB3 gene (potassium voltage-gated channel subfamily A regulatory beta subunit 3, chromosome 17), which shows rapid changes between the ages of 2 and 18 years. This underscores the potential of this gene as a biomarker for childhood and adolescence. McEwen et al. then created a specialized epigenetic clock for children, using noninvasive buccal epithelial swab samples from individuals aged 0 to 19. The data, generated and evaluated from 1721 genome-wide DNA methylation profiles, were then employed to create the Pediatric Buccal Epigenetic (PedBE) clock, which estimates age based on methylation patterns across 94 CpGs (MAE = 0.35). The PedBE clock has since been utilized in later publications for age estimation in children. As buccal swabs are deemed noninvasive for collection, they could offer advantages, particularly in children. Freire-Aradas et al.
studied the development of a new epigenetic clock that included children and adolescents. The prediction model used 895 DNA blood samples from individuals aged 2 to 104, employing the EpiTYPER technique. Through a comparison of various statistical methods, they identified the optimal prediction model as a quantile regression neural network applying markers from ELOVL2, ASPA, PDE4C, FHL2, CCDC102B, MIR29B2CHG, and chr16:85395429 (GRCh38). The validation model based on a quantile regression neural network exhibited the highest accuracy, with a MAE of 3.32 across 152 samples. One important aspect of this study was the comparison of multiple statistical methods, which demonstrated the advantage of the quantile regression tool in generating age-dependent prediction intervals, thereby enabling the adjustment of errors to match the estimated age. Based on the gathered information, there is a need to delve deeper into the recently discovered markers for children. Additionally, novel prediction models should be generated using different techniques and tissues based on these recent epigenetic clocks. This would improve the identification of children and adolescents when the most recognized anthropological methods are not feasible to use.
5.1. Blood
The initial epigenetic age prediction methods introduced in forensic science were mainly tailored for estimating age using blood samples, as it would not be uncommon, for instance, to encounter bloodstains at a crime scene. The most renowned DNA marker, ELOVL2, was initially examined in blood samples by Garagnani et al. using the Illumina HumanMethylation450 BeadChip array (San Diego, CA, USA), and subsequently validated in follow-up studies. One of the early works on age prediction models using pyrosequencing was conducted by Weidner et al. This study analyzed 151 blood samples and focused on three CpG sites within the genes ITGA2B, ASPA, and PDE4C, obtaining a MAD of less than 5 years. Zbieć-Piekarska et al. evaluated the methylation status of seven CpGs within the ELOVL2 gene in 427 blood samples from Polish individuals using bisulfite pyrosequencing. Their final model included two CpG sites in ELOVL2 and enabled age prediction with a MAD of 5.75 in the test set (n = 124). Additionally, they analyzed the methylation levels in bloodstains to evaluate the stability of prediction accuracy over time, finding no changes after 4 weeks of room-temperature storage. The subsequent study conducted by Zbieć-Piekarska et al.
involved the analysis of 41 CpGs in 420 blood samples (ages 2–75 years) using pyrosequencing. The age prediction model, which utilized five markers from the genes ELOVL2, TRIM59, C1orf132/MIR29B2CHG, KLF14, and FHL2, yielded a standard error (SE) of the estimate of 3.9 in the test set (n = 120). Later, Cho built on this work to construct multiple age prediction models in 100 blood samples from the Korean population. They employed the same genes as Zbieć-Piekarska et al.'s model, and the most accurate age prediction was achieved using six CpG sites across the genes ELOVL2, TRIM59, C1orf132/MIR29B2CHG, and FHL2, excluding KLF14 (MAD = 3.29). Several studies employed SNaPshot assays in different populations and with different markers. Jung et al. used 448 samples from various tissues, including blood, to develop independent age prediction models and a combined one based on five CpG sites (genes ELOVL2, FHL2, KLF14, C1orf132/MIR29B2C, and TRIM59), achieving a MAD of 3.48 years for blood. Pan et al. analyzed 310 blood samples from the Chinese Han population using a multiplex methylation SNaPshot assay. They incorporated seven CpG markers from the genes ASPA, EDARADD, KLF14, CCDC102B, ZNF423, ITGA2B, and FHL2 to construct two distinct age prediction models (stepwise regression and support vector regression). Notably, the support vector regression model was the most accurate, achieving a MAD of 5.56 in the test set (n = 80). Onofri et al. aimed to validate previous models in an Italian population. They employed 84 blood samples in a SNaPshot assay targeting five CpG sites in the genes ELOVL2, FHL2, KLF14, MIR29B2C, and TRIM59, achieving a MAD of 3.01 years in the test set. Feng et al. investigated 153 age-associated CpG sites within 21 genomic regions in 390 Chinese blood samples (aged 15–75 years) using the EpiTYPER system. Their primary objective was to determine the optimal feature selection method.
In two independent validation sets, they identified nine CpG sites located in the genes ELOVL2, TRIM59, MIR29B2CHG, PDE4C, CCDC102B, and RASSF5, and a region on chr10:22334463/65, as the optimal subset for age estimation (MAD = 2.49). Notably, the linear model performed better than machine learning models like support vector machine (SVM) and artificial neural network (ANN). Additionally, they showed that a z-score transformation could partially remove the batch effect between data generated with the EpiTYPER and pyrosequencing techniques. In their study, Lau and Fung analyzed DNA methylation from 991 blood samples (aged 19–101 years) using the Infinium® HumanMethylation450 BeadChip (San Diego, CA, USA). They explored various variable selection methods, including forward selection (FS), least absolute shrinkage and selection operator (LASSO), elastic net (EN), and smoothly clipped absolute deviation (SCAD), to predict human age. With this information, they compared the performance of classical statistical models (multiple linear regression) with sophisticated machine learning algorithms (random forest regression, neural networks with one or two hidden layers, and SVM). Their analysis revealed that the optimal model combined the 16 CpG markers chosen by forward selection with the multiple linear regression statistical model, resulting in a MAD of 3.76. Notably, they found that increasing the number of markers beyond this threshold did not improve the model's accuracy. The VISAGE Consortium developed a prototype tool for age estimation employing a multiplex PCR/MPS assay. They analyzed 32 CpGs from five genes (ELOVL2, MIR29B2C, FHL2, TRIM59, and KLF14), previously identified by Zbieć-Piekarska et al., including reproducibility and sensitivity analyses, achieving robust quantification of methylation levels (mean standard deviation of 1.4% across ratios).
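The z-score transformation Feng et al. applied is a per-platform standardization: each batch's beta values are centered and scaled by that batch's own mean and standard deviation, so EpiTYPER and pyrosequencing measurements land on a comparable scale. A minimal sketch with invented values:

```python
# Per-batch z-score standardization of methylation beta values.
# Each platform's measurements are transformed against that platform's
# own mean and (population) standard deviation.

def zscore(values):
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [(v - mean) / sd for v in values]

# Hypothetical beta values for one CpG measured on two platforms;
# after the transformation, both batches have mean 0 and SD 1, so a
# systematic offset between the platforms no longer dominates.
epityper = zscore([0.52, 0.61, 0.70])
pyroseq = zscore([0.48, 0.57, 0.66])
```

Standardizing per batch removes additive and multiplicative platform offsets, which is why it only partially removes batch effects: nonlinear distortions between technologies survive the transformation.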
Moreover, the VISAGE enhanced tool for age prediction in somatic tissues, which incorporated six CpG sites from genes previously studied by Heidegger et al. alongside PDE4C, yielded a MAE of 3.2 in a test set of 48 samples. In a subsequent study, the VISAGE consortium developed a prediction model from blood samples (n = 160) using six CpGs from six genes (ELOVL2, MIR29B2CHG, KLF14, FHL2, TRIM59, and PDE4C), achieving a MAE of 3.2 years. Multiple studies have explored age prediction using bloodstains, which is especially important in crime scene investigations. In the most recent study, Yang et al. utilized pyrosequencing and random forest regression (RFR) to develop an age prediction model. Initially, they evaluated 46 CpG sites from six genes (ELOVL2, C1orf132, TRIM59, KLF14, FHL2, and NPTX2) using bloodstain samples from 128 males and 113 females (aged 10 to 79 years). Subsequently, they employed RFR to build two models, one for males (MAD = 2.8) and another for females (MAD = 2.93). A notable distinction is that they obtained reproducible results using only 0.1 ng of genomic DNA. Age estimation models in blood samples have been extensively studied in forensic research. Multiple studies have explored various techniques, markers, sample types, and statistical models, leading to varying levels of accuracy.
5.1.1. Postmortem Blood Samples
In addition to age estimation from blood samples left at the crime scene, potentially belonging to the perpetrator, it is important to assess whether blood from a deceased individual could differ in methylation patterns, affecting age estimates. The following provides a brief description of the work published to date on this topic. The first study related to age prediction using DNA methylation in samples from deceased individuals was by Bekaert et al. They investigated DNA methylation patterns employing pyrosequencing technology targeting four CpG markers (ASPA, PDE4C, ELOVL2, and EDARADD).
The model was built from blood samples of both living (n = 37) and deceased (n = 37) individuals aged 0 to 91 years, achieving a MAD of 3.75 years. Two important findings emerged from the study: prediction accuracy remained consistent across samples from both living and deceased individuals, and there were no discernible differences based on biological sex. These results are consistent with the findings of Hamano et al., who analyzed blood samples from 22 living and 52 deceased individuals aged 0 to 95 years and developed a combined age prediction model. Using markers in the genes ELOVL2 and FHL2 through MS-HRM, they achieved a MAD of 7.71 years for the test set. They also noted that the samples were analyzed within 10 days after death, information that could be important for future comparisons when establishing a prediction model. Correia Dias et al. performed bisulfite Sanger sequencing on blood samples obtained from 51 deceased individuals (24 to 86 years), processed within 5 days postmortem. They evaluated the methylation levels of ELOVL2, FHL2, EDARADD, PDE4C, and C1orf132, reporting a MAD of 6.08 years for the training set and 8.84 years for the test sets. The same group developed prediction models using blood samples from 59 living and 62 deceased individuals (28 to 86 years), utilizing SNaPshot assays and building upon CpG sites analyzed in a previous study. For the final model applied to living individuals, they employed three CpG sites located in the ELOVL2, FHL2, and C1orf132 genes, resulting in a MAD of 4.25 years. In contrast, for the final model in deceased individuals, they integrated four CpG sites found in the ELOVL2, FHL2, C1orf132, and TRIM59 genes, yielding a MAD of 5.36 years. Similar to previous studies, they found no differences in prediction accuracy based on biological sex. Anaya et al.
employed bisulfite pyrosequencing, similar to Bekaert's previous work, to assess individual CpG sites in five genes (KLF14, ELOVL2, C1orf132, TRIM59, and FHL2) in 264 postmortem blood samples ranging from 3 months to 93 years of age, achieving a MAD of 7.42 for the testing data (n = 72). Furthermore, the researchers explored potential factors that could influence accuracy, including sample storage time before analysis, which in this case ranged from 2.5 to 4 days. Additionally, they observed a lower predictive potential of age estimation as an individual's age increases, consistent with prior research. Naue et al. employed MPS to investigate 13 previously selected CpGs (DDO, ELOVL2, F5, GRM2, HOXC4, KLF14, LDB2, MEIS1-AS3, NKIRAS2, RPA2, SAMD10, TRIM59, and ZYG11A) in brain, bone, muscle, buccal swabs, and whole blood of 29 deceased individuals (0 to 87 years). Their analysis included a larger number of markers, which, although not ideal for a model applicable in forensic cases, could offer an opportunity for further exploration of these methylation regions in subsequent research. The VISAGE Consortium also developed prediction models using samples from 24 deceased individuals, including blood, cartilage, and muscle. While the blood prediction model achieved a MAE of 3.1 years, further investigation is needed to improve the accuracy for cartilage and muscle samples, with respective MAEs of 13.1 and 17.1. The studies conducted so far have shown significant variability in techniques, in the selection and combination of DNA methylation markers, and even in the time interval between sample collection and analysis. These variations could have influenced the accuracies obtained. However, it was also found that the differences in age prediction accuracy between samples from living and deceased individuals, as well as between biological sexes, may not be significant. These findings are particularly relevant in a forensic context.
5.1.2. Y-Chromosome in Blood Samples
The study of DNA methylation on the Y-chromosome (ChrY) is of significant interest for forensic investigations. Firstly, age prediction through DNA methylation could help in estimating the age of male individuals in mixed stains, such as those found in assault cases. Additionally, it could assist in distinguishing between male relatives of different ages within the same paternal lineage, a challenge not feasible with current ChrY analyses. However, this chromosome exhibits unique characteristics: it is exclusive to males, haploid in nature, and the smallest human chromosome. Consequently, a distinct approach is necessary in comparison to autosomal chromosomes. The earliest studies of the Y-chromosome in relation to age prediction focused primarily on differential methylation patterns and their association with mortality, analyzing ChrY blood-based DNA methylation data from 624 men in a chromosome-wide epigenetic association analysis. They identified up to 416 CpG sites that exhibited differential methylation across ages. The results showed an increasing tendency in DNA methylation with age, a finding that was supported in further studies. Additionally, later work from Lund et al. found a significant overlap between mortality-associated and age-associated CpGs. Although these studies contributed to a deeper understanding of ChrY methylation patterns, they were not conducted with the aim of developing an age prediction model. Vidaki et al. developed the first male-specific Y-CpG-based epigenetic age predictor using publicly available blood-based DNA methylation data from 1057 European males (aged 15–87), previously obtained with the Illumina HumanMethylation450 BeadChip array (San Diego, CA, USA). Machine learning was applied to create two age prediction models: one utilizing 75 age-dependent Y-CpGs (MAE = 7.54), and the other employing only the 19 most predictive Y-CpGs (MAE = 8.46).
Although these MAEs are higher than those reported for autosomal chromosomes, this research sets a path for further work on the application of Y-chromosome age prediction models in forensics. In 2023, Jiang et al. investigated 13 age-related Y-CpGs using publicly available DNA methylation data from 817 blood samples of males aged 15 to 87, obtained through the Illumina HumanMethylation450 BeadChip array (San Diego, CA, USA). They developed two SNaPshot systems for a male-specific age prediction model, achieving MADs between 4 and 6 years. Despite the moderate accuracy, this research holds promise for future studies. Additionally, the study incorporated the analysis of bloodstains as well as mixed samples. Research on the Y-chromosome and its association with DNA methylation in age prediction remains limited. Further investigations of this entire chromosome, using non-array methodologies and examining Y-CpG markers in non-blood tissues, are crucial for developing future prediction models relevant to forensic science applications.
5.2. Semen
Semen traces constitute a primary biological material for perpetrator identification in forensics, especially in cases of sexual assault. Furthermore, because of the unique age-related DNA methylation pattern observed in sperm cells compared to somatic cells, epigenetic clocks, such as Horvath's, have not been able to accurately estimate age in this context. In recent years, there has been a growing body of research dedicated to age estimation models in sperm cells. The studies so far have focused on identifying the most suitable age-correlated candidates using a combination of individual CpGs or broader regions known as DMSs (differentially methylated sites), which can introduce complexity to the marker selection process.
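Candidate screening in these studies, whether for single CpGs or DMSs, typically starts by ranking sites by the correlation between their methylation level and donor age. A minimal sketch of that ranking step, with invented beta values rather than data from any cited study:

```python
# Pearson correlation between methylation beta values and age,
# used to rank candidate CpG sites for an age prediction model.

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical donors: ages and beta values at two candidate CpGs
ages = [25.0, 34.0, 47.0, 58.0]
cpg_a = [0.21, 0.28, 0.39, 0.47]   # rises steadily with age
cpg_b = [0.50, 0.44, 0.52, 0.47]   # little age signal
scores = {name: abs(pearson_r(betas, ages))
          for name, betas in [("cpg_a", cpg_a), ("cpg_b", cpg_b)]}
best = max(scores, key=scores.get)
print(best)  # cpg_a carries the stronger age signal
```

Array-based discovery phases like those described above perform essentially this ranking across hundreds of thousands of sites before a handful of top candidates are carried into targeted validation assays.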
FOLH1B, located on chromosome 11 and encoding folate hydrolase 1b (also known as prostate-specific membrane antigen-like protein), is a highly studied gene for semen-based age prediction. It has been researched both for age estimation in forensic contexts and for its potential role in the development of prostate cancer. Lee et al. developed the first age prediction model using sperm samples for application in forensic science. Initially, an assessment was conducted on approximately 485,000 CpG loci from 12 sperm donors (aged 20 to 59 years) using the Infinium HumanMethylation450 BeadChip array (Illumina, San Diego, CA, USA), identifying 24 potential epigenetic age predictors. Subsequently, the final age prediction model, using SNaPshot, was developed based on the most strongly correlated methylation regions (TTC7B, FOLH1B/NOX4, and cg12837463), achieving a MAD of 5.4 for the validation test (n = 37). In a follow-up validation study, Lee et al. analyzed both sample donors (n = 12) and forensic casework samples (n = 19), resulting in MADs of 4.8 and 5.2, respectively. A primary highlight of the study was the inclusion of forensic casework samples, achieving reproducible results with less than 5 ng of bisulfite-converted DNA, a factor of particular significance for potential implementation in forensic contexts. Individuals in their 20s and 50s showed distinct MADs of 2.9 and 7.2, respectively, indicating that, in line with findings from other tissue models, prediction sensitivity decreases with age progression. These two studies highlighted the importance of increasing the number of samples to improve reliability. Later, Li et al. developed a predictive age model utilizing two CpG sites in the genes TTC7B and NOX4/FOLH1B identified in these earlier studies, applying bisulfite pyrosequencing to samples from Chinese males aged 21 to 54 years.
One notable aspect of the study was the utilization of different types of semen samples (liquid semen, fresh seminal stains, aged seminal stains, and mixed stains of semen and vaginal secretion), which yielded MADs ranging between 3.8 and 4.3 years. Recently, the VISAGE Consortium designed a three-stage study to explore potential age predictors suitable for sperm cells. The study included identifying and validating novel age-correlated CpGs, as well as developing a prediction model based on the top candidates. First, they used 40 semen samples (24 to 58 years) on Infinium MethylationEPIC® BeadChip arrays (Illumina, San Diego, CA, USA) to target approximately 850,000 CpGs and identified distinctive age-correlated DMSs suitable for age prediction. Building upon this, the ten most promising candidate CpGs, along with the three markers previously reported, were validated in an independent set of semen-derived DNA samples (n = 125) using targeted NGS assays. Finally, a prediction model consisting of four novel (SH2B2, EXOC3, IFITM2, and GALR2) and one previously identified (FOLH1B) DNAm markers was developed and further validated, achieving a MAE of 5.1 years in the testing set (n = 54). Other investigations concerning age prediction from semen have predominantly adopted a clinical perspective. For instance, Jenkins et al. analyzed data from previous studies to develop a statistical DNA methylation model employing the Infinium HumanMethylation450 BeadChip (Illumina, San Diego, CA, USA) (n = 329). The model was based on 51 genomic regions and reported a MAE of 2.4. The study used samples with diverse characteristics and found that age prediction could be achieved regardless of fertility status. Additionally, smokers showed a tendency towards elevated age profiles. In a recent study conducted by Pilsner et al.
, two distinct age prediction models were constructed employing the Infinium Methylation BeadChip and machine learning analysis: one utilizing individual CpGs (120 CpGs) and the other incorporating entire DMRs (318 CpGs). The models exhibited less than 1% overlap in CpGs between them, suggesting a substantial pool of potential candidates for further investigation. These studies, although clinically oriented, could aid in understanding the relationship between DNA methylation and aging, ultimately paving the way for potential future research projects in the forensic field. Additional factors must also be considered regarding age prediction in sperm. Limitations include the decline in both sperm quality and quantity among the general population in recent decades, as well as variability in sperm count per individual . Low sperm count can lead to an increase in non-spermatozoa cells in the sample, underscoring the need for further research into the differences between methylation profiles from purified sperm cells and whole semen samples . Furthermore, there is the added complexity of researchers using diverse DNA markers and terminologies (DMR vs. CpG) for age prediction in semen samples. Finally, it is necessary to evaluate and validate the applicability of models in mixed samples, which may be encountered in sexual assault cases.

5.3. Saliva and Buccal Swabs

Saliva and buccal swab samples have been extensively investigated for DNA methylation-based age prediction methods. These samples are used in forensic contexts due to their simple and noninvasive collection from individuals. Moreover, they are frequently encountered at crime scenes, such as on cigarette butts and bottles . However, a significant challenge comes from the cellular heterogeneity of these samples, where the proportions of leukocytes and epithelial cells vary depending on the sample type and are further influenced by individual characteristics . Bocklandt et al.
developed the first saliva model using only three CpGs, achieving an average accuracy of 5.2 years, based on data obtained from Illumina HumanMethylation27 microarrays (San Diego, CA, USA) . Following this, several studies have focused on differentiating tissue and cell types and developing accurate combined models independent of the sample type . Hamano et al. developed an age prediction model utilizing 263 saliva samples (1 to 73 years) employing MS-HRM and focusing solely on two markers in the genes ELOVL2 and EDARADD. They achieved a MAD of 6.25 years for the test set (n = 50). Additionally, they applied the same model to seven samples of cigarette butts, obtaining a MAD of 7.65 years. The difference in MAD was attributed to the limited number of samples. Years later, Oka et al. used MS-HRM on 113 saliva samples (aged 20 to 50 years) to investigate the impact of ancestry on age prediction through methylation in the EDARADD and FHL2 genes, based on previous studies . The differences they found in the methylation levels of Japanese and Indonesian participants led them to conclude that the population of origin should be considered in existing DNA methylation age prediction methods. Eipel et al. were the first to address the differential cell composition in saliva samples and its impact on age estimation. In the study, 55 buccal swabs were utilized to create an age prediction model based on three CpG sites (specifically, in the genes PDE4C, ASPA, and ITGA2B) using pyrosequencing. The model achieved a MAD of 7.03 for the validation test. Subsequently, two additional cell type-specific CpG markers (genes CD6 and SERPINB5) were incorporated to distinguish between leukocytes and epithelial cells, leading to MADs of 5.09 and 5.12 for two independent validation tests. This model, known as the “Buccal-Cell-Signature”, exhibited greater accuracy compared to the model without cell type-specific CpG markers. Hong et al.
developed an age prediction model based on 226 saliva samples using the SNaPshot method. Initially, they employed Illumina BeadChip array (San Diego, CA, USA) data from 54 individuals to identify the most age-associated CpG markers. Then, they utilized 226 saliva samples (aged 18 to 65 years) to construct an age prediction model based on six CpG markers (genes SST, CNGA3, KLF14, TSSK6, TBR1, and SLC12A5), along with one cell type-specific CpG marker (PTPN7 gene), in a SNaPshot assay. The model achieved a MAE of 3.13 in the testing set (n = 113). The incorporation of PTPN7 was based on its ability to distinguish between leukocytes and buccal epithelial cells; the use of a cell-specific marker was motivated by previous research . A subsequent study investigated these same markers using both MPS technology and SNaPshot in saliva samples from 95 individuals. As the predicted ages obtained from these two methods varied greatly, the authors constructed platform-independent age predictive models, achieving a MAD of 3.19. In a similar vein, 368 samples from 184 individuals (n = 184 saliva and n = 184 buccal cells) were analyzed using publicly available data from the Illumina HumanMethylation450 BeadChip array to select two tissue-specific markers (HUNK and RUNX1), along with seven age-correlated CpG sites (cg10501210, LHFPL4, ELOVL2, PDE4C, HOXC4, OTUD7A, and EDARADD). Subsequently, tissue-specific and combined age prediction models were developed using SNaPshot. The combined model, employing multivariate quantile regression, achieved a MAE of 3.66 on the testing set (n = 91 saliva and n = 93 buccal cells). In this case, no improvement was detected in age predictions when adding tissue-specific markers. Therefore, according to these results and those of previous studies , using markers of cellular composition as a covariate was more effective than using tissue-specific markers . Jung et al.
explored multiple tissues by analyzing five CpGs within the genes ELOVL2, FHL2, KLF14, C1orf132/MIR29B2C, and TRIM59. They developed independent and combined prediction models for saliva, buccal swabs, and blood samples using the SNaPshot assay. Specifically, in saliva samples (n = 150) and buccal swabs (n = 148), MADs of 3.55 and 4.29 years were obtained, respectively. In a separate study, Woźniak et al. conducted MPS assays on various types of samples, including buccal cells. In this instance, they utilized five CpGs within the genes PDE4C, MIR29B2CHG, ELOVL2, KLF14, and EDARADD, achieving a MAE of 3.7 years in the testing set (n = 48). Schwender et al. developed age prediction models comparing pyrosequencing and SNaPshot. Initially, an analysis of 88 CpG sites in the genes PDE4C, ELOVL2, ITGA2B, ASPA, EDARADD, SST, KLF14, and SLC12A5 was conducted on buccal swab samples (n = 141) to identify the markers that best correlated with age. Based on this, two prediction models were developed considering three markers in the genes SST, KLF14, and SLC12A5, one using pyrosequencing (MAD = 5.33) and the other using SNaPshot (MAD = 6.44). The aim was to compare the results of both methods, taking into consideration that SNaPshot could be more easily integrated into the routine workflow of a forensic laboratory. One of the latest studies developed the first age prediction model using ddPCR technology for human saliva samples. Initially, an analysis was conducted on methylation ratios at four CpG sites located within the genes SST, KLF14, TSSK6, and SLC12A5. Then, saliva samples from 76 individuals were employed to construct the prediction model, which yielded a MAD of 3.3 . Koop et al. investigated postmortem samples using buccal swabs from both living individuals (n = 142) and deceased individuals (n = 73), spanning ages from 0 to 90 years. Their goal was to develop an age prediction model based on a single gene, PDE4C, using a pyrosequencing assay.
Methylation levels of PDE4C were assessed in the samples from deceased individuals at different stages of decomposition, and age estimation was possible except in cases of advanced putrefaction. The main finding of the study was that DNA methylation remained stable across several stages of decomposition and that buccal swabs were suitable samples for assessing age-related methylation patterns in postmortem contexts. In the context of forensic science, saliva and buccal swabs are among the most studied samples, alongside blood. The predominant focus in recent years has been on the type of sample to be analyzed and the technique used, resulting in varying approaches and impacting the accuracy of the different models. The potential types of biological samples found at a crime scene and the accuracies of their age estimates based on DNA methylation are summarized in and .

5.4. Multi-Tissue Age Prediction Models

In forensic scenarios, distinguishing between various types of samples can be challenging. Therefore, the development of multi-tissue age prediction models would be highly beneficial. While tissue-specific models currently provide the most accurate age estimations , efforts are also directed toward creating universal markers suitable for multi-tissue samples. In this section, we will explore some of the multi-tissue models developed thus far. Alsaleh et al. identified 10 age-related DNA methylation markers and developed different age prediction models using samples from five tissue types: whole blood, saliva, semen, menstrual blood, and vaginal secretions. Their multi-tissue model, based on 41 samples, achieved an average prediction accuracy of 3.8 years in the training set. For the testing set (n = 24), three independent prediction models resulted in a MAD of 6.9 years for menstrual blood and vaginal fluid, 5.6 years for buccal swabs, and 7.8 years for blood. The overall multi-tissue accuracy rate, based on bootstrap analysis, was 7.8 years.
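A bootstrap accuracy figure of this kind can be obtained by resampling the per-sample absolute errors with replacement and averaging the resulting MADs; the sketch below illustrates the idea on invented error values (it is not Alsaleh et al.'s actual pipeline):

```python
import random

# Minimal sketch of a bootstrap estimate of overall prediction accuracy:
# resample the per-sample absolute errors with replacement many times and
# average the MAD across resamples. Error values are invented for illustration.

def bootstrap_mad(abs_errors, n_boot=2000, seed=7):
    rng = random.Random(seed)
    mads = []
    for _ in range(n_boot):
        resample = rng.choices(abs_errors, k=len(abs_errors))
        mads.append(sum(resample) / len(resample))
    return sum(mads) / n_boot

errors = [2.1, 8.4, 5.0, 3.3, 9.8, 6.2, 4.4, 7.1]  # |predicted - true| in years
overall = bootstrap_mad(errors)
```

Resampling also yields a spread of MADs across replicates, which is what makes the bootstrap figure a more honest summary than a single point estimate on a small test set.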
In another study, CpG markers previously studied were examined in 29 samples from deceased individuals aged 0 to 87 years to explore the potential for developing a multi-tissue age predictor . Utilizing massive parallel sequencing, the study revealed the capability of markers within 13 CpG regions of previously studied genes such as DDO, ELOVL2, KLF14, NKIRAS2, RPA2, TRIM59, and ZYG11A to predict age across various tissues, including the brain, bones, muscles, buccal epithelial cells, and blood. Alghanim et al. examined 27 CpG sites within the SCGN, DLX5, and KLF14 genes across blood (n = 71) and saliva samples (n = 91) using pyrosequencing. Methylation levels at CpG sites within the SCGN and KLF14 loci were found to be correlated with chronological age in both tissues. Various predictive models were tested, ultimately resulting in age prediction with MADs ranging between 7.1 and 10.3 in independent testing datasets. In the study by Jung et al. mentioned in previous sections, samples of saliva, blood, and buccal swabs showed strong correlations with age across three CpG sites within the genes ELOVL2, KLF14, and TRIM59. Three tissue-specific models for age prediction and a combined model that included data from all three sample types were developed using SNaPshot, achieving a MAD of 3.8. Also described previously, in the research from Correia Dias et al. , two multi-tissue age prediction models were developed: one based on Sanger sequencing and the other on a SNaPshot assay. From the studies mentioned in this section, it can be observed that multi-tissue age prediction models currently demonstrate lower accuracies compared to those based on individual tissues. Therefore, it could be inferred that further analyses with different markers are necessary to reduce this disparity.
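Conceptually, a combined multi-tissue model reduces to a single regression whose inputs are the CpG methylation values plus an encoding of the sample type, so the model can adjust its baseline per tissue. A toy sketch of that idea follows; all coefficients, offsets, and beta values are invented for illustration and come from no published model:

```python
# Toy sketch of a combined multi-tissue age model: CpG beta values plus a
# one-hot tissue indicator feed one linear regression, letting the model
# shift its intercept per tissue. All numbers are invented for illustration.

TISSUES = ["blood", "saliva", "buccal"]
CPG_WEIGHTS = [80.0, 45.0, 60.0]                          # hypothetical, per CpG
TISSUE_OFFSETS = {"blood": 0.0, "saliva": 1.5, "buccal": 2.0}
INTERCEPT = -5.0

def features(betas, tissue):
    """Feature row: CpG betas followed by a one-hot tissue indicator."""
    return list(betas) + [1.0 if tissue == t else 0.0 for t in TISSUES]

def predict_age(betas, tissue):
    """Linear combination of betas plus a tissue-specific intercept shift."""
    cpg_part = sum(w * b for w, b in zip(CPG_WEIGHTS, betas))
    return INTERCEPT + cpg_part + TISSUE_OFFSETS[tissue]

row = features([0.45, 0.20, 0.30], "saliva")
age = predict_age([0.45, 0.20, 0.30], "saliva")
```

Using a tissue indicator as a covariate in this way mirrors the finding reported above that modelling cellular or tissue composition alongside age-correlated CpGs can matter more than adding extra tissue-specific markers.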
The initial epigenetic age prediction methods introduced in forensic science were mainly tailored for estimating age using blood samples, as it would not be uncommon, for instance, to encounter bloodstains at a crime scene. The most renowned DNA marker ELOVL2 was initially examined in blood samples by Garagnani et al. using the Illumina HumanMethylation450 BeadChip array (San Diego, CA, USA), and subsequently validated in follow-up studies . One of the early works on age prediction models using pyrosequencing was conducted by Weidner et al. . This study analyzed 151 blood samples and focused on three CpG sites within the genes ITGA2B, ASPA, and PDE4C, obtaining a MAD of less than 5 years. Zbieć-Piekarska et al. evaluated the methylation status of seven CpGs within the ELOVL2 gene in 427 blood samples from Polish individuals using bisulfite pyrosequencing. Their final model included two CpG sites in ELOVL2 and enabled age prediction with a MAD of 5.75 in the test set (n = 124). Additionally, they analyzed the methylation levels in bloodstains to evaluate the stability of prediction accuracy over time, finding no changes after 4 weeks of room-temperature storage. The subsequent study conducted by Zbieć-Piekarska et al. involved the analysis of 41 CpGs in 420 blood samples (age 2–75 years) using pyrosequencing. The age prediction model, which utilized five markers from the genes ELOVL2, TRIM59, C1orf132/MIR29B2CHG, KLF14, and FHL2, yielded a standard error (SE) of the estimate of 3.9 in the test set (n = 120) . Later, Cho used this same development to construct multiple age prediction models in 100 blood samples from the Korean population. In this case, they employed the same genes from Zbieć-Piekarska et al.’s model, where the most accurate age prediction was achieved using six CpG sites across the genes ELOVL2, TRIM59, C1orf132/MIR29B2CHG, FHL2, excluding KLF14 (MAD = 3.29). Several studies employed SNaPshot assays in different populations and markers. Jung et al. 
used 448 samples from various tissues, including blood, to develop independent age prediction models and a combined one based on five CpG sites (genes ELOVL2, FHL2, KLF14, C1orf132/MIR29B2C, and TRIM59), achieving a MAD of 3.48 years for blood. Pan et al. analyzed 310 blood samples from the Chinese Han population using a multiplex methylation SNaPshot assay. They incorporated seven CpG markers from genes ASPA, EDARADD, KLF14, CCDC102B, ZNF423, ITGA2B, KLF14, and FHL2 to construct two distinct age prediction models (stepwise regression and support vector regression). Notably, the support vector regression model was the most accurate, achieving a MAD of 5.56 in the test set (n = 80). Onofri et al. aimed to study previous models and validate them in an Italian population. They employed 84 blood samples in a SNaPshot assay targeting five CpG sites in the genes ELOVL2, FHL2, KLF14, MIR29B2C, and TRIM59, achieving a MAD of 3.01 years in the test set. Feng et al. investigated 153 age-associated CpG sites within 21 genomic regions in 390 Chinese blood samples (aged 15–75 years) using the EpiTYPER system. Their primary objective was to determine the optimal feature selection method. In two independent validation sets, they identified nine CpG sites located in genes ELOVL2, TRIM59, MIR29B2CHG, PDE4C, CCDC102B, RASSF5, and a region on chr10:22334463/65 as the optimal subset for age estimation (MAD = 2.49). Notably, the linear model performed better than machine learning models like support vector machine (SVM) and artificial neural network (ANN). Additionally, they showed that a z-score transformation could partially remove the batch effect between data generated from EpiTYPER and pyrosequencing techniques. In their study, Lau and Fung analyzed DNA methylation from 991 blood samples (aged 19–101 years) using Infinium ® HumanMethylation450 BeadChip (San Diego, CA, USA). 
They explored various variable selection methods including forward selection (FS), least absolute shrinkage and selection operator (LASSO), elastic net (EN), and smoothly clipped selection deviation (SCAD), to predict human age. With this information, they compared the performance of classical statistical models (multiple linear regression) with sophisticated machine learning algorithms (random forest regression, one hidden layer, two hidden layers, and SVM). Their analysis revealed that the optimal model was achieved from the forward selection method of 16 CpG markers alongside the multiple linear regression statistical model, resulting in a MAD of 3.76. Notably, they found that increasing the number of markers beyond this threshold did not improve the model’s accuracy. The VISAGE Consortium developed a prototype tool for age estimation employing a multiplex PCR/MPS assay. They analyzed 32 CpGs from five genes (ELOVL2, MIR29B2C, FHL2, TRIM59, and KLF14), previously identified by Zbieć-Piekarska et al. , including reproducibility and sensitivity analysis, achieving robust quantification of methylation levels (mean standard deviation of 1.4% across ratios). Moreover, The VISAGE enhanced tool for age prediction in somatic tissues, incorporating six CpG sites from genes previously studied by Heidegger et al. alongside PDE4C, yielded a MAE of 3.2 in the test set of 48 samples. In a subsequent study, the VISAGE consortium developed a prediction model from blood samples (n = 160) using six CpGs from six genes (ELOVL2, MIR29B2CHG, KLF14, FHL2, TRIM59, and PDE4C), achieving a MAE of 3.2 years. Multiple studies have explored age prediction using bloodstains, especially important in crime scene investigations . The most recent study, Yang et al. utilized pyrosequencing and random forest regression (RFR) to develop an age prediction model. 
Initially, they evaluated 46 CpG sites from six genes (ELOVL2, C1orf132, TRIM59, KLF14, FHL2, and NPTX2) using bloodstain samples from 128 males and 113 females (aged 10 to 79 years). Subsequently, they employed RFR to build two models, one for males (MAD = 2.8) and another for females (MAD = 2.93). A notable distinction is that they obtained reproducible results using only 0.1 ng of genomic DNA. Age estimation models in blood samples have been extensively studied in forensic research. Multiple studies have explored various techniques, markers, sample types, and statistical models, leading to varying levels of accuracy. 5.1.1. Postmortem Blood Samples In addition to age estimation from blood samples left at the crime scene, potentially belonging to the perpetrator, it is important to assess whether blood from a deceased individual could differ in methylation patterns affecting age estimates. The following provides a brief description of the work published to date on this topic. The first study related to age prediction using DNA methylation in deceased samples was by Bekaert et al. . They investigated DNA methylation patterns employing pyrosequencing technology targeting four CpG markers (ASPA, PDE4C, ELOVL2, and EDARADD). The model was built from blood samples of both living (n = 37) and deceased (n = 37) patients aged 0 to 91 years old, achieving a MAD of 3.75 years. Two important discoveries emerged from the study: prediction accuracy remained consistent across samples from both living and deceased individuals, and there were no discernible differences based on biological sex. These results are consistent with the findings of Hamano et al. , who analyzed blood samples from 22 living and 52 deceased individuals aged 0 to 95 years and developed a combined age prediction model. Using markers in the genes ELOVL2 and FHL2 through MS-HRM, they yielded a MAD of 7.71 years for the test set. 
They also included the information that the samples were analyzed within 10 days after death, suggesting that these data could be important for future comparisons when establishing a prediction model. Correia Dias et al. performed bisulfite Sanger sequencing on blood samples obtained from 51 deceased individuals (24 to 86 years), processed within 5 days postmortem. They evaluated the methylation levels of ELOVL2, FHL2, EDARADD, PDE4C, and C1orf132, reporting a MAD of 6.08 years for the training set and 8.84 years for the test sets. The same group developed prediction models using blood samples from 59 living and 62 deceased individuals (28 to 86 years) utilizing SNaPshot assays and building upon CpG sites analyzed in a previous study . For the final model applied to living individuals, they employed three CpG sites located at the ELOVL2, FHL2, and C1orf132 genes, resulting in a MAD of 4.25 years. In contrast, for the final model in deceased individuals, they integrated four CpG sites found in the ELOVL2, FHL2, C1orf132, and TRIM59 genes, yielding a MAD of 5.36 years. Similar to previous studies, they found no differences in prediction accuracy based on biological sex. Anaya et al. employed bisulfite pyrosequencing, similar to Bekaert’s previous work , to assess individual CpG sites on five genes (KLF14, ELOVL2, C1orf132, TRIM59, and FHL2) in 264 postmortem blood samples ranging from 3 months to 93 years of age, achieving a MAD of 7.42 for the testing data (n = 72). Furthermore, the researchers explored potential factors that could influence accuracy. These factors included sample storage time before analysis, which in this case ranged from 2.5 to 4 days. Additionally, a lower prediction potential of age estimation as an individual’s age increases, consistent with prior research . Naue et al. 
employed MPS to investigate 13 previously selected CpGs (DDO, ELOVL2, F5, GRM2, HOXC4, KLF14, LDB2, MEIS1-AS3, NKIRAS2, RPA2, SAMD10, TRIM59, and ZYG11A) in brain, bone, muscle, buccal swabs, and whole blood of 29 deceased individuals (0 to 87 years). Their analysis included a larger number of markers, which, although not ideal for a model applicable in forensic cases, could also offer an opportunity for further exploration of these methylation regions in subsequent research. The VISAGE Consortium also developed prediction models using samples from 24 deceased individuals, including blood, cartilage, and muscle. While the blood prediction model achieved a MAE of 3.1 years, further investigation is needed to improve the accuracy of cartilage and muscle samples, with respective MAEs of 13.1 and 17.1 . The studies conducted so far have shown significant variability in techniques, the selection and combination of DNA methylation markers, and even in the time interval between sample collection and analysis. These variations could have influenced the accuracies obtained. However, it was also discovered that the difference in age prediction accuracy observed in samples from living and deceased individuals, as well as that observed in biological sex, may not be significant. These findings are particularly relevant in a forensic context. 5.1.2. Y-Chromosome in Blood Samples The study of DNA methylation in the Y-chromosome (ChrY) presents significant interest for forensic investigations . Firstly, age prediction through DNA methylation could help in estimating the age of male individuals in mixed stains, such as those found in assault cases. Additionally, it could assist in distinguishing between male relatives of different ages within the same paternal lineage, a challenge not feasible with current ChrY analyses. However, this chromosome exhibits unique characteristics: it is exclusive to males, haploid in nature, and the smallest human chromosome. 
Consequently, a distinct approach is necessary in comparison to autosomal chromosomes. In recent years, the earliest studies related to Y-chromosome and age prediction have been primarily focused on the different methylation patterns and their association with mortality , studying ChrY blood-based DNA methylation data from 624 men in a chromosome-wide epigenetic association analysis. They identified up to 416 CpG sites that exhibited differential methylation across ages. The results showed an increasing tendency in DNA methylation with age, a finding that was supported in further studies . Additionally, later work from Lund et al. found a significant overlap between mortality-associated and age-associated CpGs. Although these studies contributed to a deeper understanding of the ChrY and methylation patterns, they were not conducted with the aim of developing an age prediction model. Vidaki et al. developed the first male-specific Y-CpG-based epigenetic age predictor using publicly available blood-based DNA methylation data of 1057 European males (aged 15–87) obtained previously by Illumina HumanMethylation450 BeadChip array (San Diego, CA, USA). Machine learning was applied to create two age prediction models: one utilizing 75 age-dependent Y-CpGs (MAE = 7.54), and the other only employing the most predictive 19 Y-CpGs (MAE = 8.46). Although MAEs are higher compared to studies on autosomal chromosomes, this research sets a path for conducting further research for the application of Y-chromosome age prediction models in forensics. In 2023, Jiang et al. investigated 13 age-related Y-CpGs using publicly available DNA methylation data from 817 blood samples of males aged 15 to 87, obtained through the Illumina HumanMethylation450 BeadChip array (San Diego, CA, USA). They developed two SNaPshot systems for a male-specific age prediction model, achieving MADs between 4 and 6 years. Despite the moderate accuracy, this research holds promise for future studies. 
Additionally, the study incorporated the analysis of bloodstains as well as mixed samples. Research on the Y-chromosome and its association with DNA methylation in age prediction remains limited. Further investigations of this entire chromosome, using non-array methodologies and focusing on microarrays of Y-CpG markers in non-blood tissues, are crucial for developing future prediction models relevant to forensic science applications . In addition to age estimation from blood samples left at the crime scene, potentially belonging to the perpetrator, it is important to assess whether blood from a deceased individual could differ in methylation patterns affecting age estimates. The following provides a brief description of the work published to date on this topic. The first study related to age prediction using DNA methylation in deceased samples was by Bekaert et al. . They investigated DNA methylation patterns employing pyrosequencing technology targeting four CpG markers (ASPA, PDE4C, ELOVL2, and EDARADD). The model was built from blood samples of both living (n = 37) and deceased (n = 37) patients aged 0 to 91 years old, achieving a MAD of 3.75 years. Two important discoveries emerged from the study: prediction accuracy remained consistent across samples from both living and deceased individuals, and there were no discernible differences based on biological sex. These results are consistent with the findings of Hamano et al. , who analyzed blood samples from 22 living and 52 deceased individuals aged 0 to 95 years and developed a combined age prediction model. Using markers in the genes ELOVL2 and FHL2 through MS-HRM, they yielded a MAD of 7.71 years for the test set. They also included the information that the samples were analyzed within 10 days after death, suggesting that these data could be important for future comparisons when establishing a prediction model. Correia Dias et al. 
performed bisulfite Sanger sequencing on blood samples obtained from 51 deceased individuals (24 to 86 years), processed within 5 days postmortem. They evaluated the methylation levels of ELOVL2, FHL2, EDARADD, PDE4C, and C1orf132, reporting a MAD of 6.08 years for the training set and 8.84 years for the test sets. The same group developed prediction models using blood samples from 59 living and 62 deceased individuals (28 to 86 years) utilizing SNaPshot assays and building upon CpG sites analyzed in a previous study . For the final model applied to living individuals, they employed three CpG sites located at the ELOVL2, FHL2, and C1orf132 genes, resulting in a MAD of 4.25 years. In contrast, for the final model in deceased individuals, they integrated four CpG sites found in the ELOVL2, FHL2, C1orf132, and TRIM59 genes, yielding a MAD of 5.36 years. Similar to previous studies, they found no differences in prediction accuracy based on biological sex. Anaya et al. employed bisulfite pyrosequencing, similar to Bekaert’s previous work , to assess individual CpG sites on five genes (KLF14, ELOVL2, C1orf132, TRIM59, and FHL2) in 264 postmortem blood samples ranging from 3 months to 93 years of age, achieving a MAD of 7.42 for the testing data (n = 72). Furthermore, the researchers explored potential factors that could influence accuracy. These factors included sample storage time before analysis, which in this case ranged from 2.5 to 4 days. Additionally, a lower prediction potential of age estimation as an individual’s age increases, consistent with prior research . Naue et al. employed MPS to investigate 13 previously selected CpGs (DDO, ELOVL2, F5, GRM2, HOXC4, KLF14, LDB2, MEIS1-AS3, NKIRAS2, RPA2, SAMD10, TRIM59, and ZYG11A) in brain, bone, muscle, buccal swabs, and whole blood of 29 deceased individuals (0 to 87 years). 
Their analysis included a larger number of markers, which, although not ideal for a model applicable in forensic cases, could also offer an opportunity for further exploration of these methylation regions in subsequent research. The VISAGE Consortium also developed prediction models using samples from 24 deceased individuals, including blood, cartilage, and muscle. While the blood prediction model achieved a MAE of 3.1 years, further investigation is needed to improve the accuracy of cartilage and muscle samples, with respective MAEs of 13.1 and 17.1 . The studies conducted so far have shown significant variability in techniques, the selection and combination of DNA methylation markers, and even in the time interval between sample collection and analysis. These variations could have influenced the accuracies obtained. However, it was also discovered that the difference in age prediction accuracy observed in samples from living and deceased individuals, as well as that observed in biological sex, may not be significant. These findings are particularly relevant in a forensic context. The study of DNA methylation in the Y-chromosome (ChrY) presents significant interest for forensic investigations . Firstly, age prediction through DNA methylation could help in estimating the age of male individuals in mixed stains, such as those found in assault cases. Additionally, it could assist in distinguishing between male relatives of different ages within the same paternal lineage, a challenge not feasible with current ChrY analyses. However, this chromosome exhibits unique characteristics: it is exclusive to males, haploid in nature, and the smallest human chromosome. Consequently, a distinct approach is necessary in comparison to autosomal chromosomes. 
In recent years, the earliest studies related to Y-chromosome and age prediction have been primarily focused on the different methylation patterns and their association with mortality , studying ChrY blood-based DNA methylation data from 624 men in a chromosome-wide epigenetic association analysis. They identified up to 416 CpG sites that exhibited differential methylation across ages. The results showed an increasing tendency in DNA methylation with age, a finding that was supported in further studies . Additionally, later work from Lund et al. found a significant overlap between mortality-associated and age-associated CpGs. Although these studies contributed to a deeper understanding of the ChrY and methylation patterns, they were not conducted with the aim of developing an age prediction model. Vidaki et al. developed the first male-specific Y-CpG-based epigenetic age predictor using publicly available blood-based DNA methylation data of 1057 European males (aged 15–87) obtained previously by Illumina HumanMethylation450 BeadChip array (San Diego, CA, USA). Machine learning was applied to create two age prediction models: one utilizing 75 age-dependent Y-CpGs (MAE = 7.54), and the other only employing the most predictive 19 Y-CpGs (MAE = 8.46). Although MAEs are higher compared to studies on autosomal chromosomes, this research sets a path for conducting further research for the application of Y-chromosome age prediction models in forensics. In 2023, Jiang et al. investigated 13 age-related Y-CpGs using publicly available DNA methylation data from 817 blood samples of males aged 15 to 87, obtained through the Illumina HumanMethylation450 BeadChip array (San Diego, CA, USA). They developed two SNaPshot systems for a male-specific age prediction model, achieving MADs between 4 and 6 years. Despite the moderate accuracy, this research holds promise for future studies. Additionally, the study incorporated the analysis of bloodstains as well as mixed samples. 
Research on the Y-chromosome and its association with DNA methylation in age prediction remains limited. Further investigations of this entire chromosome, using non-array methodologies and examining Y-CpG markers in non-blood tissues, are crucial for developing future prediction models relevant to forensic science applications . Semen traces constitute a primary biological material for perpetrator identification in forensics, especially in cases of sexual assault . Furthermore, because of the unique age-related DNA methylation pattern observed in sperm cells compared to somatic cells , epigenetic clocks, such as Horvath’s , have not been able to accurately estimate age within this specific context. In recent years, there has been a growing body of research dedicated to age estimation models in sperm cells. The studies so far have focused on identifying the most suitable age-correlated candidates using a combination of individual CpGs or broader regions known as DMSs (differentially methylated sites), which could introduce complexity to the marker selection process . FOLH1B, located on chromosome 11, is a highly studied gene for semen age prediction, encoding folate hydrolase 1b, also known as prostate-specific membrane antigen-like protein. It has been researched for both age estimation in forensic contexts and its potential role in the development of prostate cancer . Lee et al. developed the first age prediction model using sperm samples for application in forensic science. Initially, an assessment was conducted on approximately 485,000 CpG loci from 12 sperm donors (aged 20 to 59 years) using the Infinium HumanMethylation450 BeadChip array (Illumina, San Diego, CA, USA), identifying 24 potential epigenetic age predictors. Subsequently, the final age prediction model using SNaPshot was developed based on the most strongly correlated methylation regions (TTC7B, FOLH1B/NOX4, and cg12837463), achieving a MAD of 5.4 years for the validation test (n = 37).
In a follow-up validation model, Lee et al. analyzed both sample donors (n = 12) and forensic casework samples (n = 19), resulting in MADs of 4.8 and 5.2 years, respectively. A primary highlight of the study was the inclusion of forensic casework samples, achieving reproducible results with less than 5 ng of bisulfite-converted DNA, a factor of particular significance for potential implementation in forensic contexts. Individuals in their 20s and 50s showed distinct MADs of 2.9 and 7.2 years, respectively, indicating that, in line with findings from other tissue models, prediction accuracy decreases as age progresses. These two studies highlighted the importance of increasing the number of samples to improve reliability. Later, Li et al. developed a predictive age model utilizing two CpG sites in the genes TTC7B and NOX4/FOLH1B, previously studied by an earlier research group , using samples from Chinese males aged 21 to 54 years in a bisulfite pyrosequencing assay. One notable aspect of the study was the utilization of different types of semen samples (liquid semen, fresh seminal stains, aged seminal stains, and mixed stains of semen and vaginal secretion), which resulted in MADs ranging between 3.8 and 4.3 years. Recently, the VISAGE Consortium designed a three-stage study to explore potential age predictors suitable for sperm cells. The study included identifying and validating novel age-correlated CpGs, as well as developing a prediction model based on the top candidates. First, they used 40 semen samples (24 to 58 years) in Infinium Methylation EPIC ® BeadChip arrays (Illumina, San Diego, CA, USA) to target approximately 850,000 CpGs and identified distinctive age-correlated DMSs suitable for age prediction. Building upon this, the ten most promising candidate CpGs, along with the three markers previously reported , were validated in an independent set of semen-derived DNA samples (n = 125) using targeted NGS assays.
Finally, a prediction model consisting of four novel (SH2B2, EXOC3, IFITM2, and GALR2) and one previously identified (FOLH1B) DNAm markers was developed and further validated, achieving a MAE of 5.1 years in the testing set (n = 54). Other investigations concerning age prediction from semen have predominantly adopted a clinical perspective. For instance, Jenkins et al. analyzed data from previous studies to develop a statistical DNA methylation model employing the Infinium HumanMethylation450 BeadChip (Illumina, San Diego, CA, USA) (n = 329). The model was based on 51 genomic regions and reported a MAE of 2.4 years. The study used samples with diverse characteristics and found that age prediction could be achieved regardless of fertility status. Additionally, smokers showed a tendency towards elevated age profiles. In a recent study conducted by Pilsner et al. , two distinct age prediction models were constructed employing Infinium Methylation BeadChip arrays and machine learning analysis: one utilizing individual CpGs (120 CpGs) and the other incorporating entire DMRs (318 CpGs). The models exhibited less than 1% overlap in CpGs between them, suggesting a substantial pool of potential candidates for further investigation. These studies, although clinically oriented, could aid in understanding the relationship between DNA methylation and aging, ultimately paving the way for potential future research projects in the forensic field. Additional factors must also be considered regarding age prediction in sperm. Limitations include the decline in both sperm quality and quantity among the general population in recent decades, as well as variability in sperm count per individual . Low sperm count can lead to an increase in non-spermatozoa cells in the sample, underscoring the need for further research into the differences between methylation profiles from purified sperm cells and whole semen samples .
Furthermore, there is the added complexity of researchers using diverse DNA markers and terminologies (DMR vs. CpG) for age prediction in semen samples. Finally, it is necessary to evaluate and validate the applicability of models in mixed samples, which may be encountered in sexual assault cases. Saliva and buccal swab samples have been extensively investigated for DNA methylation-based age prediction methods. These samples are used in forensic contexts due to their simple and noninvasive collection from individuals. Moreover, they are frequently encountered at crime scenes, such as on cigarette butts and bottles . However, a significant challenge comes from the cellular heterogeneity of these samples, where the proportions of leukocytes and epithelial cells vary depending on the sample type and are further influenced by individual characteristics . Bocklandt et al. developed the first saliva model using only three CpGs, achieving an average accuracy of 5.2 years, based on data obtained from Illumina HumanMethylation27 microarrays (San Diego, CA, USA) . Following this, several studies have focused on differentiating tissue and cell types and on developing accurate combined models independent of the sample type . Hamano et al. developed an age prediction model utilizing 263 saliva samples (1 to 73 years) employing MS-HRM and focusing solely on two markers in the genes ELOVL2 and EDARADD. They achieved a MAD of 6.25 years for the test set (n = 50). Additionally, they applied the same model to seven samples of cigarette butts, obtaining a MAD of 7.65 years. The difference in MAD was attributed to the limited number of samples. Years later, Oka et al. used MS-HRM on 113 saliva samples (aged 20 to 50 years) to investigate the impact of ancestry on age prediction through methylation in the EDARADD and FHL2 genes, based on previous studies .
The differences they found in the methylation levels of Japanese and Indonesian participants led them to conclude that the population of origin must be considered when applying existing DNA methylation age prediction methods. Eipel et al. were the first to address the differential cell composition of saliva samples and its impact on age estimation. In their study, 55 buccal swabs were utilized to create an age prediction model based on three CpG sites (specifically, in the genes PDE4C, ASPA, and ITGA2B) using pyrosequencing. The model achieved a MAD of 7.03 years for the validation test. Subsequently, two additional cell type-specific CpG markers (genes CD6 and SERPINB5) were incorporated to distinguish between leukocytes and epithelial cells, leading to MADs of 5.09 and 5.12 years for two independent validation tests. This model, known as the “Buccal-Cell-Signature”, exhibited greater accuracy compared to the model without cell type-specific CpG markers. Hong et al. developed an age prediction model based on 226 saliva samples using the SNaPshot method. Initially, they employed Illumina BeadChip array (San Diego, CA, USA) data from 54 individuals to identify the most age-associated CpG markers. Then, they utilized 226 saliva samples (aged 18 to 65 years) to construct an age prediction model based on six CpG markers (genes SST, CNGA3, KLF14, TSSK6, TBR1, and SLC12A5), along with one cell type-specific CpG marker (PTPN7 gene), in a SNaPshot assay. The testing set model (n = 113) achieved a MAE of 3.13 years. The incorporation of PTPN7 was based on its ability to distinguish between leukocytes and buccal epithelial cells, motivated by previous research . A subsequent study investigated these same markers using both MPS technology and SNaPshot on saliva samples from 95 individuals. As the predicted ages obtained from these two methods varied greatly, the authors constructed platform-independent age prediction models, achieving a MAD of 3.19 years.
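Why a cell-composition marker helps can be illustrated with a toy simulation: a single age-correlated CpG whose measured beta value also shifts with the leukocyte fraction of the sample. All numbers below are invented for illustration; a simple least-squares fit stands in for the published models.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
age = rng.uniform(18, 65, n)
leuko_frac = rng.uniform(0.1, 0.9, n)  # leukocyte fraction varies per sample

# One age-correlated CpG whose measured beta also shifts with cell composition.
beta = 0.30 + 0.004 * age + 0.15 * leuko_frac + rng.normal(0, 0.01, n)

def fit_mae(features, y):
    """Least-squares fit with intercept; returns in-sample mean absolute error."""
    X = np.column_stack([np.ones(len(y)), features])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.mean(np.abs(X @ coef - y))

mae_plain = fit_mae(beta[:, None], age)                          # CpG only
mae_covar = fit_mae(np.column_stack([beta, leuko_frac]), age)    # CpG + cell fraction
print(f"without covariate: {mae_plain:.1f} y, with covariate: {mae_covar:.1f} y")
```

In this toy setup, ignoring the cell fraction leaves composition-driven variation in the beta value to be misread as an age signal, inflating the error; adding the fraction as a covariate removes it, mirroring the improvement these studies report for cell type-specific markers.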
In a similar vein, 368 samples from 184 individuals (n = 184 saliva and n = 184 buccal cells) were analyzed using publicly available data from the Illumina HumanMethylation450 BeadChip array to select two tissue-specific markers (HUNK and RUNX1), along with seven age-correlated CpG sites (cg10501210, LHFPL4, ELOVL2, PDE4C, HOXC4, OTUD7A, and EDARADD). Subsequently, tissue-specific and combined age prediction models were developed using SNaPshot. The combined model, employing multivariate quantile regression, achieved a MAE of 3.66 years on the testing set (n = 91 saliva and n = 93 buccal cells). In this case, no improvement was detected in age predictions when adding tissue-specific markers. Therefore, according to these results and those of previous studies , using markers of cellular composition as a covariate was more effective than using tissue-specific markers . Jung et al. explored multiple tissues by analyzing five CpGs within the genes ELOVL2, FHL2, KLF14, C1orf132/MIR29B2C, and TRIM59. They developed independent and combined prediction models for saliva, buccal swabs, and blood samples using the SNaPshot assay. Specifically, in saliva samples (n = 150) and buccal swabs (n = 148), MADs of 3.55 and 4.29 years were obtained, respectively. In a separate study, Woźniak et al. conducted MPS assays on various types of samples, including buccal cells. In this instance, they utilized five CpGs within the genes PDE4C, MIR29B2CHG, ELOVL2, KLF14, and EDARADD, achieving a MAE of 3.7 years in the testing set (n = 48). Schwender et al. developed age prediction models comparing pyrosequencing and SNaPshot. Initially, an analysis of 88 CpG sites in the genes PDE4C, ELOVL2, ITGA2B, ASPA, EDARADD, SST, KLF14, and SLC12A5 was conducted on buccal swab samples (n = 141) to identify the markers that best correlated with age.
Based on this, two prediction models were developed considering three markers in the genes SST, KLF14, and SLC12A5, one using pyrosequencing (MAD = 5.33) and the other using SNaPshot (MAD = 6.44). The aim was to compare the results of both methods, taking into consideration that SNaPshot could be more easily integrated into the routine workflow of a forensic laboratory. One of the latest studies reports the first age prediction model for human saliva samples developed with ddPCR technology. Initially, an analysis was conducted on methylation ratios at four CpG sites located within the genes SST, KLF14, TSSK6, and SLC12A5. Then, saliva samples from 76 individuals were employed to construct the prediction model, which yielded a MAD of 3.3 years . Koop et al. investigated postmortem samples using buccal swabs from both living individuals (n = 142) and deceased individuals (n = 73), spanning ages from 0 to 90 years. Their goal was to develop an age prediction model based on a single gene, PDE4C, using a pyrosequencing assay. Methylation levels of PDE4C were assessed in the samples from deceased individuals at different stages of decomposition, and age estimation remained possible in all cases except those with advanced putrefaction. The main finding of the study was that DNA methylation remained stable across several stages of decomposition and that buccal swabs were suitable samples for assessing age-related methylation patterns in postmortem contexts. In the context of forensic science, saliva and buccal swabs are among the most studied samples, alongside blood. The predominant focus in recent years has been on the type of sample to be analyzed and the technique used, resulting in varying approaches and impacting the accuracy of the different models. The potential types of biological samples found at a crime scene and the accuracies of their age estimates based on DNA methylation are summarized in and .
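Since MAE and MAD are the accuracy summaries quoted throughout these studies, a quick illustration with hypothetical predicted versus chronological ages may help. Note that the literature is not fully consistent: some papers use MAD for the mean absolute deviation (numerically identical to MAE), others for the median.

```python
import numpy as np

# Hypothetical chronological vs. predicted ages for ten donors.
chronological = np.array([21, 25, 30, 34, 39, 42, 47, 50, 53, 58])
predicted     = np.array([24, 23, 35, 30, 41, 47, 44, 57, 50, 65])

errors = np.abs(predicted - chronological)
mae       = errors.mean()        # mean absolute error
median_ad = np.median(errors)    # median absolute deviation (reported as MAD in some papers)

print(f"MAE = {mae:.1f} years, median AD = {median_ad:.1f} years")
# -> MAE = 4.1 years, median AD = 3.5 years
```

Because the median discounts the few donors with large errors (here the oldest ones), the two summaries can differ noticeably, which is worth keeping in mind when comparing accuracies across studies.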
In forensic scenarios, distinguishing between various types of samples can be challenging. Therefore, the development of multi-tissue age prediction models would be highly beneficial. While tissue-specific models currently provide the most accurate age estimations , efforts are also directed toward creating universal markers suitable for multi-tissue samples. In this section, we will explore some of the multi-tissue models developed thus far. Alsaleh et al. identified 10 age-related DNA methylation markers and developed different age prediction models using samples from five tissue types: whole blood, saliva, semen, menstrual blood, and vaginal secretions. Their multi-tissue model, based on 41 samples, achieved an average prediction accuracy of 3.8 years in the training set. For the testing set (n = 24), three independent prediction models resulted in a MAD of 6.9 years for menstrual blood and vaginal fluid, 5.6 years for buccal swabs, and 7.8 years for blood. The overall multi-tissue accuracy rate, based on bootstrap analysis, was 7.8 years. In another study, previously reported CpG markers were examined in 29 samples from deceased individuals aged 0 to 87 years to explore the potential for developing a multi-tissue age predictor . Utilizing massive parallel sequencing, the study revealed that previously studied markers within 13 CpG regions of genes such as DDO, ELOVL2, KLF14, NKIRAS, RPA2, TRIM59, and ZYG11 could predict age across various tissues, including the brain, bones, muscles, buccal epithelial cells, and blood. Alghanim et al. examined 27 CpG sites within the SCGN, DLX5, and KLF14 genes across blood (n = 71) and saliva samples (n = 91) using pyrosequencing. Methylation levels at CpG sites within the SCGN and KLF14 loci were found to be correlated with chronological age in both tissues. Various predictive models were tested, ultimately resulting in age prediction with MADs ranging between 7.1 and 10.3 years in independent testing datasets.
In the study by Jung et al. mentioned in previous sections, samples of saliva, blood, and buccal swabs showed strong correlations with age across three CpG sites within the genes ELOVL2, KLF14, and TRIM59. Three tissue-specific models for age prediction and a combined model that included data from all three sample types were developed using SNaPshot, achieving a MAD of 3.8 years. Also described previously, in the research from Correia Dias et al. , two multi-tissue age prediction models were developed: one based on Sanger sequencing and another on a SNaPshot assay. From the studies mentioned in this section, it can be observed that multi-tissue age prediction models currently demonstrate lower accuracies compared to those based on individual tissues. Therefore, it could be inferred that further analyses with different markers are necessary to reduce this disparity. In recent years, extensive research has delved into the impact of various exogenous and endogenous factors on DNA methylation patterns. These factors may contribute to differences between epigenetic age and chronological age, which is crucial in forensic casework, as it directly affects the accuracy of age estimation . Spólnicka et al. have shown differences in DNA methylation markers associated with age prediction in Alzheimer’s disease, Graves’ disease, and cancer, particularly chronic lymphocytic leukemia (CLL). Furthermore, infections like HIV , Helicobacter pylori , and cytomegalovirus have been associated with increased age predictions. Recent studies have suggested that both the COVID-19 virus and its medical management can impact DNA methylation levels at specific CpG loci, resulting in significant changes in epigenetic age clocks . Additionally, research in forensic epigenomic profiling is now also focused on predicting lifestyle habits like smoking, alcohol intake, diet, and sports . For example, Spólnicka et al.
observed accelerated DNA hypermethylation in elite athletes, while healthy nutrition was associated with decreased epigenetic age estimates. Conversely, insomnia and working night shifts were linked to increased age estimates (reviewed in ). The VISAGE Consortium recently conducted a study that explored the influence of alcohol intake on DNA methylation-based age prediction . The study analyzed eight DNAm age predictors (ELOVL2, MIR29B2CHG, TRIM59, KLF14, FHL2, EDARADD, PDE4C, and ASPA) in individuals with alcohol dependency, using the VISAGE Enhanced Tool for age prediction previously described for somatic tissues . Among these markers, MIR29B2CHG was the only one that showed an impact on age prediction, albeit a small one. Moreover, the study highlighted the need for further exploration of MIR29B2CHG, its function, and its connection with alcohol intake . Vidaki et al. further employed MPS technology to develop a new assay for exploring DNA methylation markers associated with smoking habits. Interestingly, out of the thirteen smoking-associated CpGs previously investigated by Maas et al. , eight CpGs were strongly correlated with age, with one also showing an association with biological sex. Adding further complexity, studies have demonstrated that DNA methylation markers can vary depending on ancestry. For example, differences in DNA methylation patterns were observed in specific genes (ELOVL2, FHL2, MIR29B2CHG, TRIM59, and KLF14) in blood samples from Polish , Korean , Italian , and Portuguese populations . Inter-population differences in DNA methylation markers related to age prediction models have also been described in saliva samples and semen samples . This underscores the importance of ancestry analysis across different types of samples in age prediction through DNA methylation. Epigenetics, with a particular focus on DNA methylation, is currently a field of extensive study.
It serves both to aid age prediction within forensic contexts and to contribute to clinical research on aging, lifestyle, and disease. A central question in this field is whether the epigenome actively contributes to the aging process or whether aging itself influences epigenetic patterns, with DNA methylation thereby serving as an age marker. Advancing the understanding of this field is crucial for addressing this question and enhancing the development of epigenetic clocks and age prediction models. Ensuring robust and reproducible results is imperative for incorporating different methods into criminal investigations. The diversity in techniques poses a potential barrier when comparing outcomes. Results obtained over the years suggest that both pyrosequencing and NGS would be the preferred technologies for further research. These methods have contributed to advancements in the analysis of DNA methylation and its correlation with age estimation, thereby improving accuracy and broadening knowledge in the field. While the equipment costs may be high, it is essential to acknowledge their favorable quality-to-price ratio. This consideration is important, given the significance of using methods that are accessible and affordable for forensic laboratories. Moreover, future efforts could be directed towards enhancing the bisulfite conversion technique or exploring alternatives such as non-chemical conversions. In the realm of epigenetic clocks, chronological clocks are the most suitable for forensic purposes. Additionally, tissue-specific clocks (especially for semen, bones, and teeth, where information is scarcer) could play a crucial role in future enhancements. Ultimately, improving epigenetic clocks may offer insights into identifying the most efficient set of DNA markers for precise age prediction. In anthropology, studies on bones and teeth are relatively limited compared to other tissues, and the results often exhibit highly variable accuracies.
It is important to consider that the primary goal in this case is for DNA methylation to enhance the accuracy of classical anthropological techniques. Hence, the choice of a suitable technique and tissue is paramount. Thus far, dental pulp appears to be the tissue with the highest accuracy, and further studies employing the most advanced technologies could prove highly beneficial. Blood samples have been the most extensively studied, both in epigenetic clock research and in age prediction models in criminalistics. Various techniques and statistical models have been employed, as well as different approaches such as age estimation from bloodstains and from samples of deceased individuals. Currently, there seems to be no difference in age prediction accuracy between living and deceased individuals. Furthermore, recent advancements have been made in the study of the Y-chromosome from blood samples, although studies are still limited, resulting in highly variable outcomes compared to autosomal studies, and further research is needed in non-blood tissues. Saliva and buccal swab samples have also been widely studied. Their analysis using different techniques and sample types has contributed to improving accuracy over the years. Models have been examined in saliva, buccal swabs, and even cigarette butts. Additionally, a deeper understanding of cellular heterogeneity has been achieved, which is a crucial consideration when working with any type of sample. Another example is semen, where the proportion of spermatic and non-spermatic cells may also impact age prediction. However, this tissue has been much less studied, and the consensus on which markers to use is not as clear-cut as in the case of saliva or blood samples. Additionally, new studies have been conducted on other types of tissues such as fingernails , hair , and menstrual blood .
In general, the authors of the aforementioned studies emphasize the importance of sample type and sample size in research, along with the selection of the best DNA methylation markers. The key objective is to use the fewest but most informative markers possible, making the assays more suitable for forensic use. Studies have also proposed multi-tissue models for age prediction, which, overall, have not yet demonstrated the accuracy observed in tissue-specific models, although they are considered ideal in forensic contexts. Testing age prediction models against external and internal factors is crucial, as authors have identified influences on age prediction attributable to lifestyle and several medical conditions. Additionally, ancestry seems to impact DNA methylation patterns. Therefore, the authors stress the significance of conducting research across different regions globally or integrating samples with diverse origins into the same study. Collaborative studies between scientific groups have already been conducted to validate DNA methylation age prediction models, which is important for their future successful integration into standard forensic workflows . Interdisciplinary research is also expected to expand. For example, aging has also been studied in relation to different ‘omics’, including transcriptomics , proteomics , and post-translational modifications such as glycans . The interplay between these fields, alongside methylomics, holds promise for advancing our understanding of aging. Future research endeavors should prioritize validating the proposed methodologies and enhancing their accuracy. The translation of these techniques into the practice of forensic science and forensic anthropology necessitates fulfilling the requirements set by the Organization of Scientific Area Committees (OSAC) for their future application.
Despite the challenges discussed, the remarkable advancements and evolution in epigenetics in recent years foster expectations that, with continued knowledge and discoveries, the ultimate application of age prediction in forensic sciences will be achieved in the near future.
Proteomic insights into molecular alterations associated with Kawasaki disease in children | be4f4034-f73a-4944-b786-2bb81af42657 | 11846444 | Biochemistry[mh] | Kawasaki disease (KD) is a common systemic vasculitis primarily affecting children under five years of age . It is a leading cause of acquired heart disease in this population. If not diagnosed and treated promptly, KD can lead to serious complications such as coronary artery lesions (CAL), aneurysms, and long-term cardiovascular problems . The pathophysiology of KD involves a multifaceted and poorly understood immune response, characterized by systemic vascular inflammation, endothelial dysfunction, and immune cell infiltration . Platelet activation and thrombocytosis, which are hallmark features of KD, exacerbate vascular injury and increase the risk of coronary artery damage . Despite extensive research, the exact molecular mechanisms that drive KD are still unclear, complicating both its diagnosis and treatment. In recent years, there has been growing interest in identifying biomarkers for KD that can facilitate early diagnosis and improve prognosis . Biomarkers can help distinguish KD from other febrile illnesses and provide valuable insights into disease activity and response to treatment . This is especially critical for distinguishing incomplete KD, where clinical manifestations are less apparent, leading to delays in diagnosis . Proteomics has emerged as a powerful tool for discovering disease-specific biomarkers, offering a more comprehensive approach than transcriptomics. While transcriptomics provides data at the gene expression level, proteomics captures the functional changes in proteins, including post-translational modifications, which are crucial for understanding cellular processes and disease mechanisms . These functional changes at the protein level provide a more direct correlation to disease activity, aiding clinicians in interpreting complex disease patterns. 
This approach is particularly relevant for KD, where the interplay of immune, vascular, and metabolic pathways is complex and cannot be fully elucidated through mRNA-level analyses . Proteomic approaches have the potential to offer deeper insights into the pathophysiological mechanisms of KD. For example, pathways such as AMPK and PI3K-Akt, which are involved in vascular inflammation, endothelial repair, and cellular survival, have been implicated in KD . Furthermore, complement and coagulation cascades have been highlighted as key players in both the acute and resolution phases of KD, indicating their dual roles in immune defense and vascular repair . These findings underscore the need for targeted research to validate these pathways’ roles in KD and their potential for clinical application. These pathways not only provide a window into KD pathophysiology but also serve as potential therapeutic targets. Despite its potential, proteomics research in KD has yet to reach its full translational potential. This study bridges this gap by utilizing proteomic technologies to analyze serum samples from children with KD, identify differentially expressed proteins, and uncover the molecular mechanisms underlying the disease. Through functional and pathway enrichment analyses, we seek to identify novel biomarkers and therapeutic targets that could aid in the early diagnosis and management of KD.

Subjects and study design

Children with KD hospitalized at our tertiary children’s hospital in Fujian Province, China ( n = 20) between January 2018 and December 2020 were included in the CQB group. Age-matched febrile children ( n = 20) admitted during the same period due to bacterial infections formed the infection control group (C group), while children who had recovered from KD ( n = 8) were classified as the CQBC group. KD diagnoses were made according to the 2017 Guidelines of the American Heart Association (AHA).
Exclusion criteria included cases beyond the acute phase of KD, a disease duration exceeding 10 days, congenital heart defects, prior treatment, or incomplete medical records. Medical records of included participants were reviewed, encompassing age, gender, clinical manifestations, blood routine indices, and serum biochemical indices. This study was approved by the Ethics Committee of Fujian Maternity and Child Health Hospital (No. 199 [2018]), and informed consent was obtained from all participants’ families. All procedures adhered to institutional and national ethical standards and the Helsinki Declaration (1964) and its later amendments.

Proteomics analysis

Sample preparation and fractionation to generate DDA library

Fasting venous blood samples from children with KD were collected for proteomic analysis within 48 h before and after intravenous immune globulin (IVIG) treatment. Similarly, fasting venous blood samples from the infection control group were collected within 48 h of admission for proteomic analysis. After centrifugation at 12,000×g for 10 min at 4 °C, the clear supernatants were carefully separated and stored at -80 °C. Agilent technology was employed to remove the most prevalent proteins from plasma samples, leveraging human 14/mouse 3 multiple affinity reagents . Following this, proteins of high and low abundance were isolated and collected individually. The desalination and concentration of these fractions were achieved using ultrafiltration membranes with a 5 kDa molecular weight cutoff. Subsequently, the samples underwent treatment with an SDT buffer solution, consisting of 4% SDS, 100 mM DTT, and 150 mM Tris-HCl at pH 8.0, and were then heated to boiling point for 15 min. After centrifugation at 14,000 g for 20 min, the protein concentration in the resulting supernatant was measured using the BCA Protein Assay Kit. The samples were stored at -80 °C for long-term preservation.
Filter-aided sample preparation (FASP) digestion procedure

A protein sample of 200 µg was processed through an ultrafiltration device (using a Microcon unit with a 10 kD cutoff) . It was washed with an ultrafiltration buffer (UA buffer, containing 8 M urea and 150 mM Tris-HCl at pH 8.0) to remove detergent, DTT, and other substances. Subsequently, 100 µl of iodoacetamide was added to alkylate the cysteine residues, followed by a 30-minute incubation in the dark. The protein sample was washed three times with 100 µl of UA buffer and rinsed twice with 100 µl of 25 mM NH4HCO3 buffer. Trypsin (4 µg) in 40 µl of 25 mM NH4HCO3 buffer was added for digestion. The digested peptides were desalted, vacuum-centrifuged, and collected in 40 µl of a 0.1% (v/v) formic acid solution. The peptide concentration was determined using UV spectroscopy at 280 nm. Peptides were fractionated into 10 distinct groups using a high pH reversed-phase peptide fractionation kit from Thermo Scientific™ Pierce™, concentrated using a vacuum centrifuge, and recombined in 15 µl of 0.1% (v/v) formic acid solution. Desalting was performed using Empore™ SPE C18 columns (7 mm inner diameter, 3 ml volume), followed by reconstitution in 40 µl of 0.1% (v/v) formic acid solution . iRT standards from Biognosys were incorporated to calibrate retention time, with a set ratio of 1:3 between iRT and sample peptides.

DDA mass spectrometry

The DDA library-derived fractions were analyzed using a Thermo Fisher Scientific Q Exactive HF-X mass spectrometer interfaced with an Easy nLC 1200 chromatography system . A 1.5 µg peptide sample was applied to an EASY-Spray™ C18 trap column (Thermo Scientific, P/N 164946, 3 μm, 75 μm × 2 cm) before separation on an EASY-Spray™ C18 LC analysis column (Thermo Scientific, ES802, 2 μm, 75 μm × 25 cm). The peptides were eluted at a flow rate of 250 nl/min over a 120-minute gradient using buffer B (84% acetonitrile, 0.1% formic acid).
The mass spectrometer scanned the range 300–1800 m/z at a resolution of 60,000 at 200 m/z, with an AGC target of 3e6, a maximum ion injection time of 25 ms, a dynamic exclusion duration of 30.0 s, and a normalized collision energy of 30. Each MS-SIM scan was followed by 20 ddMS2 scans.

Mass spectrometry analysis in data-independent acquisition (DIA) mode

Peptides from each sample were analyzed by liquid chromatography-tandem mass spectrometry (LC-MS/MS) in DIA mode . Each DIA cycle comprised one full MS-SIM scan followed by 30 DIA scans spanning the m/z range 350–1800. The parameters were configured as follows: SIM full-scan resolution of 120,000 at 200 m/z; automatic gain control (AGC) target of 3e6; and maximum ion injection time (IT) of 50 ms. For the DIA scans in profile mode, the resolution was 15,000, the AGC target was 3e6, the maximum IT was set to automatic, and the normalized collision energy was 30. A linear gradient of buffer B (84% acetonitrile, 0.1% formic acid) was applied at 250 nl/min over 120 min. Quality control (QC) samples were injected in DIA mode at the start of the mass spectrometry analysis and after every sixth injection to verify consistent MS performance.

Mass spectrometry data analysis

Spectronaut™ software (version 14.4.200727.47784) was used to search the FASTA sequence database. The parameters were configured as follows: enzyme specificity was set to trypsin with a maximum of two missed cleavages; the fixed modification was carbamidomethylation at cysteine (C); and the dynamic modifications were oxidation at methionine (M) and acetylation at the protein N-terminus.
Protein identifications were accepted at a 99% confidence level, and the false discovery rate (FDR), estimated as FDR = 2 × N(decoy) / (N(decoy) + N(target)), was kept at ≤ 1%. The spectral library was constructed by importing the raw data files and the DDA search results into Spectronaut Pulsar X™ (12.0.20491.4, Biognosys). The key software parameters were: dynamic iRT for retention time prediction, with MS2-level interference correction and cross-run normalization enabled. All results were filtered to an FDR of ≤ 1%.

Bioinformatics analysis

GO analysis of the differentially expressed proteins covered biological processes (BP), cellular components (CC), and molecular functions (MF) . Protein-protein interactions (PPIs) were mapped using the STRING database (version 10.0) . Subcellular localization was predicted with WoLF PSORT (version 0.2) . InterProScan was used for domain prediction, and domain enrichment analysis was performed with Fisher's exact test . KEGG pathway enrichment analysis was conducted to cluster molecular interactions, reactions, and networks . Data visualizations were created with the R package ggplot2 .

Statistical analysis

Statistical analysis was performed using IBM SPSS, version 23.0 (Chicago, USA). Descriptive analyses were conducted, with results expressed as mean ± standard deviation (SD) for normally distributed variables. Differences between the C and CQB groups, and between the CQB and CQBC groups, were assessed using Student's t-test for normally distributed variables. A P-value < 0.05 was considered statistically significant.

Study population

Children with KD hospitalized at our tertiary children's hospital in Fujian Province, China ( n = 20) between January 2018 and December 2020 were included in the CQB group.
Age-matched febrile children ( n = 20) admitted during the same period due to bacterial infections formed the infection control group (C group), while children who had recovered from KD ( n = 8) were classified as the CQBC group. KD diagnoses were made according to the 2017 Guidelines of the American Heart Association (AHA).

Clinical features of the study population

Children diagnosed with KD before IVIG treatment and hospitalized in our hospital ( n = 20) were included in the CQB group. Febrile children ( n = 20) admitted to our hospital for treatment due to bacterial infection were included in the infection control (C) group.
Children with KD after IVIG treatment ( n = 8) were classified as the CQBC group. The study was approved by the ethics committee (No. 2018-199). Informed consent was obtained from all patients and their families, and patient data were analyzed anonymously. Significant differences were observed among the three groups (Table ) in the rates of conjunctival hyperemia, skin rashes, fissured lips, lymph node enlargement, and changes in extremities, as well as in the average platelet count, erythrocyte sedimentation rate, serum sodium, serum chloride, and alanine transaminase levels. Other factors, such as gender and age, showed no statistically significant differences.

Identification of differentially abundant proteins in the CQB/C group

The proteomics workflow comprised protein extraction, enzymatic digestion into peptides, chromatographic separation, LC-MS/MS analysis, DDA data collection, and database searching. After the experiments, a series of bioinformatics analyses was conducted, including protein identification, differential expression analysis, and functional profiling. First, a subcellular localization analysis was carried out on the proteins differentially expressed between the CQB and C groups. Figure A illustrates that the majority of these proteins (105) were localized in the extracellular space, followed by 38 in the nucleus, 8 in the cytoplasm, and 4 in the mitochondria. Differentially abundant proteins were screened using a fold-change (FC) threshold of greater than 1.5 (up-regulated) or less than 0.67 (down-regulated) together with a P-value below 0.05. This analysis identified 92 up-regulated and 101 down-regulated proteins. Figure B presents a volcano plot visualizing the significant differences in protein expression.
Proteins significantly down-regulated are shown in blue (FC < 0.67 and P < 0.05), those significantly up-regulated in red (FC > 1.5 and P < 0.05), and non-differentially expressed proteins in gray. The top 10 most significantly up- and down-regulated proteins are labeled. To connect these findings to clinical relevance, the identified proteins were assessed for their potential as biomarkers. For instance, complement component 3 (C3), which was significantly up-regulated, plays a critical role in immune modulation, endothelial repair, and KD pathophysiology, highlighting its potential utility in differentiating the acute and recovery phases of KD. A hierarchical clustering analysis was performed to evaluate expression profiles between and within groups, verifying the group allocation and supporting the biological relevance of the identified differential proteins. Figure D depicts a heatmap of the clustered differentially expressed proteins, further underscoring potential clinical markers such as α1-antitrypsin, which has implications for vascular repair.

Identification of protein functions in the CQB/C group

Domains associated with the differentially expressed proteins were predicted, revealing that these proteins predominantly belong to the globin, trypsin, serpin (serine protease inhibitor), and Kringle domains (Fig. A). GO annotation provided insight into the proteins' roles, cellular locations, and biological pathways. The differentially expressed proteins were primarily enriched in BP categories such as cellular processes, biological regulation, response to stimuli, and regulation of biological processes. In MF, they were associated with binding, catalytic activity, and molecular carrier activity, while in CC they localized to the extracellular region, cell parts, and extracellular region parts (Fig. B).
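The fold-change and P-value screen described above (FC > 1.5 or FC < 0.67, with P < 0.05) can be sketched as a small classifier. This is an illustrative sketch, not the study's actual pipeline; the function and variable names are invented for clarity.

```python
# Sketch of the differential-abundance screen: FC > 1.5 with P < 0.05 is
# up-regulated, FC < 0.67 with P < 0.05 is down-regulated, otherwise not
# significant. Thresholds are taken from the text; names are illustrative.

def classify_protein(fold_change: float, p_value: float) -> str:
    """Label one protein from its CQB/C fold change and t-test P-value."""
    if p_value < 0.05:
        if fold_change > 1.5:
            return "up"
        if fold_change < 0.67:
            return "down"
    return "not significant"

def screen(proteins: dict) -> dict:
    """Group protein IDs by regulation status, given {id: (fc, p)}."""
    groups = {"up": [], "down": [], "not significant": []}
    for name, (fc, p) in proteins.items():
        groups[classify_protein(fc, p)].append(name)
    return groups
```

For example, `screen({"C3": (2.1, 0.003), "ALB": (1.1, 0.4)})` would place C3 in the up-regulated set and ALB among the non-significant proteins; note that a large fold change alone (e.g., FC = 2.0 with P = 0.2) is not sufficient.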
KEGG pathway enrichment analysis revealed that pathways such as the AMPK signaling pathway were significantly affected in the CQB group (Fig. C). Dysregulation of this pathway is relevant to KD because it influences endothelial function, inflammation resolution, and vascular repair; for example, AMPK activation has been implicated in reducing inflammation and promoting endothelial repair, highlighting its potential as a therapeutic target. A PPI network diagram, constructed from STRING database data, represents the interactions among the differentially expressed proteins in this group (Fig. D).

Identification of differentially abundant proteins in the CQBC/CQB group

Subcellular localization analysis of the differentially expressed proteins in the CQBC/CQB comparison showed that 591 were found in the extracellular space, 212 in the nucleus, 40 in the cytoplasm, 37 in the mitochondria, and 13 in the plasma membrane (Fig. A). Between the CQBC and CQB groups, 537 proteins were up-regulated and 231 were down-regulated. Figure B highlights the top 10 proteins with the most notable expression changes. Proteins such as A0A5C2H3L0, A0A5C2GB96, and A0A5C2GQ34 were significantly up-regulated in the CQBC group (Fig. C). Hierarchical clustering further illustrated these group differences (Fig. D).

Identification of protein functions in the CQBC/CQB group

Domain analysis of the differentially expressed proteins in the CQBC/CQB comparison revealed associations with the Sushi repeat (SCR repeat), low-density lipoprotein receptor class A domain, and MAC/Perforin domain (Fig. A). GO analysis indicated enrichment in BP such as biological regulation, response to stimuli, and cellular processes. In MF, the proteins were associated with binding, catalytic activity, and molecular function regulation, while in CC they localized to the extracellular region and cell parts (Fig. B).
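The domain and pathway enrichment results above rest on a one-sided Fisher's exact test, which the methods section names for domain enrichment. As a hedged, pure-Python sketch (illustrative only; the study used its own enrichment tooling), the upper-tail hypergeometric probability can be computed directly:

```python
from math import comb

def enrichment_p(n_total: int, n_category: int,
                 n_selected: int, n_overlap: int) -> float:
    """One-sided Fisher's exact test (hypergeometric upper tail):
    probability of drawing >= n_overlap category members when
    n_selected proteins are drawn from a background of n_total,
    of which n_category belong to the category (e.g., a protein
    domain or KEGG pathway). Names are illustrative."""
    p = 0.0
    upper = min(n_category, n_selected)
    for k in range(n_overlap, upper + 1):
        p += comb(n_category, k) * comb(n_total - n_category, n_selected - k)
    return p / comb(n_total, n_selected)
```

For instance, if 4 of 4 differentially expressed proteins fall in a 5-member category drawn from a 10-protein background, the tail probability is 5/210 ≈ 0.024, which would count as enriched at the usual 0.05 cutoff.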
KEGG pathway analysis indicated alterations in the PI3K-Akt signaling pathway, which is critically involved in vascular inflammation and repair. This pathway has been linked to coronary artery lesion (CAL) development in KD and could serve as a potential therapeutic target (Fig. C). A PPI network diagram was constructed for these proteins (Fig. D). These results underscore the dynamic changes in signaling pathways during the progression and resolution of KD.

Key proteins involved in KD progression identified by combining the CQB/C and CQBC/CQB groups

We identified 56 differentially expressed proteins that exhibited either elevated expression in the CQB/C comparison and decreased expression in the CQBC/CQB comparison, or vice versa. A Venn diagram illustrates this overlap (Fig. A), and a heatmap further highlights these proteins (Fig. B). GO enrichment analysis revealed that these proteins are involved in activities such as peptidase regulation, endopeptidase inhibition, and peptidase inhibition (Fig. C). KEGG pathway analysis identified complement and coagulation cascades as key pathways, with notable contributions from complement component 6 (C6), complement component 3 (C3), and α1-antitrypsin (Fig. D). These findings suggest that these proteins play critical roles in immune modulation and vascular repair, providing potential biomarkers for clinical differentiation between complete and incomplete KD, CAL-positive and CAL-negative cases, and KDSS versus responsive KD cases.

Discussion

Kawasaki disease (KD) is a pediatric vasculitis characterized by fever and other nonspecific clinical manifestations, which often leads to delayed diagnosis and misdiagnosis as bacterial infections. This misdiagnosis can result in inappropriate anti-infective therapy and delayed administration of gamma globulin, which is critical for preventing coronary artery lesions (CAL) and other complications. Given the difficulty in diagnosing incomplete KD, the identification of reliable biomarkers is crucial for improving early diagnosis and clinical outcomes for children with unexplained fever . Our study aimed to explore proteomic alterations associated with KD, providing insight into potential biomarkers and therapeutic pathways. Our proteomic analysis identified 105 differentially expressed proteins in the CQB/C group. These proteins were predominantly localized in the extracellular space, with notable changes observed in proteins such as A0A0F7TC28 and A0A4V1EJ13, which were significantly down-regulated in the CQB group.
Conversely, proteins like E1B4S8 and A0A5C2GHD2 were markedly up-regulated. The clinical relevance of these findings lies in their potential as biomarkers for distinguishing between subtypes of KD, including incomplete KD, which is often more challenging to diagnose. Functional analysis revealed that the differentially expressed proteins were involved in key biological processes, such as cellular response to stimuli and biological regulation, which are crucial in understanding the inflammatory and vascular responses in KD. Our study also revealed significant alterations in the AMPK pathway in the CQB group. AMPK regulates cellular energy balance and plays a pivotal role in maintaining cellular and whole-body energy homeostasis . Beyond its metabolic functions, AMPK has been implicated in modulating inflammatory responses, mitigating endothelial dysfunction, and reducing vascular injury, all of which are central to the progression of KD . One of the mechanisms by which activated AMPK exerts its protective effects is through promoting NADPH synthesis, thereby decreasing ROS accumulation and suppressing NF-κB activation . This cascade ultimately leads to reduced TNF-α production, a key inflammatory mediator in KD. TNF-α plays a critical role in local inflammation and coronary artery damage, as it stimulates vascular endothelial cells to express intercellular adhesion molecule-1 (ICAM-1) and monocyte chemoattractant protein-1 (MCP-1), facilitating inflammatory cell infiltration into affected tissues . Additionally, AMPK has been shown to inhibit mTOR via direct phosphorylation of TSC2 and Raptor, further suppressing NF-κB activity and contributing to its anti-inflammatory effects . Evidence from prior studies supports the protective role of AMPK activation in KD, with findings suggesting that it can attenuate inflammation and prevent apoptosis in endothelial cells through modulation of the AMPK/mTOR/NF-κB pathway . 
Similarly, cordycepin has been demonstrated to reduce TNF-α production via AMPK activation, reinforcing the therapeutic potential of targeting this pathway . These findings underscore the promise of AMPK as a therapeutic target in KD, with potential to ameliorate inflammation and protect against vascular damage. Further research is warranted to elucidate the precise mechanisms by which AMPK influences coronary artery lesions and to evaluate the clinical efficacy of AMPK-targeted therapies in improving outcomes for KD patients. Additionally, our analysis revealed significant alterations in the PI3K-Akt pathway in the CQBC group. This pathway regulates endothelial cell survival, proliferation, and inflammation, making it highly relevant in the context of KD, where vascular injury and CAL are primary concerns . Studies have shown that modulating the PI3K/Akt axis can protect endothelial cells from inflammatory damage induced by mediators such as TNF-α . Given the pivotal role of PI3K/Akt in vascular damage, targeting this pathway could represent a novel therapeutic approach to prevent and manage CAL in KD patients. Berberine, which modulates PI3K/Akt, has demonstrated protective effects in endothelial cells , suggesting that similar therapeutic strategies could be effective in the management of KD. Moreover, combining the CQB/C and CQBC/CQB groups allowed us to identify 56 differentially expressed proteins, which were either up-regulated in the CQB/C group and down-regulated in the CQBC/CQB group, or vice versa. KEGG pathway analysis revealed that the complement and coagulation cascades play a significant role in the development and resolution of KD. Complement components C3 and C6, along with α1-Antitrypsin, were notably involved. The immune-inflammatory response and endothelial dysfunction contribute to CAL in KD .
The involvement of complement and coagulation pathways in KD suggests potential therapeutic strategies targeting these systems. These pathways help regulate the inflammatory response and maintain vascular integrity, processes that are critically disrupted in KD. The engagement of these systems is subject to a delicate equilibrium and is managed by precise regulatory processes . These systems are essential for an appropriate innate response to injury, curbing hemorrhage and infection, and fostering the healing process . Studies have demonstrated that the triggering of complement and coagulation cascades is a principal pathophysiological mechanism in early-onset severe preeclampsia, as identified through maternal proteomic analysis . Moreover, the complement and coagulation pathways have been linked to chemotherapy responsiveness and overall patient survival rates in soft tissue sarcoma . C6 is part of the membrane attack complex, which plays a crucial role in bacterial lysis , while complement component 3 is pivotal for innate immunity and inflammation . C3 is involved in phagocytosis, inflammation, and immunomodulatory processes that destroy infectious microorganisms . Research has shown that complement component 3 levels are significantly elevated in KD patients and decrease after intravenous immunoglobulin treatment . α1-Antitrypsin functions primarily as a protease inhibitor, especially against neutrophil elastase , and its regulation of neutrophil elastase has been linked to CAL in KD . Our findings suggest that complement components C3, C6, and α1-Antitrypsin could serve as valuable biomarkers for KD, helping to identify patients at risk for CAL and guiding treatment decisions.
In summary, our study provides valuable insights into the molecular mechanisms underlying KD and identifies several potential biomarkers for early diagnosis and disease monitoring. The involvement of the AMPK, PI3K-Akt, and complement and coagulation cascade pathways suggests new therapeutic targets for KD. Clinical trials are needed to evaluate the efficacy of targeting these pathways in improving patient outcomes, particularly in preventing vascular damage and reducing CAL incidence. The identification of biomarkers such as C3, C6, and α1-Antitrypsin could improve early detection of KD, particularly in cases with incomplete or atypical clinical presentations. Further research is needed to validate these biomarkers in larger, multicenter studies and to explore their clinical utility in routine diagnostic practice and treatment strategies. Significant progress has been made in research on biomarkers associated with the diagnosis of KD, but the existing markers lack specificity for KD. We have identified some KD-related biomarkers through proteomic studies, but these biomarkers still require validation in further multicenter, large-sample clinical studies before they can be used to diagnose KD. We believe that, following extensive validation across various populations, these biomarkers may offer novel perspectives for investigating the etiology and targeted therapy of KD.
A survey on brachytherapy training of gynecological cancer focusing on the competence of residents in China
Brachytherapy is an essential mode of treatment for gynecological tumors, specifically cervical cancer and endometrial cancer . Compared to patients with locally advanced cervical or endometrial cancer who did not undergo GBT, there was a significant survival benefit in those treated with GBT. In particular, image-guided adaptive brachytherapy (IGBT) combined with interstitial techniques has been demonstrated to enhance local control and survival for patients with cervical cancer, while reducing the incidence of complications. However, several studies indicated that the utilization rate of GBT was declining or showed disparities . Strengthening the training and education of resident doctors in brachytherapy is one of the important methods to improve the application of GBT. Competency-based medical education (CBME) has emerged as a crucial training model during residency training . The clinical competence of residents encompasses patient care, medical knowledge, professionalism, system-based practice, practice-based learning, and communication skills. Worldwide, the assessment of competence is still largely based on entrustable professional activities (EPAs). In China, residents undergo a standardized residency training program of three years' duration, after which they can be appointed as staff to undertake professional training for a period of over two years . The training provided to residents emphasizes enhancing their skills and knowledge, as well as improving their accountability and proficiency. According to a survey regarding GBT training in Europe and the United States, only 35% of students in Europe and 59% of students in the United States gained the confidence they needed to independently operate GBT upon completion of the training.
Additionally, only 35% of students in both regions passed the corresponding institutional examinations . In China, all institutes for radiotherapy training are equipped with GBT treatment machines and treat over 100 patients with gynecological tumors each year . Residents who specialize in radiation therapy are mandated to manage more than 10 patients with gynecological tumors requiring radiotherapy. However, the current status of GBT training in gynecologic cancer for radiation oncology residents, and the factors related to their competence in performing GBT, were not clear. GBT training has its own characteristics: residents are required to understand the distinctive physical and biological characteristics of brachytherapy and the principles of target delineation and dose evaluation, and also to practice placing applicators appropriately. This study investigated issues related to GBT training for residents specializing in radiation therapy, with the aim of finding ways to improve the process and outcome of training. To assess the current status of GBT training in China, an anonymous questionnaire was designed and sent to 28 institutes nationwide at the beginning of December 2022. The questionnaire sheets, which included personal information and 20 questions related to GBT training, were completed within a week and submitted to the department of radiation oncology of Xijing Hospital via email. The questionnaire (listed in Appendix 1) covered self-reported assessment of competence in performing GBT, institutional support, barriers to acquiring competence, and preferences for additional training. The survey assessed competence in three modalities, namely vaginal stump, intracavitary, and interstitial brachytherapy, and also included questions on the practice of image-guided GBT.
A semi-quantitative evaluation on a five-point Likert scale was used for certain items to reflect the strength of opinion, with labels such as “very irrelevant,” “unimportant,” and “impossible,” as well as “very relevant,” “important,” and “possible.” Self-reported competence was classified as follows: unable to perform GBT, able to perform GBT with major assistance, undecided, able to perform GBT with minor assistance, and able to perform GBT wholly independently. During statistical analysis, we classified the ability to perform GBT with minor assistance or entirely independently as strong confidence to complete GBT. The scale for practice or experience was determined by analyzing the workload of the units investigated and reports of previous papers . The papers read by residents should include the ICRU 89 report, the MRI-guided brachytherapy guidelines, the ABC guidelines, and the Chinese expert consensus . The Statistical Package for the Social Sciences (SPSS version 27.0, IBM, Armonk, NY, USA) was employed for the statistical analyses. Continuous variables, categorical variables, and inter-group differences were processed according to standard statistical principles. Univariable and multivariable logistic regression analyses were used to investigate factors influencing self-reported competence. Factors that reached statistical significance on univariable screening (with a p value of 0.05 for entry and 0.10 for removal) were entered into the multivariable logistic regression model, and estimated odds ratios (ORs) and 95% confidence intervals (CIs) were calculated.
Participants and response to questionnaire
One hundred and ninety-nine residents specializing in radiation oncology from 28 hospitals completed anonymous questionnaires.
Of the participants, 100 trainees were in their standardized residency training years, while 99 others were in professional training for radiotherapy. To ensure a more comprehensive perspective and representativeness of the results of the present study, the statistical analysis was restricted to 132 participants, namely senior residents, including 33 residents in their third year of standardized training and the 99 in professional training. The number of gynecological cancer cases across all investigated training units exceeded 100 cases per year, with a median of 600 cases (range 180–1200) per year. According to the data presented in Fig. , 53.79% (71/132) of senior residents had experience in performing image-guided GBT, and 76.52% (101/132) had observed the process. 91.67% (121/132) of the residents had cared for 10 or more patients with gynecological tumors. 92.42% (122/132) of the residents had read MRI images of 10 or more patients with gynecological tumors. 97.73% (129/132) of residents had read professional articles about GBT. 56.82% (75/132) of the residents had participated in formal courses on GBT. During the training period, 93.18% (123/132) of them participated in more than one international or domestic academic activity. Table contains detailed information on the residents' understanding of GBT and related issues in this survey.
Self-reported competence in performing brachytherapy for gynecologic cancer
The percentage of senior residents who were confident that they could complete GBT independently was 78.03% (103/132) for intracavitary and 75.00% (99/132) for vaginal stump brachytherapy, respectively. However, for interstitial implantation, the percentage was only 53.03% (70/132).
The factors related to the self-reported competence of senior residents in performing brachytherapy are listed in Tables and and supplementary Tables –4, including the GBT load of units, the amount of literature read, patient management, image analysis, and the number of GBT observations and practices. The univariate and multivariate analyses revealed a significant association between competence in performing interstitial GBT independently and the number of cases operated, as well as the GBT load of institutes. In the multivariate analysis of confidence in completing vaginal and intracavitary brachytherapy, only the number of GBT observations was significantly correlated with confidence (as shown in supplementary Tables – ). Residents with more operational experience exhibited greater competence in executing GBT. According to the self-reports, all senior residents who had completed 10 cases of intracavitary, 5 cases of vaginal stump, or 20 cases of interstitial implantation were confident in performing GBT (as shown in Fig. and supplementary Table ).
The residents' recognition of the importance of GBT
The residents had the highest assurance in managing patients with gynecologic cancer compared to others (see supplementary Table ). Compared to completing stereotactic body radiation therapy (SBRT) (44/132, 33.3%), a higher proportion of residents could independently complete GBT (53.03–78.03%). Of the 132 residents surveyed, 60.61% (80/132) expressed the opinion that the reduction in GBT usage was a substantial concern that impacted treatment outcomes. For the treatment of cervical cancer and endometrial cancer, 91.67% and 89.39% of respondents, respectively, believed that the use of GBT would remain consistent, or even grow, in the future.
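The effect estimates underlying the factor analysis above can be illustrated with a minimal sketch: it computes an unadjusted odds ratio and a Woolf-type 95% confidence interval for a single binary factor from a 2×2 table. All counts below are invented for illustration and are not data from this survey.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio with a Woolf (log-based) 95% CI from a 2x2 table.

    a: confident, exposed       b: not confident, exposed
    c: confident, unexposed     d: not confident, unexposed
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of ln(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: confidence in interstitial GBT by operative experience.
or_, lo, hi = odds_ratio_ci(a=30, b=10, c=20, d=40)
print(f"OR = {or_:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
```

A multivariable model would adjust such estimates for the other screened factors simultaneously; the single-table version shown here corresponds only to the univariable step.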
The existing problems and expectations for improving GBT training
21.97% (29/132) of residents believed that they did not have sufficient operational opportunities, despite the high GBT workload in the institutes. Regarding entrustability for performing brachytherapy, 50.76% (67/132) of residents confirmed that no specific tests for GBT were conducted during their training. To enhance the effectiveness of GBT training, the residents recognized that it was crucial to establish a dedicated assessment process that aligns with the content of training. 91.67% (121/132) of residents believed that it was necessary to establish comprehensive GBT courses specifically for training, and 93.94% (124/132) believed that simulation phantoms should be used for training purposes.
This was the first investigation of gynecological tumor brachytherapy training for residents in China. The current study revealed that among senior residents, 78.03%, 75%, and 53.03% of participants had the self-reported competence to perform intracavitary, vaginal stump, and interstitial brachytherapy on their own. 46.99% of the residents passed the special GBT ability assessment. The results further suggested that for residents to gain confidence in GBT, a minimum of 10, 5, and 20 cases of intracavitary, vaginal stump, and interstitial GBT practice, respectively, were required. To enhance the quality of GBT training, a special and comprehensive curriculum along with an assessment of entrustability is necessary. The incidence of gynecological tumors, including cervical cancer, remains high in China .
The GBT workload for gynecological tumors in this study's training institutes ranged from 180 to 1200 cases per year, offering ample opportunities for trainees to perform operations and observe clinical procedures. Therefore, self-confidence in completing GBT for cervical or endometrial cancer was relatively high compared with other studies . According to the EMBRACE research, image-guided GBT can enhance the local control rate and survival . GBT is considered an indispensable method for the treatment of cervical cancer in international treatment guidelines. According to Chinese expert consensus , patients in units without GBT capability must be promptly referred to departments capable of performing GBT. In this survey, 91.67% and 89.39% of residents believed that the application of GBT for cervical cancer and endometrial cancer, respectively, would not decrease in the future. 96.48% of the residents strongly believed that GBT training was extremely valuable for the standardized treatment of gynecological tumors. A positive correlation was discovered between the number of cases practiced in GBT and the self-reported competence of residents. Residents training in radiation oncology in China are mandated to care for more than 40 patients who require radiotherapy, including more than 10 with gynecological tumors . The Accreditation Council for Graduate Medical Education (ACGME) mandates that residents in radiation oncology carry out a minimum of 5 interstitial and 15 intracavitary procedures throughout their residency training. A survey conducted in the United States revealed that individuals who had performed 15 or more treatments had significantly higher confidence in performing brachytherapy .
European studies also revealed that while 50% of the residents believed that individuals who had completed 15 cases of intracavitary brachytherapy could have high confidence, 87% of the residents acknowledged that those who had practiced only 5 cases of interstitial implantation could not independently accomplish such complex brachytherapy . The current study discovered that training on more than 10 cases of intracavitary, 5 cases of vaginal stump, and 20 cases of interstitial brachytherapy could help residents develop substantial self-reported competence. The ideas behind competency-based training hold immense significance for medical education. The competencies should be specific, comprehensive, and trainable . Competence in brachytherapy is a crucial element of the ACGME milestones for radiation oncology residents . Performing GBT well requires not only related knowledge and skills, but also professionalism, empathetic patient care, communication, and cooperation. In China, there are textbooks specifically written for residents in the field of radiotherapy that include knowledge of GBT . Our research also indicates that reading more papers and observing more GBT procedures can enhance confidence in performing interstitial operations, albeit without statistical significance. The assessment of competence is a fundamental aspect of EPAs . The purpose of formative assessment is to identify any problems in students' knowledge, skills, and attitudes. According to the results of studies on medical training , implementing high-quality formative evaluation according to a schedule and defined criteria can enhance the outcome of summative assessment and improve the competence of residents. The assessment of EPAs can also assist supervisors in determining the entrustability of their trainees. Despite this, our study found that 50.76% of individuals did not participate in a specific assessment for GBT training.
Establishing a standardized test system specifically for GBT training and supervising the process is an urgent task for the future. There are numerous methods available for clinical skill training, including chart-stimulated recall, direct observation, clinical vignettes, and multisource feedback. Simulator training is highly efficient . For instance, seven hours of endoscopic simulation training considerably enhanced actual operative ability in the operating room . In the European study, only 36% of the residents were found to have operated on more than 5 cases of cervical cancer brachytherapy. Our research indicated that despite the high workload of GBT in training bases, 21.97% of residents believed that they did not get sufficient GBT operational opportunities. Due to the limited rotation time in the radiotherapy department, it is impossible to rely on actual patients alone to build the operating confidence of all residents. Hence, the development of a simulation phantom for training is highly imperative. Campelo and his team designed and produced a simulation phantom for GBT training using 3D printing technology . The phantom exhibited human histological characteristics well and was suitable for practicing intracavitary and interstitial brachytherapy, as well as for teaching and practicing image-guided brachytherapy. Phantom training is expected to enhance residents' competence and training efficiency. This study has certain limitations. First, the sample size was small and selection bias may be present. However, considering the actual number of residents and training bases (with workloads of 180–1200 cases per year) investigated, the results of the study are somewhat representative. Second, only self-evaluation indicators were employed to demonstrate competence in completing GBT.
In the future, we aim to create a detailed set of assessment criteria and teaching quality control measures specific to the theory and practice of brachytherapy training. Third, the impact of trainer-related factors on training outcomes was not investigated or analyzed. Recently, numerous trainings and seminars have been conducted for teachers, with a focus on residency training across the country. It is expected that the quality and efficiency of training will be enhanced in the future by implementing a normative training system for both trainers and trainees. The study revealed that self-reported competence in performing GBT was relatively high among the surveyed residents specializing in radiation therapy. However, it is important to strengthen the development of a comprehensive curriculum and an assessment procedure specific to GBT training, and to provide more practice opportunities and teaching devices.
Tubercular panophthalmitis in a patient with human immunodeficiency virus infection: Proven clinicopathologically and by molecular diagnostic tests
A 35-year-old HIV-positive male presented with a history of pain, redness, and diminution of vision (DV) in the right eye (OD) for 3 months. He had been on regular HAART (efavirenz, emtricitabine, and tenofovir) for 1 year, and his last known CD4 count, 2 months earlier, was 324 cells/cu.mm. He had undergone YAG peripheral iridotomy (PI) for angle closure glaucoma elsewhere and was on topical steroids, cycloplegics, and antiglaucoma medications (AGMs) when he presented to us. He was not on any systemic steroids. On examination, his best-corrected visual acuity (BCVA) was no perception of light (PL) in OD and 20/20, N6 in the left eye (OS). Anterior segment examination of OD showed mutton fat keratic precipitates, rubeosis iridis, ectropion uveae, a shallow anterior chamber (AC) with a 0.5 mm hypopyon, and complicated cataract. Intraocular pressure (IOP) was 56 mm Hg in OD and 16 mm Hg in OS by applanation tonometry (AT). Four-mirror indirect gonioscopy showed closed angles with 360-degree peripheral anterior synechiae (PAS). Fundus examination showed grade 4 vitritis. Ultrasound (USG) B-scan of OD showed moderate vitreous echoes and diffuse choroidal thickening. Aqueous humor (AH) analysis was positive for the Mycobacterium tuberculosis (MTB) genome and negative for eubacterial and panfungal genomes. High-resolution computed tomography of the chest was normal, and systemic TB was ruled out by an infectious disease specialist. The patient was started on ATT along with topical steroids, cycloplegics, and antiglaucoma medication. He was lost to follow-up after that and presented one and a half months later with an increase in pain and redness in OD. BCVA was no PL in OD and 20/20, N6 in OS. His CD4 count was 410 cells/cu.mm, and he was on ATT and regular HAART. He did not have any systemic complaints.
An inferior scleral abscess with limitation of ocular motility and a yellow reflex on ophthalmoscopy was noted. USG showed significant vitreous echoes with retinal detachment. Conjunctival scraping was negative. A diagnosis of panophthalmitis in a painful blind eye with restricted ocular movements was made. The patient underwent enucleation with a ball implant. Gross examination showed a whitish mass filling the vitreous cavity with thickened posterior sclera . Pathological examination of the enucleated specimen showed caseating granulomatous inflammation involving the intraocular contents and sclera, with numerous acid-fast bacilli (AFB) on Ziehl–Neelsen staining [Fig. - ]. Real-time polymerase chain reaction (RT-PCR) from a paraffin section was positive for MTB with 4714 copies/ml . The patient completed the full course of ATT, and at final follow-up, the right socket was healthy with acceptable cosmesis. His systemic condition was stable. Ocular TB has been reported in 3.8% of HIV patients in a large study from India. It can present with varied manifestations, although choroidal granuloma is the commonest feature. Panophthalmitis is uncommonly reported, especially in HIV patients. Our patient had hypopyon granulomatous uveitis as a presenting feature of ocular tuberculosis. Hypopyon, which is uncommonly seen in HIV patients with OTB, can be noted in patients with endophthalmitis. He did not have other signs and symptoms of underlying active systemic TB. He was on regular HAART with moderate CD4 counts. Cell-mediated immunity has been suggested as a cause of such fulminant inflammation in previous studies. Given this uncommon presentation, AH PCR was done, which was positive for the MTB genome. Paradoxical reactions with ATT and HAART have been attributed to the increase in CD4+ lymphocyte counts with a corresponding decrease in viral load, leading to intense inflammation at sites of tubercular disease.
Tuberculosis-associated immune reconstitution inflammatory syndrome (TB-IRIS) and immune recovery uveitis (IRU) have been reported with both MTB and other atypical mycobacteria. In HIV-TB co-infection, to balance the risks related to worsening of inflammation and systemic mortality, guidelines based on CD4 counts have been formulated regarding initiation of ART. Significantly, in our patient, the increase in CD4 counts was not marked when he presented with panophthalmitis and fulminant spread of infection to the whole of the eye. The possibility of drug resistance was also considered, but there was no other supportive clinical evidence for it. The other eye was normal, and he has been systemically stable until the last follow-up. Most reports of TB panophthalmitis in HIV/AIDS have been in association with extensive systemic TB. Our case reinforces the fact that significant inflammation with worsening of the clinical condition can occur in HIV patients with OTB on HAART even with a marginal increase in CD4 counts. CD4 count values alone may not indicate the immune recovery state. Paradoxical worsening in TB can lead even to loss of the eye despite adequate and appropriate ATT. A Medline search revealed limited literature on TB panophthalmitis in HIV patients without systemic involvement, especially presenting as hypopyon granulomatous uveitis worsening despite HAART and adequate ATT. The diagnosis of TB was established initially by aqueous PCR and later by histopathology and RT-PCR from the ocular specimen. Additional appropriate anti-inflammatory therapy with close monitoring could help reduce inflammation and possibly save the eye. Declaration of patient consent The authors certify that they have obtained all appropriate patient consent forms. In the form the patient(s) has/have given his/her/their consent for his/her/their images and other clinical information to be reported in the journal.
The patients understand that their names and initials will not be published and due efforts will be made to conceal their identity, but anonymity cannot be guaranteed. Financial support and sponsorship Nil. Conflicts of interest There are no conflicts of interest.
Reflections on the opportunities and challenges of applying experience‐based co‐design (EBCD) to phase 1 clinical trials in oncology | d5f79eb0-edc6-44ec-8e74-9b8dea2783c6 | 11211206 | Internal Medicine[mh] | BACKGROUND Experience‐based co‐design (EBCD) is a form of participatory action research that enables healthcare professionals and patients to identify areas for quality improvement within healthcare settings. By pinpointing the key moments and situations (or ‘touchpoints') where individuals interact with a service and where their subjective experience is shaped, EBCD provides comprehensive understanding of the challenges and opportunities for enhancing healthcare delivery. EBCD is a multistage process using qualitative and participatory methods. These methods include observations, individual audio‐recorded and filmed interviews and workshops in which patients and staff work together to co‐design service improvements (Figure ). Due to its adaptability, EBCD has been used in various health settings such as primary care, mental health services and cancer care. , However, to our knowledge, EBCD has never been applied in early‐phase clinical trials in oncology to improve person‐centred care (PCC). Historically, phase 1 clinical trials in oncology have primarily focused on assessing the safety, tolerability and establishing the recommended phase 2 dosage of new treatments, typically involving a limited number of patients. However, with the emergence of immuno‐oncology agents, such as immune checkpoint inhibitors, this traditional approach has recently undergone significant transformation. Clinical trials have incorporated large expansion cohorts within phase 1/2 trials, with the aim of demonstrating not only the safety but also the treatment efficacy of these immunotherapies. This shift has led to conditional accelerated approval for some agents, challenging the traditional phase 1/2/3 drug development process. The impact of these changes is twofold. 
First, early‐phase clinical trials are increasingly considered a viable treatment choice for patients facing refractory or relapsed diseases who have exhausted standard therapeutic options. Second, through the accelerated approval process, innovative treatments like immunotherapies swiftly become an integral part of the standard of care for specific cancer types, offering new hope to patients by providing access to innovative therapies without prolonged delays. Phase 1 clinical trials are not only characterized by the mobilization of new biological entities or technologies but also by new forms of care adapted to these settings. For instance, the early development of experimental chemotherapies involved ‘total care’, consisting of adjusting diet, psychosocial support and medication. It has also been argued that supportive or palliative care has to be developed alongside early‐phase clinical trials. In the contemporary landscape of immuno‐oncology, the boundaries between research and standard care are fading, which makes attending to the development of new care processes associated with the early clinical uses of an experimental treatment even more relevant. One important consideration is the need to rethink early‐phase surrogate endpoints to ensure that they accurately reflect outcomes that are important to patients. Additionally, the delivery of supportive care must adapt the highly standardized and systematic procedures of trials to a broader group of participants with diverse characteristics and needs. This requires careful consideration of the individual needs and goals of all stakeholders involved to improve the overall quality of care delivery. We are currently implementing the EBCD approach within the context of experimental immunotherapies in early trial phases. Our experiences thus far have provided valuable insights, and we anticipate that sharing our reflections to date could offer assistance and insights to others in similar endeavours (Table ).
Drawing on both existing literature and our ongoing EBCD research experience, we reflect upon: (1) the opportunities of applying EBCD as a method to improve the delivery of PCC in early‐phase clinical trials in oncology; (2) potential challenges to—and solutions for—applying this methodology in such contexts. CARE NEEDS AND OPPORTUNITIES OF APPLYING EBCD IN EARLY‐PHASE CLINICAL TRIALS 2.1 Person‐Centred Care (PCC) PCC aims to direct health care around the preferences and needs of patients. While quality of life and symptom self‐reporting are increasingly measured in early‐phase clinical trials in oncology, little attention is paid to systematically assessing patient needs from a holistic perspective. Given the context of early‐phase clinical trials where patients are exposed to a high physical, mental and spiritual burden, a more comprehensive understanding of their needs and experiences during these therapies could lead to improvements in the quality of their care and health outcomes. Existing studies have also highlighted the lack of consideration for engaging patients' informal caregivers, who play a crucial role in supporting patients in early‐phase clinical trials. EBCD has already demonstrated how it can facilitate the implementation of PCC in oncology by highlighting touchpoints related to information needs about side effects or treatment ending. Furthermore, use of EBCD methods may strengthen the role of informal caregivers in cancer care, such as in the timely reporting of patients' symptoms or seeking professional support when needed. More widely, EBCD can result in facilitating professional–patient partnerships by, for instance, developing training and support resources relating to complex care situations. Therefore, the utilization of EBCD in early‐phase clinical trials can anticipate inherent touchpoints or needs (personal, clinical and organizational) that may pertain not only to the trial itself but also to the therapy being administered.
The EBCD approach identifies and prioritizes needs, proposing improvement strategies that will ensure consideration of PCC principles upon treatment approval and standard practice adoption. 2.2 Communication related to risks and benefits Communication between patients and healthcare providers in early‐phase clinical trials continues to pose challenges, encompassing a range of issues, including misinterpretation, confusion and omission of crucial information as well as the occurrence of therapeutic misconception, wherein patients mistakenly equate research objectives with care goals. Furthermore, healthcare professionals may find it difficult to explain genuine risks because they want to respect patients' hope in what may be a last curative option. It has also been documented that patients in early‐phase clinical trials sometimes do not report symptoms for fear of being withdrawn from the experimental protocol. By valuing users' voices, EBCD may help identify gaps or points to improve regarding communication about the risks, benefits and other sensitive aspects of the clinical trial. EBCD can improve communication between staff and patients as well as between services, including when dealing with sensitive information such as adverse reactions or bad news in oncology or palliative care. Through bringing patients and staff together as co‐designers, the method has helped to inform the tailoring of information—such as designing information sheets, training or protecting time for communication purposes—to specific organizational contexts. 2.3 Lack of care coordination Existing literature highlights several care coordination issues during early‐phase clinical trials. First, supportive and palliative care are often not well integrated within phase 1/2 clinical trials. However, it has been argued that ‘simultaneous care’—that is, the integration of palliative care within clinical trials—can be instrumental in improving physical, emotional and social well‐being.
Second, a few studies have highlighted a lack of support during the transition between clinical trials and standard care. This transition is particularly difficult for patients who have been withdrawn from clinical trials, because of health deterioration, violation of a protocol's criteria or a personal decision to withdraw. EBCD could enhance coordination between supportive or palliative care and clinical trials. Indeed, several studies have shown how EBCD can lead to improvement activities that better integrate different forms of care; for instance, by enhancing the integration of palliative care within an emergency department. More generally, EBCD seeks to facilitate organizational changes, such as redesigning coordination between teams or departments. In our study, we engage different professionals beyond the clinical trial team, particularly in the advisory board, including experts in palliative care, psycho‐oncology and social sciences. Involving stakeholders from the outset may enhance their commitment in subsequent stages of implementing the improvement strategies identified.
POTENTIAL CHALLENGES TO—AND SOLUTIONS FOR—APPLYING EBCD TO CLINICAL TRIALS 3.1 Integrating co‐design in a context of high standardization EBCD seeks to generate change, including in complex care settings. However, this could be challenging within the context of clinical trials, which are usually characterized by a high level of standardization. Standardizing practices aim to both organize research procedures and ensure scientific validity through quantification and the reproducibility of research. The interpretive paradigm of EBCD could become at odds with a ‘traditional, positivist, science paradigm’. In an EBCD project aiming to improve the experiences of older patients with breast and colorectal cancer, some staff struggled to consider that patients' knowledge could really contribute to design solutions. Such tensions between research paradigms could represent a barrier to the implementation of EBCD; this could be particularly the case in early‐phase clinical trials. Solution While organizational change may be challenging in the context of the highly standardized practices of clinical trials, it is important to stress that EBCD typically generates ‘liminal’ space for changes. In other words, EBCD is well suited both to identify and shape new areas within existing services and to enhance communication between stakeholders. To enhance capacity for change, a solution could lie in the establishment of a steering committee including staff, patients/informal caregivers and institutional representatives to provide support throughout the EBCD project and assure the feasibility and uptake of co‐designed improvements. Professional facilitators can also help support co‐design workshops in complex organizations. 3.2 Planning EBCD in a context of high uncertainty Planning EBCD could be challenging in the context of high uncertainty characterizing early‐phase clinical trials.
Indeed, a research protocol can be changed or even interrupted at any time, because a severe adverse reaction has been detected or because a concurrent treatment has demonstrated a higher efficacy. Uncertainty is also related to recruitment and retention: phase 1 clinical trials are often marked by slow recruitment, failure to reach the inclusion targets or a high rate of patient dropout because of narrow inclusion criteria or overburdening procedures. Hence, it might be particularly challenging to plan and implement EBCD adequately. For example, it may be challenging to organize joint co‐design workshops when the number of patients who will be recruited and retained in the trial is highly uncertain. Furthermore, staff's time constraints and standardized practices can limit the possibility to conduct each step of the EBCD (although this challenge is not limited solely to the context of clinical trials). Solution The literature shows that EBCD is a flexible and adaptable method. One strategy we employed in our study to mitigate the potential impact of low recruitment and retention rates in clinical trials was to utilize a cross‐sectional design for patient inclusion during the study's design phase (Table ). This implies that patients can be invited to participate in the EBCD study at various stages of the clinical trial, including inclusion, treatment or follow‐up. Employing a purposeful sampling strategy would allow for the inclusion of a predetermined quota of patients at each stage of the clinical trial or a quota of patients responding to treatment or progressing. This not only aims to guarantee an adequate number of participants but also to ensure a diversity of experiences (decision of inclusion, therapeutic failure, benefits, severe adverse reactions, coordination challenges, over‐optimism, noneligibility, etc.), especially during stages 2 and 3 of the EBCD method. 
Furthermore, involving different patients/informal caregivers at various moments of the clinical trial, and allowing participants to take part in one or several stages of the EBCD process, ensures flexibility for patients, informal caregivers and staff and may enhance the effectiveness and feasibility of the EBCD method (Tables and ). 3.3 Engaging vulnerable patients and informal caregivers in EBCD In early‐phase clinical trials in oncology, patients are deemed vulnerable due to the considerable uncertainty surrounding the outcomes of experimental treatments, which often represent their last therapeutic option. Patients have a relatively high performance status before entering a phase 1 protocol, while often being confronted with a high symptom burden during the experimental phase. In addition, many patients may not benefit from treatment, resulting in poorer physical health and increased psychological distress, especially when hope for an effective final therapeutic option has been dashed. Other studies have also shown the strong psychological impact and moral distress among caregivers of clinical trial participants. Some EBCD studies involving patients with severe conditions or impaired states, such as in palliative care, have documented that recalling their experience can cause mental distress. Thus, an important issue to consider is the burden of co‐design activities if patients are suffering from severe physical or psychological impairments. Furthermore, involving patients with varying health conditions, outcomes and trial stages during co‐design activities may subject them to divergent realities, causing discomfort and psychological distress.
Solution To overcome specific challenges related to highly vulnerable patients, it is important to minimize the risk of overburdening participants by allowing flexibility and responsiveness to users' needs through meaningful adjustments in EBCD activities (e.g., leverage established community networks, provide a quiet space or emotional support). Some components can be overlapped or withdrawn (such as the filmed narrative interviews or the observational fieldwork), albeit raising issues in relation to realizing some of the benefits of the approach. Because the film can be time‐consuming and emotionally challenging to compile, an ‘accelerated’ EBCD approach has been developed and tested based on archives of patient films. Regarding the ethical challenge of involving vulnerable patients in co‐design activities, available literature emphasizes the need to consider consent as a process that has to be monitored throughout all stages of the research project. During a clinical trial, patients could encounter physical or psychological challenges that hinder their continuous participation in the various EBCD stages. Seeking clear agreement and willingness to engage before each stage will ensure that ethical standards are followed during the co‐design process. It may also be possible to involve the most vulnerable patients indirectly through patient representatives such as informal caregivers. As proposed in the previous point, the adoption of a cross‐sectional design and the flexibility to participate in one or multiple EBCD stages could help to alleviate the potential burden associated with participating throughout the entire process while accommodating the diverse needs of patients and research objectives. Whilst patients may depend on the treatment as their last hope for a therapeutic option, this may make it particularly problematic for them to engage in co‐design activities (steps 4, 5 and 6).
Hence, attention should be directed towards avoiding the integration of (a) patients who have benefitted from the treatment and/or their informal caregivers with (b) other patients and/or their informal caregivers in a situation of treatment failure, dropout or withdrawal or who could not receive the therapy (e.g., disease progression, health deterioration, manufacturing‐related issues). In this regard, independent co‐design workshops or alternative strategies, such as individual sessions with each patient to identify priorities and strategies, could be considered. As part of the latter approach, individual validation and rating systems for the overall results could be implemented, even remotely. Table provides a summary on the challenges and solutions discussed in this section. Lastly, while it is true that participants in this context are particularly vulnerable, we have to emphasize that the desire to help future patients is a strong motivation to participate in early‐phase clinical trials, and therefore in EBCD as a means to improve care delivery and services.
DISCUSSION In the context of early‐phase clinical trials in oncology, it is increasingly important to anticipate care needs before an experimental treatment should be used in routine practice.
Due to its transformative nature in complex health settings, EBCD represents a way to develop more PCC in parallel with the development of an experimental therapy. However, the experimental settings of clinical trials could pose specific challenges for using the EBCD approach. Highly standardized settings are likely to increase the challenges of engaging all stakeholders and of undertaking improvement activities. In a context of uncertainty about the duration of a clinical trial, planning each step of EBCD could be particularly challenging. We propose that, as a flexible method used widely with vulnerable patients, the standard EBCD approach can be adapted to overcome these challenges, and in this manuscript we suggest potential solutions and alternative strategies for doing so. While the early stages of the EBCD approach ensure the diversity of individual experiences (gathered by qualitative methods), special attention needs to be paid to the challenge of bringing together, in the co-design stages, patients who are at very different stages of the care trajectory or who have different outcomes. Although the possibility of patients dropping out and different patients participating in various EBCD stages might be seen as a challenge, it can actually enrich the process by bringing diverse perspectives and experiences. This diversity allows for a more comprehensive exploration of the issues at hand and promotes inclusivity in the development of solutions. Variation in participant involvement can ultimately enhance the effectiveness and relevance of the iterative co-design stages of the EBCD approach. In contrast to traditional qualitative research methods, EBCD offers distinct advantages by not only capturing diverse individual care experiences but also facilitating consensus-building and co-created targeted strategies and solutions for improvement.
Although alternative methods or designs—such as participatory action research, focus groups or collaborative brainstorming sessions—can ensure the involvement of all stakeholders, EBCD provides a structured framework to guide and implement solutions, ensuring a systematic process from exploring individual experiences to driving meaningful change. Nils Graber : Investigation; writing—original draft; methodology; writing—review & editing. Nina Canova : Investigation; writing—review and editing; methodology. Denise Bryant-Lukosius : Conceptualization; validation; writing—review and editing. Glenn Robert : Conceptualization; methodology; writing—review and editing. Blanca Navarro-Rodrigo : Conceptualization; validation. Lionel Trueb : Validation. George Coukos : Conceptualization; validation. Manuela Eicher : Conceptualization; writing—review and editing; funding acquisition. Tourane Corbière : Validation; writing—review and editing; conceptualization. Sara Colomer-Lahiguera : Conceptualization; funding acquisition; investigation; writing—original draft; methodology; writing—review & editing. The authors declare no conflict of interest.
Senkyunolide I: A Review of Its Phytochemistry, Pharmacology, Pharmacokinetics, and Drug-Likeness (PMC 10144034)
Phthalides are a group of structurally specific constituents naturally distributed in several important medicinal herbs in Asia, Europe, and North Africa . Accumulating evidence has demonstrated that natural phthalides have various pharmacological activities, including analgesic , anti-inflammatory , antithrombotic , and antiplatelet activities, mostly consistent with the traditional medicinal uses of their natural plant sources. For example, Ligusticum chuanxiong Hort. ( L. chuanxiong ) and Angelica sinensis (Oliv.) Diels ( A. sinensis ), frequently used in traditional Chinese medicine (TCM) to invigorate the circulation of qi and the blood, both contain a high level of phthalide components, typically exceeding 1% in their rhizome or root . One of these phthalides that has been broadly studied is ligustilide (LIG) ( a), which displays analgesic, anti-inflammatory, antihypertensive, and neuroprotective activities against brain injury . However, LIG is not a promising drug candidate due to its instability, strong lipophilicity, poor water solubility, and low bioavailability. Druggability improvement was achieved, to a certain degree, by preparing LIG into a nano-emulsion or a hydroxypropyl-β-cyclodextrin complex , but a specific technique is required and the manufacturing cost is high. N-butylphthalide (NBP), first isolated from celery seed, has been licensed in China for the indication of mild and moderate acute ischemic stroke , and clinical trials of its effects on vascular cognitive disorder as well as amyotrophic lateral sclerosis are ongoing . Still, extensive application of NBP is limited, owing to its hepatotoxicity, poor solubility, and unsatisfactory bioavailability . Therefore, discovering natural phthalides with improved druggability from traditional medicinal herbs is both intriguing and meaningful.
SI ( b) is also a natural phthalide present in L. chuanxiong and A. sinensis at relatively low levels and is generally considered an oxidation product of LIG. It has similar pharmacological activities but significantly superior stability, solubility, safety, bioavailability, and brain accessibility compared with LIG, thus meriting further druggability research and evaluation. In this paper, the physicochemical characteristics, isolation and purification methods, as well as pharmacological and pharmacokinetic properties of SI are overviewed. An illustrated summary is described in . 2.1. Distribution in Nature SI was first discovered as a natural phthalide from Ligusticum wallichii Franch in 1983, under the name (Z)-ligustidiol . Subsequently, SI was found in the rhizome of Cnidium officinale Makino in 1984 . According to the published literature to date, SI has been found mainly in Umbelliferae plants, including Angelica sinensis (Oliv.) Diels , Ligusticum chuanxiong Hort , Lomatium californicum (Nutt) , Cryptotaenia japonica Hassk , and so on. In general, natural phthalides are distributed mainly in plants belonging to the Umbelliferae family, and are also occasionally found in the Cactaceae, Compositae, Lamiaceae, Gentianaceae, and Loganiaceae families. In addition, natural phthalides obtained as fungal and lichen metabolites have been reported . 2.2. Production 2.2.1. Chemical Transformation from LIG Only a trace amount of SI can be found in fresh rhizomes of L. chuanxiong , while more SI is produced by the degradation of LIG during processing and storage. Li and colleagues investigated the chemical changes induced by different processing methods, and the results indicated that the main phthalides in rhizomes of L. chuanxiong , such as LIG and senkyunolide A (SA), decreased significantly. Meanwhile, levistolide A, SI, and its isomer senkyunolide H (SH) increased correspondingly.
According to the report, the highest level of SI (0.32 mg/g) was found when fresh rhizomes of L. chuanxiong were dried at 60 °C for 24 h. In addition, the chemical changes of rhizomes of L. chuanxiong during storage were assayed. The results showed that the contents of LIG, coniferyl ferulate, and SA decreased significantly after 2 years of storage at room temperature, resulting in increases in the quantities of SI, SH, ferulic acid, levistilide A, and vanillin. SI increased by 37.6% during the period of storage and was presumed to be the dominant oxidative product of LIG . Duric and co-workers found that LIG is relatively stable in plant oil cells. However, purified LIG became very unstable and inclined to form dimers or trimers under light, whereas when heated in the dark, it mainly transformed into SI and its isomer SH . The results above are consistent with those reported by Lin et al. . Duan and colleagues studied the reaction products of LIG in an electrochemical reactor. Five products were separated and identified, including the two dihydroxyl products SI and SH, as well as an epoxide, 6,7-epoxyligustilide. The latter is a key intermediate in the transformation of LIG into SI and SH. Processing conditions influence SI production in the rhizome of L. chuanxiong . A steaming process with or without rice wine resulted in higher SI levels compared to a stir-frying process . A simple mechanism for the transformation of LIG to SI is illustrated in . 2.2.2. Metabolic Transformation of LIG SI is the major metabolite of LIG in vivo and in vitro. Yan et al. found that SI was one of the main metabolites when LIG was injected intravenously in SD rats. Similarly, LIG can be transformed into SI when incubated with small intestinal homogenates or liver microsomes of rats . When incubating LIG with human or rat hepatocytes at 37 °C, SI was found to be the main metabolite, with proportions of 42% and 70%, respectively .
Furthermore, research on the enzyme kinetics of LIG incubated with rat liver microsomes demonstrated that CYP3A4, CYP2C9, and CYP1A2 are the main metabolic enzymes involved in LIG metabolism . However, the key enzyme catalyzing LIG into SI in vivo has not been identified.
Pure SI is a yellowish amorphous powder or sticky oil with a celery-like smell. Unlike most natural phthalides, SI is soluble in water and some organic solvents, such as ethanol, ethyl acetate, and chloroform. Several studies suggested that SI has better drug-like properties compared to LIG. 3.1. Stability The degradation of SI in aqueous solution conforms to first-order degradation kinetics, and the activation energy (Ea) was 194.86 kJ/mol. SI in weakly acidic solution showed better stability, while its degradation accelerated significantly under alkalescent conditions . It was reported that oxygen is the dominating factor that accelerates the degradation rates of SI and SA induced by light and temperature. At room temperature with daylight, SA was completely converted into butylphthalide within 2 months, while only about 20% of SI was converted into its cis-trans isomer after 5 months of storage, indicating that SI is more stable than SA . Peihua Zhang et al. introduced a methanol extract of L. chuanxiong into boiling water and evaluated the content changes during decoction. As a result, the content of LIG decreased from 14 mg/g to 0.4 mg/g after 20 min, while the SI content increased from 1.4 mg/g to 1.7 mg/g during 60 min of heating.
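The first-order kinetics and activation energy quoted above imply a strong temperature dependence of SI degradation, which can be sketched with the Arrhenius equation. In the sketch below, the reference rate constant `k25` is a hypothetical value chosen purely for illustration and is not reported in the review; only the Ea of 194.86 kJ/mol comes from the text.

```python
import math

R = 8.314      # gas constant, J/(mol*K)
EA = 194.86e3  # activation energy for SI degradation quoted in the review, J/mol

def rate_constant(t_celsius, k_ref, t_ref_celsius):
    """Scale a first-order rate constant to another temperature via Arrhenius."""
    T, T_ref = t_celsius + 273.15, t_ref_celsius + 273.15
    return k_ref * math.exp(-EA / R * (1.0 / T - 1.0 / T_ref))

def fraction_remaining(k_per_h, hours):
    """First-order decay: C(t)/C0 = exp(-k*t)."""
    return math.exp(-k_per_h * hours)

# k25 below is a hypothetical illustrative value, not a measured constant.
k25 = 1e-3  # h^-1 at 25 degC (assumed)
k40 = rate_constant(40, k25, 25)
print(f"k(40 degC) = {k40:.3e} h^-1")
print(f"half-life at 25 degC = {math.log(2) / k25:.0f} h")
print(f"fraction left after 24 h at 25 degC = {fraction_remaining(k25, 24):.3f}")
```

With an Ea of this magnitude, the illustrative calculation shows the rate constant rising by tens of times between 25 °C and 40 °C, consistent with the reported sensitivity of SI stability to drying and decoction temperature.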
Formula granules are a type of dried decoction of a prepared herbal medicine. In the characteristic chromatograms of both A. sinensis and L. chuanxiong formula granules issued by the National Pharmacopoeia Commission of China, SI is marked as a dominant and characteristic peak, suggesting that SI is stable during decoction, concentration, and drying processes. On the contrary, as the most abundant phthalide both in L. chuanxiong and A. sinensis slices, LIG is almost undetectable in these formula granules . 3.2. Permeability SI has satisfactory permeability and solubility. Yuan and co-workers screened the potential transitional components in L. chuanxiong extract by a serum pharmacochemical method and high-performance liquid chromatography-diode array detection tandem mass spectrometry/mass spectrometry (HPLC-DAD-MS/MS) analysis. SI was identified as a transitional component both in the plasma and cerebrospinal fluid, while ferulic acid was detected only in plasma. SI can pass through the BBB easily, and the AUC of SI in the brain accounted for 77.9% of that in plasma . The water solubility of SI was measured to be 34.3 mg/mL, and the lipid–water partition coefficient was 13.43 . Previous studies revealed that SI exhibits good absorption in the rat gastrointestinal tract, including the jejunum, colon, ileum, and duodenum, and no significant differences in the absorption rate constant and apparent absorption coefficient were observed . 4.1.
Analytical Methods The reported analytical methods of SI in herbs and prescriptions, as well as corresponding parameters, are shown in . The SI analyses were generally performed by high-performance liquid chromatography (HPLC) combined with an ultraviolet (UV) or diode array detection (DAD) detector. Most of the separations were carried out on a C18 column using a mixture of acetonitrile and acidic aqueous solution as the mobile phase. In addition, other detection devices, such as electrospray ionization tandem mass spectrometry (ESI-MS) and time-of-flight mass spectrometry (TOF-MS), were used for the structure elucidation and metabolite analysis of SI. 4.2. Content in Medicinal Material and Preparation The contents of SI medicinal materials and preparations are shown in and , respectively. Among the commonly used TCM, SI occurs only to a limited extent in A. sinensis and L. chuanxiong. indicates that the maximum content of SI in A. sinensis is 1 mg/g, while it reaches more than 10 mg/g in L. chuanxiong . The reason is presumably that LIG in L. chuanxiong is present at a higher level and may produce more SI compared with that in A. sinensis . In addition, SI concentrations in Chuanxiong dispensing granules range from 2.08 to 6.07 mg/g. The relatively high content might be attributed to its good water solubility or accelerated transformation from LIG during decocting, concentrating, or drying processes. summarizes the quantitative analysis of SI in multiple compound preparations containing L. chuanxiong rhizome and A. sinensis root. The results show a large fluctuation from 0.02 to 2.206 mg/g, suggesting that SI content is most likely influenced by material quality, formulation, and preparation technology.
The rhizomes of L. chuanxiong and roots of A. sinensis are commonly used materials for SI extraction, isolation, and purification. Ethanol of high concentration was the most used extraction solvent, followed by methanol and water. Besides the conventional extraction methods, such as reflux, immersion, and ultrasonication, supercritical fluid extraction or ultra-high pressure ultrasonic-assisted extraction was carried out to improve the effect and efficiency.
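The HPLC assays summarized above report SI content (mg/g) obtained against an external-standard calibration curve. The following minimal sketch outlines that workflow; all peak areas, standard concentrations, extract volume, and sample mass are hypothetical illustrative numbers, not data from the review.

```python
# Hypothetical calibration series for an SI standard: (conc in mg/mL, peak area).
# These values are illustrative only.
standards = [(0.05, 1210.0), (0.10, 2450.0), (0.20, 4890.0), (0.40, 9800.0)]

def fit_line(points):
    """Ordinary least squares fit of y = slope*x + intercept."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

def quantify(peak_area, slope, intercept):
    """Invert the calibration line to get the sample concentration (mg/mL)."""
    return (peak_area - intercept) / slope

slope, intercept = fit_line(standards)
conc = quantify(5000.0, slope, intercept)  # sample peak area (assumed)
mg_per_g = conc * 10.0 / 0.5               # 10 mL extract from 0.5 g herb (assumed)
print(f"SI in sample: {conc:.3f} mg/mL -> {mg_per_g:.2f} mg/g herb")
```

In practice, the reported methods also validate linearity, precision, and recovery before such a curve is used for quantification.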
SI separation and purification were mainly performed by different column chromatographic methods, including flash column chromatography, counter-current chromatography, borate gel affinity column chromatography, and preparative HPLC. The packing materials used were silica gel, RP-C18, and macroporous resin. The details of SI extraction and isolation are shown in . The reported pharmacological activities of SI were summarized in and . 6.1. Protection of the Brain 6.1.1. Neuroprotection of Cerebral Ischemia/Hemorrhage Due to the high risks of disability and mortality, cerebral hemorrhage and ischemia remain intractable diseases, resulting in neurologic impairment, tissue necrosis, cell apoptosis, and subsequent complications . Previous studies demonstrated that SI exerts significant neuroprotection mainly through antioxidant and anti-apoptotic pathways. Hu et al. investigated the protective effect and possible mechanism of SI (36 and 72 mg/kg, i.v.) on cerebral ischemia–reperfusion (I/R) impairment using the rat transient middle cerebral artery occlusion (tMCAO) model. The results indicated that SI could ameliorate neurological injury, reduce cerebral infarct volume, decrease the malonaldehyde (MDA) content, and increase the superoxide dismutase (SOD) activity of brain tissue. The mechanism involves promoting the expression of p-Erk1/2/t-Erk1/2, c-Nrf2, n-Nrf2, HO-1, and NQO1, and deregulating the expression of Bcl-2, Bax, caspase 3, and caspase 9. The protective effects of compounds (SI, SH, SA, LIG, and ferulic acid) isolated from L. chuanxiong were evaluated on an oxygen–glucose deprivation–reoxygenation (OGD/R) model using cultured SH-SY5Y cells. The results demonstrated that both SI and LIG could improve cell viability and reduce reactive oxygen species (ROS) and lactate dehydrogenase (LDH) levels. SI showed more potent inhibitory activity on LDH compared to LIG .
LIG and its metabolites SI and SH have protective effects in the intracerebral hemorrhage (ICH) model induced by autologous blood injection in CD-1 mice. SI could ameliorate neurological deficit, brain edema, and neuronal injury; alleviate microglia cell and astrocyte activations; and reduce peripheral immune cell infiltration caused by ICH. However, SI is less effective than SH. Inhibition of the Prx1/TLR4/NF-κB signal pathway and anti-neuroinflammatory injury are involved in the potential mechanism of LIG and SH . 6.1.2. Protection against Septic Encephalopathy Sepsis is a systemic inflammatory response syndrome caused by microbial infection. Septic encephalopathy (SE) with cerebrovascular dysfunction and neuron growth inhibition is a common complication. SI (36 and 144 mg/kg, i.p.) ameliorates injury in SE rats by increasing Ngb expression, upregulating the p38 MAPK signaling pathway, and consequently promoting neuron growth . Impaired sleep quality in septic rats accelerates inflammatory factor release, and the prognosis of sepsis may benefit from sleep improvement . SI demonstrated sleep-improving sedative effects, but its role in sepsis is unclear. Thus, a cecal ligation and puncture (CLP)-induced sepsis model using C57BL/6J mice was established. The results showed that SI (36 mg/kg, i.p.) improved the survival rate and cognitive dysfunction of septic mice, ameliorated the systemic inflammatory response, reduced apoptotic cells in the hippocampus, and inhibited the inflammatory signaling pathway. Surprisingly, the hypothesis that alleviating sleep deprivation could ameliorate SE injury was further confirmed by the reversal of the expression of the sleep deprivation-related markers BDNF and c-FOS after SI administration . 6.2. Protection of the Liver, Kidneys, and Lungs Blood supply is critical for ameliorating tissue and organ damage caused by persistent ischemia.
SI can attenuate hepatic and renal I/R injury through antioxidant, anti-inflammatory, and anti-apoptotic effects. SI (50, 100, and 200 mg/kg) was injected intraperitoneally into mice in a modified liver I/R model. As a result, SI (200 mg/kg) decreased TNF-α, IL-1β, and IL-6 in serum; inhibited the phosphorylation of p65 NF-κB and MAPK kinases; and reduced the expression of Bax and Bcl-2. Furthermore, SI can alleviate H2O2-induced oxidative damage in HuCCT1 cells, promote the nuclear translocation of Nrf-2, and reduce the levels of ROS and MDA . Administration to mice with renal I/R injury confirmed that SI can protect renal function and structural integrity, reverse ischemia-induced increases in blood urea nitrogen (BUN) and serum creatinine (SCr), ameliorate pathological renal damage, and inhibit TNF-α and IL-6 secretions. Furthermore, reductions in ROS production as well as endoplasmic reticulum stress-related protein expressions are involved in the potential protection mechanism . It was reported that SI (36 mg/kg, i.p.) could ameliorate sepsis-related lung injury in cecal ligation and puncture-induced septic C57BL/6 mice. SI exerted its effects by decreasing protein levels and neutrophil infiltration, inhibiting the phosphorylation of JNK, ERK, P38, and p65, and downregulating TNF-α, IL-1β, and IL-6 in plasma and lung tissue. CD42d/GP5 staining results indicated that platelet activation was decreased after SI administration. Moreover, SI could significantly reduce MPO-DNA levels stimulated by phorbol 12-myristate 13-acetate (PMA) . 6.3. Protection of Blood and Vascular Systems 6.3.1. Effects on the Blood System The rhizome of L. chuanxiong , a herb commonly used to promote blood circulation and remove blood clots, has drawn interest due to its anticoagulant and antiplatelet activities. Anticoagulant activity was screened by measuring the binding rates between components from herbal extracts and thrombin (THR) in vitro.
Preliminary results showed that SI and isochlorogenic acid C could inhibit the activity of THR. The results of molecular docking revealed that SI and isochlorogenic acid C could bind to the catalytic active site of THR . Similarly, L. chuanxiong extracts were screened for their possible inhibitory effects on THR and Factor Xa (FXa) using an on-line dual-enzyme immobilization microreactor based on capillary electrophoresis. SI, SA, LIG, and ferulic acid exhibited vigorous THR inhibitory activities, while isochlorogenic acid A could effectively inhibit FXa activity . A study eliminated SI from Siwu decoction (SWD) to explore its contribution to the antiplatelet and anticoagulant activities of the formula. The absence of SI resulted in a significantly shortened activated partial thromboplastin time of SWD, while the active sequence of prothrombin time (PT) was inhibited, indicating that SI plays an important role in the activities of SWD . 6.3.2. Effects on the Vascular System SI promotes angiogenesis and exhibits vasodilating and antithrombotic effects, thereby protecting the vascular system. SI in Guanxinning tablets could ameliorate endogenous thrombus injury in zebrafish through various pathways, including oxidative stress, platelet activation, and the coagulation cascade . In addition, it was reported that SI prevents microthrombus formation by attenuating Con A-induced erythrocyte metamorphic damage and reducing erythrocyte aggregation . Suxiao Jiuxin Pill (SX) is a Chinese patent medicine containing extracts of L. chuanxiong and is usually used for coronary heart disease treatment. The potential active components of SX were screened for cell Ca2+ regulation activity, which is critical for vascular resistance and pressure handling. SI isolated from SX can amplify cardiovascular diastolic activity through calcium antagonistic activity .
Additionally, a study on an endothelial vascular cell model confirmed that SI might promote the formation of the luminal structure of human microvascular endothelial cells and induce endothelial angiogenesis by upregulating placental growth factor . 6.4. Other Pharmacological Effects The analgesic effect of SI was evaluated by an acetic acid-induced writhing test on Kunming mice (8, 16, and 32 mg/kg, i.g.), and the anti-migraine activity was tested by nitroglycerin-induced headaches in SD rats (18, 36, and 72 mg/kg, i.g.). SI (32 mg/kg) significantly elevated the pain thresholds and reduced the number of acetic acid-induced writhing reactions in mice. SI (72 mg/kg) in rats remarkably reduced the NO levels in plasma and brain tissue and increased 5-HT levels in plasma . In another study, where rats were dosed with SI (144, 72, and 36 mg/kg, i.p.) to treat the cortical spread of migraine, plasma NO and calcitonin gene-related peptide (CGRP) significantly decreased after SI (144 mg/kg) treatment . It was reported that SI inhibited NF-κB expression in a dose-dependent manner in HEK293 cells stimulated by the pro-inflammatory factors TNF-α, IL-1β, and IL-6. Similarly, SI reduced the pro-inflammatory factors IL-6 and IL-8 in THP-1 cells induced by lipopolysaccharide . In OGD/R-treated microglial cells, which are often used to evaluate stroke and the consequent inflammatory injury, SI could inhibit proinflammatory cytokines and enzymes, attenuate the nuclear translocation of the NF-κB pathway in BV-2 microglia, and restrain the TLR4/NF-κB pathway or upregulate extracellular heat shock protein 70. These results indicated that SI could effectively inhibit the neuroinflammation induced by stroke . Moreover, SI could attenuate oxidative stress damage by activating the HO-1 pathway and enhancing cellular resistance to hydrogen peroxide-induced oxidative damage . Surprisingly, SI might be used as a potential antitumor agent.
Good affinity between SI and C-X-C chemokine receptor type 4 (CXCR4) was observed by affinity detection and SPR ligand screening. The measured affinity constant was 2.94 ± 0.36 μM, indicating that SI might be a potential CXCR4 antagonist that can inhibit the CXCR4-mediated migration of human breast cancer cells . SI also showed some capacity to inhibit cell proliferation. Phthalides from the rhizome of Cnidium chinensis were evaluated on smooth muscle cells from a mouse aorta. The order of proliferation-inhibiting efficacy was as follows: senkyunolide L > SH > senkyunolide J > SI > LIG = senkyunolide A > butylidenephthalide, suggesting that SI had an effect to some extent. However, the underlying mechanism is unclear . The BBB permeability of SI was investigated in MDCK-MDR1 cells. The results indicated that SI could enhance paracellular transport by downregulating the expression of claudin-5 and zonula occludens-1, two main tight junction proteins closely associated with BBB tightness . Additionally, SI decreased the expression of P-glycoprotein (P-gp), which acts as a drug-efflux pump, thereby further enhancing xenobiotic transport . 6.1.1. Neuroprotection of Cerebral Ischemia/Hemorrhage Due to the high risks of disability and mortality, cerebral hemorrhage and ischemia remain intractable diseases, resulting in neurologic impairment, tissue necrosis, cell apoptosis, and subsequent complications . Previous studies demonstrated that SI exerts significant neuroprotection mainly through antioxidant and anti-apoptotic pathways. Hu et al. investigated the protective effect and possible mechanism of SI (36 and 72 mg/kg, i.v.) on cerebral ischemia–reperfusion (I/R) injury using the rat transient middle cerebral artery occlusion (tMCAO) model.
The results indicated that SI could ameliorate neurological injury, reduce cerebral infarct volume, decrease the malonaldehyde (MDA) content, and increase the superoxide dismutase (SOD) activity of brain tissue. The mechanism involves promoting the expression of p-Erk1/2/t-Erk1/2, c-Nrf2, n-Nrf2, HO-1, and NQO1, and modulating the expression of Bcl-2, Bax, caspase 3, and caspase 9. The protective effects of compounds (SI, SH, SA, LIG, and ferulic acid) isolated from L. chuanxiong were evaluated in an oxygen–glucose deprivation–reoxygenation (OGD/R) model using cultured SH-SY5Y cells. The results demonstrated that both SI and LIG could improve cell viability and reduce reactive oxygen species (ROS) and lactate dehydrogenase (LDH) levels. SI showed a more potent inhibitory activity on LDH than LIG . LIG and its metabolites SI and SH have protective effects in the intracerebral hemorrhage (ICH) model induced by autologous blood injection in CD-1 mice. SI could ameliorate neurological deficit, brain edema, and neuronal injury; alleviate microglial and astrocyte activation; and reduce peripheral immune cell infiltration caused by ICH. However, SI is less effective than SH. Inhibition of the Prx1/TLR4/NF-κB signaling pathway and anti-neuroinflammatory effects are involved in the potential mechanism of LIG and SH . 6.1.2. Protection against Septic Encephalopathy Sepsis is a systemic inflammatory response syndrome caused by microbial infection. Septic encephalopathy (SE), with cerebrovascular dysfunction and neuron growth inhibition, is a common complication. SI (36 and 144 mg/kg, i.p.) ameliorates injury in SE rats by increasing Ngb expression, upregulating the p38 MAPK signaling pathway, and consequently promoting neuron growth . Impaired sleep quality in septic rats accelerates inflammatory factor release, and the prognosis of sepsis may benefit from sleep improvement . SI has demonstrated sleep-improving sedative effects, but its role in sepsis is unclear.
Thus, a cecal ligation and puncture (CLP)-induced sepsis model using C57BL/6J mice was established. The results showed that SI (36 mg/kg, i.p.) improved the survival rate and cognitive dysfunction of septic mice, ameliorated the systemic inflammatory response, reduced apoptotic cells in the hippocampus, and inhibited the inflammatory signaling pathway. Notably, the hypothesis that alleviating sleep deprivation could ameliorate SE injury was further supported by the reversal of the expression of the sleep deprivation-related markers BDNF and c-FOS after SI administration .
Blood supply is critical for ameliorating tissue and organ damage caused by persistent ischemia. SI can attenuate hepatic and renal I/R injury through antioxidant, anti-inflammatory, and anti-apoptotic effects.
SI (50, 100, and 200 mg/kg) was injected intraperitoneally in a modified liver I/R murine model. As a result, SI (200 mg/kg) decreased TNF-α, IL-1β, and IL-6 in serum; inhibited the phosphorylation of p65 NF-κB and MAPK kinases; and reduced the expression of Bax and Bcl-2. Furthermore, SI can alleviate H2O2-induced oxidative damage in HuCCT1 cells, promote the nuclear translocation of Nrf-2, and reduce the levels of ROS and MDA . Administration to renal I/R injury mice confirmed that SI can protect renal function and structural integrity, reverse ischemia-induced increases in blood urea nitrogen (BUN) and serum creatinine (SCr), ameliorate pathological renal damage, and inhibit TNF-α and IL-6 secretion. Furthermore, reduced ROS production as well as endoplasmic reticulum stress-related protein expression are involved in the potential protection mechanism . It was reported that SI (36 mg/kg, i.p.) could ameliorate sepsis-related lung injury in cecal ligation and puncture-induced septic C57BL/6 mice. SI exerted its effects by decreasing protein levels and neutrophil infiltration, inhibiting the phosphorylation of JNK, ERK, P38, and p65, and downregulating TNF-α, IL-1β, and IL-6 in plasma and lung tissue. CD42d/GP5 staining results indicated that platelet activation was decreased after SI administration. Moreover, SI could significantly reduce MPO-DNA levels stimulated by phorbol 12-myristate 13-acetate (PMA) . 6.3.1. Effects on the Blood System The rhizome of L. chuanxiong , a herb commonly used to promote blood circulation and remove blood clots, has drawn interest due to its anticoagulant and antiplatelet activities. Anticoagulant activity was screened by measuring the binding rates between components from herbal extracts and thrombin (THR) in vitro. Preliminary results showed that SI and isochlorogenic acid C could inhibit the activity of THR.
The results of molecular docking revealed that SI and isochlorogenic acid C could bind to the catalytic active site of THR . Similarly, L. chuanxiong extracts were screened for possible inhibitory effects on THR and Factor Xa (FXa) using an on-line dual-enzyme immobilization microreactor based on capillary electrophoresis. SI, SA, LIG, and ferulic acid exhibited strong THR inhibitory activities, while isochlorogenic acid A could effectively inhibit FXa activity . A study eliminated SI from Siwu decoction (SWD) to explore its contribution to the antiplatelet and anticoagulant activities of the formula. The absence of SI resulted in a significantly shortened activated partial thromboplastin time of SWD, while its prolongation of prothrombin time (PT) was inhibited, indicating that SI plays an important role in the activities of SWD . 6.3.2. Effects on the Vascular System SI can promote angiogenesis and exerts vasodilating and antithrombotic effects, thereby providing protection to the vascular system. SI in Guanxinning tablets could ameliorate endogenous thrombus injury in zebrafish through various pathways, including oxidative stress, platelet activation, and the coagulation cascade . In addition, it was reported that SI prevents microthrombus formation by attenuating Con A-induced erythrocyte morphological damage and reducing erythrocyte aggregation . Suxiao Jiuxin Pill (SX) is a Chinese patent medicine containing extracts of L. chuanxiong and is usually used for coronary heart disease treatment. The potential active components of SX were screened for cell Ca2+ regulation activity, which is critical for vascular resistance and pressure handling. SI isolated from SX can amplify cardiovascular diastolic activity through calcium antagonistic activity .
Additionally, a study on the effect on the endothelial vascular cell model confirmed that SI might promote the formation of the luminal structure of human microvascular endothelial cells and induce endothelial angiogenesis by upregulating placental growth factor .
Up to now, the pharmacokinetic parameters of SI in rats, mice, rabbits, dogs, and humans have been studied via different administration routes, including intravenous injection, intraperitoneal injection, and gavage. The reported pharmacokinetic parameters are summarized in . 7.1. Pharmacokinetic Properties of SI The pharmacokinetic properties of SI have been studied in animals (mice, rats, and dogs) via different administration routes . The results indicated that SI is absorbed rapidly, followed by short-half-life (<1 h) elimination, with acceptable oral bioavailability (>35%) after intragastric administration. SI is widely distributed in tissues and organs in vivo, and the AUC values in descending order were as follows: kidneys > liver > lungs > muscle > brain > heart > thymus > spleen . The pharmacokinetic differences between normal and migrainous rats have been investigated . The results demonstrated that migraine caused some significant changes. For example, decreased clearance and an increased volume of distribution resulted in a several-fold increase in t1/2 and AUC. The pharmacokinetic parameters of SI were significantly different in normal and migrainous rats, which should be taken into consideration when designing a clinical dosage regimen for SI. Similarly, the pharmacokinetic differences of SI and SH in normal and migrainous rats after gavage administration of a 70% ethanol extract of L. chuanxiong were studied. Compared with normal rats, the absorption of SI and SH in migrainous rats increased significantly: the Cmax and AUC(0–t) of SI increased by 192% and 184%, while those of SH increased by 266% and 213%, respectively . Furthermore, the effects of warfarin on the pharmacokinetics of SI were investigated in a rat model of biliary drainage following administration of the extract of L. chuanxiong . It was reported that warfarin could significantly increase the t1/2, Tmax, and Cmax of SI.
This result highlights the importance of drug–herb interactions . The metabolic pathways of SI in vivo involve methylation, hydrolysis, and epoxidation in phase I metabolism, as well as glucuronidation and glutathionylation in phase II metabolism. The main metabolic pathways in vivo are shown in . It was reported that after administration of SI in rats, a total of 18 metabolites were identified in bile, 6 in plasma, and 5 in urine . Ma et al. identified four metabolites of SI in bile, namely, SI-6S-O-β-D-glucuronide, SI-7S-O-β-D-glucuronide, SI-7S-S-glutathione, and SI-7R-S-glutathione. He and colleagues found nine metabolites in rat bile and proposed their metabolic pathways. 7.2. Pharmacokinetic Properties of SI-Containing Herbal Preparations To date, the pharmacokinetics and metabolism of SI have been studied in animals administered not only pure SI but also SI-containing herbal preparations. A total of 25 compounds were detected in plasma after SD rats were gavaged with L. chuanxiong decoction, among which 13 were absorbed as prototypes. LIG, the main alkyl phthalide in L. chuanxiong , was rapidly absorbed and converted into hydroxyphthalides by phase I metabolism, including SI, SH, senkyunolide F, and senkyunolide G. The absorbed, as well as the generated, hydroxyphthalides were further conjugated with glutathione or glucuronic acid through phase II metabolism . A sequential metabolism approach was developed to study the absorption and metabolism of multiple components in L. chuanxiong decoction at different stages: intestinal bacteria, intestinal wall enzymes, and liver metabolism. After enema administration, SI was quickly absorbed as a prototype and remained stable at each stage of sequential metabolism . SI has been used as an index component of several herbal preparations, such as Dachuanxiong Pills , Shaofu Zhuyu Decoction, and Yigan Powder .
SI has been detected as one of the main components in plasma and tissues after administration to normal and model animals. The results confirmed that SI is easily released from herbal preparations, followed by rapid absorption, a short elimination half-life, and acceptable oral bioavailability in vivo. Previous studies suggested remarkable differences in SI pharmacokinetics between normal and model animals administered SI-containing herbal preparations. For example, multi-component pharmacokinetics of the Naomaitong formula was assessed in normal and stroke rats. The results indicated that the stroke rats had higher values of AUC(0–t), AUC(0–∞), t1/2, and MRT(0–∞) . The AUC(0–∞) values of SI and LIG were both five times higher than those of the normal rats . Moreover, pharmacokinetic differences were compared after oral administration of Xinshenghua Granules in normal and blood-deficient rats. As a result, a total of 15 components were detected in plasma; however, most of them were eliminated within six hours. The Cmax, AUC(0–t), and AUC(0–∞) values of SI in the blood-deficient rat model were 23%, 32.6%, and 31.6% higher than those of normal rats, respectively . Based on pharmacokinetic experiments in humans and rats, the active phthalides of Xuebijing injection in the treatment of sepsis were determined. A variety of phthalides (SI, SH, senkyunolide G, senkyunolide N, 3-hydroxy-3-N-butylphthalide, etc.) were detected in human and rat plasma, among which both SI and senkyunolide G showed significant plasma exposure .
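The parameters compared throughout this section — Cmax, AUC(0–t), and the terminal half-life t1/2 — can be computed noncompartmentally from a concentration–time profile. The sketch below is our own illustration on synthetic data (the function names and values are assumptions, not taken from any cited study): AUC by the linear trapezoidal rule, and t1/2 from a log-linear fit of the terminal points.

```python
import math

def auc_0_t(times, concs):
    """AUC(0-t) by the linear trapezoidal rule (the exposure metric above)."""
    return sum((t2 - t1) * (c1 + c2) / 2
               for t1, t2, c1, c2 in zip(times, times[1:], concs, concs[1:]))

def terminal_half_life(times, concs, n=3):
    """t1/2 = ln2 / lambda_z, with lambda_z from a log-linear least-squares
    fit of the last n sampling points."""
    ts, ys = times[-n:], [math.log(c) for c in concs[-n:]]
    tm, ym = sum(ts) / n, sum(ys) / n
    slope = (sum((t - tm) * (y - ym) for t, y in zip(ts, ys))
             / sum((t - tm) ** 2 for t in ts))
    return math.log(2) / -slope

# Hypothetical mono-exponential profile, elimination rate 0.1 h^-1
times = [0.25, 0.5, 1, 2, 4, 8]                      # h
concs = [100 * math.exp(-0.1 * t) for t in times]    # e.g. ng/mL
cmax = max(concs)
print(cmax, auc_0_t(times, concs), terminal_half_life(times, concs))
```

Group comparisons such as the fold-changes in AUC reported for migrainous or stroke models correspond to evaluating these functions on each group's mean profile and taking ratios.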
The structural diversity and biological relevance of natural products have provided valuable inspiration for new drug discovery and development. A valid strategy is to screen potential candidates from traditional herbal medicines with historically proven effects, such as morphine from poppy, artemisinin from sweet wormwood, and salicylic acid from willow bark. Unfortunately, many natural products, despite their significant bioactivity, fail to meet the requirements of qualified drug candidates due to unsatisfactory safety, stability, solubility, bioavailability, or other druggability deficiencies. In such cases, their natural or modified derivatives are often investigated to discover potential substitutes with superior druggable properties and comparable bioactivities. Despite their low bioavailability, LIG and NBP present outstanding neuroprotective effects. SI is an oxidation product and an in vivo metabolite of LIG. Compared with LIG, SI is more chemically stable, easily soluble in water, and presents significantly better bioavailability.
Furthermore, SI can permeate the BBB, which means it can directly reach disease lesions in the brain. These properties make SI a potentially valuable medicinal compound. Nevertheless, further studies are needed to comprehensively assess the druggability of SI before it can be considered a drug candidate. First, it is necessary to develop a preparation method that can obtain large quantities of SI at low cost, thus providing sufficient material for efficacy assessment, safety studies, and new drug development. Second, the efficacy evaluation and mechanistic clarification of SI are still insufficient compared to LIG. In particular, in vivo comparative studies of SI with similar drugs or components, such as NBP and LIG, are needed to establish the effectiveness and potential advantages of SI. Third, a structure–activity comparison between SI and similar phthalides would be useful. SI is the product of dihydroxylation of the C6–C7 double bond of LIG. The introduction of the two vicinal hydroxyl groups significantly improves the water solubility of the molecule while leaving BBB permeability unchanged. The structural determinants and mechanisms of SI transport across the BBB deserve further investigation, which may provide valuable references for subsequent structural modifications and the design of other drug molecules.
Benchmarking pharmacogenomics genotyping tools: Performance analysis on short‐read sequencing samples and depth‐dependent evaluation | c2adbc8c-7437-4c4d-af1a-5b7c5e27ddcc | 11315677 | Pharmacology[mh] | Pharmacogenomics (PGx) is a field that studies how genetic variations in genes (pharmacogenes) influence drug metabolism, aiming to modify treatments based on an individual's germline DNA. Due to significant inter‐individual variability, a dose that is effective for one person may be sub‐therapeutic for another. A key factor in medication metabolism is the cytochrome P450 (CYP) enzymes, which are subject to genetic polymorphisms within their genes. These polymorphisms can significantly alter enzyme functionality, leading to variations in metabolism activity, either reducing or increasing it. Microarrays have been widely used to identify variants in pharmacogenes. However, despite next‐generation sequencing becoming the standard in clinical diagnostics, it is not routinely used in clinical practice for pharmacogenomics. Sequencing approaches, such as whole genome sequencing (WGS), allows the detection of single‐nucleotide polymorphisms (SNPs) with high accuracy, enabling not only to interrogate known SNPs but also to identify novel variants, in contrast to microarrays, which can only detect predetermined SNPs. Several publicly available PGx software tools have been developed for genotyping pharmacogenes from short‐read WGS data, including Aldy, Astrolabe, Cyrius, PharmCAT, Stargazer and StellarPGx. The performance of PGx tools has been evaluated by the authors of PGx tools when comparing the developed software with the other available software, , , , and in one comparison, the impact of higher sequencing depths (60× and 100×) was also investigated. Interestingly, despite many similarities in the outcomes of various studies, there were some notable discrepancies. 
For instance, while Stargazer achieved 100% concordance in genotyping CYP3A5 in one comparison, another reported only 65.7%. In this independent study, we aim to evaluate the latest versions of the main PGx computational tools (Aldy, Stargazer, StellarPGx, and Cyrius) using a publicly available reference WGS dataset consisting of samples from four superpopulations (38.6% Europeans, 30.0% Africans, 27.1% East Asians, and 2.9% Admixed Americans; unknown for 1.4%). We assess the call rates of the tools and their concordance with the ground truth for six genes that have multilaboratory consensus results available and are supported by the tools in our study, specifically CYP2D6 , CYP2C9, CYP2C19, CYP3A5, CYP2B6 , and TPMT . While Aldy, Stargazer, and StellarPGx have multigene support covering all these genes, Cyrius is specifically designed for the complex CYP2D6 and does not assess other genes. Given that all tools now support the GRCh38 assembly, which has become a standard in clinical research, we will mainly use this reference genome to assess their performance. Additionally, we will align samples to the older assembly (GRCh37) and use different aligners (BWA and Bowtie2) to determine any effect on the downstream analysis. Finally, although these tools have been primarily assessed at their original coverage depth of around 30–40×, and in one study also at higher depths of 60× and 100×, we aim to evaluate their performance at lower depths, including 30×, 20×, 10×, and 5×. This will offer valuable insights for anyone using the benchmarked PGx tools on datasets with lower depth coverage (<30×) or planning to use methods such as low-coverage WGS in PGx research. Seventy PCR-free Illumina WGS FASTQ files (150 bp paired-end Illumina HiSeq X) from the Genetic Testing Reference Material Program (GeT-RM) were downloaded from the European Nucleotide Archive (project ID: PRJEB19931).
The integrity of the compressed files was confirmed by calculating the MD5 hash of each downloaded file and comparing it with the value stated in the project's database. FASTQ files were aligned to the GRCh38 and, separately, to the GRCh37 reference genome using BWA‐MEM, followed by sorting and indexing with Samtools. Similarly, FASTQ files were separately aligned to GRCh38 with Bowtie2. The average depth of each BAM file was determined using Samtools depth. Subsequently, samples aligned to GRCh38 with BWA‐MEM were downsampled using GATK DownsampleSam, applying a ratio to achieve target depths of 30×, 20×, 10×, or 5×, based on the calculated average depth. Diplotypes were called using Aldy v4.5, Cyrius v1.1.1, Stargazer v2.0.2, and StellarPGx v1.2.7. The frequently used tool PharmCAT was not included in the evaluation because it depends on external callers, such as StellarPGx or Stargazer, for genotyping CYP2D6 , which was a focus of this study. All tools were executed using their default settings. Given that Stargazer requires a variant call format (VCF) file as input, this was generated using GATK HaplotypeCaller, which was run on each sample to call variants within a predefined list of pharmacogene regions (based on those defined in Stargazer's program, merged with the Cyrius regions). The VDR gene was used as a control gene for Stargazer. Commands used to prepare the reference genomes, call variants, and run the tools are provided in Text . Ground truth was acquired from datasets published for CYP2D6 , , CYP2C9 , CYP2C19 , , CYP3A5 , , CYP2B6 , and TPMT . , An adjustment to the truth dataset was made based on recent literature, where the CYP2D6 truth diplotype for NA18519 was updated from *1/*29 to *106/*29. , , Additionally, for NA18540, where the CYP2D6 truth is defined as (*36+)*10/*41, we also considered results correct if there was more than one copy of *36, thereby defining the ground truth as (*36(xN)+)*10/*41, for which evidence has previously been shown.
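The depth calculation and downsampling step described above reduce to simple arithmetic: the mean depth is taken from the `samtools depth` output, and the ratio of target to current depth gives the probability of keeping each read. A minimal sketch of that arithmetic (hypothetical helper functions, not the study's actual pipeline code):

```python
def mean_depth(depth_lines):
    """Mean coverage from `samtools depth`-style lines: 'chrom<TAB>pos<TAB>depth'."""
    depths = [int(line.split("\t")[2]) for line in depth_lines]
    return sum(depths) / len(depths)


def downsample_probability(current_depth, target_depth):
    """Fraction of reads to keep so the mean depth lands near target_depth.

    Returns 1.0 when the target is at or above the current depth (reads
    cannot be upsampled); otherwise the value is the keep-probability
    passed to the downsampling tool.
    """
    if target_depth >= current_depth:
        return 1.0  # cannot upsample; keep all reads
    return target_depth / current_depth
```

For a 40× sample, targets of 30×, 20×, 10×, and 5× correspond to keep‐probabilities of 0.75, 0.5, 0.25, and 0.125, respectively.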
All calls were compared with the ground truth (major alleles), and in instances where the truth dataset presented multiple haplotype possibilities due to variations in laboratory results, any of the options was considered correct if identified by the tool. When tools reported two possible diplotypes, the first reported solution was chosen for comparison. This was not applied to Stargazer, which, unlike the other tools, did not report a second diplotype but instead provided a list of candidate haplotypes. For calculating consensus results, in rare instances where Cyrius, StellarPGx, or Aldy reported two possible diplotypes, both were included in the pool of potential diplotypes to reach a consensus. Individual calls and results are provided in Tables .

Performance of tools on GeT‐RM samples

Alignment on GRCh38 with BWA‐MEM

First, all WGS samples were aligned to the GRCh38 reference assembly using the BWA‐MEM algorithm. The mean depth across the genome was 39.7×, with a standard deviation of 2.73 (median: 40×). The ground truth diplotypes were compared with the calls from individual tools, as well as with consensus results obtained from combinations of two and three tools. Rarely, two possible solutions were provided: once by Cyrius for CYP2D6 and five times by StellarPGx for CYP2B6 . For the latter, the first solution matched the ground truth in three instances, while in two instances, neither solution matched. Stargazer often provided a list of other possible haplotypes, sometimes a lengthy one of up to 10 items. As presented in Table , Aldy, StellarPGx, and Stargazer demonstrated strong performance in genotyping CYP2C19 , CYP2C9 , CYP3A5 , and TPMT , misidentifying at most one sample each. For CYP2B6 , concordance rates were lower but similar across the tools, varying between 85.7% and 87.1%. Focusing on the CYP2D6 gene, greater variability was observed between the different tools.
Specifically, Cyrius incorrectly genotyped 2 samples and failed to provide results for 3 others. StellarPGx, Aldy, and Stargazer made incorrect calls on 4, 6, and 11 samples, respectively. All tools called CYP2D6 incorrectly in NA18565, although Cyrius was the only tool to determine all haplotypes correctly, albeit with incorrect phasing. Additionally, samples NA21781 and NA18540 were incorrectly genotyped by three tools, but were correctly identified by Stargazer and Cyrius, respectively. Meanwhile, the remaining 10 samples with wrong calls were identified incorrectly by either one or two tools. Notably, Stargazer exhibited more genotyping inaccuracies compared with the ground truth across various samples, including reporting a rare *122 haplotype for four samples instead of the actual *1. One of these samples was also misidentified by Aldy as *122. The alignment process was repeated for all 15 samples with an incorrect CYP2D6 diplotype (or no call) from any tool to rule out incomplete alignments. We also experimented with aligning those samples using BWA‐MEM with and without the "‐M" parameter (with "‐M", shorter split hits are flagged as secondary rather than supplementary alignments). Separately, we applied post‐processing by marking and removing duplicate reads as well as performing base recalibration. In summary, re‐alignment with BWA produced no differences, regardless of whether the "‐M" parameter was used. Removing duplicates resolved one no‐call issue by Cyrius and yielded the correct diplotype. However, compared with merely removing duplicates, the additional step of base recalibration did not provide any benefit; instead, it led to an additional incorrect call by StellarPGx and produced a different (incorrect) diplotype for one sample already miscalled by Aldy. Since Stargazer relies on the provided VCF file, we also filtered the VCFs based on allelic balance and, separately, on quality scores.
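Allelic‐balance filtering of this kind can be sketched as follows; the simplified record format and thresholds here are illustrative assumptions, not the exact criteria used in the study:

```python
def allele_balance(ad):
    """Alt-allele fraction from a VCF AD field given as [ref_reads, alt_reads]."""
    ref, alt = ad
    total = ref + alt
    return alt / total if total else 0.0


def filter_het_calls(records, low=0.25, high=0.75):
    """Drop heterozygous calls whose allele balance falls outside [low, high].

    `records` are simplified dicts with 'GT' and 'AD' keys; homozygous
    calls pass through unchanged. Thresholds are illustrative only.
    """
    kept = []
    for rec in records:
        if rec["GT"] in ("0/1", "1/0", "0|1", "1|0"):
            if not (low <= allele_balance(rec["AD"]) <= high):
                continue  # skewed heterozygous call, likely noise or mismapping
        kept.append(rec)
    return kept
```

Skewed heterozygous sites are a common signature of mismapped reads from the CYP2D7 pseudogene region, which is one rationale for filtering on allele balance before haplotype calling.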
While the former resolved some of the rare haplotype calls (*122), it introduced additional incorrect results in other samples, and the overall concordance decreased to 81.8% (with a 94.3% call rate). For the latter, no improvement across the whole dataset was observed, resulting in a slightly lower concordance of 81.4% for CYP2D6 .

Alignment on GRCh37 with BWA‐MEM

Since post‐processing had a negligible effect on the CYP2D6 results, and considering that some of the results that were incorrect in our study were genotyped correctly in another study using the GRCh37 reference genome, we decided to investigate whether the tools perform differently on samples aligned to the older assembly. For this, all samples were aligned to GRCh37 and the tools were run using the same methodology, with parameters adjusted for the different reference. Nearly identical results were seen for all genes except CYP2D6 (Table ). Notably, for CYP2D6 , using the GRCh37 reference genome corrected one result for Cyrius and also provided accurate results for two samples for which it made no calls on GRCh38. For StellarPGx, all four incorrect calls on GRCh38 were correct on GRCh37. For Aldy, one incorrect call (NA07055; *17/*122) was corrected to *1/*17, and for Stargazer, a total of four calls were corrected (involving three cases where *122 was called erroneously instead of *1 on GRCh38). However, while resolving those issues, the tools made incorrect calls on GRCh37‐aligned samples that were correct on the newer reference. Compared with the GRCh38 results, Aldy maintained an identical concordance rate of 88.6%, whereas Stargazer and StellarPGx showed lower performance on the GRCh37 dataset, reaching 70.0% and 90.0% concordance, respectively. Only Cyrius made no additional incorrect calls, thereby achieving a higher concordance of 98.6%.
Several incorrect calls on GRCh37 involved reporting rare alleles, such as *131 or *139 instead of *1, as observed for Aldy, Stargazer, and StellarPGx; *139 was especially frequent in *1/*4 diplotypes (6 out of 7 cases).

Alignment on GRCh38 with Bowtie2

Due to several incorrect results in the GRCh38 and GRCh37 datasets, and in light of other studies that successfully identified correct diplotypes on the same samples but used pre‐aligned sequencing files, , we aimed to determine the effect of the aligner on the downstream analysis. Specifically, we assessed the performance of the tools on a dataset aligned to the GRCh38 assembly with Bowtie2 (Table ) and compared the results with the samples aligned using BWA (Figure ). Interestingly, Bowtie2 alignments resolved all incorrect *122 haplotype assignments, provided two calls for Cyrius that were not made with the BWA‐aligned GRCh38 dataset, and corrected one StellarPGx call. On the other hand, Bowtie2 alignments also resulted in some incorrect calls in other samples. Compared with BWA alignments, a more noticeable drop in concordance was observed for Stargazer and StellarPGx, while it remained nearly unchanged for Aldy and Cyrius.

Performance differences in CYP2D6 based on variant types

The samples were categorized according to whether they contained structural variations (SVs) in the CYP2D6 gene, and the performance of the tools was assessed separately on each subset. In the dataset, 46 samples did not contain SVs, while 23 did (the NA18540 sample was omitted due to uncertainty about the presence of structural variation). Of the samples with SVs, 10 had at least one haplotype with a duplication, eight had a deletion, and seven had a fusion. All tools performed best on samples without SVs (Table ). Cyrius achieved 100% concordance in all datasets, followed by Aldy with ~95.7%, while StellarPGx reached 97.8% on the BWA‐aligned GRCh38 dataset and below 90% on the others.
Similarly, Stargazer achieved its highest concordance (87%) on the BWA‐aligned GRCh38 dataset and lower values (below 80%) on the others (Figure ). On samples with structural variants (Figure ), Cyrius and StellarPGx performed similarly well, although with lower concordance than for samples without SVs (90.5%–95.5% for the former and 87%–91.3% for the latter). Stargazer performed better than Aldy on GRCh38 samples, with concordance around 82.6% for the former and 78.3% for the latter.

Impact of sequencing depth on results

No studies have compared PGx tools at lower depths; therefore, we reassessed their performance on the GRCh38 BWA‐aligned dataset after downsampling the aligned sequencing files to mean coverage depths of 30×, 20×, 10×, and 5×. We also downsampled to 1× but did not include the results, as the tools failed to make calls most of the time. Figure illustrates the results obtained by all tools and the consensus approaches. In assessing tool performance across the various depths, some trends were noted. For CYP2C9 , CYP2C19 , CYP3A5 , and TPMT , the tools showed high concordance even at low (10×) coverage, with a slight decline at 5×. CYP2B6 's concordance decreased more steadily with reduced depth, maintaining over 80% concordance at higher depths but falling to around 60% at 5×. CYP2D6 analysis showed a marked decrease in concordance across all tools at lower depths. Notably, Cyrius maintained very high accuracy even at 10× and 5×, but with a low call rate: 10% (seven samples) at 10× and 2.9% (two samples) at 5×.
Since for some genes, haplotypes other than the reference (*1) are infrequent in the population (for example, TPMT ), the data were analyzed again after removing all samples with wild‐type diplotypes (*1/*1) to determine the extent that tools may provide the correct result due to their inability to determine variants. When comparing a dataset containing all samples with a dataset excluding wild‐types, minimal differences in concordance were observed down to the 20× depth. However, at the 5× depth, disparities became more pronounced, particularly for CYP2B6 , CYP2C19 , and especially for CYP2C9 and TPMT (Figure – semi‐transparent dotted lines; separately Figure ). The differences in concordance between the original dataset and a subset composed solely of non‐wild‐type samples were computed for each depth, followed by the calculation of Pearson's correlation between the number of wild‐type samples and the difference in concordance (across all tools). Only for 5× depth, a significant moderate negative correlation ( r = −0.571, p = 0.01) was observed, suggesting that an increased number of wild‐type diplotypes is associated with decreased concordance at 5× sequencing depth. In other words, the high concordance for TPMT and CYP2C9 at 5× in this dataset (~95% and ~88%, respectively) may have been influenced by the high proportion of samples with a wild‐type diplotype, while the concordance for non‐wild‐types is around 40%–60% instead. Results from the consensus approach Given that a consensus approach could improve accuracy and reduce false‐positive rate, we separately examined this and used two‐tool and three‐tool consensus models. In general, consensus results were nearly identical to those of other tools for genes with high concordance across datasets ( CYP2C9 , CYP2C19 , CYP3A5 , and TPMT ). 
For BWA‐aligned GRCh38 samples, the two‐tool consensus achieved slightly higher concordance on CYP2B6 (88.4%) than the best‐performing tool (87.1%), and the three‐tool consensus increased this further to 91.8%. However, as a tradeoff, call rates dropped to 98.6% and 87.1% for the two‐tool and three‐tool consensus, respectively. When requiring at least a two‐tool or three‐tool consensus for CYP2D6 , concordance increased to over 98%, surpassing the individual tools, albeit with a reduced call rate. Additionally, a four‐tool consensus was tested for CYP2D6 , achieving 100% concordance but reducing the call rate to 75.7%. Results on the BWA‐aligned GRCh37 samples were similar, with Cyrius slightly outperforming the two‐tool consensus and achieving nearly identical concordance with the three‐tool model. Finally, the results on the Bowtie2‐aligned GRCh38 samples showed only minor differences from Cyrius, yet markedly better results than any other tool. While the consensus approach did not surpass Cyrius on CYP2D6 samples without structural variants, on samples with SVs the consensus approaches outperformed Cyrius by 4%–5% in all datasets (except GRCh37, where the two‐tool consensus was nearly identical).
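The majority voting behind the two‐, three‐, and four‐tool consensus can be sketched as follows; this is a simplified illustration (tool calls and diplotype strings are examples), and the study's handling of secondary candidate diplotypes is described in the Methods:

```python
from collections import Counter


def normalize(diplotype):
    """Order-insensitive key so that '*1/*4' and '*4/*1' compare equal."""
    return tuple(sorted(diplotype.split("/")))


def consensus_call(calls, min_agree=2):
    """Majority diplotype across tools, or None (no call) without agreement.

    `calls` holds one diplotype string per tool; None marks a tool's no-call.
    """
    counts = Counter(normalize(c) for c in calls if c is not None)
    if not counts:
        return None
    key, votes = counts.most_common(1)[0]
    return "/".join(key) if votes >= min_agree else None


def matches_truth(call, truth_options):
    """A call is concordant if it matches any acceptable truth diplotype."""
    return call is not None and normalize(call) in {normalize(t) for t in truth_options}
```

Raising `min_agree` trades call rate for concordance, mirroring the pattern seen above from the two‐tool to the four‐tool consensus.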
In general, consensus results were nearly identical to those of the individual tools for genes with high concordance across datasets ( CYP2C9 , CYP2C19 , CYP3A5 , and TPMT ). For BWA-aligned GRCh38 samples, the two-tool consensus achieved slightly higher concordance on CYP2B6 (88.4%) than the best-performing tool (87.1%), and the three-tool consensus further increased this to 91.8%. However, as a tradeoff, call rates dropped to 98.6% and 87.1% for the two-tool and three-tool consensus, respectively. When requiring at least a two-tool or three-tool consensus for CYP2D6 , concordance increased to over 98%, surpassing the individual tools, albeit with a reduced call rate. Additionally, a four-tool consensus was tested for CYP2D6 , achieving 100% concordance but reducing the call rate to 75.7%. Results on the BWA-aligned GRCh37 samples were similar, with Cyrius slightly outperforming the two-tool consensus and achieving nearly identical concordance to the three-tool model. Finally, the results of the Bowtie2-aligned samples on the GRCh38 assembly showed only minor differences from Cyrius, yet markedly better results than any other tool. While the consensus approach did not surpass Cyrius on CYP2D6 samples without structural variants, on samples with SVs the consensus approaches outperformed Cyrius in all datasets by 4%–5% (except for the GRCh37 dataset, where the two-tool consensus was nearly identical). This independent PGx tool benchmarking study mostly showed small differences among tools for the genes analyzed, except for CYP2D6 , where the differences between tools, reference genomes, and aligners were more notable. Comparing our findings with other benchmarks, which mostly used earlier versions of the tools (except for Cyrius, which has not seen a public update since 2021), we saw similarities but also some differences.
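A consensus diplotype call of the kind described can be sketched as below. This is a minimal illustration; the study's exact output parsing and diplotype normalization may differ:

```python
from collections import Counter

def consensus_call(calls, min_agree=2):
    """Return the diplotype agreed on by at least `min_agree` tools, else None (no call).

    `calls` maps tool name -> diplotype string (None for a tool's no-call).
    Diplotypes are normalized so '*1/*4' and '*4/*1' count as the same call.
    Ties between equally supported diplotypes are resolved arbitrarily here.
    """
    def normalize(diplotype):
        return "/".join(sorted(diplotype.split("/")))

    votes = Counter(normalize(c) for c in calls.values() if c is not None)
    if not votes:
        return None
    diplotype, n = votes.most_common(1)[0]
    return diplotype if n >= min_agree else None

calls = {"Cyrius": "*1/*4", "Aldy": "*4/*1", "StellarPGx": "*1/*4", "Stargazer": "*1/*2"}
print(consensus_call(calls, min_agree=2))  # *1/*4 (three tools agree)
print(consensus_call(calls, min_agree=4))  # None -> no consensus call
```

Raising `min_agree` trades call rate for concordance, which mirrors the two-, three-, and four-tool results reported above.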
For instance, the study by Chen and colleagues, which assessed Cyrius's performance on a larger dataset of 144 samples, reported 99.3% concordance, while our results are very close, only a percentage point or two lower (depending on the dataset). Our results diverge more from those reported by Aldy's developers, mostly for CYP2D6 , for which a 98.6% concordance was reported on the same Illumina WGS dataset, whereas we found 88.6% (BWA-aligned). Furthermore, the StellarPGx authors reported 99% concordance for CYP2D6 diplotypes in 109 GeT-RM WGS samples, which we also found to be lower: 94.3% on the GRCh38 and 90% on the GRCh37 BWA-aligned dataset. The differences in concordance may arise from the use of different datasets or, where the same dataset is used, from how a call is judged concordant with the truth, as well as from variations in the alignment method or any post-alignment processing steps. It is also possible that the differences arise from using older ground truth with more incorrect truth diplotypes. In this study, we used the most up-to-date ground truth data and therefore explored the other sources of potential variation. First, we investigated the effect of common post-processing steps on 15 samples with incorrect calls. Removing duplicates helped to resolve one no-call made by Cyrius, while base recalibration had a rather minor negative effect, resulting in one additional incorrect call by StellarPGx. The differences between studies have been more variable for Stargazer, a tool that requires a VCF file as input, which can be created and processed using various methods; as a result, outcomes may differ even on the same samples. For example, we filtered VCFs based on quality scores and allelic balance, and while this approach resolved incorrect calls for some samples, it introduced erroneous calls in others.
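The quality and allelic-balance filtering applied to Stargazer's input VCFs could look roughly like the sketch below. The thresholds and the use of the AD field are assumptions for illustration; the exact cutoffs used in the study are not specified here:

```python
def passes_filters(qual, ad, min_qual=20.0, min_ab=0.2):
    """Filter a biallelic variant call on QUAL and alternate-allele balance.

    `ad` is the per-sample AD field as (ref_depth, alt_depth);
    both thresholds are illustrative, not the study's values.
    """
    ref_depth, alt_depth = ad
    total = ref_depth + alt_depth
    if total == 0:
        return False
    allele_balance = alt_depth / total
    return qual >= min_qual and allele_balance >= min_ab

print(passes_filters(qual=50.0, ad=(12, 10)))  # True: good quality, balanced support
print(passes_filters(qual=50.0, ad=(28, 2)))   # False: alt allele supported by few reads
```

Skewed allele balance at a heterozygous site is exactly the pattern produced by a handful of misaligned homolog reads, which is why such a filter can both rescue and break calls depending on the sample.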
This indicates that Stargazer is sensitive to the input VCF file and suggests that VCF preprocessing may require further fine-tuning to achieve optimal results for CYP2D6 when using Stargazer. Another factor in PGx analysis, as demonstrated in our experiments, is the reference genome. This was illustrated by several corrected diplotypes when calling star alleles on GRCh37-aligned rather than GRCh38-aligned samples. However, we also observed incorrect calls on GRCh37-aligned samples, indicating that the choice of reference genome can affect the results in either direction. Based on our examination of some sample alignments, we believe this may be because certain regions are more susceptible to misalignment of reads originating from the homologous CYP2D7 region. For example, in several instances where samples were aligned to GRCh38 with BWA, a *1 was mistakenly called *122, suggesting the presence of the corresponding variant rs61745683. However, the alignments indicate that other reads, likely from the CYP2D7 region, have misaligned to this locus, falsely representing the sequence and resulting in the incorrect call (see example in Figure ). When comparing the read alignments of samples aligned to the two reference genomes, misaligned reads in this region are more prevalent with the GRCh38 reference, affecting the calls on these samples. In contrast, GRCh37 seems less prone to such misalignments in this region, thereby yielding the correct *1 haplotype instead of *122. However, while samples aligned to GRCh38 may be more susceptible to these misalignments, we observed that Bowtie2-aligned samples may have the same number of misaligned reads. Nonetheless, those reads generally have significantly lower mapping quality, which the tools can account for in their genotyping models (e.g., all *122 alleles were correctly called as *1 in the Bowtie2-aligned dataset). In our other work, we have observed a similar issue with DRAGEN-aligned samples as well.
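The observation that low mapping quality lets tools discount misaligned CYP2D7 reads suggests a simple pre-filter. The sketch below illustrates the idea on bare (read name, MAPQ) records with an assumed threshold of 20; on a BAM file, the equivalent is roughly `samtools view -q 20`:

```python
def filter_by_mapq(reads, min_mapq=20):
    """Drop alignments below a mapping-quality threshold.

    `reads` is an iterable of (read_name, mapq) pairs; reads misaligned from a
    homologous region (e.g. CYP2D7 onto CYP2D6) tend to carry low MAPQ values.
    The threshold of 20 is an assumption, not a value from the study.
    """
    return [name for name, mapq in reads if mapq >= min_mapq]

reads = [("r1", 60), ("r2", 0), ("r3", 37), ("r4", 3)]
print(filter_by_mapq(reads))  # ['r1', 'r3']
```

A hard filter like this is cruder than a genotyping model that weights reads by MAPQ, since it discards evidence entirely rather than down-weighting it.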
The noise generated by such misalignment could also be the cause of other incorrect calls observed across all datasets (e.g., *139 in the GRCh37 dataset). Interestingly, in our BWA-aligned GRCh38 dataset, Cyrius initially failed to make calls for three samples that were correctly called at lower sequencing depths (correct calls were made for samples NA19147 and HG00276 at both 30× and 20× depth, and for NA07055 at 20× depth). The missing calls can be explained by an ambiguous normalized depth value for calling a deletion in one sample and noisy alignments at key variant sites in the other two. This ambiguity and noise were reduced when downsampled to a lower depth (X. Chen, personal communication, April 22, 2024). The issue of ambiguous normalized depth values was also resolved after removing duplicates from the aligned file, which was the only positive effect of post-processing we observed. In general, Cyrius seems to adopt a more cautious approach, opting not to provide a result rather than risk making an incorrect call. This is well illustrated by the data at lower sequencing depths, where 100% concordance was observed at 10× and 5×, but for only 10% and 2.9% of samples, respectively. Thus, Cyrius may be the preferred choice for genotyping CYP2D6 when prioritizing high accuracy and minimizing false positives with a single tool, which is particularly important in clinical settings. With regard to sequencing depth, we observed that tools typically perform well at depths of 20× or higher, with small or no differences compared with higher depths, depending on the gene. Additionally, for some genes such as TPMT and CYP2C9 , although performance at 5× remains around 90% or more, it may be biased by the high number of wild-type alleles. When assessing performance based solely on non-wild-type alleles, markedly lower results (around 40%–60%) were observed for those genes.
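The call-rate and concordance figures quoted for Cyrius follow from simple ratios, assuming (as the text implies) that concordance is computed over called samples only:

```python
def call_rate(n_called: int, n_total: int) -> float:
    """Fraction of samples for which the tool produced any diplotype call."""
    return n_called / n_total

def concordance(n_correct: int, n_called: int) -> float:
    """Fraction of *called* samples matching the ground truth (0.0 if no calls)."""
    return n_correct / n_called if n_called else 0.0

# Cyrius at 10x: 7 calls out of 70 samples, all of them correct
print(f"call rate:   {call_rate(7, 70):.1%}")
print(f"concordance: {concordance(7, 7):.0%}")
```

Reporting the two numbers together avoids the trap of a tool looking perfect simply because it refuses to call difficult samples.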
Aldy appeared to be more influenced by depth, as its concordance on CYP2D6 steadily decreased across all depths and was notably more sensitive at 10×. Consensus results outperformed Stargazer, Aldy, and StellarPGx on CYP2D6 , but not always Cyrius itself. Therefore, a consensus approach can be recommended when using the first three tools, but its utility is more debatable when using Cyrius at depths of 20× and higher. In instances when Cyrius is unable to make a call, a consensus call from the other tools would be beneficial. For the other genes, the concordance of all tools was very close to that of the consensus approach, making clear recommendations difficult. However, considering that no single tool consistently performed best, using multiple tools and a consensus approach might be advisable for the most accurate results. It is important to note that at lower sequencing depths this approach can also lead to incorrect consensus calls, but using more tools can help minimize this risk. In conclusion, this study demonstrates that PGx tools perform well on the assessed pharmacogenes, even at lower sequencing depths. Based on our analysis, we recommend using sequencing data with at least 20× depth and, at lower depths, considering a consensus approach using the best-performing tools to lower the risk of incorrectly called haplotypes by any single tool. When analyzing CYP2D6 , a consensus approach may be less important if using Cyrius, but it can still be beneficial in avoiding incorrect calls made by a single tool.

Limitations
We used 70 samples from four superpopulations, but a larger and more diverse dataset could offer a more comprehensive assessment of the tools' performances, particularly with a higher number of haplotypes, including rarer ones. For example, some population groups may have a higher frequency of SVs, which are more challenging to call accurately, and, as determined in this study, this may result in lower performance of the tools.
Since there was no consensus among laboratories/studies on the ground truth for some samples (mostly involving rare variants) and the correct result could therefore not be ascertained, all possible diplotype variants were accepted as a true call, which could affect the concordance results. Finally, the performance of the tools may vary for other datasets containing samples sequenced with different sequencers, prepared with other library methods (such as PCR), or aligned with different aligners. This study focused on six selected genes; therefore, the performance on genes such as SLCO1B1 , DPYD , G6PD , and others was not assessed, and the tools' performance may differ for genes not included in this study.

A.H., S.L., S.S., C.M. and R.C. wrote the manuscript; A.H.
designed and performed the research and analyzed the data with the input from S.L., S.S., C.M. and R.C. This work is supported by a Medical Research Future Fund Genomics Health Future Mission Grant [MRF/2024900 CIA Conyers] supporting AH and CM positions. RC is a recipient of a Murdoch Children's Research Institute Clinician Scientist Fellowship and is an associate investigator with the ReNEW Novo Nordisk Stem Cell Foundation. The authors declared no competing interests for this work. Appendix S1. |
Twitter (X) as a Communication and Education Tool for Brazilian Cardiologists: Profile, Influence, and Challenges

Since the Internet became a common place to disseminate and access health information, social media have become an increasingly important space in which health professionals and academics share research results and scientific information and strengthen ties with patients. Twitter (X) is currently the social media platform most used for health communication. Sharing information on Twitter (X) can create a communicative and collaborative atmosphere for patients, physicians, and researchers and even improve the quality of care. Owing to platform features that allow interpersonal conversational communication, tweets have the potential to capitalize on social media to broaden the reach of health messages. Twitter (X) has gained an important role as an academic forum, especially because of its microblog nature, which allows direct interactions among diverse specialists quickly and in real time. About 20% of articles in PubMed are tweeted at least once, and this may increase the odds of citation. However, despite this scientific achievement, very few physicians and scientists engage with Twitter (X) routinely, as indicated by a survey in which only 238 of 1,500 cardiologists (16%) had Twitter (X) accounts. Although there are numerous potential explanations for this low adoption among the scientific community, important concerns regarding the promotion of unfounded viewpoints, data manipulation, inefficient use of time, and patient privacy are probably the main contributors.
As noted by Ferguson et al., there has been an increase in the percentage of cardiovascular professionals, including journals and associations, who use Twitter (X) to interact with others and exchange ideas. Assessing the scope and impact of health research and medical practice on social media can provide insights into better strategies for promoting social media use. Although some researchers have discussed the professional profile of researchers and health professionals on social media in different countries, there are still no studies focused on the Brazilian context. Therefore, the aim of this study is to identify who the Brazilian cardiologists present on Twitter (X) are, their network of influence and reach, and how they present themselves in their bio descriptions. We understand that the use of digital media by cardiology professionals is a form of building authority and social capital that is important for understanding how the field can be presented on the microblog. This is an exploratory, descriptive study with a quantitative approach, aimed at identifying the presence, visibility, and online influence of Brazilian cardiologists on Twitter (X).

Data collection
Twitter (X) user biographies were examined with the web-based tool FollowerWonk (https://moz.com/followerwonk) using the keywords 'cardiologist' OR 'cardiologista' in December 2022. FollowerWonk can visualize Twitter (X) networks geographically, compare different user accounts, and further analyze the content of tweets from specific regions. All profile data, including the Social Authority Score (SAS), were exported to a database spreadsheet, where descriptive statistical analysis was performed.
The SAS is a Twitter (X) influence scale (1–100) that considers key performance indicators such as the number of followers, user mentions, the number of retweets (RTs), and the engagement of users' Twitter (X) posts. The profile exclusion criteria were: (a) personal or institutional profiles not belonging to cardiologists; (b) accounts not in Portuguese or English; (c) inactive users (no tweet posted in the last 6 months); (d) user location outside Brazil or user without affiliation with a Brazilian institution; (e) restricted profiles; and (f) profiles without photos.

Data analysis
The analysis variables considered in the study were: (i) the number of profiles identified as Brazilian cardiologists and the account creation date; (ii) the URLs available in the profile descriptions; (iii) the number of followers of Brazilian cardiologists (mean, standard deviation); (iv) the top 100 social authority profiles; (v) the correlation between the top 100 geographic locations and Social Authority; (vi) gender and race inequalities related to cardiology's use of Twitter (X); and (vii) the most commonly tweeted topics. Each user's bio description data were extracted and organized in a csv spreadsheet. The data were processed with the IRAMUTEQ software (Interface de R pour les Analyses Multidimensionnelles de Textes et de Questionnaires), a free R-based program that allows statistical processing and analysis of texts. The textual content of the bios was analyzed using the Descending Hierarchical Classification (CHD) and Correspondence Factor Analysis (AFC) techniques, which allow their identification through a single, properly configured text file.
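The keyword screening and some of the exclusion criteria can be approximated programmatically. This is an illustrative sketch only (FollowerWonk was used in the study, and the field names here are hypothetical):

```python
from datetime import datetime, timedelta

KEYWORDS = ("cardiologist", "cardiologista")

def is_eligible(profile, now):
    """Apply the bio keyword match plus two of the exclusion criteria.

    `profile` is a dict with hypothetical keys: 'bio', 'last_tweet', 'location'.
    """
    bio = profile["bio"].lower()
    if not any(keyword in bio for keyword in KEYWORDS):
        return False
    if now - profile["last_tweet"] > timedelta(days=183):  # inactive > 6 months
        return False
    location = profile["location"].lower()
    return "brazil" in location or "brasil" in location

now = datetime(2022, 12, 1)
profile = {
    "bio": "Cardiologista | professor",
    "last_tweet": datetime(2022, 11, 20),
    "location": "São Paulo, Brasil",
}
print(is_eligible(profile, now))  # True
```

Criteria such as restricted profiles or missing photos require account metadata not modeled here and were presumably applied manually or via the tool's export.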
The descriptive characteristics extracted from the profiles of Brazilian cardiologists identified on Twitter (X) indicate that the 1,083 accounts analyzed were created between 2006 and 2021. Graph 1 shows the distribution of accounts by year of creation. Adoption was low in the microblog's early years; a peak of account creation was observed in 2009 (n = 191) and 2010 (n = 125), followed by a progressive decline in subsequent years. From 2017 onward, a new upturn was observed, notably in 2019 (n = 125) and 2020 (n = 168). Recommended variables for online self-presentation include individual variables, culture/group affiliation, motivations, social-media-specific variables, self-presentation content generated by oneself and by others, and self-presentation effectiveness. The profiles were analyzed by type (personal or institutional) and gender. Of the profiles analyzed, 0.8% were institutional. Among the personal profiles, 76.5% belonged to men and 21.2% to women; for 1.5% of accounts, it was not possible to identify the user's gender. Beyond profile type, the study sought to map and categorize the URLs provided in each account as a source of additional information or of the users' professional affiliation. Only 241 profiles provided a URL in the profile description; the distribution of URLs by type can be seen in Graph 2.
The benefits of Twitter (X) use by physicians include improved physician-patient and physician-physician communication, health promotion, tracking of health and disease topics, and building a positive online identity. These benefits depend on consistent activity and on the use of platform features such as sharing content with links (URLs), interacting with other users through mentions and replies, and reproducing third-party content (RT). The data indicate that Brazilian cardiologists make little use of these practices on Twitter (X): only 1.9% have profiles with a URL, and messages with RT and @ interactions were recorded in only 1.7% of accounts. Low participation in social media may be associated with the reluctance of some physicians to engage in online communication with their patients or communities owing to concerns about liability and privacy laws. Regarding tweet content, it was possible to analyze the hashtags most used by the accounts during the period analyzed. Analyses of social media presence and activity for academic and professional purposes usually rely on metrics and performance indicators. Among these are social connectivity indicators, which group metrics expressing the degree to which a user is connected with the surrounding scientific or professional community, and even with society at large. Social connectivity thus corresponds to user-user interactions, measured by the numbers of accounts followed and of followers. The results indicate that the accounts have a total of 418,312 followers and follow 293,006 profiles, corresponding to an average of 386 followers and 270 accounts followed. The social connectivity of the accounts analyzed can be seen in Figure , where few accounts reach more than 2,000 followers. Although the mean number of followers is higher than the mean number of accounts followed, overall the accounts do not seem to attract many followers.
The highest concentration is of profiles with up to 100 followers (71%), and the lowest is of profiles with more than 1,000 followers (4%). When we analyze the distribution of accounts followed, we note that accounts following up to 100 profiles (48.0%) and those following between 100 and 1,000 (47.8%) are very close. The few accounts with many followers are responsible for pulling the mean upward. The calculated median of 169.5 followers confirms this asymmetry relative to the mean, an effect not seen for the number of accounts followed, whose median of 323 is quite close to the mean. Activity on the microblog (maintaining regular posts with relevant content and using the interaction features) contributes to good performance on the network, which in turn is reflected in the profile's social authority. The figure shows the distribution of the analyzed accounts by social authority. On a social authority scale of 1 to 100, the analyzed accounts do not perform well on this indicator: 81.8% did not exceed 25 points and 15% reached up to 50 points; that is, slightly more than 97% of the accounts did not surpass the midpoint of the scale. To further characterize "social authority," we list the top 20 best-performing profiles on this indicator. Regarding the content of self-presentation, it was possible to analyze the most recurrent terms and expressions in the bio descriptions. Analyzing the CHD dendrogram, we see that the set of texts obtained from the analyzed tweets was divided by the program into two axes: one professional and one personal.
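The asymmetry between the mean (386) and the median (169.5) follower counts described above is the classic signature of a right-skewed distribution; a toy example with invented counts makes the effect concrete:

```python
from statistics import mean, median

# Toy follower counts: many small accounts plus a couple of large ones
followers = [30, 45, 60, 80, 120, 150, 200, 300, 2500, 4000]
print(mean(followers))    # 748.5 -> pulled up by the two large accounts
print(median(followers))  # 135   -> robust to the outliers
```

This is why the median is the more representative summary of a typical account's reach in such data.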
The first set is divided into three themes related to the professional sphere. The first (23.9%; dark blue) is mostly related to health specialties (cardio-oncology, cardiology), cardiovascular diseases (heart failure, cardiopulmonary), tests and treatments mainly related to heart disease (echo, 3D, exercise, therapy), and references to groups of medical professionals (gbcobrazil, the Brazilian Cardio-Oncology Group). Thus, the central subject of this class was "professional medical practice," more specifically in cardiology. The second (18.6%; teal) focuses more on the positions users hold, such as director, professor, intern, clinician, and physician, among others; this category marks authority through professional practice. The third (30.5%; green) refers to institutional affiliation, mainly universities and other research institutions. It contains words referring to the users' "authority and professional affiliation," such as references to educational institutions (University, UNIFESP, UERJ), training (Medicine), and titles (Ph.D., fellow, counselor), as well as professions (university professor) and states in the Southeast region of Brazil (Rio de Janeiro, São Paulo). Regarding the personal axis (27%; red), the self-descriptions relate to the interplay of users' "tastes and values." There are mentions of terms related to religion (Christian, God, life) and family (marry, father), as well as terms related to sports (player, tennis, Flamengo supporter) and leisure (music, travel). The CHD analysis, which shows how words relate in groups, reveals that this personal axis is somewhat distant from the others. The AFC allows us to visualize graphically the proximity, that is, the relationships between the words and between the classes derived from the CHD.
The AFC confirmed the perceptions already obtained from the previous figure. The personal axis appears more isolated from the others, with only some of its representative words mixing with those of the other classes. The groups of words related to institutional affiliations and to current positions and professional activities are very close together, with many of their terms intermixed. A link between these groups is plausible since, for example, both include terms related to professions (professor, physician, internist) and titles (Ph.D., Dr.). This study investigated the profile of Brazilian cardiologists on Twitter (X), focusing on their online presence, influence, and reach, as well as on how they present their biographies. The results revealed some important characteristics of the Brazilian cardiologist community on Twitter (X). An interesting finding was that most Brazilian cardiologists on Twitter (X) choose to engage on the platform through personal accounts. This may indicate a preference for more personal and direct communication with colleagues, patients, and followers rather than institutional representation. Nakagawa et al. assessed the profiles of the 100 top influencers in cardiology from 2016 to 2020 and observed a predominance of cardiologist profiles from the United States and Europe, with no cardiologists from Latin America. Of the 20 most influential profiles in our sample, most belonged to men (80%), with a high concentration in the Southeast region (68%), reflecting the profile of cardiology in the country. Interestingly, there was significant growth in the number of Brazilian cardiologists' accounts in 2009. This increase accompanied the growth in the overall number of Brazilian Twitter (X) users, which rose from 1 million in 2008 to 4 million in 2009.
This growth was driven by several factors, including the launch of Twitter (X) for mobile devices in Portuguese and the platform's rising popularity among Brazilian celebrities and influencers. Another relevant observation is the gender disparity among Brazilian cardiologists on Twitter (X). The majority (76.5%) of profiles identified as cardiologists belonged to men, while only 21.2% belonged to women. These data mirror Twitter's (X) own findings that in 2022 about 69% of accounts belonged to men and 31% to women. This disparity is more pronounced in some regions of the world, such as the Middle East and Africa, where women represent only 20% of Twitter (X) users. It is important to investigate the reasons behind this gender disparity further and to explore ways to promote greater participation and representation of women cardiologists on the platform. Sarah and colleagues evaluated several websites and social media platforms and found important disparities in gender and ethnicity, reinforcing the need for a better understanding of the subject. Regarding the online reach and influence of Brazilian cardiologists on Twitter (X), the mean number of followers per account was 386, while the mean number of profiles followed per account was 270. These numbers indicate a certain degree of interconnection and engagement among Brazilian cardiologists on the platform. However, most accounts had a relatively low number of followers and low social authority. This suggests that the online influence of Brazilian cardiologists on Twitter (X) is still limited in most cases. The percentages of accounts with up to 100 followers (71%) and with more than 1,000 followers (4%) indicate that most Brazilian cardiologists on Twitter (X) have relatively limited reach.
This may be attributed to several factors, such as the specific nature of the field of cardiology and competition with other specialists and content on the platform. These findings may reflect existing inequalities in cardiology, including gender disparities and regional inequalities in access to opportunities and resources. Recent studies suggest that social media such as Twitter (X) can be effective tools for disseminating medical information and innovations and for increasing academic productivity. Cardiologist users should take this into account as a way to broaden the reach of their activities. The use of Twitter (X) by Brazilian cardiologists presents significant challenges. First, the low social authority of their accounts can be attributed to several factors. The Portuguese language barrier may limit international visibility, since much scientific content is shared in English. In addition, the smaller number of scientific publications in the cardiovascular field in Brazil, compared with other countries present on the social network, also affects the credibility and reach of Brazilian cardiologists' accounts. To overcome these limitations, it is crucial to encourage the active participation of these professionals on Twitter (X), promoting the dissemination of knowledge and international collaborations. An important limitation of our study is that we used a restricted time window to analyze the Twitter (X) accounts. This social network has undergone constant changes over time that may have affected the participation of Brazilian cardiologists, although we believe that physician users were not significantly affected by these changes. Since data collection was based on self-presentation, the study may have coverage limitations due to the failure to identify cardiologists who do not present themselves as such on the platform.
Searching for topics, expressions, or hashtags that denote debates in the field of cardiology, such as #CardioTwitter, could circumvent this type of limitation and complement the identified accounts. Finally, no search for institutional profiles was performed and, consequently, the low rate of 0.8% of such profiles identified was expected. This limitation should therefore be considered when interpreting the study results. Brazilian cardiologists with a presence and activity on Twitter (X) showed low social authority, which may be partly explained by the use of the Portuguese language in their posts. We observed a gender disparity among Brazilian cardiologists on the platform, with a predominance of men. The most influential profiles belonged to men, and a high concentration of users was identified in the Southeast region. Further studies on this subject should be conducted to assess the impact of these characteristics on this population over time.
Amyloid-β and phosphorylated tau screening in bottlenose dolphin (

Marine mammals, especially cetaceans, are often regarded as "sentinels of the sea", providing critical insights into marine ecosystem health. Infectious disease, toxins, and pollution can trigger neurodegenerative mechanisms that can lead to disorientation and abnormal behaviors, sometimes resulting in strandings. Alzheimer's disease (AD), one of the most widespread neurodegenerative diseases (NDDs) in human beings, is characterized by the pathological aggregation of amyloid-β (Aβ) and hyperphosphorylated tau (pTau) proteins, eventually forming amyloid plaques (APs) and neurofibrillary tangles (NFTs), respectively. Recent studies have reconsidered the idea that the presence of these proteins is solely pathogenic, demonstrating that they play crucial roles under normal conditions and only become damaging when their production or degradation is disrupted, leading to an accumulation. In fact, under physiological conditions, Aβ is involved in synaptic activity and neuronal survival, while the balance of tau protein phosphorylation is essential for regulating cytoplasmic microtubules and enabling cellular growth and remodeling. According to the most widespread theory of AD pathogenesis, the accumulation of extracellular fibrillar, insoluble Aβ peptides in the brain is triggered by aging. Age-dependent formation of APs, NFTs, and oligodendroglial tau has been observed in several non-human primate species, while non-primate animals, especially Carnivora species, show species-specific patterns of Aβ and pTau accumulation. Among these, aged dogs and bears exhibit APs in their brains without NFTs, while Feliformia species, such as cats, leopard cats, and cheetahs, display NFTs without AP formation, even if small granular deposits of Aβ are detected in the cerebral cortex.
The concomitant accumulation of Aβ and pTau has also been observed in the brains of aged pinnipeds. Recent studies have shown the presence of both APs and NFTs in the brains of cetaceans. These species, like humans, have a long post-reproductive lifespan (PRLS), which has been proposed to be more closely associated with the development of AD-like changes than chronological aging itself. Sacchini and colleagues described APs and NFTs in three odontocete species from the Canary Islands, noting more extensive lesions in deep-diving odontocetes (beaked whale, Ziphius cavirostris) and suggesting that hypoxic events may play a crucial role as risk factors for cetacean NDDs. Furthermore, the social behavior of odontocetes, characterized by highly social groups that often show caregiving support towards ill or dying pod members, can help sick or cognitively impaired animals survive longer, allowing the pathology to progress further. Vacher and colleagues described concomitant AD-like lesions in three oceanic species of odontocetes (bottlenose dolphin, Tursiops truncatus; white-beaked dolphin, Lagenorhynchus albirostris; and long-finned pilot whale, Globicephala melas) and noted that the brain areas affected were analogous to those typically affected by AD in human brains, and that the cortices were more affected than brainstem nuclei. Furthermore, the distribution of the lesions was similar to that observed in pinnipeds. The rare combination of caregiving behavior and PRLS makes odontocetes theoretically more likely to develop advanced stages of aging-related disorders than other wild mammals, and it is tempting to classify cetaceans into the same NDD categories as we know from humans. Apart from age, genetic susceptibility, environmental factors, and infectious diseases can influence the development of neurodegenerative lesions.
Exposure to toxins and contaminants has been reported as a risk factor for AD-like pathology in cetaceans; however, little is known about other contributing factors. Further characterization of Aβ and pTau immunoreactivity in cetaceans is necessary to establish physiological baselines for each species. Monitoring and comparing geographically distinct populations, as well as investigating the potential influence of age, sex, and coexisting pathology on Aβ and pTau deposition, is essential to better characterize the underlying causes and significance of NDDs in these animals. For this study, we screened the parietal brain cortices of 30 bottlenose and 13 striped dolphins that stranded or died under human care in Italy. Immunohistochemical reactivity to Aβ-42 and pTau was tested, and the dolphins were compared according to species, sex, age, pathological condition, and sample age. To the best of our knowledge, this is the first overview of Aβ-42 and pTau accumulation for the Mediterranean Sea region.

2.1 Specimens

The dolphin brains investigated in this study came from deceased animals that had either a) stranded along the Italian shoreline (14 bottlenose and 13 striped dolphins) or b) died in facilities under human care (16 bottlenose dolphins). Only brains from dolphins with a decomposition condition code (DCC) of 1 or 2 were selected, from the University of Padova's Marine Mammal Tissue Bank and from archived specimens at CReDiMa. Upon necropsy and brain extraction following the joint ACCOBAMS/ASCOBANS Best Practice guidelines, the largest part of each brain was placed in 10% neutral-buffered formalin for immersion fixation, while a representative subset of brain areas (cerebrum, midbrain, cerebellum, and brainstem) was frozen for microbiological analyses. A sample of the right parietal cortex was used for subsequent analyses when available; where the right side was not available, the left side was used.
No ethical approval was required for this study, because tissues from deceased wild animals submitted for routine diagnosis were used retrospectively.

2.2 Immunohistochemistry

Following a morphological analysis of hematoxylin-eosin (HE)-stained sections of the parietal cortex cut at 4 μm thickness, immunohistochemistry was performed using the semi-automated procedure described by Orekhova and colleagues for the Aβ-42 antibody (ab201060, Abcam, Cambridge, UK). Brain tissue from aged dogs with multifocal β-amyloid plaques was used as a positive control. Immunohistochemistry using pTau antibodies targeting Thr231 sites of pre-NFTs (AT180, MN1040, ThermoFisher, Renfrew, UK) and AT8 against Ser202 and Thr205 of mature NFTs (MN1020, ThermoFisher) was performed manually according to the protocol described by Vallino Costassa and colleagues. Briefly, sections were cut at approximately 5 μm thickness, rehydrated by routine methods, and then immersed in 98% formic acid for 10 min. To enhance pTau immunoreactivity, sections were simmered in citrate buffer (pH 6.1) for 20 min. Tissues were then incubated overnight at 4°C with mouse monoclonal antibodies AT180 or AT8 (1:1000 dilution). After rinsing, a biotinylated secondary antibody (1:200 dilution; Vector Laboratories, Burlingame, CA) was applied to tissue sections for 30 min at room temperature, followed by the avidin-biotin peroxidase complex (Vectastain ABC kit; Vector Laboratories) according to the manufacturer's protocol. Cases with APs were additionally tested with Congo Red to corroborate Aβ-42 specificity. In dolphins in which protozoa-associated glial nodules, astrocytosis, or gliosis were observed in HE-stained sections, immunohistochemistry was performed using GFAP (FLEX Polyclonal Rabbit Anti-Glial Fibrillary Acidic Protein, Ready-to-Use, Dako Autostainer/Autostainer Plus; Mob 199–05, Diagnostic Biosystems, Pleasanton, CA) and Iba-1 (019–1974, Wako Chemicals USA, Richmond, VA) antibodies.
In both the semi-automated and manual procedures, immunoreactivity was visualized using 3,3'-diaminobenzidine (DakoCytomation, Carpinteria, CA) as a chromogen; sections were then counterstained with Meyer's hematoxylin. To test the specificity of staining, primary antibodies were omitted. Each immunohistochemical run included an appropriate positive control. Further relevant details regarding the antibodies used are listed in .

2.3 Basic linear alignment of amino acid sequences

Protein sequence multiple alignments comparing human β-amyloid precursor protein (APP) [GenBank: AAB29908.1 and NCBI Reference Sequence: NP_000475.1] with APP homologs expressed by the striped dolphin [GenBank: AAX81912.1], bottlenose dolphin [GenBank: AAX81917.1], and domesticated dog, Canis lupus familiaris [GenBank: AAX81908.1], were performed using the CLUSTALW 2.1 program as previously described. In addition, the microtubule-associated protein tau sequence expressed by the bottlenose dolphin [NCBI Reference Sequence: XP_033704325.1] was compared with its human homolog [NCBI Reference Sequence: NP_058519.3].

2.4 Animal categorization and Histoscore comparisons

For Aβ-42, a semi-quantitative analysis could be performed based on the type and intensity score (IS) of immunoreactivity (1, mild; 2, moderate; 3, intense signal) in the parietal cortex. For bottlenose dolphins, the categorization of animals based on the presence (P) or absence (N) of pathological lesions, as well as by age (young adults, < 30 years estimated age; old adults, > 30 years old; and calves), was consistent with that reported by Orekhova and colleagues. For striped dolphins, the cut-off between young and old adults was estimated to lie at 12 years of age and 195 cm of total length, based on information from Guarino and colleagues. provides an overview of the individuals considered for this study.
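The percent-identity figures produced by such alignments can be illustrated with a short pure-Python sketch. The sequences below are toy fragments, not the GenBank entries used in the study, and CLUSTALW itself also performs the alignment step that is assumed to have been done here:

```python
# Percent identity between two pre-aligned amino acid sequences.
# Toy illustration only: the study aligned full APP/tau GenBank entries
# with CLUSTALW 2.1, which also performs the alignment step assumed here.

def percent_identity(seq_a: str, seq_b: str) -> float:
    """Share of non-gap aligned positions where both residues match."""
    if len(seq_a) != len(seq_b):
        raise ValueError("aligned sequences must have equal length")
    # Columns containing a gap ('-') in either sequence are excluded.
    pairs = [(a, b) for a, b in zip(seq_a, seq_b) if a != "-" and b != "-"]
    if not pairs:
        return 0.0
    matches = sum(1 for a, b in pairs if a == b)
    return 100.0 * matches / len(pairs)

# Hypothetical aligned fragments (N-terminal Aβ residues, which the
# alignment results below report as identical in human and dolphin):
human_fragment   = "DAEFRHDSGYEVHHQK"
dolphin_fragment = "DAEFRHDSGYEVHHQK"
print(percent_identity(human_fragment, dolphin_fragment))  # -> 100.0
```

A full reimplementation would also need the gap-penalty and substitution-matrix machinery that CLUSTALW applies before identity is computed.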
For statistical analyses of the Histoscores, H = (1 × % of structures with IS1) + (2 × % of structures with IS2) + (3 × % of structures with IS3), two types of Aβ-42 immunoreactive structures were considered: neuronal cytoplasmic immunoreactivity and perineuronal Aβ-42 plaques. For each animal, the 5 high-power fields (HPFs) viewed included three representative HPFs in the grey matter and two in the white matter, as some APs were observed there. Therefore, statistical results for neuronal and plaque immunoreactivity are reported for the 5-HPF average and the 3-HPF average of the grey matter. One-way ANOVAs were used to compare parametric Histoscore averages, whereas the Kruskal-Wallis test was applied to non-parametric Histoscore averages when three groups were compared. An unpaired t-test (parametric) or the Wilcoxon test (non-parametric) was used when two groups were compared. If significant global differences (p values < 0.05) were detected in multiple group comparisons, Tukey HSD and Wilcoxon signed-rank tests for parametric and non-parametric data, respectively, were used to investigate which groups differed. Due to the small sample size, three levels of adjustment with increasing restriction of the α-error were used for the Wilcoxon tests: none, Benjamini-Hochberg, and Bonferroni. Unadjusted p values of significant differences (α < 0.05) are reported below.
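The Histoscore formula and the Benjamini-Hochberg step-up adjustment described above can be sketched in pure Python; the percentages and p values below are illustrative only, not data from this study:

```python
# Histoscore H = 1*(% IS1) + 2*(% IS2) + 3*(% IS3), yielding a 0-300
# scale per high-power field. All numbers below are illustrative and
# are not data from this study.

def histoscore(pct_is1: float, pct_is2: float, pct_is3: float) -> float:
    return 1 * pct_is1 + 2 * pct_is2 + 3 * pct_is3

def benjamini_hochberg(p_values):
    """Benjamini-Hochberg step-up adjustment; returns q values in input order."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    adjusted = [0.0] * m
    running_min = 1.0
    # Walk from the largest p value down, enforcing monotonicity of q values.
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        running_min = min(running_min, p_values[i] * m / rank)
        adjusted[i] = running_min
    return adjusted

# A field with 20% IS1, 10% IS2, and 5% IS3 structures:
print(histoscore(20, 10, 5))  # -> 55

# Hypothetical unadjusted p values from pairwise Wilcoxon tests:
print(benjamini_hochberg([0.01, 0.04, 0.03, 0.20]))
```

The Bonferroni level mentioned in the text is the stricter special case obtained by multiplying each p value by the number of comparisons and capping at 1.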
3 Results

Out of the 43 dolphin parietal cortices tested in this screening, all were immunoreactive to the Aβ-42 antibody, although the patterns of immunoreactivity differed. Three old bottlenose dolphin females, one of which had lived under human care (ID 653), and two old striped dolphins (one male and one female) were positive for Aβ-42 perineuronal plaques.
Plaque morphology and distribution varied: in the bottlenose dolphin female from under human care, which was estimated to be > 59 years old, plaques were observed in both gyri and sulci, across cortical layers, and in the white matter. Many were dense-core plaques with IS3 immunoreactivity. Some plaques were diffuse ( , top), encompassing groups of 6–10 non-reactive neurons, while cells within dense-core plaques appeared compressed ( , bottom). While the size of the plaques was comparable, their distribution differed in the dog positive control, where denser, smaller plaques (10–40 μm in diameter) were observed in superficial layers, and a diffuse Aβ-42 signal in deeper cortical layers. In the wild bottlenose dolphin females and in the striped dolphins, AP morphology was often diffuse to fibrillar, irregularly shaped with poorly defined borders, limited to cortical layer I, and more often observed in sulci than in gyral crowns, especially when few plaques were present. In most individuals, a diffuse, light background signal (IS ≤ 1) could be detected. However, in all dolphins apart from ID 653, cytoplasmic immunoreactivity of varying intensities (IS 0–2) was observed multifocally in the neurons, particularly visible in layers II, III, and V. In some cases, this signal could not be distinguished from nuclear immunoreactivity, while in others, it was clearly visible in both the nucleus and cytoplasm of neurons affected by satellitosis. In some instances, IS2 neurons were intermixed with entirely non-immunoreactive neurons. An example of an IS1 neuron is shown in the inset of . In two striped dolphins, a nuclear signal alone was multifocally detected in some neurons (arrowheads in ), and in the case of Sc51416, other neurons in the vicinity had conspicuously large, non-reactive neuronal nuclei (asterisk in ). Multifocally, white matter glia (IS1–2) and some blood vessels and adjacent neuropil were immunoreactive to Aβ-42, though this did not correlate with AP presence.
With regard to pTau, most of the parietal cortices investigated from the 43 dolphins were non-immunoreactive. Only four bottlenose dolphins displayed discrete immunoreactivity to pre-NFT-associated pTau (AT180). This took the form of single, small, diffuse foci with an IS1 within grey or white matter. This was visible in animals without Aβ-42 plaques: one (ID 20), a 30-year-old female from under human care, and the other (ID 596), a young adult female from the wild. Another young adult female (ID 624) displayed multifocal IS1-immunoreactive neurons. In the > 59-year-old female (ID 653), we observed single neurons with cytoplasmic signal against both AT8 and AT180 (IS1–2). Comparisons with the AT180 and AT8 human positive controls are shown in , respectively. The immunoreactivity observed was similar in distribution and intensity for AT8, whereas for AT180 the positive dolphin showed a weaker signal than the human control. All other dolphins were negative for pTau; therefore, semi-quantitative Histoscore assessments were made only on sections immunohistochemically stained with the Aβ-42 antibody, reported below.

3.1 Sex and age differences

In comparisons of neuronal cytoplasmic immunoreactivity, no significant differences owing to older age could be detected in either species. However, among the bottlenose dolphins, calves consistently displayed higher Histoscore averages than both young (p = 0.02) and old adults (p = 0.0079). Unadjusted p values are displayed in the box plots below. There were no significant sex differences.

3.2 Differences by pathology

When animals were grouped generally according to the presence (P) or absence (N) of pathology within the brain, the only statistically significant differences could be noted among pathological and non-pathological adults and calves in the bottlenose dolphins, which also influenced the statistics for all dolphins taken together.
Calves were considered separately here, as it is unknown whether their developing brains have a different baseline from adults altogether. No calves were available for the striped dolphins, and only one non-pathological individual was included, so no meaningful comparison could be made here. The next step was to assess viral, bacterial, and parasitic etiologies separately.

3.2.1 Viral

No statistically significant differences could be detected using Histoscore comparisons for cytoplasmic immunoreactivity within neurons. However, qualitatively, one bottlenose dolphin (Tt177/22) with molecular traces of Dolphin Morbillivirus (CeMV) in its brain displayed a distinct immunoreactivity pattern against Aβ-42. Multifocally, single neurons in deeper cortical layers were intensely stained with the antibody (IS3) at the soma as well as along the dendritic processes, almost reminiscent of a "neuropil thread" as known to occur when using anti-pTau antibodies. Moreover, when comparing Histoscores of perineuronal plaques, a very clear pattern emerged regarding the presence of viral infections (CeMV or herpesvirus) and the presence of plaques in our study group, which represents a large part of the decomposition and conservation code 1–2 striped and bottlenose dolphin brains sampled in Italy over the last 20 years. At first glance, all but ID 653 had viral infections detected within the brain, resulting in a significant p value of p = 0.00097. However, this > 59-year-old female under human care had been wild-caught, and while her brain had tested negative in PCR analyses, there had been a signal for herpesvirus in some skin lesions and, weakly, in the kidneys. ID 653 was therefore experimentally regrouped into the viral group, which resulted in a highly significant p value (p = 2e-5).

3.2.2 Bacterial

No statistically significant group differences could be detected for bacterial presence in this investigation. This is depicted in .
3.2.3 Parasitic

No statistically significant group differences could be detected for parasitic presence in this investigation, as shown in . Qualitatively, however, there was frequently a cross-reaction of the Aβ-42 antibody with microcyst-like structures in animals positive for Toxoplasma gondii (T. gondii), whether these were single cysts, glial nodules with additionally immunoreactive, clustered glia, or severe focal-extensive gyral necrosis and multifocal-coalescing lymphohistiocytic encephalitis. In the most severe case (Sc95661), there was also a more intense background signal in the neuropil, many IS2-positive neurons, and endothelial as well as perivascular immunoreactivity (IS2; inset of ).

3.3 Difference by sample age

To ascertain that any perceived differences in immunoreactivity were not due to artifacts owing to lengthy storage of the samples in formalin or in paraffin blocks, pairwise comparisons were performed between tissues sampled > 10, > 5, and < 5 years before the immunohistochemical analysis. As shown in , no significant differences could be detected.

3.4 Difference between dolphins from under human care and the wild

There were no significant differences between dolphins from under human care and wild dolphins (p values for Aβ-42 Histoscore comparisons are summarized in ).

3.5 Basic linear alignment of amino acid sequences

Human and dog APPs shared greater than 96.7% homology with the APPs expressed by the striped and bottlenose dolphin, with 100% homology between human β-amyloid from neuritic plaques of AD patients (AAB29908.1) and both dolphin species and the dog. Complete sequences for microtubule-associated protein isoforms 1 and X1 were only available for human and bottlenose dolphin, respectively, and shared a homology of 85.9%. The full report of the CLUSTALW results is reported in .
As shown in , no significant differences could be detected. There were no significant differences between dolphins under human care and wild dolphins (p values for Aβ-42 Histoscore comparisons are summarized in ). Human and dog APPs shared greater than 96.7% homology with the APP expressed by striped and bottlenose dolphins, with 100% homology between human β-amyloid from neuritic plaques of AD patients (AAB29908.1) and both dolphin and dog species. Complete sequences for microtubule-associated protein isoform 1 and X1 were only available for human and bottlenose dolphin, respectively, and shared a homology of 85.9%. The full report of CLUSTALW results is provided in .

This multicenter study screened bottlenose and striped dolphins that died under human care or stranded along the Italian coastline for the presence and distribution of Aβ-42, pre-NFT-associated pTau (AT180), and mature NFT-associated pTau (AT8) in the most consistently archived brain tissue: the parietal cortex. To ensure that any potential plaques would not be missed, a higher antibody concentration (i.e., a lower dilution) than in previous studies was used for Aβ-42 (1:700, as opposed to the 1:25,000 reported by Vacher and colleagues ). While only five dolphins displayed Aβ plaques, almost all tested dolphins showed varying patterns of neuronal and glial immunoreactivity to this antibody. To the best of our knowledge, this is the first time that striped dolphins have been investigated with this combination of antibodies. Among the 30 bottlenose dolphins examined in our study, only three old females (10% of the study group) had apparent plaques, and amongst those, the > 59-year-old female, the oldest known bottlenose dolphin under human care in Italy, had by far the most plaques. These were distributed relatively evenly across cortical layers and gyral folds, many with a dense-core morphology (IS3) and some with a diffuse morphology.
The plaques present in the other bottlenose dolphin females were fewer and were more likely to be found in the sulci or along the sides of the gyri than in the gyral crowns. Meanwhile, in the striped dolphins, plaques (in two animals, i.e., 15% of the study group) were distributed multifocally, but these had a more fibrillar to diffuse, clustered appearance, and when fewer were present, they would often be visible in or close to the sulci. Preliminary studies suggest that differing plaque loads and morphologies could be linked to different apolipoprotein E genotypes , with APOE ε3 alleles associated with more dense-core plaques, and APOE ε4 alleles more frequently observed in humans with fibrillar plaques . With the increasing availability of sequenced cetacean genomes, future studies could elucidate whether genetic variations in these and other AD-related genes (e.g., APP, presenilins 1 and 2) underlie the observed plaque morphologies. There is some evidence that while the accumulation of Aβ deposits in the human temporal lobe tends to be greater in gyral crests , sulcal deposits are denser, possibly due to a higher density of neurons and blood vessels in sulcal versus gyral regions in humans . In other regions of the brain, such as the frontal lobe, sulci appeared to harbor more plaques than gyri . To our knowledge, this neurovascular configuration has not been investigated in cetaceans. However, inquiring whether plaque clustering on a regional scale corresponds to cortical modules and specific pathways could help us better understand not only the baseline functions of the dolphin brain, but also how these can be impaired in neuropathological cases. In this study, the > 59-year-old bottlenose dolphin female was likely the oldest dolphin examined, and the fact that her plaques were more widely distributed and often very dense can be seen as compelling evidence for plaque clusters increasing and aggregating with disease progression.
It is noteworthy that all the dolphins with APs were positive for a viral infection (CeMV, herpesvirus, or both), either within the brain itself (4/5 animals) or in the skin and kidneys (herpesvirus in ID 653). Considering the mode of action and tendency towards latency of herpesviruses, it is likely that at some point after the infection this virus reached the brain. Along with genetic susceptibility and environmental factors such as exposure to toxins like β-N-methylamino-L-alanine (BMAA) from harmful algal blooms , viral infections are a known risk factor for the development of NDDs . In APOE ε4-knockout mice, herpes simplex virus 1 (HSV-1) neurotropism and latency are facilitated, and its presence within the brain is thought to induce Aβ- and pTau-related pathology . Both striped dolphins with APs, as well as one bottlenose dolphin, were positive for CeMV. While the exact pathogenesis of CeMV is unknown, assuming similarities to human Measles virus (MV), persistence of viral RNA in the blood can lead to brain invasion and even a brain-only form of this disease in cetaceans . MV is associated with subacute sclerosing panencephalitis, including NFT formation, and a complex interplay of factors such as neuroinflammation and dysregulation of immune-system and protein-synthesis pathways is implicated in the subsequent viral induction of NDDs . There is no evidence that the APs in the dolphins of this study were triggered by viral infection alone, but considering the argument presented above, that sulci may have a high packing density of blood vessels and neurons, neuroinflammation and other metabolic disruptions induced by infectious disease could lead to a distinct pattern of AP distribution over time and depending on the pathogen involved. In general, while infectious agents may trigger a pro-inflammatory state predisposing the animal to an NDD, this should be considered as one of many possible causes, and more specimens and brain areas need to be sampled in cetaceans to draw clearer conclusions.
Interestingly, no significant pattern could be observed for dolphins with bacterial infections in the brain, although some authors argue that Aβ fibrillization may be an induced antimicrobial peptide-like response of the innate immune system reacting to both sterile and infectious neuroinflammatory stimuli, not limited to viruses . In our study, AP presence and pTau immunoreactivity showed no significant correlation. Only ID 653, the bottlenose dolphin with the most abundant APs, showed immunoreactivity to both AT180 and AT8 in consecutive sections, potentially in the same neuron, suggesting a focal presence of mature neurofibrillary tangles. However, this affected single neurons in the examined parietal cortex sections. Another aged bottlenose dolphin had an AT180-immunoreactive plaque in the grey matter, and a young bottlenose dolphin had a single small focus of immunoreactivity in the white matter; however, these foci cannot be interpreted as neuritic plaques. They showed no correlation with Aβ-42-positive cells, age, or brain pathology. Different species show different combinations of Aβ-42 and pTau in older specimens' brains: terrestrial canids often show only APs without NFTs, several pinniped species have had both APs and NFTs , while different brain areas studied in cetaceans have shown variable degrees of co-occurring APs and tauopathy. Some neuroscientific schools of thought argue that the concurrent presence of Aβ in pTau-immunoreactive dystrophic neurites is necessary for the spread of AD-like neuropathology within the brain . Moreover, NFTs have numerous phospho-sites, and a more comprehensive, albeit less specific, way to assess them would be to use the Sevier-Munger silver stain . Future studies will better assess the distribution and quantity of immunoreactivity to these and additional antibodies in other areas of the cetacean brain.
Until then, it is too early to definitively categorize neurodegenerative phenomena in cetacean brains according to human NDD categories such as Lewy body pathology, primary age-related tauopathy (PART), limbic-predominant age-related TDP-43 encephalopathy (LATE), chronic traumatic encephalopathy (CTE), and Parkinson's disease . Comparing our results to those of two Grampus griseus (Risso's dolphins), seven Globicephala melas (long-finned pilot whales), six Lagenorhynchus albirostris (white-beaked dolphins), five Phocoena phocoena (harbor porpoises) and two bottlenose dolphins from the Atlantic Ocean , some similarities are evident. These include: 1) the distribution of APs primarily in cortical layers I, III, and V; 2) no clear correlation between vascular Aβ-42 immunoreactivity and AP presence; 3) some dolphins without APs also being immunoreactive to AT180; 4) many neurons displaying cytoplasmic, and few intranuclear, immunoreactivity (although in our study, younger animals did not have fewer positive neurons than old dolphins; indeed, the opposite was the case for bottlenose dolphin calves); 5) little glial involvement around APs observed in both studies.
There were also notable differences: 1) the incidence of plaques per species was less variable among aged odontocetes in our study (19–25% depending on species) compared to Vacher and colleagues' analyses (20–100%); more individuals per species were considered in our study, yet more are still needed to establish solid estimates of AP incidence; 2) no information on co-morbidities in the Atlantic odontocetes is reported, while in our screening, viral and parasitic infections were reflected in distinct immunoreactivity patterns using the same Aβ-42 antibody; 3) all investigated Atlantic dolphins with APs were immunoreactive to AT180, with overlapping immunohistochemical patterns; this was not the case in our study, with only single neurons showing a pTau signal in the dolphin with the most APs (ID 653); 4) in Vacher and colleagues , AT8 and AT180 immunoreactivity did not correlate, while in ID 653 these two antibodies appeared to colocalize; 5) we observed multifocal glial Aβ-42 immunoreactivity, at times aggregated in gliotic foci surrounding protozoan cysts, while this type of signal is not mentioned in the other study. With regard to intraneuronal Aβ-42 immunoreactivity, studies on human brains reveal that this is not a reliable predictor of NDDs. Indeed, it is more often found in brain regions less susceptible to AD-like pathology, secreted by α- and β-secretases as a product of physiological cell metabolism, which is interwoven with that of APOE . In people with Down syndrome and AD, neurons display reduced intraneuronal Aβ, thought to be the result of a shift in APP processing from amino-terminally truncated intraneuronal Aβ to extracellular secretion of Aβ40/42 in AD patients .
This is reflected in two ways in our study: ID 653 exhibited abundant APs but no neuronal Aβ-42 , and, qualitatively, a higher IS was often observed in neurons of the gyral crests compared to sulcal neurons, which inversely correlates with the tendency of APs to be distributed towards the sulci or sides of the gyri and less so in the crests. Another reason for low or absent intraneuronal Aβ expression (and that of other antigens assessed by immunohistochemistry) can be fixation- and storage-related artifacts and loss of antigenicity. For this reason, a Histoscore comparison between brain specimens stored over many years was undertaken, and no significant differences were observed for Aβ-42 in dolphin brains stored mainly as formalin-fixed, paraffin-embedded tissue for > 10, > 5, and < 5 years. Due to the insufficient number of dolphins immunoreactive against pTau, this parameter could not be assessed. In this regard, performing cetacean necropsies is often challenging due to time constraints and the location of strandings, which makes it difficult, if not impossible, to keep the central nervous system on ice, as would be desirable for good tissue preservation. Moreover, these challenges are compounded by the varying degrees of post-mortem autolysis commonly observed in stranded cetaceans, which can affect the rate and gradient of formalin tissue penetration, as well as antigen immunoreactivity. Immunoreactivity is further influenced by the duration of tissue fixation. In our study, these factors, together with the transient, temperature-sensitive nature of Tau phosphorylation, likely contributed to the observed low pTau immunoreactivity. Neuronal intranuclear Aβ-42 reactivity has been observed in several cetacean species , although the H31L21 Aβ antibody was shown to cross-react with proteins corresponding to the molecular weight of APP , and does not show as much affinity for plaques as the mOC64 clone used in this study and in that by Vacher and colleagues .
The significance of the intranuclear signal is not clear; however, there is evidence of its role as a regulator of gene transcription , and some authors hypothesize a potential neuroprotective function against cellular stress such as hypoxia . In our study, nuclear Aβ-42 by itself was observed in two aged striped dolphins, of which one had CeMV and T. gondii infections in the brain. More often, a combination of nuclear and cytoplasmic Aβ-42 was seen, sometimes in neurons with satellitosis. We consider this insufficient evidence to interpret intranuclear Aβ-42 function in cetaceans, but future studies should continue noting such immunoreactivity patterns in conjunction with morphological and molecular pathology to enable a more complete comparative picture. Moreover, as was already the case in the study of other immunohistochemical biomarkers of neuropathological lesions in cetacean brains , bottlenose dolphin calves have repeatedly displayed significant differences in protein expression, including a higher cytoplasmic Aβ-42 Histoscore in the present study. This argues for the inclusion of various age groups and sexes in future assessments of biomarkers generally associated with neurodegenerative processes, and for keeping an open mind towards group-specific baselines. It is valuable to use the same antibodies and compare geographically distinct populations to refine our ability to distinguish between physiological baselines and pathological deviations of NDD-related proteins. In doing so, it is important to consider that extant cetaceans are the product of millions of years of evolution in adaptation to aquatic life, separate from that of primates; thus, their baselines may deviate greatly from those of humans and other mammals. We began by looking at cetaceans as potential models for human NDDs but discovered that they likely have their own pathological patterns meriting thorough investigation.
At this point in the neuroscientific exploration of cetacean brains, human neuropathological syndromes like AD, Parkinson's disease, PART, and others are compelling bridges in comparative neuropathology that can help to direct systematic efforts of marine mammal research.

S1 Fig. Congo red stain. A) Aβ-42 positive control dog and B) ID 653. Inset in (B) shows the Congo red reaction to a β-sheet-structured protein around capillaries. (TIF)

S2 Fig. Glial immunoreactivity in the brains of investigated cetaceans. A) Multifocal immunoreactivity of astrocytes in glial nodules in the white matter of a Toxoplasma gondii-infected striped dolphin (Sc26362) using a monoclonal GFAP antibody made in mouse (Mob199-05). Magnification: 100x. B) Multifocal/coalescing astrogliosis in the grey matter of striped dolphin Sc95661 using a polyclonal GFAP antibody made in rabbit. Magnification: 200x. C) Iba-1-immunoreactive microglia in ID 598 with mostly ramified morphology; few amoeboid microglia present. Magnification: 200x. (TIF)

S3 Fig. Amyloid-β plaques in dolphins with bacterial and parasitic infections. Group comparisons of perineuronal Aβ-42 Histoscore results (y-axis) relative to sex (color coding) and presence of bacterial (A, C) and parasitic (B, D) infections (x-axis) of the dolphins, considering the total averages of 5 HPFs including white matter (A, B) or just grey matter (C, D). The box plots are visual aids to give an overview of the values obtained for each group. Statistical comparisons were performed on infection and sex variables separately, and sex differences were not assessed within infection groups due to low sample sizes. P values displayed are those of the infection-status comparisons. (TIF)

S1 Table. Amyloid-β Histoscore results of all the cases analyzed. (XLSX)

S2 Table. P values. (XLSX)

S1 File. Full report of CLUSTALW results. (DOCX)
Childbirth Experiences and Challenges for Women with Sensory Disabilities: A Systematic Review of Delivery Methods and Healthcare Barriers | 73feef3d-6277-435e-9711-6e210c11bc43 | 11835229 | Surgical Procedures, Operative[mh] | Women with sensory disabilities, including hearing loss and visual impairments, represent a significant portion of the global population. According to estimates from the World Health Organization (WHO), this demographic comprises over 5% of the world’s population, equating to approximately 430 million individuals. This figure is projected to exceed 700 million by 2050, reflecting the growing importance of addressing the unique challenges faced by this group in various aspects of healthcare, including childbirth . Despite advancements in healthcare, women with disabilities continue to face numerous medical and socio-economic challenges that impact their pregnancy and childbirth experiences. Medically, these challenges are often linked to the nature of their disabilities and associated physiological issues. Socio-economically, barriers include limited access to education, poverty, discrimination, and a lack of inclusion within healthcare systems, all of which contribute to higher risks during pregnancy, childbirth, and the postpartum period . To address these inequities, the WHO has called for the establishment of an inclusive healthcare framework that aims to ensure that women with disabilities receive equitable and adequate care, thereby safeguarding both the mothers and their newborns from adverse health outcomes and social discrimination . A notable concern in maternal care for women with sensory disabilities is the increased likelihood of caesarean section deliveries. Caesarean sections, while often medically necessary, are associated with longer recovery times and potential negative psychological effects for the mother . 
Despite these risks, the global rate of caesarean deliveries continues to rise, currently accounting for 21% of all births worldwide, with projections suggesting that this figure could approach 29% by 2030 . This trend raises critical questions about whether women with sensory disabilities are disproportionately subjected to interventional deliveries, such as caesarean sections, and whether they experience limitations in exercising autonomy over their childbirth choices . This article aims to systematically review the existing literature on childbirth experiences of women with sensory disabilities, specifically focussing on their rates of caesarean section and other interventional birth methods. Additionally, it seeks to identify the healthcare barriers these women face and to offer recommendations to improve maternal care services. This review will contribute to ongoing discussions on reproductive rights, equality in healthcare, and the mental health impacts of childbirth on women with sensory disabilities.

Search strategy

A comprehensive search strategy was employed to identify relevant literature for this systematic review, focussing on childbirth experiences and healthcare barriers for women with sensory disabilities. The search was conducted across several key scientific databases, including PubMed/Medline, Scopus, BioMed Central, and the Cochrane Library. These databases were selected for their extensive coverage of medical and healthcare research. Initially, a broad search was performed using a wide array of relevant terms. This strategy was subsequently refined, narrowing the scope to two specific search algorithms. The terminology included keywords related to midwifery, childbirth, pregnancy, and women with sensory disabilities, such as deaf or blind women.
The search adhered to the guidelines of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement, ensuring a transparent and systematic approach to the literature review and source citation . The two search algorithms employed in this study are detailed in , which outlines the database, search terms, and the corresponding research questions addressed by each algorithm. This systematic review was registered with the International Prospective Register of Systematic Reviews (PROSPERO) under the registration number CRD42024593330.

Criteria for inclusion and exclusion

The inclusion and exclusion criteria for this review were guided by the PICOST framework , a tool that ensures a structured and transparent selection process. The criteria were applied as follows: (a) Population: The studies included were focussed on women with sensory disabilities who were pregnant or had given birth; (b) Intervention: The focus was on the types of childbirth experienced by women with sensory disabilities; (c) Comparison: Comparative analysis was conducted on the types of childbirth for women with sensory disabilities versus those without such disabilities; (d) Outcome: The primary outcomes included the method of delivery and associated healthcare barriers; (e) Study: Only primary research studies (quantitative and qualitative) were included; articles not available in full text or written in languages other than English were excluded; and (f) Timeliness: Studies published between January 1, 2010, and April 24, 2024, were considered.

PRISMA process

The initial search across the databases identified a total of 270 entries. Following the removal of five duplicate entries, 265 unique records remained for further evaluation. Titles and abstracts were screened to exclude studies not directly related to the research objective, focussing specifically on childbirth experiences for women with sensory disabilities (deaf, blind, or otherwise).
After the screening process, 254 articles were excluded, leaving 11 articles for further review. Of these, three articles were inaccessible, and one was excluded for being in a language other than English. This resulted in a final sample of seven articles. Additionally, three relevant studies were added, bringing the total number of articles analysed to 10, comprising 8 quantitative studies and 2 qualitative studies. The PRISMA 2020 flowchart, presented in , illustrates the process of study selection, from initial identification to final inclusion.

Quality assessment

C.R.A. and D.S. independently assessed the quality of the included studies. No discrepancies were found between the evaluators, thus negating the need for a third-party arbitrator. The quality of the selected studies was assessed using the Caldwell framework , which is suitable for both quantitative and qualitative research.

Data extraction

The following data were extracted for this systematic review: (a) First author; (b) Title of each study; (c) Year of publication; (d) Journal: The academic journal in which each study was published; (e) Country: The country where the research was conducted or where the data were gathered; (f) Type of research: The type of research methodology employed, such as quantitative, qualitative, retrospective cohort study, or secondary quantitative analysis; (g) Sample size: The total number of participants in each study; (h) Targeted sample size: The number of women with disabilities or specific subgroups within the total sample; (i) Measurement: The tools and methods used to collect data, such as questionnaires, diagnostic codes, in-depth interviews, and administrative hospital discharge data; (j) Control group: The size and characteristics of the comparison group of women without disabilities; (k) Measured outcome: The main outcomes of interest, such as the assessment of medical outcomes, childbirth experiences, caesarean section rates, postpartum care
satisfaction, and hospital readmissions; (l) Key findings; (m) Specific percentages for labour: The breakdown of delivery methods, such as the percentage of caesarean sections or vaginal deliveries; (n) Follow-up; and (o) Limitations of the study.
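The selection flow reported in the PRISMA process reduces to simple bookkeeping; a short sketch (all counts taken from the text above) confirms that the stages are internally consistent:

```python
# PRISMA-style selection flow, using the counts reported in the text.
identified = 270
after_dedup = identified - 5          # 5 duplicate entries removed
after_screening = after_dedup - 254   # titles/abstracts excluded
retrieved = after_screening - 3 - 1   # 3 inaccessible, 1 non-English
included = retrieved + 3              # 3 studies added from other sources

print(after_dedup, after_screening, retrieved, included)  # 265 11 7 10
```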
This systematic review analysed 10 scientific articles published between January 1, 2010, and April 24, 2024, focussing on the experiences of childbirth and postpartum care for women with disabilities . Specifically, the review examined three articles that addressed the childbirth experiences of women with sensory impairments (deaf and blind), seven articles focussed exclusively on deaf women, and three articles examined the experiences of blind women.
These studies were conducted across a diverse set of countries, including the United Kingdom, Brazil, the United States, England, Ethiopia, and Canada. Each article was carefully reviewed, and key details were systematically recorded in a Microsoft Excel spreadsheet, adhering to the methodological framework described in the theoretical background. The studies were organised chronologically by publication date, providing a structured approach to analysing the evolution of research on this topic .

Caesarean section rates

Across all studies, women with disabilities demonstrated an increased likelihood of caesarean section deliveries compared to women without disabilities. This trend was consistent irrespective of the nature of the disability, whether physical, sensory, or intellectual. For instance, in the study by Darney et al. , women with disabilities had a caesarean section rate of 32.7%, nearly double that of women without disabilities (16.3%). In a study conducted by Redshaw et al. , 24.4% of women with sensory disabilities underwent caesarean sections, while Malouf et al. reported that women with physical disabilities had a higher incidence of both planned (18.4%) and emergency (18.6%) caesarean sections compared to their non-disabled counterparts (11.0% and 14.4%, respectively). This pattern persisted even in smaller-scale studies, such as Werner et al.'s study of blind women in Brazil, where all six participants delivered via caesarean section. Similarly, in the qualitative study by Wudneh et al. focussing on obstetric violence in Ethiopia, 72.72% of women with disabilities delivered via caesarean section. Studies focussing on deaf women, such as those by Schiff et al. and Mitra et al. , also reported increased caesarean rates (29.9% and 35.4%, respectively), though the increase was not statistically significant after adjustment in some instances .
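The "nearly double" comparison above is a simple risk ratio; a sketch using the percentages reported by the cited studies (the computation itself is illustrative and not taken from the papers):

```python
def risk_ratio(exposed_pct, unexposed_pct):
    """Ratio of caesarean rates (given as percentages) between two groups."""
    return exposed_pct / unexposed_pct

# Darney et al.: 32.7% vs 16.3% -> roughly a twofold higher rate.
rr_darney = risk_ratio(32.7, 16.3)
# Malouf et al.: planned (18.4% vs 11.0%) and emergency (18.6% vs 14.4%).
rr_planned = risk_ratio(18.4, 11.0)
rr_emergency = risk_ratio(18.6, 14.4)

print(round(rr_darney, 2), round(rr_planned, 2), round(rr_emergency, 2))
```

Note that raw rate ratios like these are unadjusted; as the text states, some of the reported increases lost statistical significance after adjustment.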
Vaginal deliveries

Despite the higher rates of caesarean sections, vaginal deliveries remained common among women with disabilities in several studies. Redshaw et al. found that women with sensory disabilities had a spontaneous vaginal delivery rate of 54.3%, comparable to that of women without disabilities. Mitra et al. reported that 67.9% of deaf women delivered vaginally, which was not significantly different from the 71.3% rate observed in women without disabilities.

Maternal–fetal attachment and postpartum experiences

The qualitative study by Werner et al. provided insight into maternal–fetal attachment among blind women using three-dimensional ultrasound and magnetic resonance imaging (MRI) data. This study highlighted that the connection between the mother and fetus was influenced by the use of physical models, and the findings suggested that caesarean deliveries were often chosen due to the structure of private maternity clinics rather than medical necessity. Similarly, Tarasoff et al. highlighted significant gaps in postpartum care for women with disabilities, with many participants reporting inadequate physical recovery care and limited accommodations for their disabilities.

Healthcare access and satisfaction

Several studies noted disparities in healthcare access and satisfaction among women with disabilities. Redshaw et al. found that while women with disabilities had similar access to healthcare during the early stages of pregnancy, they underwent more prenatal checkups and ultrasounds and had longer hospital stays. However, they were less likely to breastfeed and expressed greater dissatisfaction with communication and support during childbirth and postpartum care. Similar concerns were echoed by Wudneh et al. , who reported widespread experiences of obstetric violence among disabled women, including physical and verbal abuse, neglect, and breaches of privacy.

Postpartum hospital readmissions

McKee et al.
explored postpartum hospital readmissions among deaf women in Massachusetts. The study found that deaf women had a significantly higher risk of hospital readmissions during all postpartum periods than women without disabilities. Notably, deaf women had nearly seven times the risk of repeated hospital admissions within 43–90 days postpartum and nearly four times the risk within 91–365 days postpartum.

Limitations of the studies
The studies reviewed presented various limitations, including reliance on self-reported data, potential recall bias, small sample sizes, and the absence of long-term follow-up. Several studies, such as those by Redshaw et al. and Malouf et al., relied on self-reported data, which may have affected the accuracy of findings due to recall bias. The small sample sizes in studies like Werner et al. and Wudneh et al. limited the generalisability of their findings, as did the qualitative nature of some studies. Additionally, many studies did not re-contact participants for follow-up, limiting the ability to assess long-term outcomes for women with disabilities after childbirth.

Obstetric violence and discrimination
Wudneh et al. provided important qualitative data on the experiences of obstetric violence among women with disabilities in Ethiopia. The study highlighted that most participants experienced physical and verbal abuse during childbirth, neglect, and discrimination. Similarly, Tarasoff et al. noted that many participants felt fear of being judged and faced discrimination during their postpartum care, underscoring the need for more inclusive and supportive maternity care for women with disabilities.

Women with disabilities, across various studies, were more likely to undergo caesarean sections, experience longer hospital stays, and report dissatisfaction with communication and support during childbirth. Additionally, some studies identified specific risks for deaf or hard-of-hearing women, including higher rates of postpartum hospital readmissions and adverse birth outcomes. Despite these disparities, many women with disabilities were able to deliver vaginally, indicating that caesarean sections were not universally necessary for this population. However, the findings across studies consistently highlight the need for better communication, improved support, and tailored accommodations to enhance the childbirth and postpartum experiences of women with disabilities. Another literature review examined the childbirth experiences of women with physical disabilities, focussing on barriers such as healthcare professionals’ lack of knowledge, negative attitudes, and inaccessible facilities. The review highlighted the challenges in delivery methods, pain management, and communication, emphasising the need for improved clinician training, better collaboration, and more inclusive care environments to ensure positive outcomes. Compared to our review on women with sensory disabilities, both studies highlight similar healthcare system deficiencies, but our review found higher caesarean rates, possibly due to unique communication barriers.
Another review focussed on the perinatal care experiences of women with vision disorders, highlighting the lack of healthcare staff training and inadequate facilities tailored to these women’s needs. The review identified barriers such as dissatisfaction with the quality of care, unsuitable antenatal classes, and the stigmatisation of motherhood among women with visual impairments. The authors emphasised the need for specialised training for healthcare providers to better accommodate these women’s functional needs and improve their maternity care experience. In comparison, our review on women with sensory disabilities (deaf or blind) also found higher rates of caesarean sections, but these were often attributed to communication gaps and healthcare providers’ assumptions about the women’s ability to give birth vaginally. While both reviews emphasise inadequate provider training and systemic healthcare barriers, our review highlights a broader issue of communication challenges, particularly for deaf women, which often led to misunderstandings about care options and outcomes. Additionally, our review suggests a more pronounced lack of informed decision-making due to these communication difficulties, a factor less emphasised in the vision disorder review. Both reviews stress the importance of tailored, inclusive care, but the underlying causes of inadequate care differ slightly based on the type of sensory impairment. Discrimination against these women is a pervasive issue, with alarming instances of human rights violations, such as cases where deaf women were coerced into sterilisation under the pretext of undergoing a caesarean section. Such actions are not only reprehensible but also violate bodily autonomy, eroding trust between patients and healthcare providers. Ensuring that healthcare is delivered in a manner that respects patient autonomy and informed consent is critical to maintaining ethical standards and safeguarding the dignity of all individuals.
Communication barriers significantly exacerbate the difficulties faced by deaf and blind women during their interactions with the healthcare system. These barriers often hinder their ability to access adequate prenatal and postnatal care, and they may struggle to communicate critical concerns regarding their children’s development. The inability to communicate effectively with healthcare providers limits their participation in decision-making processes, compromising the quality of care they receive. To address these issues, healthcare systems must prioritise inclusivity and ensure that the unique needs of women with sensory disabilities are met. In line with the Sustainable Development Goals (SDGs) aimed at promoting inclusion and reducing inequalities, it is imperative to develop healthcare systems that accommodate the diverse needs of patients with disabilities. This can be achieved through several strategic actions. One key solution is investing in assistive communication technologies and appointing specialised sign language interpreters with knowledge of medical terminology to bridge the communication gap. Additionally, healthcare professionals should receive specialised training in effective communication methods tailored to individuals with hearing and vision impairments. Awareness and understanding of disability among healthcare professionals are crucial for improving care delivery. Disabilities are often misunderstood, and individuals with disabilities may experience their condition differently from how those without disabilities perceive it. Certified training programs and awareness seminars can help medical staff better understand the specific needs of women with disabilities, fostering a more inclusive and compassionate healthcare environment. Furthermore, the creation of support networks dedicated to mothers with disabilities during pregnancy, childbirth, and postpartum care is essential to ensuring comprehensive care.
Personalising the childbirth experience is another promising approach. For example, creating delivery rooms designed with accessibility in mind—such as installing support rails and guiding paths for blind women—can greatly improve the safety and comfort of the birthing environment. Technological innovations that facilitate communication and information sharing can also enhance the overall care experience for women with sensory disabilities. Ethically, healthcare providers are obligated to respect patients’ preferences and decisions. Incorporating practices that align with women’s choices is fundamental to ensuring their autonomy and fostering a positive childbirth experience. This principle is embedded in the broader concept of informed decision-making and respect for individual autonomy. Specialised medical staff, particularly midwives trained to provide tailored care for women with disabilities, can play a vital role in creating a safe and supportive environment for both mothers and their families.

This systematic review has several limitations that affect the breadth and depth of its findings. First, many of the included studies had small sample sizes, particularly those focussing on women with sensory disabilities, which limits the generalisability of the results. These smaller studies may not accurately reflect the experiences of the broader population of women with sensory impairments. Additionally, most of the studies were observational in nature, and many relied on self-reported data, introducing the possibility of recall bias. The lack of randomised controlled trials or more robust longitudinal studies further weakens the ability to establish causal relationships between sensory disabilities and specific childbirth outcomes. Another limitation is the geographical and cultural variation in the studies.
The research was conducted across different countries, each with distinct healthcare systems, policies, and cultural attitudes toward women with disabilities, making it challenging to generalise the findings universally. Furthermore, long-term follow-up data were lacking in most studies, limiting the understanding of postnatal care and long-term maternal and child health outcomes. Issues such as obstetric violence and discrimination, although mentioned in some studies, were often underreported or not fully explored, suggesting a possible underestimation of their prevalence. Finally, the exclusion of non-English language studies may have limited the scope of the review by omitting potentially relevant findings from other regions. These limitations highlight the need for more comprehensive research with larger, more diverse populations, improved study designs, and a greater focus on postnatal care and long-term outcomes.

In conclusion, this systematic review revealed several critical insights into the childbirth experiences of women with sensory disabilities, particularly deaf and blind women. The analysis found a higher prevalence of caesarean deliveries among this demographic, influenced by medical necessity, biases from healthcare professionals, or personal preferences. Despite this, many women with disabilities are fully capable of having successful vaginal deliveries, underscoring the importance of promoting information about the benefits of natural childbirth. Additionally, the review highlighted significant deficiencies within the healthcare system, including inadequate communication, lack of appropriate infrastructure, and insufficient information tailored to the needs of women with sensory disabilities. To address these issues, healthcare systems should adopt a framework that fosters inclusion, equity, and respect for patient autonomy.
Bridging the communication gap between patients and healthcare providers, as well as providing proper support and resources, will enhance the quality of care and improve trust in the patient–provider relationship. Ultimately, by ensuring that women with sensory disabilities receive dignified and equitable care, we can reduce healthcare disparities and promote better outcomes for both mothers and their newborns. Further research and the development of inclusive policies are essential steps toward achieving these goals. |
Hildegard of Bingen’s Embryology: Enabling Women’s Reproductive Power without Seed | 3b7196da-ff29-497b-ab20-41bd68ee81a3 | 11855046 | Anatomy[mh] | How are humans born? How is a new baby conceived and brought into the world? Human generation and embryology have long been critical in the context of medieval and modern philosophy and theology or science. However, it is only relatively recently that the academic field has started to recognize the close relations between the religious/philosophical and scientific/medical understandings of human generation, not as antitheses or rivals but as collaborators or parallel streams of thought. Embryology presents an important part of the understanding of how human beings conceive and reproduce, one that brings together different intellectual fields . This study has historically been deeply connected to philosophical and spiritual frameworks, extending beyond the boundaries of medical science . Such aspects are often neglected in contemporary discourse on evolutionary embryology. On the other hand, when women’s roles in the history of medicine, especially in gynecology, started to be investigated in the late twentieth century, scholars discovered that women engaged in producing medical knowledge as well. In particular, the German nun Hildegard of Bingen (1098–1179), whose medical work was often described as isolated or disconnected from the majority of the medical theories of her time, began to be newly appreciated as someone who was actively engaged with not only theological but also medical theories. Although Hildegard of Bingen and other women have begun to receive recognition in the history of medicine, this attention has often been narrowly focused on their contributions to gynecology rather than embryology. 
While gynecology is undoubtedly important for health and life, the works of these women were often dismissed as mere practical manuals lacking engagement with more philosophical or theoretical debates, such as those surrounding embryology. Since Monica Green first investigated women’s substantial contribution to medieval gynecology, scholarship has expanded to explore women’s roles in science and medicine more broadly. However, their contributions to embryology remain underexamined, particularly in relation to the theoretical dimensions of medicine and science. Hildegard of Bingen, for example, was an active participant in the medieval peak of Western medicine, a period marked by the transmission and reinterpretation of ancient and Arabic embryological theories by Western scholars. Yet, the history of embryology often overlooks the contributions of women, a marginalization exacerbated by the rise of male authority in gynecology through licensing systems and male-dominated guilds from the twelfth century onward. This marginalization is exemplified by Franklin, a foundational work in the field, which rarely mentions women apart from Hildegard of Bingen. Even within this limited acknowledgment, Hildegard’s contributions are relegated to a section subtitled ‘Visions,’ while male theorists like Albertus Magnus are afforded entire chapters on their scientific contributions. Such framing underscores the need for greater academic attention to women’s contributions to embryology as part of the broader theoretical history of medicine and science. Hildegard of Bingen (1098–1179) was a German Benedictine nun who was a prolific writer, uncommon for women of her period. She was known to receive visions from God, who was the source of her knowledge according to her and her followers. Although Hildegard referred to herself as simple and “uneducated” in her books and letters, she must have received a certain level of education.
She read her contemporaries’ books and composed spiritual and scholarly works. Along with her famous books of visions such as Scivias, she composed two books on natural science and medicine, Physica and Cause et cure, while her understanding of the human body is also present in her writings in other fields, such as music and drama, where she highlights women’s important roles in God’s creation and salvation. Hildegard should not be the only woman in embryology; however, except for her manuscripts, it is hard to find a surviving book wholly devoted to medicine that has been historically proven to have been written by a woman. At the same time, among women and men, it is hard to find another theologian whose expertise extended to embryology. Recent feminist scholars, such as Monica Green and Victoria Sweet, have started to appreciate Hildegard’s knowledge of human generation. As they argue, Hildegard of Bingen established herself in the history of science by reading translations of classical medical and philosophical knowledge and the medical and philosophical works of her contemporaries. Hildegard was familiar with the most up-to-date medical and theological theories of her time, and she was at the center of the historical debates, although she was not entirely free from the conventional physiology holding that men were stronger and women weaker. Hildegard’s medical and scientific perspectives require further examination through the lens of embryology, as scholarship has predominantly focused on her contributions to gynecology and medical practice. She synthesizes embryological understanding through an interdisciplinary lens, integrating contemporary physiological theories, classical medical knowledge, and Christian theological perspectives into a comprehensive natural philosophical framework.
Limiting Hildegard to gynecology, largely due to her being female, is to limit her vast knowledge of human generation to diagnoses and treatments, not appreciating her foundational concepts of human origins and generation along with her ideas of pathology. And even further, Hildegard’s writings suggest that women were not simply practicing midwifery but were also developing their own version of embryology as foundational knowledge. For Hildegard, just as for Aristotle, Galen, and Albertus, it is necessary to understand how humans are generated to understand what is needed to help their generation and improve their reproductive health. And as those male theorists all did, Hildegard also combined her theology and embryology to link human origins and ends. In particular, her semen theory, even if not fully discussed as a separate topic, represents her deep understanding of the human condition, developed in close connection with natural science and with the religious background that her theory shares with those of male embryologists. Hildegard stands out among female medical experts, and even among most male scholars, for her ability to develop and elaborate embryological knowledge during the Middle Ages. Hildegard’s Cause et cure, her medical book composed in the twelfth century, devotes a significant part of the earlier chapters to human generation. Unlike her spiritual books, Hildegard of Bingen’s Cause et cure gained scholarly attention in the mid-20th century amid debates over its authorship. Now widely attributed to her, the text exemplifies her distinctive integration of medical knowledge, natural philosophy, and spiritual insight. Her contributions to humoral theory, practical medicine, and the synthesis of diverse healing traditions establish her as a pivotal figure in medieval medical history. The book begins with the origin of the universe to contextualize the emergence of the first human beings.
According to Hildegard’s views of the microcosm and macrocosm, which show her desire to encompass the logic applied to every creature, the human being’s symptoms can be understood and treatments identified only through understanding God’s creation. She begins directly from the creation of the world, giving an account aligned with Genesis in the Hebrew Scripture. In Cause et cure , the creation of the world is followed by the creation of the angels, the fall of Lucifer, and the creation of all other natural objects and phenomena in Book I. Book II starts with the fall of Adam, which is the main reason why humankind has a need for reproduction after becoming mortal. Book II makes up the largest part of Cause et cure , covering various topics in the microcosm and the macrocosm, from animals to diseases. The chapters of Books III, IV, and V discuss various diseases and bodily symptoms. And Book VI explains conception in relation to astronomy. Across her various medical treatises, however, the topic that Hildegard returned to most frequently was human reproduction. For her, diseases and reproduction shared the same cause: the original sin of the first humans disturbed the perfect condition of the body and mind. Therefore, the solution was the same, to find the right balance between humors and elements in the body, returning to the prelapsarian state. Therefore, Hildegard repeatedly returned to the foundational question of how human beings were created and what their creation would reveal about the current human state, including reproduction. Hildegard’s semen theory was the product of combining her knowledge of classical embryology and medieval theology with her unique appreciation of the woman’s sexual/reproductive body. Rather than the gender hierarchy of the Aristotelian and Galenic theories, this female saint emphasized the complementary roles of both sexes, which were also represented in her understanding of seed. 
Examining Hildegard of Bingen’s embryology is valuable not simply because she provides a rare case of female perspective. Possibly grounded on her experience as a woman witnessing other women’s engagement with and refusal of reproductive/sexual life, she developed a unique embryology not dependent on the man’s semen. Rather, she saw the importance of uiriditas , the greening power existing in women, and appreciated complementary roles in generation by both sexes. Her new embryological theory solved the dilemma perpetually posed by the earlier embryologies of Aristotle and Galen, which highlighted the dominance of the man’s semen while failing to explain the woman’s contribution to human generation. Granted that it is not so uncommon to find the microcosm and macrocosm in the Middle Ages, Hildegard of Bingen’s medical-theological writings take a particular place in the history of medicine. She combined ancient Greco-Roman medical theory, the medieval Christian story of creation and salvation, and German folk medicine. At the same time, unlike male theologians and medical theorists, Hildegard dared to represent the positive dimensions and meanings of the female body. Hildegard did not do this simply because she was a woman, but it is related to the fact that she was a woman, more particularly a female theologian and medical expert. Her medical book, Cause et cure , is a collaboration between her theology and medicine. For Hildegard, how human beings were created by God and how they could maintain their bodies and souls in health could not be separated but were interconnected. If we want to understand why human beings fall into sickness, it is necessary to acknowledge how God created human beings and how human beings degenerated due to original sin. Her embryological theories significantly contributed to her broader medical framework, particularly in understanding the human condition. 
However, as previously noted, they have not been afforded serious consideration within the history of embryology, despite her broader recognition in the fields of general medicine and gynecology. Scholarly interest in Hildegard of Bingen’s medicine has revived since the full manuscript of Cause et cure was rediscovered in the middle of the twentieth century and her contribution was re-evaluated, recognizing her as a woman specialized in physiology and natural science as well as theology. To emphasize the gender-complementary aspects of Hildegard’s embryology, it is essential to contextualize her work within the dominant embryological theories of her time. Medieval embryology was largely divided into Aristotelian and Galenic theories, which disagreed on the number of seeds. While Galen was frequently referred to as an authoritative figure in medieval medical and scientific manuscripts in the Middle Ages, Aristotle’s embryology became influential mostly through the Arabic translations that introduced ancient theories on this topic to Europe. Aristotle (384–322 BCE), who contributed to building the Western history of medicine and medical theory, claimed in his one-seed theory that it was only men who contained seed or semen. For him, semen did not simply make conception possible but also had the power to generate human beings. On the other hand, women, who did not have seed, only provided the fetus with material, because they lacked semen and generative power. In this hylomorphism, according to which matter and form in unity constitute the material object, the man’s body provided the form for a baby, and the woman’s body only provided the matter. In his famous Generation of Animals, Aristotle clearly stated that women did not have semen. Now since what comes to be from females is as the semen from males, and it is not possible for two spermatic secretions to come to be at once, it is plain that the female does not contribute semen to generation.
For if (the female) had semen, it would not have menstrual fluid. Now because that one is present [in the female], the other one (semen) is not. Instead of semen, Aristotle argued that women had menstrual blood, which might contribute to conception by providing materials. In this one-seed theory, the woman’s menstrual blood and the man’s semen are coagulated to bring about conception. However, this menstrual blood lacks the same generative power as the man’s semen, although it can at least fertilize eggs and feed embryos. This understanding that only the seed contained the generative power, the more fundamental ability to generate a new human being, and that this semen was only possessed by men placed the male in the more active role in reproduction. But the female, as female, is passive, and the male, as male, is active, and the principle of the movement comes from him. . . . It is plain then that it is not necessary that anything at all should come away from the male, and if anything does come away it does not follow that this gives rise to the embryo as being in the embryo, but only as that which imparts the motion and as the form; . . . . On the other hand, women were ascribed a passive role in reproduction, although this Aristotelian theory failed to explain why children resemble their mothers, too. Aristotle’s successors embraced the possibility that women might produce semen-like substances, but they still kept Aristotle’s stance that women did not contribute to the formation of the embryo. In other words, they maintained the view that a woman’s menses provided the material for the fetus, while the man’s semen imparted its essential form. Therefore, even if women had a semen-like liquid, it was not comparable to the man’s sperm. This is more obvious in Aristotle’s use of the cheese analogy in his embryology, which Hildegard of Bingen later adapted in a way that more explicitly acknowledged women’s contributions.
Evidently, Aristotle saw semen as having the more fundamental power to form a fetus. The man’s semen is compared to rennet in milk, which initiates coagulation to make cheese. Rennet and the semen had “vital heat.” The milk is the matter that is acted upon, like catamenia or the woman’s menstrual fluids. While both are certainly needed for making cheese, the seed has the fundamental power to form and impart movement to the embryo by separating liquid and solidified materials; the latter becomes the fetus. The roles of men’s and women’s sexual fluids were distinct in Aristotelian theory. The woman’s fluid was more passive, while the man’s was more active. Elsewhere, Aristotle once again emphasized the vitality of semen, which menstrual blood did not have. Because women “lacked” semen, they did not need to ejaculate for conception; therefore, sexual pleasure was not required for women, unlike men. Although the Aristotelian one-seed theory was not accepted as widely as the Galenic two-seed theory, it certainly had a significant impact on the philosophical and gynecological fields. Hildegard’s embryology aligns with the Aristotelian one-seed model of human generation, yet she emphasizes maternal contributions by elaborating on fertilization, pregnancy, and childbirth processes. Galenic theory, which did not agree with the Aristotelian one-seed theory, allowed a female seed to women. Galen (129–216 CE) agreed with Aristotle that semen had the generative power that was essential to make an embryo. Unlike Aristotle, however, Galen argued that women had semen like men. Now the fact that the female animal has semen must be accepted on the evidence of the senses, as we said earlier, and the existence of what is clearly seen must not be overturned by argument. 
But we must try to find the reason why, when the female too produces semen, the male animal nevertheless came into being, or why, when the male had come into being, the female’s semen was also preserved; for it was better for it to have a residue that contributed to the generation of the fetus. When Galen was developing his two-seed theory, he clearly acknowledged the Aristotelian theory of one seed, and he based his opposition to it on his observation and evidence. However, this does not mean that Galen appreciated the woman’s equal contribution to conception. For him, the woman’s semen could not be equivalent to the man’s. The woman’s semen was weaker and less complete compared to the man’s. Why do women have weaker semen? According to Galen, it was because the woman’s body was colder and wetter than the man’s body: “because the female is colder in krasis than the male.” The fact that women tend to have more fat also supported his hypothesis; Galen said that fat was more strongly associated with colder animals. This claim was further supported by the role of humors in gender differences. Women, having more fat, possessed colder bodies and humors, based on humoral composition. Galenic humoralism considers four humors essential to the human body: blood, (yellow) bile, black bile, and phlegm, which should be balanced to keep one’s health sound. These humors have different characteristics in terms of hot/cold and dry/wet qualities. According to this humoralism, the woman’s body lacked heat, which was considered the better quality. Therefore, men, whose bodies were believed to contain stronger heat, could produce better semen that contributed more to conception. At the same time, it was obvious to Galen that women did not have the same qualities in their semen as men do: “[w]ell, then, Aristotle was right in thinking the female less perfect than the male.”
Since women were colder, their bodies were inferior, and they failed to produce semen of the same quality: “the semen generated in them [female testes] must be scantier, colder, and wetter.” Like Aristotle, Galen saw making semen as a process of concoction that required heat. Since women lacked heat, their semen was no better than the prostatic liquid. Interestingly, Galen’s theory posited that ejaculation was necessary for both men and women, asserting that fertilization occurred only through the combination of two ejaculated seeds. Unlike the Aristotelian theory, which dismissed female ejaculation due to the belief that women lacked semen, Galen emphasized the importance of female ejaculation and the role of pleasure in triggering it. Also, Galen considered the female pleasure from intercourse to be evidence of the female seed. However, even for him, female sperm played more of a supporting role, not that of a main contributor like the male seed. On the other hand, at the anatomical level, Galen claimed to observe that women had testicles like men to produce semen—his interpretation of the ovaries in modern science—and their fallopian tubes might become empty after coitus. The difference was that women would ejaculate inside their uterus while men would do so outside their genitals. Nevertheless, the fact that women’s genitals were inside the body supported Galen’s belief in female inferiority. He saw the woman’s body as not as complete as the man’s, on the grounds that women had their genitals inside while the man’s came out externally by pneuma. Here, Galen’s famous analogy of the mole’s eyes appeared to explain the imperfect status of women’s genitals.
If ever it should lack the strength for the final act, it leaves unfinished the thing being made, as is seen, for instance, in the whole race of moles; their eyes were sketched internally but were unable to emerge to the outside, their nature having lost the strength for this, so that it did not complete the work it had proposed to do. Like Aristotle, Galen also observed that the main role of the woman’s sexual body was to provide a fetus with matter, such as nourishment, and a place, which was rather similar to the Aristotelian embryology. It was Galen’s two-seed theory which was more widely adopted in medieval medicine, although interestingly, Hildegard of Bingen’s theory has more similarities to the Aristotelian one-seed theory, as will be discussed in the next section. Greatly influenced by Galen, medieval embryology continued to uphold the supportive position of the woman’s semen. For example, ‘Ali ibn al-’Abbās (d. 994) asserted that female semen was useful and necessary because it would liquefy and decrease in thermal intensity so that the man’s thick, heated semen could spread and reach the woman’s womb. Although the woman’s seed was “useful,” it was still believed by many medieval male theorists that the woman’s sexual fluid performed a secondary role. The woman’s seed was believed to contribute its thinness and frigidity—qualities typically regarded as inferior to thickness and heat—along with nourishment to the fetus. Whether or not women could produce semen or contribute generative power to the man’s semen, both theories shared the view that the woman’s sexual fluid was inferior to the man’s. Both were substantially based on the gender theory that men’s bodies possessed better qualities, and therefore, their contribution to reproduction had to be more crucial. Whether women were believed to produce semen or not, the focus was whether women would provide substantial qualities to the fetus.
In either case, the woman’s reproductive role was considered passive in contrast to the man’s active role, as well as supplementary to the man’s essential part. This background highlights Hildegard’s distinctive emphasis on the woman’s agency both before and after fertilization.
5.1. The Hildegardian One Seed Theory in Medicine: Women’s Primary Contribution to Generation
One of the most debated topics that Hildegard of Bingen engaged with throughout her lifetime was embryological theory. Hildegard must have known the classical views about semen, along with other theories that she was exposed to. Her semen theories, like the Greco-Roman ones, appear in her discussion of conception. Her medical book, Cause et cure, plays a significant part in explaining how human beings are conceived. However, how she might have contributed to the general history of medicine has often been neglected, and she has often been described as isolated from the major medical discourses of her time. Recent studies, however, have depicted her as at the center of medical discussion and theories, influencing the formation of medical knowledge in the Middle Ages. Hildegard’s theory of semen was similar to Aristotle’s one-seed theory in that she did not find semen in the woman’s body. Because man’s flesh was made from earth, his blood has semen of a strong and correct nature. A woman’s blood is also of a correct nature. Because she is weak and tender she does not have semen, but emits merely a tiny, watery foam, since she is not of both earth and flesh, as a male, but was taken only from the flesh of the male. Unlike Aristotle, Hildegard of Bingen did not see the generative power of the form only in male semen. As noted above, the Aristotelian one-seed theory failed to explain why children could resemble their mothers. Although Hildegard technically supported the one-seed theory, she did not confine generative power solely to the man’s seed.
Rather, she asserted that women made a significant contribution to conception, which accounts for the variation in forms and resemblances among offspring. For example, she saw different dynamic processes and different results shaping the fetus and baby, such as when the woman’s heat “overcomes the semen of man, so that the child is often formed with their appearance.” Strictly speaking, Hildegard argued for a one-seed theory, in which only men contain and ejaculate semen. However, upon closer examination of her theory, it becomes clear that she did not emphasize the absence of semen in women. Whether or not women could produce semen was not the focus in her embryology. Even if women lacked semen, they were fully needed to bring a new life into the world. Hildegard highlighted the woman’s role in conception by identifying various bodily elements, including female foam, as equal contributors to reproduction. For instance, Hildegard highly valued the woman’s ability to nullify the toxic nature of the man’s semen. Indeed, reproduction depended on the fact that women did not have semen, because men’s semen had degenerated so much that men’s bodies alone were not capable of making new life. For Hildegard, the human reproductive process is a smaller version of God’s creation. This process involves the four basic elements (fire, earth, water, and air), just like God’s creation. God sends the soul into the fetus so that the fetus can be divided and develop into a human form. And finally, the baby comes out of the mother’s body just like Eve was created from Adam’s body by God’s eternal power. If human generation was the repetition of God’s creation of the first human beings, it had to be free from conditions directly related to any defects, especially original sin. Semen was impacted by Adam’s transgression, but this degenerate seed did not exist in the woman’s body, giving hope to human generation because the woman’s body was deprived of semen.
Then, why did women not have semen, and why and how were they less impacted by the aftermath of the original transgression? Again, Hildegard’s embryology looks all the way back to God’s creation, underlining the close relationship between her theology and medical theory. When God created the first human beings, God used different materials to make them. Adam’s creation came straight from the natural elements in the form of mud. In contrast, Eve was created from Adam’s flesh, which gave her distinctive features such as softness and malleability. Traditionally in Christian theology and medicine, these female characteristics were often translated into the weakness and inferiority of women. However, Hildegard replaced these womanly defects with strengths, which must have been useful to her in claiming distinctive authority as God’s female messenger. At the same time, according to Hildegard of Bingen, the different creations of Adam and Eve differentiated the first couple’s reproductive bodies. As noted above, she argued that women did not have semen as men did. Adam ended up having semen because he was directly produced from earth, which is characteristically strong and rigid. Therefore, Adam’s body and mind were already strong and rigid from the moment of his creation, and his condition, which would be inherited by his male descendants, gave him semen. On the other hand, Eve, created out of her partner’s flesh, was as soft in her mind and body as his flesh, resulting in a semen-free condition that would be inherited by her female descendants as well. Nevertheless, this does not mean that Eve lacked a reproductive role in conception. Hildegard claimed that women’s foam contributed to making an embryo. The woman’s foam is something closer to our understanding of an egg. This “foam” was not as thick as semen, but it was certainly essential to bring a new life into the world.
In addition, Hildegard’s embryology addresses the defective nature of semen, a theme not found in the ancient male philosophers. For Hildegard, semen became deformed when Adam committed sin, which caused degeneration of his body and mind, and most importantly his semen. Eve, despite committing the sin first, could avoid the degeneration of semen because she did not have semen in any case. Adam’s body received the direct impact of sin due to its strength, unlike Eve’s soft and malleable body. Instead, Eve started to have flows, meaning menstruation. “In Adam’s transgression, the strength in the male’s genital member changed into a poisonous foam, and the female’s blood changed to a dangerous effusion.” Her “weak and fragile” body saved her from the poison of original sin. God created the human being, and all animals were subject to serving him; but when man transgressed God’s order, he was changed in mind and body. The purity of his blood changed to another type, so that instead of purity, it throws off the foam of semen. If the human had stayed in paradise, he would have continued in his unchangeable and perfect state. But these all changed after the transgression into another, bitter type. It was Adam’s semen that degenerated so much that human beings began to suffer disease and death. Now, the woman’s foam had to perform its reproductive duty by overcoming the man’s noxious semen. Therefore, Hildegard notes that sexual pleasure is necessary for conception, as it enables the couple to emit sexual fluids, the foam from women and semen from men, similar to the Galenic two-seed theory to some extent. The argument about whether sexual pleasure might be absolutely needed for fertilization is rather complicated in terms of the gendered contribution to reproduction. For Hildegard, sexual satisfaction made it possible for the couple to ejaculate the needed substances, whether it was the man’s semen or the woman’s foam.
What makes her different from her predecessors or contemporaries is that she recognized the crucial part played by affection. The nonaffective part of sexual pleasure is directly related to original sin and was given to humankind as a punishment. This does not mean that Hildegard of Bingen overturned the gender hierarchy: “The woman is subject to the man in that he sows his seed in her, as he works the earth to make it bear fruit.” However, she did not advocate its absolute fixation, either. Unlike philosophers or philosophically oriented theologians in antiquity or the Middle Ages, she was relatively unconcerned about whether women had semen or not. In the end, the ultimate power to generate came from God, not from men or women, because human reproduction is the smaller version of God’s creation. Hildegard’s embryology did not fixate on passivity or activity in gender roles. By emphasizing the complementary roles of the two sexes, she minimized the sex/gender hierarchy. Her embryology is the scientific version of her theology that women and men could not exist without each other. Furthermore, this extends to her theological view that God and human beings could not exist without each other. In the same logic, the moon and the sun, and the Church and Jesus, should coexist. Even if Hildegard did not argue for the woman’s seed, she still valued the woman’s pleasure, like Galen. She emphasizes the mutual love between the couple; therefore, the female’s affection toward her male partner decides how healthy the fetus is, while the man’s semen decides its sex. Also, the power of the form was not in either the male seed or the female foam. It came directly from God, one month after conception. In Cause et cure, it was the soul that would give the fetus a form. Prior to ensoulment, the fertilized egg was considered an unformed mass.
Once ensoulment happened, it started to be divided into different parts and to have possibilities for movement, developments that are comparable to the works of the man’s semen according to Aristotle and Galen. The major difference between Aristotle and Hildegard, despite their shared one-seed theory, is the origin of the soul. Aristotle argued that ensoulment occurs in different stages, helping the embryo to develop differently with each step, and he believed that the soul did not come from outside; rather, the soul was generated inside the embryo. Where Aristotle saw the soul internally generated from the embryo, Hildegard understood that the soul would be infused into the embryo externally, from God. Since the soul would be given by God, Hildegard did not need to locate in the human body a potential soul or form capable of generating the form of the fetus. Therefore, for her, the major development of the fetus did not need to come from either the man’s semen or the woman’s foam. At its most fundamental level, Hildegard did not find it necessary to address the question of the origin of the active power behind human generation. As God’s creation was complete and free from any possible defects, human reproduction had to be free from any possible residue of original sin. Therefore, Adam’s defective semen had to go through an additional process of purification, which, according to Hildegard, would happen in the woman’s body. Not having flawed semen, the woman’s body could use her blood and heat to warm the man’s semen in order to create the right conditions for conception and fertilization. From the love of the male, her blood is aroused and she sends it, as if a foam, more bloody than white, to the semen of the male. It joins with it and shapes it, making it warm and bloody. After it has fallen into its place, and lain there, it grows cold.
It is as if a poisonous foam until fire, that is heat, warms it; and until air, that is breath, dries it; and until water, that is liquid, allows pure dampness to enter; and until the earth, that is a membrane, constrains it. And then it will be bloody—not totally blood, but combined with a bit of blood. The man’s semen needs so much care even when it is coagulated with the woman’s foam. And the process is quite similar to God’s creation, using the four basic elements. And at least in this moment, this generation is detached from original sin. Rather, the woman’s body removes the residue of original sin. In this context, the woman’s role is undeniably significant in human generation, despite the absence of semen in the female body. In the mother’s body, the fertilized egg starts to proceed toward the shape of a human. It starts to be split and divided. Then, it begins to become a human.
5.2. The Hildegardian Seed Theory in Theology: Women’s Power of Eternity
Hildegard’s unique embryology also appears in her visions. The well-known cheese analogy for the Aristotelian theory also appears in Hildegard’s writings to explain human reproduction. And behold! I saw on the earth people carrying milk in earthen vessels and making cheeses from it; and one part was thick, and from it strong cheeses were made; and one part was mixed with corruption, and from it bitter cheeses were formed. And I saw the image of a woman who had a perfect human form in her womb. And behold! By the secret design of the Supernal Creator that form moved with vital motion, so that a fiery globe that had no human lineaments possessed the heart of that form and touched its brain and spread itself through all its members. This passage comes from the fourth vision of the first book of Hildegard’s Scivias, where she explains the composition of the world; the preceding third vision offers an analysis of creation. This part discusses the soul and body of the human being.
Following her typical format, in which she first recounts her vision and then gives exegeses of it, in the next chapters Hildegard explained in God’s voice what her vision meant. This vision in general represents the human soul and how it was deceived by the devil and aided by God’s knowledge, justice, and forgiveness. In the exegesis of this passage, God explains to Hildegard that “the earth people” are the women and men in this world. They are holding the vessels, which are their bodies, and the cheese in the vessels designated their seeds. By using the analogy of cheese, Hildegard explains in God’s voice why and how people in the world vary. From thick milk, which is compared to thick semen, strong cheese is made, which “is usefully and well matured and tempered” and “produces energetic people.” These people are not just excellent in their bodies but also in their souls; therefore, they would remain strong against evil temptations. In contrast, weak people are produced out of the weak cheese. The weak cheese is connected to weak semen, “imperfectly matured and tempered in a weak season,” which generates people who are weak and unwilling to serve God. However, according to Hildegard, the worst case is that the milk has gone bad. From the corrupted milk, bitter cheese is made. Hildegard makes extremely harsh comments about those who are born out of it. At the same time, there is still hope that these people can overcome their innate condition and become devout, especially when they face hardship. Accordingly, they are the people who can work as the messengers of God in turbulent times like Hildegard’s own period, which she calls “effeminate.” For that semen is basely emitted in weakness and confusion and mixed uselessly, and it produces misshapen people, who often have bitterness, adversity and oppression of heart and are thus unable to raise their minds to higher things.
Many of them nonetheless become useful; though they suffer many tempests and troubles in their hearts and in their actions, they come out victors. For if they were left in peace and quiet, they would become languid and useless, and therefore God forces them and leads them to the path of salvation . This chapter is followed by a chapter referring the words of Moses, who was the spiritual leader and God’s messenger when the Israelites were suffering and then wandering in the desert, possibly because Moses could be the cheese from corrupted milk. Moses’s words also deliver the hopeful message from God that even weak people could be raised to health through his will and justice, in alignment with Hildegard’s comments on the bitter cheese people, combined with her embryology and optimistic theology. One of the most debated topics that Hildegard of Bingen engaged with throughout her lifetime was embryological theory. Hildegard must have known the classical views about semen, along with other theories that she was exposed to. Her semen theories, like the Greco-Roman ones, appear in her discussion of conception. Her medical book, Cause et cure , plays a significant part in explaining how human beings are conceived. However, how she might have contributed to the general history of medicine had often been often neglected, and she has often been described as isolated from the major medical discourses of her time. However, recent studies have depicted her as at the center of medical discussion and theories, influencing the formation of medical knowledge in the Middle Ages. Hildegard’s theory of semen was similar to Aristotle’s one-seed theory in that she did not find semen in the woman’s body. Because man’s flesh was made from earth, his blood has semen of a strong and correct nature. A woman’s blood is also of a correct nature. 
Because she is weak and tender she does not have semen, but emits merely a tiny, watery foam, since she is not of both earth and flesh, as a male, but was taken only from the flesh of the male . Unlike Aristotle, Hildegard of Bingen did not see the generative power of the form only in male semen. As noted above, the Aristotelian one-seed theory failed to explain why children could resemble their mothers. Although Hildegard technically supported the one-seed theory, she did not confine generative power solely to the man’s seed. Rather, she asserted that women made a significant contribution to conception, which accounts for the variation in forms and resemblances among offspring. For example, she saw different dynamic processes and different results shaping the fetus and baby, such as when the woman’s heat “overcomes the semen of man, so that the child is often formed with their appearance .” Strictly speaking, Hildegard argued for a one-seed theory, in which only men contain and ejaculate semen. However, upon closer examination of her theory, it becomes clear that she did not emphasize the absence of semen in women. Whether or not women could produce semen was not the focus in her embryology. Even if women lacked semen, they were fully needed to bring a new life into the world. Hildegard highlighted the woman’s role in conception by identifying various bodily elements, including female foam, as equal contributors to reproduction. For instance, Hildegard highly valued the woman’s ability to nullify the toxic nature of the man’s semen. In order to enable reproduction, humans needed the fact that women did not have semen, because men’s semen had degenerated so much that men’s bodies were not capable of making new life. For Hildegard, the human reproductive process is a smaller version of God’s creation. This process involves the four basic elements, fire, earth, water, and air, just like God’s creation. 
God sends the soul into the fetus so that the fetus could be divided and develop into a human form. And finally, the baby comes out of the mother’s body just like Eve was created from Adam’s body by God’s eternal power. If human generation was the repetition of God’s creation of the first human beings, it had to be free from conditions directly related to any defects, especially original sin. Semen was impacted by Adam’s transgression, but this degenerate seed did not exist in the woman’s body, giving hope to human generation because the woman’s body was deprived of semen. Then, why did women not have semen, and why and how were they less impacted by the aftermath of the original transgression? Again, Hildegard’s embryology looks all the way back to God’s creation, underlining the close relationship between her theology and medical theory. When God created the first human beings, God used different materials to make them. Adam’s creation came straight from the natural elements in the form of mud. In contrast, Eve was created from Adam’s flesh, which gave her distinctive features such as softness and malleability. Traditionally in Christian theology and medicine, these female characteristics were often translated into the weakness and inferiority of women. However, Hildegard replaced these womanly defects with strengths, which must have been useful to her in claiming distinctive authority as God’s female messenger. According to Hildegard of Bingen, at the same time, the different creations of Adam and Eve differentiated the first couple’s reproductive bodies. As noted above, she argued that women did not have semen as men did. Adam ended up having semen because he was directly produced from earth, which is characteristically strong and rigid. Therefore, Adam’s body and mind were already strong and rigid from the moment of his creation, and his condition, which would be inherited by his male descendants, gave him semen. 
On the other hand, Eve, created out of her partner’s flesh, is as soft in her mind and body as his flesh, resulting in a semen-free condition that would be inherited by her female descendants as well. Nevertheless, this does not mean that Eve lacked a reproductive role in conception. Hildegard claimed that women’s foam contributed to making an embryo. The woman’s foam is something closer to our understanding of an egg. This “foam” was not as thick as semen, but it was certainly essential to bring a new life into the world. In addition, Hildegard’s embryology addresses the defective nature of semen, which is not found in the male ancient philosophers. For Hildegard, semen became deformed when Adam committed sin, which caused degeneration of his body and mind, and most importantly his semen. Eve, despite committing the sin first, could avoid the degeneration of semen because she did not have semen in any case. Adam’s body received the direct impact of sin due to its strength, unlike Eve’s soft and malleable body. Instead, Eve started to have flows, meaning menstruation. “In Adam’s transgression, the strength in the male’s genital member changed into a poisonous foam, and the female’s blood changed to a dangerous effusion” . Her “weak and fragile” body saved her from the poison of original sin. God created the human being, and all animals were subject to serving him; but when man transgressed God’s order, he was changed in mind and body. The purity of his blood changed to another type, so that instead of purity, it throws off the foam of semen. If the human had stayed in paradise, he would have continued in his unchangeable and perfect state. But these all changed after the transgression into another, bitter type . It was Adam’s semen that degenerated so much that human beings began to suffer disease and death. Now, the woman’s foam had to perform its reproductive duty by overcoming the man’s noxious semen. 
Therefore, Hildegard notes that sexual pleasure is necessary for conception, as it enables the couple to emit sexual fluids, the foam from women and semen from men, similar to the Galenic two-seed theory to some extent. The argument about whether sexual pleasure might be absolutely needed for fertilization is rather complicated in terms of the gendered contribution to reproduction. For Hildegard, sexual satisfaction made it possible for the couple to ejaculate the needed substances, whether it was the man’s semen or the woman’s foam. What makes her different from her predecessors or contemporaries is that she evaluated the crucial part played by affection. The nonaffective part of sexual pleasure is directly related to original sin and was given to humankind as a punitive result. This does not mean that Hildegard of Bingen overturned the gender hierarchy: “The woman is subject to the man in that he sows his seed in her, as he works the earth to make it bear fruit” . However, she did not advocate its absolute fixation, either. Unlike philosophers or philosophically oriented theologians in antiquity or the Middle Ages, she was relatively unconcerned about whether women had semen or not. In the end, the ultimate power to generate came from God, not from men or women, because human reproduction is the smaller version of God’s creation. Hildegard’s embryology did not fixate on passivity or activity in gender roles. By emphasizing the complementary roles of the two sexes, she minimized the sex/gender hierarchy. Her embryology is the scientific version of her theology that women and men could not exist without each other . Furthermore, this extends to her theological view that God and human beings could not exist without each other. In the same logic, the moon and the sun, the Church and Jesus should coexist . Even if Hildegard did not argue for the woman’s seed, she still valued the woman’s pleasure, like Galen. 
She emphasizes the mutual love between the couple; therefore, the female’s affection toward her male partner decides how healthy the fetus is, while the man’s semen decides its sex . Also, the power of the form was not in either the male seed or the female foam. It came directly from God, one month after conception. In Cause et cure , it was the soul that would give the fetus a form. Prior to ensoulment, the fertilized egg was considered an unformed mass. Once ensoulment happened, it started to be divided into different parts and to have possibilities for movement, developments that are comparable to the works of the man’s semen according to Aristotle and Galen. The major difference between Aristotle and Hildegard, despite their shared one-seed theory, is the origin of the soul. Aristotle argued that ensoulment occurs in different stages, helping the embryo to develop differently with each step, and he believed that the soul did not come from outside; rather, the soul was generated inside the embryo . Where Aristotle saw the soul internally generated from the embryo, Hildegard understood the soul would be infused into the embryo externally, from God. Since the soul would be given by God, Hildegard did not need to locate the potential soul or form to generate the form of the fetus in the human body. Therefore, for her, the major development of the fetus did not need to come from either the man’s semen or the woman’s foam. At its most fundamental level, Hildegard did not find it necessary to address the question of the origin of the active power behind human generation. As God’s creation was complete and free from any possible defects, human reproduction has to be free from any possible residue of original sin. Therefore, Adam’s defective semen should go through an additional process of purification, which, according to Hildegard, would happen in the woman’s body. 
Not having flawed semen, the woman’s body could use her blood and heat to warm the man’s semen in order to create the right conditions for conception and fertilization. From the love of the male, her blood is aroused and she sends it, as if a foam, more bloody than white, to the semen of the male. It joins with it and shapes it, making it warm and bloody. After it has fallen into its place, and lain there, it grows cold. It is as if a poisonous foam until fire, that is heat, warms it; and until air, that is breath, dries it; and until water, that is liquid, allows pure dampness to enter; and until the earth, that is a membrane, constrains it. And then it will be bloody—not totally blood, but combined with a bit of blood . The man’s semen needs so much care even when it is coagulated with the woman’s foam. And the process is quite similar to God’s creation, using the four basic elements. And at least in this moment, this generation is detached from original sin. Rather, the woman’s body removes the residue of original sin. In this context, the woman’s role is undeniably significant in human generation, despite the absence of semen in the female body. In the mother’s body, the fertilized egg starts to proceed toward the shape of a human. It starts to be split and divided. Then, it begins to become a human. Hildegard’s unique embryology also appears in her visions. The well-known cheese analogy for the Aristotelian theory also appears in Hildegard’s writings to explain human reproduction. And behold! I saw on the earth people carrying milk in earthen vessels and making cheeses from it; and one part was thick, and from it strong cheeses were made; and one part was mixed with corruption, and from it bitter cheeses were formed. And I saw the image of a woman who had a perfect human form in her womb. And behold! 
By the secret design of the Supernal Creator that form moved with vital motion, so that a fiery globe that had no human lineaments possessed the heart of that form and touched its brain and spread itself through all its members . This passage comes from the fourth vision of the first book of Hildegard’s Scivias , where she explains the composition of the world, including, in the preceding third vision, an analysis of creation. This part discusses the soul and body of the human being. Following her typical format, in which she first recounts her vision and then gives exegeses of it, in the next chapters Hildegard explained in God’s voice what her vision meant. This vision in general represents the human soul and how it was deceived by the devil and aided by God’s knowledge, justice, and forgiveness. In the exegesis of this passage, God explains to Hildegard that “the earth people” are the women and men in this world. They are holding the vessels, which are their bodies, and the cheese in the vessels designates their seeds . By using the analogy of cheese, Hildegard explains in God’s voice why and how people in the world vary. From thick milk, which is compared to thick semen, strong cheese is made, which “is usefully and well matured and tempered” and “produces energetic people.” These people are not just excellent in their bodies but also in their souls; therefore, they would remain strong against evil temptations. In contrast, weak people are produced out of the weak cheese. The weak cheese is connected to weak semen, “imperfectly matured and tempered in a weak season,” which generates people who are weak and unwilling to serve God. However, according to Hildegard, the worst case is that the milk has gone bad. From the corrupted milk, bitter cheese is made. Hildegard makes extremely harsh comments about those who are born out of it.
At the same time, there is still hope that these people can overcome their innate condition and become devout, especially when they face hardship. Accordingly, they are the people who can work as the messengers of God in turbulent times like Hildegard’s own period, which she calls “effeminate.” For that semen is basely emitted in weakness and confusion and mixed uselessly, and it produces misshapen people, who often have bitterness, adversity and oppression of heart and are thus unable to raise their minds to higher things. Many of them nonetheless become useful; though they suffer many tempests and troubles in their hearts and in their actions, they come out victors. For if they were left in peace and quiet, they would become languid and useless, and therefore God forces them and leads them to the path of salvation . This chapter is followed by a chapter referring to the words of Moses, who was the spiritual leader and God’s messenger when the Israelites were suffering and then wandering in the desert, possibly because Moses himself could be seen as cheese made from corrupted milk. Moses’s words also deliver the hopeful message from God that even weak people could be raised to health through his will and justice, a message in alignment with Hildegard’s comments on the bitter cheese people and with her embryology and optimistic theology. Women have always contributed to medicine, treatment, and healing in history. While earlier studies were more focused on the nonliterary parts of women’s medical practices, the new historical approach investigates women’s written culture of medicine and actively claims that women’s contribution to the history of medicine has been dismissed by narrow studies on renowned authorities and elites. Even when women participated in academic discussions with male scholars and engaged in crafting knowledge, their academic contributions were easily overlooked.
Hildegard of Bingen was a woman whose two books on medicine and natural science have reached modern readers, representing medieval medical views on specific topics combined with folk medicine and contemporary theories. Embryology plays a significant part in her medical book, Cause et cure , as an important bridge between her theology and her gynecology. At the same time, her embryology was the academic product of the contemporary theories of her time. Her writings convey the development of embryology based on classical ancient medicine and transmitted through Arabic sources. In particular, she discusses Galenic humoralism and uses the Aristotelian one-seed theory when she explains how a fetus is made in the womb. In theoretical terms, Hildegard’s embryonic theory is closer to the Aristotelian one-seed theory than to Galen’s two-seed theory. Her discussion of menstrual fluid and her use of the cheese analogy suggest that she was likely familiar with Aristotelian embryology, either directly or indirectly. In Cause et cure , Hildegard argued that women did not generate semen like men. However, unlike the two major fathers of medicine, she emphasized the female power that the woman’s reproductive body contributed to conception, while the fetus’s form originated from God. In Hildegardian embryology, there is no hierarchical order between the female and male fluids. For Hildegard, female foam, although it was weaker and thinner than the man’s, was not an underdeveloped version of semen. Unfortunately, her unique appreciation of the woman’s contributions was not carried forward and never became central to other embryological theories of her time or of subsequent periods. However, her embryology, along with her expertise in medicine, receives new value in light of the fact that medieval medicine was understood through Aristotle and Galen, whose embryology was highly male-centered, dismissing the woman’s role.
At the same time, Hildegard’s alternative theory raises possibilities of women-empowering theories in the Middle Ages with its emphasis on the complementary roles of women and men as well as of human beings and nature. Just as Hildegardian embryology finds a way to escape male-centered views, it offers an environment-friendly way of explaining human generation, escaping human-centered views. The semen or seed cannot become the sole master key to explain the whole process of human generation. The relations and connections between different entities are the keys, as Hildegard argues. Exploring major differences in ancient embryology, Rebecca Flemming asserts the importance of shifting our focus to understanding the nature of seeds and menstrual fluids in those theories. According to her, it is more important to analyze the woman’s sexual substance and its role than to argue whether or not women could produce seed . Situating Hildegard of Bingen’s understanding of conception in the lively discussion of embryological traditions in premodern medicine and philosophy is important not just because it is one missing part but because it is an important case in which the woman’s reproductive contribution was more fully recognized and appreciated by a female theorist who was called a mother by her followers. Hildegard received the heritage of classical embryology and developed it into the innovative claim that the woman’s body would nourish and purify the embryo. This approach is not separable from her active role in the church when, as she claimed, the male church authorities were too tainted to fulfill their duties. As Adam’s semen suffered degeneration in its ability to contribute to reproduction as a result of his transgression, Eve’s female descendants took on an important role in human generation. 
Although Hildegard of Bingen did not take part in sexual or reproductive life, she also carried out an important role as a woman in putting forward an alternative theory of embryology that combined her visions with medical knowledge. |
Neuropharmacology of Alcohol Addiction with Special Emphasis on Proteomic Approaches for Identification of Novel Therapeutic Targets | 2745637b-c3d3-4830-a451-fa26c86c4e83 | 10193758 | Pharmacology[mh] | INTRODUCTION Alcohol consumption is commonplace in many societies. These cultures pay a high price for the benefits connected with the production, sale, and consumption of alcoholic beverages . Alcohol-related disorders, including alcohol use disorder (AUD), result from a person's genetic make-up, cumulative responses to alcohol exposure, and environmental perturbations over time. Alcohol consumption can become compulsive and eventually addictive, depending on many modifying factors, for instance, genetic predisposition, provoking environmental events, social context, pharmaceutical history, and others . Associated psychiatric problems, for example, anxiety and, foremost depressive disorders, are genetically established nonspecific susceptibility factors that significantly raise the likelihood of increasing alcohol addiction . AUD is defined as a chronic and progressive disorder characterized by the development of alcohol tolerance, alcohol dependence ( i.e ., uncontrolled intake of alcohol even though negative consequences), and craving and/or removal syndrome once alcohol is detached. Furthermore, the pharmacokinetics of alcohol is determined by several mechanisms. It is generally known that genes with a significant impact on alcohol metabolism influence how people react to alcohol and how likely they are to develop an addiction to it . According to current research, acetaldehyde's contribution to alcohol's effects is finest described by a procedure in which acetaldehyde regulates, rather than intercedes, some of the effects of alcohol . Long-term, uncontrolled alcohol usage can lead to antisocial personality disorder, as well as mood and anxiety issues. 
For more than a century, scientists have been studying the mechanisms that underpin ethanol’s actions in the central nervous system (CNS). Early work on the effects of alcohol focused on its physical properties and produced the well-known Meyer-Overton correlation, which describes the direct relationship between an alcohol’s hydrophobicity and its intoxicating potency . Alcohol affects the activity of several neurotransmitter receptors ( e.g ., acetylcholine, glutamate, GABAA, norepinephrine, glycine, dopamine, and serotonin) and transporters ( e.g ., dopamine, adenosine, serotonin, norepinephrine) . Recent research has revealed that gene regulation is significantly more complex than formerly thought and does not fully explain changes in protein levels. As a result, direct study of the proteome, which differs substantially from the genome/transcriptome in complexity and dynamics, has yielded unique insights for many investigators . Neuroproteomics has the potential to revolutionize alcohol research by allowing researchers to gain a better understanding of how alcohol affects protein structure, function, interactions, and networks on a global scale . The neuroproteomic workflow for the alcohol model (human samples, with some steps drawn from bioinformatic analysis of human data) is shown in Fig. . These discoveries have yielded a wealth of information that can be used to find essential biomarkers for early detection and improved prognosis of AUD, as well as prospective pharmacological targets for the treatment of this addiction. Furthermore, multidimensional approaches to addiction, for instance, the proposed addictions neuroclinical assessment (ANA), could be utilized to discover new addiction biomarkers and refine existing ones . This article discusses some of the core neurobiological underpinnings of alcohol addiction, focusing on neuroproteomics.
In addition, the article discusses the latest findings on identifying novel treatment targets and proteomic biomarkers for alcohol addiction.
INITIATION AND MAINTENANCE OF ALCOHOL CONSUMPTION
The onset of alcohol use is primarily influenced by an individual’s biological (genetically determined) traits . Multiple genetic factors determine the level of sensitivity to alcohol and the resulting intoxication “stop” signal . A range of neuropharmacological studies, including lesion, microinjection, and microdialysis experiments, identified brain areas that play an essential part in mediating the reinforcing effects of alcohol . Glutamatergic activity affects a significant portion of the mesolimbic DAergic pathway . Additionally, the dorsal raphe nucleus 5-hydroxytryptamine (5-HT) system influences ventral tegmental area (VTA) and nucleus accumbens (NAC) DAergic activity . Aside from these physiologically and genetically determined reactions (early sensitivity, alcohol reinforcement) to an acute alcohol challenge in alcohol-drinking individuals (Fig. ), environmental factors such as stress exposure have long been hypothesized to accelerate the initiation of alcohol use . Stressors are hypothesized to affect several neurobiological systems, including the hypothalamic-pituitary-adrenal axis and extrahypothalamic corticotropin-releasing factor (CRF) signalling . 5-HT acts on diverse 5-HT receptor subtypes to modify glutamate- and GABA-mediated actions . 5-HT can influence neurotransmitter release at the presynaptic level; for example, 5-HT1A, 5-HT1B, and 5-HT6 receptors inhibit glutamate release in diverse brain areas . The amplitude of a T-type low-threshold voltage-dependent Ca 2+ current, which is responsible for rhythmic oscillations of membrane potential, is augmented by 5-HT and decreased by GABAB agonists in interneurons from the stratum lacunosum-moleculare of the CA1 area .
Increased quantity and function of the N-methyl-D-aspartate receptor (NMDAR) is assumed to be a pathophysiological state linked with alcohol withdrawal . NMDARs are ionotropic glutamate receptors that are widely expressed in the CNS. In alcohol-treated primary cortical and hippocampal cultures, the NMDA-induced rise in cytosolic calcium levels, along with NMDA-induced excitotoxicity, was observed to be potentiated. NMDARs are influenced by neurosteroids as well . Estrogen and pregnenolone sulfate (PS) were previously shown to directly affect NMDA receptors, increasing the opening frequency and mean open time of the ion channel. The unique characteristics of these receptors, such as high permeability to Ca 2+ ions, comparatively slow activation/deactivation kinetics, and voltage-sensitive block by Mg 2+ ions, explain their involvement in diverse neural functions such as excitatory synaptic transmission, synaptic plasticity, and excitotoxicity . Alcohol misuse induces significant brain neuroadaptations that lead to tolerance, dependence, and behavioral changes. The study of the protein complexes and species that make up the nervous system is known as neuroproteomics. In terms of profiling the entire neural proteome, neuroproteomics is a complicated field with a long way to go. Multiple neuroadaptations in the brain, comprising extensive modifications in gene/protein expression patterns, contribute, at least in part, to alcohol addiction, tolerance, and physical dependence. These adaptations reflect the fact that alcoholism is a complex trait disease involving numerous genes, even if each gene’s effect is small. Furthermore, ‘omic’ techniques have been widely used to investigate altered gene/protein expression patterns in the brain following excessive alcohol use.
Although miRNAomic, transcriptomic, and proteomic approaches in alcohol research have yielded a wealth of data, our knowledge of how individual expression changes interact to contribute to alcoholism remains limited.
NEUROCHEMICAL SYSTEMS AND SIGNALING PATHWAYS INVOLVED IN THE ACTION OF ALCOHOL
With increasing concentrations of alcohol, the signs of intoxication range from disinhibition to sedation and even hypnosis; the distinctive acute effects include the discriminative stimulus properties of alcohol and its associated psychotropic effects. A10 DA neurons in the midbrain are essential in initiating reinforcement processes . They originate in the VTA and project to limbic system regions, most notably the NAC shell area and the prefrontal cortex (PFC). The VTA receives glutamatergic projections from the PFC, the bed nucleus of the stria terminalis, the laterodorsal tegmental nucleus, and the lateral hypothalamus . Growth hormone secretagogue receptors (GHSR-1A), the functional ghrelin receptor, are expressed in both locations, and there is cholinergic input from the laterodorsal tegmental area to the VTA. Alcohol dose-response curves for DA release in alcohol-naive, high-alcohol-drinking (HAD), and low-alcohol-drinking (LAD) lines exhibited no difference in sensitivity to alcohol between the lines . In a separate investigation, “no-net-flux” quantitative microdialysis was used to assess basal and ethanol-stimulated DA release in the NAC in the alcohol-naive HAD/LAD model . Alcohol-induced activation of DAergic A10 neurons has been found to involve central nACh and strychnine-sensitive glycine receptors, implying that these receptors may play a role in alcohol reinforcement . The most common substance use disorder, AUD, has a substantial global impact.
Despite decades of attempts to find new treatment methods, which have failed to deliver increased sobriety rates, relapse rates remain extremely high. Excessive alcohol consumption negatively affects the CNS and may lead to AUDs. Current research suggests that myelin impairments may directly contribute to the CNS dysfunctions linked with AUDs. Electrical insulation and trophic support are provided by myelin, which is made up of compact lipid membranes that ensheath axons. Because of its substantial effects on neural network computation, myelin regulation is seen as a unique form of brain plasticity. According to mounting research, the endocannabinoid system appears to play a key role in the control of the rewarding properties of substances of abuse, including alcohol. The endocannabinoid system comprises the cannabinoid type 1 (CB1) and type 2 (CB2) receptors, the orphan G protein-coupled receptor 55 (GPR55) as a novel candidate CB receptor, endocannabinoids such as 2-arachidonoylglycerol (2-AG) and anandamide, their biosynthetic and inactivating enzymes, and possibly endocannabinoid transporters . CB1 receptor activation is required for alcohol reinforcing mechanisms. Although a few studies have shown that acute ethanol administration increases proopiomelanocortin (POMC) mRNA in the arcuate nucleus, others have been unable to find any effect of acute ethanol on arcuate POMC mRNA content; it therefore remains uncertain whether the ethanol-induced increase in extracellular NAC endorphin levels is a consequence of direct activation of the arcuate-NAC endorphin pathway. D1-like receptors , which comprise the DA D1 and D5 receptors, stimulate adenylyl cyclase (AC) activity by coupling to stimulatory G proteins (Gs). D2-like receptors (D2-D4), on the other hand, inhibit AC through inhibitory G proteins (Gi).
Stimulation of D1-like receptors causes a rise in cAMP levels and activation of cAMP-dependent protein kinase A (PKA) signalling, which leads to substrate phosphorylation. The transcription factor cAMP response element-binding protein (CREB) is one of the substrates of PKA; its phosphorylation leads to enhanced transcription of genes with cAMP response elements (CRE) in their promoter region . Voluntary alcohol consumption reduces Ca 2+ /calmodulin-dependent protein kinase IV (CaMKIV) expression and CREB phosphorylation, particularly in the shell of the NAC, implying that reduced CaMKIV-dependent CREB phosphorylation in the NAC shell is implicated in alcohol reinforcement . Dopamine- and cAMP-regulated neuronal phosphoprotein-32 (DARPP-32), a 32-kDa protein expressed primarily in striatal medium spiny neurons, is likewise phosphorylated, in addition to CREB, when D1 cAMP-PKA signalling is activated . It operates as a potent inhibitor of protein phosphatase-1 (PP1) in its phosphorylated state, making it a key regulator of DAergic signalling. nNOS/NO/cGMP/cGKII signalling has been linked to the effects of alcohol in pharmacological and knockout investigations . Nucleus accumbens medium spiny neurons receive excitatory glutamatergic input from the forebrain and dopaminergic input from the ventral tegmental area. This integration point could be a site where glutamate receptors of the NMDA subtype enhance drug reinforcement. In DARPP-32 knockout models, the alcohol sensitivity of NMDA receptors is not regulated by D1 receptors. Following activation of dopaminergic neurons in the ventral tegmental area, DARPP-32-mediated blunting of the response to alcohol initiates molecular changes that influence synaptic plasticity in this circuit, promoting the establishment of alcohol reinforcement.
EFFECTS OF ALCOHOL ON PROTEIN-PROTEIN INTERACTIONS AND NEUROPROTEOMICS
The proteome is an organism’s whole collection of proteins.
It is much larger and more complicated than the genome, the collection of genes that code for these proteins. External stimuli, such as alcohol consumption, can change the abundance of proteins and the post-translational modifications they undergo. When researching genes and encoded proteins affected by alcohol, or that mediate its effects, developments in proteomics offer considerable benefits over traditional molecular techniques. Investigators can use such high-throughput techniques to survey many putative target molecules in an unbiased manner, without prior knowledge of which molecules are involved. Chronic alcohol abuse can cause a variety of changes in brain function. Multiple brain regions have been shown to be damaged by alcohol, resulting in cognitive impairment and other abnormalities in brain function. When the brain’s neural and behavioral adaptations to the continual presence of alcohol are disrupted, severe withdrawal symptoms can occur. Trans-splicing, alternative gene splicing, post-translational modifications, and other processes result in a wide diversity of proteins. Phosphorylation is essential for modulation of NMDAR function . Fyn and Src are two highly similar tyrosine kinases that have been implicated in phosphorylating NR2 subunits on the tyrosine residues identified thus far . The development of long-term potentiation (LTP) in the CA1 area of the hippocampus increases NR2B phosphorylation, and inhibitors of Src tyrosine kinases prevent LTP from occurring . Calcineurin, which requires calcium/calmodulin binding for activation, regulates the activity of PP1 via the dopamine- and cAMP-regulated phosphoprotein . DARPP-32 is primarily expressed in the neostriatum’s medium spiny neurons, and it is a crucial regulator of various processes, including NMDAR control . It was recently shown that, in the presence of ethanol, the DARPP-32/PP1 cascade is a key regulatory mechanism for neostriatal NMDARs.
Some significant alcohol-induced phenomena, including changes in cell signalling and vesicular trafficking, are shown in Fig. ; these processes ultimately culminate in an individual’s alcohol addiction. The effects of alcohol on intracellular signalling pathways contribute to both acute and neuroadaptive responses to repeated alcohol intake. PKA is involved in learning and memory and in behavioral responses to alcohol. PKA is activated when cyclic adenosine monophosphate (cAMP) binds to the regulatory subunits, forcing them to dissociate from the catalytic subunits, which become active. Signals produced by lipid second messengers are mediated by the protein kinase C (PKC) family of serine-threonine kinases. PKC-related kinases, activated by the small G-proteins Rac and Rho, have recently been classified as a subfamily. Alcohol has been found to promote tumor angiogenesis and accelerate tumor growth; in vitro and in vivo , alcohol increases the expression of vascular endothelial growth factor (VEGF) in mammary gland carcinoma cells. Extracellular signal-regulated kinases (ERKs), members of the mitogen-activated protein kinase (MAPK) family, are serine-threonine protein kinases. A Ras-Raf-MEK signalling cascade, initiated by receptor tyrosine kinases (RTKs) or by calcium influx through NMDA and voltage-gated calcium channels, activates ERKs. Dopamine and glutamate receptors stimulate ERK activity, and ERK may serve as a coincidence detector during the development of addiction, integrating reward and contextual information. Recent research has revealed that neuroproteomics can revolutionize alcohol research by allowing researchers to gain a better understanding of how alcohol affects protein structure, function, interactions, and networks on a global scale . A few proteins whose expression changes in alcohol-exposed cells are listed in Table .
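The kind of differential protein-expression screen summarized above can be sketched computationally. The sketch below is illustrative only: the protein names and abundance values are invented (not taken from the studies cited here), the thresholds are arbitrary, and real proteomics pipelines add normalization, replicate handling, and multiple-testing correction on top of this basic fold-change-plus-statistic screen.

```python
import math
from statistics import mean, stdev

def log2_fold_change(case, control):
    """Log2 ratio of mean abundances (case vs. control)."""
    return math.log2(mean(case) / mean(control))

def welch_t(case, control):
    """Welch's t statistic for two independent samples."""
    n1, n2 = len(case), len(control)
    v1, v2 = stdev(case) ** 2, stdev(control) ** 2
    return (mean(case) - mean(control)) / math.sqrt(v1 / n1 + v2 / n2)

# Hypothetical normalized abundances, three replicates per group
# (alcohol-exposed, control); values are invented for illustration.
abundances = {
    "GFAP":  ([12.1, 11.8, 12.4], [10.0, 10.2, 9.9]),
    "SYN1":  ([8.0, 8.1, 7.9],    [8.0, 8.2, 7.8]),
    "ALDH2": ([6.1, 6.3, 6.0],    [7.5, 7.4, 7.6]),
}

for protein, (alcohol, control) in abundances.items():
    lfc = log2_fold_change(alcohol, control)
    t = welch_t(alcohol, control)
    # Arbitrary screening thresholds for this sketch.
    status = "candidate" if abs(lfc) > 0.2 and abs(t) > 3 else "unchanged"
    print(f"{protein}: log2FC={lfc:+.2f}, t={t:+.1f} -> {status}")
```

In practice, such a screen only nominates candidates; the proteins flagged this way would still need orthogonal validation (e.g., immunoblotting) before being treated as alcohol-responsive.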
With recent breakthroughs in high-resolution mass spectrometry (MS) and bioinformatics, concurrent quantitative analysis of hundreds of proteins is now possible. Proteomic techniques , such as two-dimensional (2D) electrophoresis , have been used to investigate the effects of alcohol exposure on cell cultures, and different proteomic approaches have been applied in such studies . Mutations in the type 1 equilibrative nucleoside transporter (ENT1) have been found to be associated with increased alcohol consumption . Greater glutamate neurotransmission in the nucleus accumbens (NAC) is linked to increased alcohol preference in these animals. A recent study employed 2-D fluorescence difference gel electrophoresis (2D-DIGE) to examine cerebral cortices and midbrains from animals subjected to the chronic intermittent ethanol (CIE) two-bottle choice (2BC) paradigm, which causes substantial drinking and is one of the best available animal models for alcohol dependency. Diverse alcohol-sensitive brain areas from uncomplicated alcoholics and from alcoholics with hepatic cirrhosis have been studied using global proteome techniques on the human post-mortem brain .
GENE TRANSCRIPTION AND EPIGENETIC EFFECTS MEDIATED BY ALCOHOL
Recent research suggests that ethanol can cause epigenetic changes, such as histone acetylation and methylation, as well as hypo- and hypermethylation of DNA . This has sparked renewed interest in alcohol research by revealing novel information on ethanol’s actions at the nucleosomal level with respect to gene expression and pathophysiological effects . Changes in methylation and acetylation patterns caused by alcohol could lead to long-term changes in gene expression. Nevertheless, it is too early to say whether epigenetic changes in the synuclein gene represent a biological switch that causes long-term brain maladaptations. Methionine is converted, with ATP, to S-adenosyl-methionine (SAM), which can transfer methyl groups to cytosine residues in the CpG dinucleotide sequences of genomic DNA.
Homocysteine levels are commonly elevated in alcohol-dependent individuals, whether they are actively drinking alcoholics or are in the early stages of abstinence . Pharmacological specificity of medication action is characterized by site-specific effects within the anatomical and cellular complexity of the brain. The exact activation patterns of ethanol-evoked c-fos responses have been extensively examined, and they most likely reflect action via many neurotransmitter systems . After ethanol administration, stimulus-activated transcription can be detected using histochemistry to map CRE-mediated gene transcription in the brain of the CRE-lacZ transgenic experimental model . Axon remodeling is aided by myelin-related genes, and the prefrontal cortex appears to be particularly vulnerable to the toxic effects of ethanol . Furthermore, research on animal models selected for diverse ethanol-related characteristics, such as preference and tolerance, suggests that ethanol affects different sets of genes depending on the dose . Persistent alcohol-induced gene expression changes have been postulated as a ‘molecular switch’ that might facilitate long-term brain adaptations and maladaptations, as well as disordered behavior .
SYNAPTIC, CELLULAR AND NEURONAL NETWORK EFFECTS INDUCED BY ALCOHOL
The ability to undergo activity-dependent alterations in synaptic plasticity, which may be examined most successfully using electrophysiological methods in brain slices, is a universal trait of all synapses . Abuse of alcohol boosts the reactivity of DA neurons to glutamate by increasing synaptic strength, enhancing LTP, or preventing long-term depression (LTD), and, as a result, induces increased DA release in brain areas such as the NAC and the prefrontal cortex . Changes in α-amino-3-hydroxy-5-methyl-4-isoxazole propionic acid (AMPA) receptor subunit composition are linked to alcohol-induced synaptic strengthening in DA neurons in the ventral tegmental area (VTA).
Incorporation of the AMPA receptor subunit GluR1 increases alcohol-induced synaptic reinforcement, most likely by forming highly conductive, Ca 2+ -permeable GluR1 homomeric AMPA receptors, whereas GluR2-containing receptors reverse it . The activation of NMDA receptors is required for synaptic recruitment of GluR1 subunits and the resulting synaptic potentiation. In the dorsomedial striatum, a striatal sub-region which plays a vital role in the acquisition and selection of goal-directed actions, the effects of ethanol on long-term synaptic plasticity have also been investigated . The processes of brain injury in alcohol-dependent people have been revealed using structural magnetic resonance imaging (MRI), functional magnetic resonance imaging (fMRI), spectroscopy, and positron emission tomography (PET) . Studies using visual alcohol stimuli show that cue reactivity develops even in teenagers with brief drinking histories, implying that the observed reaction to alcohol advertisements in teenagers with drinking problems has a neurological foundation. Measurable and noninvasive access to various metabolites in diverse brain areas in vivo is possible with proton magnetic resonance spectroscopy (MRS) . Reduced levels of N-acetylaspartate and choline-containing compounds are significant neurometabolic alterations in alcohol-dependent patients . PET imaging has been used to investigate the DA system extensively. The most consistent findings in alcohol-dependent patients have been altered DA function, with decreased DA transmission in the NAC and a decrease in DA D2 receptor density, which may be connected to the intensity of craving and relapse behavior. The development of PET ligands for NMDA and metabotropic glutamate receptors, together with recent improvements in glutamate spectroscopy, will aid in applying this knowledge to alcohol-dependent individuals .
PHARMACODYNAMIC EFFECTS OF ALCOHOL AND TREATMENT ASPECTS
Alcohol has complicated pharmacology, affecting different receptor and effector proteins through direct or indirect interactions, and at very high doses it may even modify the lipid makeup of the surrounding membrane . The ability of several alcohols to suppress the NMDA-activated current is proportional to their intoxicating potency, implying that alcohol-induced suppression of responses to NMDA receptor activation may contribute to intoxication-related neurological and cognitive deficits. Alcohol enhances the action of 5-hydroxytryptamine 3 (5-HT3, serotonin) and neuronal nicotinic ACh receptors (nAChRs), in addition to GABAA and glycine receptors . Alcohol also acts on ion channels, blocking dihydropyridine-sensitive L-type Ca 2+ channels . Despite the belief that alcohol is a nonspecific pharmacological agent, recent molecular pharmacology studies show that it has a limited number of key targets, including NMDA, GABAA, 5-HT3, and nACh receptors and L-type Ca 2+ channels . Alcohol also activates G-protein-coupled inwardly rectifying K + channels (GIRKs); concentrations as low as 1 mM influence the activity of these receptors and ion channels. N-cholinergic agonists and 5-HT3 receptor agonists also enhance the stimulant effect of alcohol. As a result, treatment has progressed from social and behavioral rehabilitation to complementary pharmacotherapies aimed at disrupting the underlying mechanisms. Pharmacological inhibition of opioid receptors resulted in a considerable reduction in high alcohol intake in an animal model displaying excessive alcohol consumption generated by a post-dependent condition . Consistent with preclinical findings, animals with a genetic deletion of the neurokinin 1 receptor show a significant reduction in voluntary alcohol consumption. Neramexane is a novel, moderate-affinity, uncompetitive NMDA receptor antagonist.
It acts by blocking the open NMDA receptor channel. Topiramate (Topamax), an anticonvulsant that inhibits glutamate activity while facilitating GABA action, lowers the negative consequences of binge drinking and relapse rates in alcoholics. Ondansetron, a 5-HT3 antagonist , is another potential medicine for treating alcohol use disorder (AUD). Galantamine is an allosteric modulator of nACh receptors and a reversible, competitive inhibitor of acetylcholinesterase . In an operant self-administration drinking paradigm, acute treatment with varenicline, at doses reported to diminish nicotine reinforcement, selectively lowered responding for ethanol but not sucrose . There is an urgent need for novel pharmacotherapies, and alcohol-related diseases have been treated with medicines that target GABAB receptors . Additionally, laboratory studies have examined the biobehavioral effects of baclofen, polymorphisms related to baclofen treatment, and safety issues with GABAB therapies . A possible treatment for AUD is baclofen, a selective gamma-aminobutyric acid-B (GABA-B) receptor agonist. Since the early 1970s, it has been marketed for managing muscle spasticity secondary to neurological disorders. Research dating back to the 1970s, focused primarily on animal addiction models, suggests that baclofen may also be helpful in treating AUD. Baclofen is one drug that targets GABAB receptors and may be moderately successful in treating alcohol use disorders; however, safety issues prevent the currently available drugs from being used widely . Baclofen therapy also reduced the number of daily alcoholic drinks and the obsessive and compulsive aspects of alcohol seeking. Finally, in alcohol-dependent patients, a single non-sedative dose of baclofen caused the rapid resolution of alcohol withdrawal symptoms, including delirium tremens.
Baclofen was well tolerated in clinical testing, with few adverse effects, suggesting that it may be a potentially helpful drug in the treatment of people with alcohol dependence . Baclofen activates GABAB receptors in the brain; by engaging these inhibitory receptors, it reduces the neuronal activity that causes muscle spasms. Phenibut, which is similar to baclofen, also acts on GABA receptors. G-protein-coupled GABAB receptors are responsible for delayed and protracted inhibitory effects through activation of Gi/o-type proteins. GABAB receptors slow down nerve impulses by activating inwardly rectifying K + channels, inhibiting voltage-gated Ca 2+ channels, and inhibiting adenylate cyclase.
NEUROPROTEOMIC MARKERS AND DIFFERENTIAL PROTEIN EXPRESSION IN ALCOHOL USE DISORDER
Alcohol addiction is one of the most costly health issues, as it produces physiological changes in the brain and affects many of its regions. Alcohol not only affects memory and brain cells, but also causes undesired proteins to be expressed and useful proteins to be suppressed. For studying different characteristics of brain components, particularly the proteome of brain cells, several methodologies involving separation, quantification, and analytical techniques, as well as a mix of bioinformatic tools, are used. These could be employed as neuroproteomic indicators of alcoholism. Furthermore, neuroproteomics studies the impact of alcohol on the brain proteome using various animal models, preferably those that resemble humans . Different proteins, including glyceraldehyde 3-phosphate dehydrogenase, syntaxin-binding protein 1, dihydropyrimidinase-related protein 2, heat shock 70–71 kDa proteins, neurofilament light polypeptide, guanine nucleotide-binding protein, creatine kinase, and septin, have been found to be differentially expressed in the frontal cortex of human alcoholics.
Early detection is critical for the successful treatment of most diseases. In the realm of alcohol, biomarkers with diagnostic and prognostic significance are essential. Most people with alcohol use disorder (AUD) go undetected until they face serious medical, legal, or societal consequences. In addition, biomarkers for alcohol consumption (Fig. ) with broader evaluation ranges have been discovered. These biomarkers detect tissue damage or different physiological reactions to heavy drinking over time and indirectly assess alcohol consumption . One of the most specific blood markers of chronic, heavy alcohol use is carbohydrate-deficient transferrin (CDT) . γ-Glutamyltransferase (GGT) is another commonly examined serum marker. Since GGT is also elevated in non-alcohol-related liver disorders, it is less specific than CDT . Ethyl glucuronide (EtG) is a promising alcohol metabolite biomarker. So far, no clinical test has been found to be sufficiently trustworthy to support the diagnosis of active AUD or abuse in the general populace . Addiction-related biomarkers might be useful in detecting addiction without the traditional social consequences described in the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5), and regardless of whether patients have experienced tolerance to or withdrawal from an addictive agent; indeed, some people may experience changes in other domains of brain function irrespective of physical tolerance or withdrawal . Alcohol's main action is known to be mediated by the NMDA, GABAA, glycine, 5-HT3, and nAChR neurotransmitter/neuromodulator systems, L-type Ca 2+ channels, and GIRKs. Essentially all brain neurotransmission is affected as soon as alcohol drinking commences.
However, some counter-adaptive alterations in the brain reinforcement system, such as modulatory input systems, may become chronic, and it is thought that these persistent modifications are what cause the 'molecular and structural switch' from controlled to compulsive alcohol misuse. Omics techniques have been widely employed to investigate altered gene/protein expression patterns in the brain due to excessive alcohol use. While the use of miRNAomic, transcriptomic, and proteomic approaches in alcohol research has yielded a wealth of data, our knowledge of how individual expression changes interact to contribute to AUD remains restricted. In the last decade, proteomic approaches have revolutionized how we evaluate the configuration, regulation, and function of protein complexes and pathways underpinning altered neurobiological circumstances. These advancements, when paired with complementary techniques, provide the contextual knowledge needed to decode massive data sets into biologically significant adaptive processes. Dysregulation of molecular signalling across numerous brain areas underlies chronic alcohol addiction. Due to our limited understanding of the molecular pathways that impact addiction pathology, there are insufficient treatment choices for treating alcohol addiction and related psychiatric problems in clinical settings. According to many studies, addiction-related behaviors originate from the convergence of multiple minor deviations in molecular signalling networks, including protein-protein interactions (the interactome), neuropeptides (the neuropeptidome), and post-translational modifications such as protein phosphorylation (the phosphoproteome). As our knowledge of the neuroscience of addiction has grown, it has become clear that slight alterations in discrete brain circuits can allow maladaptive behavior and cognition patterns, leading to relapse and long-term alcohol use.
Neuroproteomics has the potential to revolutionize alcohol research by allowing researchers to gain a better understanding of how alcohol impacts protein structure, function, interactions, and networks on a global scale. These discoveries have yielded a plethora of information that can be used to find essential biomarkers for early detection and better prognosis of AUD, as well as prospective pharmacological targets for managing this addiction. The integration of addiction biomarkers into particular functional patterns across brain domains suggests that assessing these domains' weaknesses and strengths is a crucial initial step in assessing addictions. Improved neuroproteomics, in combination with new technology, will eventually aid in the development of preventive medications for those who have a genetic or behavioral predisposition to AUD, potentially paving the way for a breakthrough in addiction therapy. The identification of distinct profiles, when combined with genetic background, may be one way that neuroscience-based biomarkers can help elucidate the heterogeneity of addictive disease, with the goal of targeting weaknesses and strengths in any given individual and moving towards precision medicine.
Novel risk factors associated with retained placenta after vaginal birth
INTRODUCTION
Retained placenta is the second leading cause of postpartum hemorrhage. The term "retained placenta" refers to the failure of the placenta to spontaneously expel within 30 min following vaginal birth. This definition is particularly applicable in the third trimester, when the third stage of labor involves active management, such as the administration of a uterotonic agent before placental delivery and controlled cord traction. In this managed setting, approximately 98% of placentas are expelled within the specified timeframe. , Known risk factors for retained placenta include a history of retained placenta, preterm delivery, previous uterine surgery, previous pregnancy termination, miscarriage or curettage, grand multiparity (more than five previous deliveries), and congenital uterine anomalies. Data concerning other possible risk factors for retained placenta are scarce. The present study aimed to determine the incidence of retained placenta and to identify maternal, pregnancy, and labor characteristics associated with it in women after spontaneous vaginal birth without previous cesarean section or intrauterine procedures.
MATERIALS AND METHODS
This was a retrospective case–control study of women who had retained placenta after singleton live vaginal delivery at or after 24 weeks of pregnancy with vertex presentation, managed by manual removal of the placenta, between January 1, 2015, and December 31, 2022, compared with women who had a normal vaginal delivery without complications. The control group was matched in a 1:2 ratio for maternal age, gestational age, and parity. The researcher also accounted for previous studies highlighting common issues related to retained placenta; the inclusion criteria were limited to pregnant women aged 18 years or older.
This method is susceptible to selection bias because of the patient-record inclusion process. Individuals with retained placenta who are in poorer health tend to have more comprehensive data available than healthier patients, and one patient group may be overrepresented in the sample. As a result, the composition of study participants may not accurately reflect the general population's distribution of risk factors associated with retained placenta after vaginal delivery. We used systematic matching for the control group, identifying women from a hospital delivery list matched on maternal age, gestational age, and parity, to minimize selection bias. Although propensity score matching was not used, we believe that the demographic variables controlled for were sufficient for the study. Inclusion criteria were: delivery between 24 and 42 weeks of pregnancy, singleton pregnancy, maternal age of 18 years or older, and vertex presentation. Exclusion criteria were: previous cesarean section, other uterine surgery, or dilatation and curettage; gestational age less than 24 weeks; intrauterine fetal demise; known uterine anomalies; twin pregnancies; and non-vertex presentations. We excluded previous uterine surgeries to ensure that the observed associations with retained placenta were not confounded by surgical alterations to the uterus, which are well-established risk factors. Given the potential for selection bias in the study criteria, a systematic approach was employed to minimize its influence. The control group was formed as follows: a list of women who gave birth at our hospital between 2015 and 2022 and met the inclusion criteria (15 260 women) was prepared. Those meeting the exclusion criteria were removed from the list. Women whose placenta did not separate were marked as "case" (99 women).
The list was then sorted by birth date, and for each "case," two women with the same age, gestational age, and parity who gave birth nearest to the same date were selected for the control group (198 women). To further reduce bias, the groups were matched on parity. Although propensity score matching could be advantageous in some studies, the researcher determined that matching on key demographic variables was sufficient for this analysis. This matching method provided a robust control group for the present study. Data were collected from patients' computerized medical records. Demographic and medical characteristics included: maternal age, body mass index (calculated as weight in kilograms divided by the square of height in meters), smoking habits, gravidity and parity, chronic hypertension, thrombophilia, systemic lupus erythematosus (SLE), previous retained placenta, endometriosis, pregestational diabetes, thyroid disease, and cases involving in vitro fertilization (IVF). Obstetrical and delivery characteristics included gestational age, premature rupture of membranes, labor induction, duration of the second stage, type of analgesia, mode of delivery (normal vaginal delivery versus instrumental delivery), pre-eclampsia, estimated maternal blood loss, neonatal weight, postpartum hemorrhage, intrapartum fever, and endometritis. At our medical institution we routinely perform active management of the third stage of labor: administration of uterotonic agents immediately postpartum (10 units of intravenous oxytocin diluted in 1 L of normal saline at a rate of 1–2 mL/min) as well as controlled cord traction for expulsion of the placenta. Retained placenta was defined as a placenta that failed to separate within 30 min after fetal delivery and was manually removed. Delayed cord clamping is a recognized practice for managing the third stage of labor, but research has not yet established a direct connection between its use and the incidence of retained placenta.
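The matching procedure above (sort the eligible list by birth date, then take for each case the two nearest-in-date deliveries with identical maternal age, gestational age, and parity) can be sketched in a few lines; the `Delivery` record and its fields are hypothetical illustrations, not the study's actual data structures:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Delivery:
    mat_age: int      # maternal age (years)
    ga_weeks: int     # gestational age at birth (weeks)
    parity: int
    birth_date: date
    retained: bool    # True = case (manual removal of the placenta)

def match_controls(deliveries, per_case=2):
    """For each case, select `per_case` eligible controls with the same
    maternal age, gestational age, and parity whose delivery date is
    closest to the case's delivery date."""
    cases = [d for d in deliveries if d.retained]
    pool = [d for d in deliveries if not d.retained]
    matched = []
    for case in cases:
        key = (case.mat_age, case.ga_weeks, case.parity)
        candidates = [c for c in pool
                      if (c.mat_age, c.ga_weeks, c.parity) == key]
        # nearest delivery dates first (stable sort keeps order on ties)
        candidates.sort(key=lambda c: abs((c.birth_date - case.birth_date).days))
        chosen = candidates[:per_case]
        for c in chosen:
            pool.remove(c)  # each control is matched to at most one case
        matched.append((case, chosen))
    return matched
```

Removing chosen controls from the pool ensures each control is used at most once, mirroring a 1:2 matched design.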
Further studies are needed to determine whether any relationship exists, particularly considering variations in clamping timing and potential interactions with other management strategies. All characteristics were compared between deliveries with retained placenta (study group) and the control group. Our primary goal was to evaluate the risk factors (maternal and obstetrical characteristics) associated with retained placenta in women without a history of uterine procedures. Our secondary goal was to assess maternal outcomes and delivery complications, including the incidence of postpartum hemorrhage, endometritis, the need for blood transfusion, the incidence of hypovolemic shock, prolonged hospital stay (>4 days), and intrapartum fever greater than 38°C. The study was approved by the Medical Center Nazareth Hospital EMMS Ethics Committee in October 2022 (approval number 50-22-EMMS). We used the χ2 test or the Fisher exact test to examine the relationships between risk factors and the dichotomous outcome variable (retained placenta versus spontaneous placental expulsion). Normally distributed continuous variables were compared between the two outcome groups using the t test, and non-normally distributed variables using the Mann–Whitney (Wilcoxon) U test. We used logistic regression models to examine the multivariate relationships between risk factors and the odds of retained placenta. Before introducing the variables into the model, multicollinearity was assessed using the Variance Inflation Factor statistic. The statistical analyses were performed with SAS OnDemand for Academics (version 3.8, Enterprise Edition). A P value of 0.05 or less was considered statistically significant.
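The univariate comparisons described above rely on the χ² test for categorical factors; for a 2 × 2 table with one degree of freedom, the p-value can be computed with the Python standard library alone, since the χ²(1) survival function equals erfc(√(x/2)). This is a minimal illustrative sketch (not the SAS procedures used in the study), and it assumes all expected cell counts are positive:

```python
from math import erfc, sqrt

def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) and p-value
    for the 2x2 table [[a, b], [c, d]], df = 1.
    Assumes all expected counts are positive."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    stat = 0.0
    for obs, row, col in ((a, row1, col1), (b, row1, col2),
                          (c, row2, col1), (d, row2, col2)):
        exp = row * col / n           # expected count under independence
        stat += (obs - exp) ** 2 / exp
    p = erfc(sqrt(stat / 2))          # survival function of chi-square, df = 1
    return stat, p
```

For balanced tables such as [[10, 10], [10, 10]] the statistic is 0 and the p-value is 1; skewed tables yield larger statistics and smaller p-values.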
Variables identified as significant in the univariate analysis were included in the logistic regression model. The normality of continuous variables was assessed by visual inspection of their distributions, including histograms. Additionally, we examined the influence of analgesia (epidural, sedation, or none) on the risk of retained placenta while accounting for other factors such as intervention type (e.g.
labor induction) and the duration of labor.
RESULTS
During the study period, 15 260 women underwent delivery at our medical center; 170 (1.1%) were diagnosed with retained placenta. Ninety-nine women (0.65%) met the inclusion criteria for the retained placenta group and were matched with 198 (1.3%) controls. Power calculations were conducted using SAS software (SAS OnDemand for Academics, version 3.8). For example, in the present study, 15% (n = 22) of women with IVF pregnancies experienced retained placenta, compared with 85% (n = 275) of women with spontaneous pregnancies. These calculations confirmed that the sample size was adequate to reject the null hypothesis at a significance level of 0.05, supporting the alternative hypothesis that women with spontaneous pregnancies had a lower rate of retained placenta.
3.1 Demographic characteristics and chronic diseases
As planned, the groups were matched for maternal age, parity, and gestational age at birth (Table ). The following characteristics were more prevalent in the retained placenta group compared with the control group: body mass index greater than 25 (29% versus 19%, P = 0.038), chronic hypertension (16% versus 8%, P = 0.022), IVF (15% versus 4%, P < 0.001), SLE (4% versus 1%, P = 0.025), previous retained placenta (8% versus 0%, P < 0.001), and endometriosis (4% versus 0%, P = 0.004), respectively (Table ).
3.2 Obstetrical and delivery characteristics
A higher percentage of women with retained placenta underwent labor induction (96% versus 66%, P < 0.001), mainly with prostaglandin E2 (31% versus 12%, P < 0.001), and had premature rupture of membranes (27% versus 15%, P = 0.012) and pre-eclampsia (8% versus 2%, P = 0.012). Epidural analgesia was more frequent in women with a retained placenta (86% versus 26%, P < 0.001), as was a longer second stage (>3 h) (47% versus 5%, P < 0.001) and more vacuum-assisted deliveries (16% versus 2%, P < 0.001).
Birth weight less than 2500 g or more than 4000 g was more prevalent in women with retained placenta (7% versus 2%, P < 0.001 and 15% versus 2%, P < 0.001, respectively) (Table ).
3.3 Maternal and delivery outcomes associated with retained placenta
Women with retained placenta were more likely to experience postpartum hemorrhage (36% versus 13%, P < 0.001) and to receive blood products (13% versus 2%, P < 0.001), respectively. Additionally, they had higher rates of endometritis and intrapartum fever (14% versus 0%, P < 0.001 and 8% versus 3%, P = 0.005) and a prolonged hospital stay of 4 days or longer (41% versus 7%, P < 0.001) (Table ).
3.4 Multivariate logistic regression analysis
Multivariable logistic regression analysis (Table ) revealed that the following characteristics were independently associated with an increased risk of retained placenta: IVF pregnancy (odds ratio [OR] 3.8, 95% confidence interval [CI] 1.3–11.7, P = 0.018), labor induction (OR 21.8, 95% CI 5.5–86.8, P < 0.001), pre-eclampsia (OR 4.5, 95% CI 1.1–17.5, P = 0.031), duration of the second stage greater than 3 h (OR 3.9, 95% CI 1–15.1, P < 0.001), instrumental delivery (vacuum versus vaginal delivery) (OR 2.3, 95% CI 1.2–4.5, P = 0.010), small for gestational age (small versus appropriate for gestational age) (OR 16.8, 95% CI 2.7–103.7, P = 0.223), and large for gestational age (large versus appropriate for gestational age) (OR 28.2, 95% CI 5.4–148.5, P = 0.029).
DISCUSSION
The present study aimed to identify previously unreported risk factors associated with retained placenta, focusing on women who had undergone vaginal births with no previous intrauterine intervention. Over 8 years, the incidence of retained placenta was 1.1%, which is consistent with previously reported rates of 0.5%–3%. , , Retained placenta was associated with several risk factors, some of which have not been previously described, such as macrosomia, IVF, and endometriosis, as well as reported risk factors such as pre-eclampsia, labor induction, regional anesthesia, instrumental delivery, and a prolonged second stage of labor. In our study cohort, women undergoing assisted reproductive technology independently contributed to an elevated risk of retained placenta, with an OR of 3.8 compared with women who conceived naturally. The identified risk factors, such as IVF, macrosomia, and labor induction, may influence the physiologic processes of placental separation through hormonal or mechanical influences, as supported by previous studies on abnormal placental adherence and vascular stress.
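The adjusted odds ratios above come from the logistic model, but a crude odds ratio and its Woolf-type (log-scale) 95% confidence interval can be reconstructed from 2 × 2 counts. This is an illustrative stdlib-only sketch; the counts are back-calculated from the reported IVF percentages (roughly 15 of 99 cases and 8 of 198 controls) and are not the study's raw data:

```python
from math import exp, log, sqrt

def odds_ratio_ci(exposed_cases, unexposed_cases,
                  exposed_controls, unexposed_controls, z=1.96):
    """Crude odds ratio with a Woolf (log-scale) confidence interval for a
    case-control 2x2 table. All four counts must be non-zero."""
    or_ = (exposed_cases * unexposed_controls) / (unexposed_cases * exposed_controls)
    # standard error of ln(OR): sqrt of the sum of reciprocal cell counts
    se = sqrt(1 / exposed_cases + 1 / unexposed_cases
              + 1 / exposed_controls + 1 / unexposed_controls)
    return or_, exp(log(or_) - z * se), exp(log(or_) + z * se)

# Illustrative counts back-calculated from the reported IVF percentages:
# roughly 15 of 99 cases and 8 of 198 controls conceived by IVF.
ivf_or, ivf_lo, ivf_hi = odds_ratio_ci(15, 84, 8, 190)
```

With these approximate counts the crude OR is about 4.2 (95% CI roughly 1.7–10.4), in the same direction as the adjusted OR of 3.8 (95% CI 1.3–11.7) reported above; the adjusted estimate differs because the regression controls for the other covariates.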
These findings underscore the multifactorial nature of retained placenta, necessitating a deeper exploration of the underlying pathophysiology. Aziz et al. investigated the relationship between IVF and the length of the third stage. The authors concluded that cryopreserved embryo transfer (donated or autologous) without controlled ovarian hyperstimulation was not associated with a longer third stage but significantly increased the risk for manual removal of the placenta. Our finding of an increased risk of retained placenta associated with IVF pregnancies is consistent with previous research, such as the study by Wertheimer et al., who found that complications of the third stage of labor were more prevalent in IVF pregnancies. These findings suggest that IVF may contribute to a higher risk of retained placenta. We noted a significantly higher risk of retained placenta in women with endometriosis. This group of women is also known to have an increased risk for placenta previa and excessive bleeding during cesarean section. A systematic literature review supports our interpretation of the identified risk factors for retained placenta, encompassing endometriosis and assisted reproductive technologies. Endometriosis has been associated with an increased risk of various obstetric complications, as highlighted by Kobayashi et al. In their review of the relationship between endometriosis and obstetric complications, the authors emphasized that women with endometriosis are more likely to experience retained placenta. This complication may stem from the pathophysiologic changes associated with the condition. One of the proposed mechanisms is the alteration of uterine peristalsis in women suffering from endometriosis, which may impede the normal migration of the blastocyst during implantation.
Such disturbances can lead to improper placement of the blastocyst and consequently elevate the risk of conditions like placenta previa. In addition, structural and functional modifications within the inner layer of the myometrium, particularly in the junctional zone, can hinder the physiologic remodeling of the spiral arteries in the uteroplacental bed. This failure is crucial, as it can adversely affect placentation and is frequently observed in cases of retained placenta, affirming the association between defective placentation and the prevalence of retained placenta in patients with endometriosis. Furthermore, a multicenter retrospective study highlighted that assisted reproductive technologies significantly increased the risk of retained placenta, necessitating manual removal of the placenta and leading to postpartum hemorrhage. Additionally, women with retained placenta were more likely to experience premature rupture of membranes and to deliver large-for-gestational-age infants. Macrosomia had a strong association with retained placenta (OR 28.2) compared with infants classified as appropriate for gestational age. We hypothesize that mechanical factors associated with macrosomic infants, such as increased shoulder width and head circumference, can hinder the effective contraction of the uterus and impair the natural separation of the placenta. This may lead to incomplete placental expulsion, triggering complications like uterine atony and postpartum hemorrhage. The intricate interplay of biomechanical and physiologic factors underscores the importance of vigilant obstetric management in cases of macrosomia to minimize the likelihood of retained placenta and its associated adverse outcomes. Placenta-associated pregnancy complications such as chronic hypertension and pre-eclampsia lead to hypoperfusion and placental oxidative stress. This association extends to other maternal characteristics linked to abnormal placentation, specifically SLE.
Individuals with SLE encounter an increased risk of adverse pregnancy outcomes, mainly attributable to impaired placentation. These changes predominantly encompass abnormalities in placental vascularity and coagulation, ultimately resulting in impaired trophoblastic invasion. In the present study, women with pre-eclampsia had 4.5 times higher odds of experiencing retained placenta compared with those without pre-eclampsia. Women who underwent labor induction had a 21.8 times higher likelihood of experiencing retained placenta compared with those who did not undergo induction (mainly involving the use of prostaglandins). Prostaglandin use can increase the risk of retained placenta for several reasons; for instance, if uterine contractions are excessively stimulated, they can lead to uterine atony, a condition in which the uterus lacks adequate muscle tone to contract effectively after delivery. This diminished contractile ability may prevent the complete expulsion of the placenta. Women who had an instrumental delivery using vacuum extraction had a 2.3 times higher likelihood of experiencing retained placenta compared with those who had a spontaneous vaginal delivery. We identified an association between retained placenta and regional analgesia. Upon further review, this association may be confounded by the longer duration of labor and higher rates of labor induction seen in the analgesia group, both of which are independent risk factors for retained placenta. Therefore, it is difficult to conclude that analgesia alone increases the risk of retained placenta, and further studies are required to disentangle the effects of analgesia from these other interventions. Although some studies suggest a possible link between epidural analgesia and retained placenta, the mechanism remains unclear. Epidural analgesia might depress the autonomic nervous system, potentially affecting uterine contractions and increasing the risk of incomplete placental expulsion.
It is important to note that pethidine (meperidine hydrochloride) used for labor analgesia did not show a similar association with retained placenta. Obstetrical complications and intrapartum conditions are associated with placental disease, particularly maternal vascular hypoperfusion, which leads to inadequate contraction of the retroplacental uterine wall and affects placental detachment during the third stage of labor. Importantly, in our study, women with a prolonged second stage of labor lasting more than 3 h had a 3.9 times higher likelihood of experiencing retained placenta compared with those with a shorter second stage, emphasizing that labor dystocia in the second stage increases the likelihood of subsequent retained placenta. In the present study, a history of retained placenta appears to be linked to an increased likelihood of recurrence in subsequent vaginal deliveries. These findings align with existing literature: women with a history of retained placenta during vaginal delivery exhibited a significantly heightened risk of recurrence in subsequent deliveries. Notably, a study involving over 280 women in Denmark reported a substantial increase in the risk of recurrence, reaching approximately 25%. Contrary to the findings of Romero et al., the present study did not reveal a higher incidence of retained placenta in preterm deliveries compared with term parturients. This could be because of the low incidence of preterm birth in our population, which accounted for only 6% of total births; this limited number of preterm births may have contributed to the lack of association between retained placenta and preterm birth. The present study did not identify any significant association between retained placenta and maternal age, smoking, pregestational diabetes, or thrombophilia. These factors may not be strong predictors or contributors to the development of retained placenta in our study population.
Future research should focus on external validation of our findings in other populations and healthcare settings. Additionally, further studies are needed to explore the applicability of our identified risk factors across different geographic, socioeconomic, and clinical environments. This would help confirm the robustness of our results and provide more comprehensive guidelines for the management of retained placenta. Our cohort study has notable strengths as a large retrospective investigation performed in a single medical center with consistent obstetrical protocols over an 8-year period. The use of 2:1 matched controls enhances the reliability of the study, and the capacity to extract demographics, obstetrical history, and chronic diseases not previously studied enriches the research. Nevertheless, the retrospective design and potential data limitations, including the absence of certain data points and histopathologic findings, introduce inherent limitations. Confidence intervals are crucial when considering study limitations, as they convey the uncertainty of the estimates. Although the study's sample size was sufficient for the analysis, wide confidence intervals for some variables, such as IVF pregnancies, suggest variability that may affect the precision of the estimates; future studies with larger populations could further refine these findings. Some of the OR estimates in the multivariate model are indeed broad, but because they are similar to the ORs in the univariate models, we believe they reflect reality. Further studies are needed to test these hypotheses. It is important to note that the discussion may not fully align with the study's primary purpose, and the observed correlations may be less strongly associated with retained placenta than initially suggested.
Consequently, the study may fall short of providing cohesive insights into the pathophysiologic mechanisms needed to support its findings. Future studies could benefit from developing a nomogram to better predict the risk of retained placenta based on the identified risk factors. In conclusion, the present study highlights the importance of early identification of previously unreported risk factors such as macrosomia, IVF, and endometriosis. While the study provides significant insights, some areas warrant further investigation; future research could focus on elucidating the specific mechanisms by which macrosomia, IVF, and endometriosis contribute to the development of retained placenta. Our findings underscore the need for informed patient counseling regarding potential complications such as retained placenta and postpartum hemorrhage. Understanding the pathophysiology of these disorders is crucial for developing effective preventive and treatment strategies, and our novel findings offer promise for physicians in assessing these risks before delivery, providing valuable insights into maternal care. B.H.N., I.H., N.A.-K., L.A.L., and J.E.J. contributed to the conception, acquisition, analysis, and interpretation of the data, and to drafting the manuscript. All authors agree with the final version of the manuscript and its submission to the International Journal of Obstetrics and Gynecology. The authors declare no conflict of interest.
Application of health action process approach model to promote toothbrushing behavior among Iranian elderly population: a cluster randomized controlled trial
It is based on the assumption that there is a distinction between a motivational and a volitional phase of behavior, and that different psychological constructs are influential in each phase. In the motivational phase, factors such as perceptions of risk, outcome expectancies, and action self-efficacy are proposed to play important roles in motivating individuals into action. In the volitional phase, coping self-efficacy, planning, and action control (such as self-monitoring) are proposed as key self-regulatory factors that ensure an intended behavior is initiated and then maintained. Several studies have shown the usefulness of the HAPA for explaining changes in oral health behavior, and there is growing support for the effectiveness of HAPA-based interventions in promoting oral hygiene behaviors. Improving the oral health of elderly people through different healthcare providers is one of the key objectives of the multidisciplinary team responsible for their care, with the aim of increasing their quality of life. The integration of oral health promotion into existing health promotion programs should be considered by health authorities to improve geriatric oral health. To the best of the authors' knowledge, no study has been designed based on the principles of the HAPA model for the promotion of oral health in elderly people. Consequently, the present randomized controlled trial was conducted to evaluate the use of the HAPA for promoting toothbrushing behavior and oral hygiene status among the Iranian elderly population. The hypothesis was that an oral health education program provided by a health officer is as effective as one provided by a dentist. The aim of the present study was to compare the effectiveness of an oral health education program based on the HAPA model delivered by a dentist versus a health officer among elderly individuals in municipality centers in Tehran, Iran.
Trial design This study was a multicenter, double-blind, parallel, cluster randomized controlled trial (RCT) with a 1:1 allocation ratio, involving elderly individuals aged 60 years and older residing in Tehran, Iran. The study comprised multiple phases, including a pilot study, baseline assessment, interventions, fortnightly reinforcement for the allocated groups, and follow-up examinations after 1 and 3 months. The total study period lasted from February 2021 to October 2021. The design and planning of this study were based on the Health Action Process Approach model, with self-reported measures of HAPA constructs and toothbrushing behavior assessed using a valid and reliable researcher-made questionnaire administered at baseline and at the 1- and 3-month follow-ups. The trial protocol was registered in the Iranian Registry of Clinical Trials (IRCT) on 7-12-2020 (registration number: IRCT20200928048868N1). Study population and randomization Participants Eligible participants ( n = 190) were elderly individuals aged 60 years and older. The inclusion criteria were: presence of at least 10 teeth in the mouth, residing in the selected districts, being a member of the neighborhood health center, having the ability to communicate with the research facilitators, and completing the informed consent form. Individuals with uncontrolled systemic diseases, motor limitations, or non-Iranian citizenship were excluded. Sample size The sample size calculation was performed with consideration for the study's power and the design effect due to clustering. The sample size was determined as N ≥ 50 + 8K, where K refers to the number of independent variables. In our study there were two independent variables (time and intervention), so the minimum sample size was 66. Due to the intracluster correlation, a design effect was considered during the planning phase to account for the inflation in sample size caused by clustering.
The design effect was calculated using the following formula: Design effect = 1 + (m - 1) × ρ, where m is the average cluster size and ρ is the intracluster correlation coefficient (ICC). In our study, the design effect was 1 + (132/24 - 1) × 0.05 ≈ 1.225, meaning the required sample size is inflated by a factor of approximately 1.225 due to clustering. To adjust for clustering, the initial sample size of 132 (66 per group) was multiplied by the design effect, yielding approximately 162 participants. Allowing for a 15% loss to follow-up, the present study was conducted with a sample size of 190 individuals. Sampling, randomization, and allocation Tehran is divided into 22 municipality districts, each containing several municipality neighborhood houses operating within the framework of policies and macrourban management programs. This study was conducted in health centers to achieve better access to the target group, particularly during the COVID-19 pandemic. A multistage cluster sampling approach was used, with the health centers of the selected neighborhood houses as clusters. Six of the 22 municipality districts were randomly selected: districts 1 and 5 from the north, 19 and 21 from the south, and 11 and 13 from the center of Tehran. Ultimately, 24 municipality neighborhood houses (four randomly selected from each district) were included in the study. Using a convenience sampling method, 7–10 eligible elderly subjects were selected from each center. Because different centers had varying numbers of older people, standardizing the number of samples per center could have led to under-representation in some centers and over-representation in others. Cluster sampling was therefore conducted with probability proportional to size (PPS), meaning that more participants were recruited from clusters with larger populations.
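The sample-size arithmetic above (N ≥ 50 + 8K, the design effect, and the clustering adjustment) can be checked with a short script. Reading 132 as 66 per group across the two arms is an inference from the figures given, not an explicit statement in the text:

```python
import math

def minimum_sample_size(k_predictors):
    # Rule of thumb cited in the text: N >= 50 + 8K
    return 50 + 8 * k_predictors

def design_effect(avg_cluster_size, icc):
    # DEFF = 1 + (m - 1) * rho
    return 1 + (avg_cluster_size - 1) * icc

n_min = minimum_sample_size(2)        # two predictors (time, intervention) -> 66
deff = design_effect(132 / 24, 0.05)  # m = 132/24 clusters, rho = 0.05 -> 1.225
n_clustered = math.ceil(132 * deff)   # -> 162 before attrition
# Allowing roughly 15% loss to follow-up, the authors settled on 190 participants.
print(n_min, round(deff, 3), n_clustered)
```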
Following baseline data collection, allocation took place with neighborhood houses as the units of randomization. In each selected district, the four neighborhood houses were randomized into two equal arms of parallel groups. For simple randomization, each neighborhood house's name was written on a piece of paper and concealed in an envelope, and the four houses were then allocated to the intervention groups by drawing envelopes at random. This process was repeated for the six selected districts. Ultimately, 12 neighborhood houses were allocated to intervention Group A ( N = 89) and 12 to intervention Group B ( N = 101). A flow chart of the study demonstrating participants at baseline and during the two postintervention evaluations is provided in Fig. . Blinding This trial was double-blind with regard to outcome measure assessment and data analysis. The examiner who conducted the postintervention oral examinations was blinded to the group allocation of the study participants. Statistical analysis was carried out by a trial statistician who was blinded to the allocation, and the intervention groups were coded without disclosing the labels. There was no contamination, as participants did not interact with any elderly individuals from the other group. Data collection The data were collected at three time points: baseline (T 0 ), one-month follow-up (T 1 ) and three-month follow-up (T 2 ). Outcome measurements Primary outcomes The primary outcomes for this trial were changes in the self-reported frequency of toothbrushing behavior and in the constructs of the HAPA model, which served as sociocognitive factors. Secondary outcomes The secondary outcome was improvement in oral hygiene status, measured by a decrease in the Simplified Oral Hygiene Index (OHI-S). Clinical measure The OHI-S comprises the Debris Index (DI) and the Calculus Index (CI).
Six teeth in the permanent dentition (buccal surfaces of 3, 8, 14, and 24 and lingual surfaces of 19 and 30) were scored on a scale of 0 to 3. The debris scores were then summed and divided by the number of examined teeth for each individual to calculate the DI; the same process was used to obtain the CI. The sum of the DI and CI was defined as the OHI-S. In this study, the OHI-S was treated as a quantitative outcome variable. Self-reported measure The questionnaire was designed to assess oral health behavior, including toothbrushing, based on the HAPA constructs. The validity and reliability of the questionnaire were assessed before data collection. The HAPA questionnaire included the following constructs and questions. Each construct was measured with a single item (question) for each behavior, and the answers were rated on a 7-point Likert scale from 1 to 7. To assess the frequency of toothbrushing behavior, participants were asked “How many times do you brush your teeth every day?” Outcome expectancies were assessed by, for example, “If I brush my teeth regularly, my breath will be fresh,” scored on a 7-point Likert scale ranging from 1 (strongly disagree) to 7 (strongly agree). Risk perceptions were measured using, for example, “If I do not brush my teeth frequently, the risk of caries will be,” scored on a 7-point Likert scale ranging from 1 (very unlikely) to 7 (very likely). Action self-efficacy was measured by, for example, “I am confident that I can brush my teeth twice a day in the future even if I’m tired,” scored on a 7-point Likert scale ranging from 1 (not at all true) to 7 (exactly true). Behavioral intention toward toothbrushing was measured using, for example, “I plan to brush my teeth twice a day in the coming weeks or months,” scored on a 7-point Likert scale ranging from 1 (strongly disagree) to 7 (strongly agree).
Action planning was assessed with “I have made a detailed plan regarding…” followed by (a) “when to brush my teeth,” (b) “where to brush my teeth,” (c) “how to brush my teeth,” (d) “how often to brush my teeth,” and (e) “how much time to spend brushing my teeth”; responses were rated on a scale ranging from 1 (strongly disagree) to 7 (strongly agree). Coping planning was assessed with the same stem item, “I have made a detailed plan regarding…,” followed by (a) “what to do if something interferes with my plans,” (b) “how to cope with possible setbacks,” and (c) “what to do if I forget”; the answers were rated on a 7-point Likert scale ranging from 1 (strongly disagree) to 7 (strongly agree). Maintenance self-efficacy was assessed with “I’m sure I can brush my teeth twice a day even if it takes a long time to become part of my daily routine,” rated on a 7-point Likert scale from “not at all true” to “definitely true.” Recovery self-efficacy was assessed with “I’m sure I can brush my teeth twice a day again regularly even if I have not done so for a month,” rated on a 7-point Likert scale from “not at all true” to “definitely true.” Pilot study and calibrations To implement the pilot study, a neighborhood house that was not included in the studied clusters was selected, and 30 elderly individuals participated. The two examiners reviewed how to record OHI-S scores (DI and CI) and examined one volunteer health officer from the selected municipality neighborhood to observe each other’s performance and discuss agreement on coding. For intraexaminer calibration, each examiner examined 10 elderly individuals for OHI-S (DI and CI) assessment. Because debris was removed by the explorer during the first examination, it was not possible to reassess the DI; only the CI was reassessed, and the intraclass correlation coefficient (ICC) was calculated (ICC = 0.93). For interexaminer calibration, 30 elderly individuals were examined by both examiners.
For the DI, one of the examiners collected debris with an explorer from each tooth, and both examiners separately recorded the scores; agreement was determined by comparing the scores of the two examiners (ICC for DI = 0.88). For the CI, they examined the elderly individuals separately (ICC for CI = 0.95). In total, the ICC for the OHI-S was calculated to be 0.89. In the pilot phase, toothbrushing education according to the constructs of the HAPA model was provided, and a skill checklist was completed for the research trainers (dentist and health officer) to confirm the skills of the training they would provide in the intervention phase. The results of the pilot study were discussed among the research team members, and minor revisions were made to the study protocol where necessary. A trained dentist was chosen for the control group (Group A) to ensure standardization for comparison with the trained health officer (Group B). The validity and reliability of the questionnaire were also assessed before data collection during the pilot study. Baseline The questionnaire was completed by the participants as described in the pilot study ( n = 190). The questionnaire included (1) information on sociodemographic characteristics (age, sex, income, education, employment status, living status, and medical history), (2) information about toothbrushing frequency, and (3) information about the constructs of the HAPA model for toothbrushing behavior. Oral examinations were performed to assess OHI-S scores. The elderly individuals were examined in a room at the municipality neighborhood houses, seated in an ordinary chair, under proper illumination from a headlamp, using a disposable mouth mirror and an explorer, with protective cross-infection control measures including disposable gloves and masks.
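The clinical scoring and calibration steps described above can be sketched in a few lines. The per-tooth scores are hypothetical, and the one-way random-effects ICC shown here is an illustrative choice, since the study does not state which ICC model was used:

```python
def component_index(scores):
    """Average per-tooth score (0-3) over the six examined index teeth."""
    return sum(scores) / len(scores)

def ohi_s(debris_scores, calculus_scores):
    # OHI-S = Debris Index (DI) + Calculus Index (CI)
    return component_index(debris_scores) + component_index(calculus_scores)

def icc_oneway(ratings):
    """One-way random-effects ICC(1,1) for a subjects x raters table.
    One common choice for examiner-calibration exercises; assumed here."""
    n = len(ratings)       # subjects
    k = len(ratings[0])    # raters
    grand = sum(sum(row) for row in ratings) / (n * k)
    means = [sum(row) / k for row in ratings]
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    msw = sum((x - m) ** 2 for row, m in zip(ratings, means) for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical per-tooth scores for one participant (not from the study):
print(round(ohi_s([2, 1, 1, 2, 3, 1], [1, 0, 1, 1, 2, 1]), 2))  # 2.67
# Hypothetical CI scores by two examiners for three participants:
print(round(icc_oneway([[1, 2], [2, 3], [4, 3]]), 2))  # 0.6
```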
Interventions The 24 municipality neighborhood houses (clusters) were randomly assigned into two groups: Group A Elderly participants who received oral health education based on the constructs of the HAPA model by a dentist (control group) and an educational pamphlet. Group B Participants who received the same educational content from a health officer at the health center in the municipality neighborhood and an educational pamphlet. Both the dentist and the health officer were trained and calibrated by the researchers. The instructions used were the same for both groups. The educational content included general information on oral health behaviors as well as risk factors, oral hygiene practices, the importance of teeth and their care, the correct way to brush and floss, the importance of controlling the consumption of sugary substances, fluoride use, regular dental visits, and the adverse effects of smoking on oral health. The study also provided explanations for the relationship between general health and quality of life. Information was provided about the positive consequences of daily toothbrushing, and elderly people were encouraged to formulate their own potential pros and cons of regular toothbrushing. Additionally, effective toothbrushing was demonstrated using a dental model. Elders were asked to make concrete plans on when, where and after what activity they would brush their teeth in the future using the if-then formulation. Participants were also asked to identify barriers and possible solutions by making coping plans to increase adherence to their action plans. At the end of the educational session, participants received a pamphlet containing information about oral health and toothbrushing according to the HAPA model. Reinforcement Every two weeks both groups received reinforcement after the beginning of the intervention, the instructions were repeated for each participant via phone calls, questions were answered, and problems were addressed. 
Additionally, WhatsApp messages were sent to reinforce the potential positive outcomes of oral health care. Reinforcement was delivered at different times in the two groups because the intervention start date differed across centers. Statistical analysis The normality of the quantitative variables was assessed using the Kolmogorov–Smirnov test. The independent t-test was used to compare monthly income and mean age between the two intervention groups, and the Chi-square test was used to compare gender, educational status, and employment status. The marginal model of generalized estimating equations (GEE) was used for intragroup and intergroup comparisons, and the Bonferroni post hoc test was used for pairwise comparisons. The statistical analysis was conducted using SPSS version 25. A p value less than 0.05 was considered to indicate statistical significance. Ethics statement This study was approved by the Research Ethics Committee of Tehran University of Medical Sciences (IR.TUMS.DENTISTRY.REC.1399.102). Before completing the questionnaires and oral examinations, the researcher explained the study purpose and obtained written or verbal informed consent from the elderly individuals for voluntary participation. All the information collected from the respondents during this research was kept confidential. All identifiable details of the participants were separated from the coded data; the identifiable details and data entered on the computer were password protected and accessible only to the researchers.
The design and planning of this study were based on the Health Action Process Approach Model, with self-reported measures of HAPA constructs and toothbrushing behavior assessed using a valid and reliable researcher-made questionnaire administered at baseline and at 1- and 3-month follow-ups. The trial protocol was registered in the Iranian Registry of Clinical Trials (IRCT) on 7-12-2020 (registration number: IRCT20200928048868N1). Participants Eligible participants ( n = 190) were elderly individuals aged 60 years and older. The inclusion criteria were: Presence of at least 10 teeth in the mouth, residing in selected districts, being a member of the neighborhood health center, having the ability to communicate with research facilitators, and completing the informed consent form. Individuals with uncontrolled systemic diseases, motor limitation and non-Iranian citizenship were excluded. Sample size The sample size calculation is performed with consideration for the study’s power and the design effect due to clustering. The sample size is determined as follows: N ≥ 50 + 8 K where k refer to independent variables. In our study we had two independent variables (time and intervention). So, the minimum sample size was 66. Due to the intracluster correlation, a design effect is considered during the planning phase to account for the inflation in sample size caused by clustering. The design effect is calculated using the following formula: Design effect = 1+(m-1) ×ƿ where (m) is the average cluster size, and ƿ is the intracluster correlation coefficient (ICC). In our study the design effect is = 1+ (132/24–1) ×0.05. So, the design effect is approximately 1.225. This means the effective sample size is inflated by a factor of 1.225 due to clustering. To adjust the sample size for clustering, we multiply the initial sample size by the design effect. Thus, we would need approximately 162 samples to account for clustering effect. 
Considering 15% loss to follow up, the present study was conducted with a sample size of 190 individuals. Sampling, randomization, and allocation Tehran is divided into 22 municipality districts, each containing several municipality neighborhood houses operating within the framework of policies and macrourban management programs. This study was conducted in health centers to achieve better access to the target group, particularly during the COVID-19 pandemic. A multistage cluster sampling approach was used, with the health centers of selected neighborhood houses as clusters. Six out of the 22 municipality districts were randomly selected, including districts 1 and 5 from the north, 19 and 21 from the south, and 11 and 13 from the center of Tehran. Ultimately, 24 municipality neighborhood houses (four randomly selected from each district) were included in the study. Using the convenience sampling method, 7–10 eligible elderly subjects were selected from each center. Different centers had varying numbers of older people. Consequently, standardizing the number of samples could lead to under-representation in some centers and over-representation in others. The sampling method employed in this study is cluster sampling, conducted using probability proportional to size (PPS) sampling. This approach implies that a greater number of samples were collected from clusters with larger populations. Following baseline data collection, allocation took place within neighborhood houses as units of randomization. In each selected district, four neighborhood houses were randomized into two equal arms of parallel groups. For simple randomization, each neighborhood house’s name was written on a piece of paper and concealed in an envelope. These four neighborhood houses were then randomly allocated into the intervention groups by drawing envelopes randomly. This process was repeated for the six selected districts. 
Ultimately, 12 neighborhood houses were allocated to intervention Group A ( N = 89) and 12 to intervention Group B ( N = 101). A flow chart of the study demonstrating participants at baseline and during two postintervention evaluations is provided in Fig. . Blinding This trial was double-blind with regard to outcome measure assessment and data analysis. The examiner who conducted the postintervention oral examination was blinded to the group allocation of the study participants. Statistical analysis was carried out by a trial statistician who was blinded to the allocation. The intervention groups were coded without disclosing the labels. There was no contamination, as participants did not interact with any elderly individuals from the other groups. Data collection The data were collected at three time points: baseline (T 0 ), one-month follow-up (T 1 ) and three-month follow-up (T 2 ). Eligible participants ( n = 190) were elderly individuals aged 60 years and older. The inclusion criteria were: Presence of at least 10 teeth in the mouth, residing in selected districts, being a member of the neighborhood health center, having the ability to communicate with research facilitators, and completing the informed consent form. Individuals with uncontrolled systemic diseases, motor limitation and non-Iranian citizenship were excluded. The sample size calculation is performed with consideration for the study’s power and the design effect due to clustering. The sample size is determined as follows: N ≥ 50 + 8 K where k refer to independent variables. In our study we had two independent variables (time and intervention). So, the minimum sample size was 66. Due to the intracluster correlation, a design effect is considered during the planning phase to account for the inflation in sample size caused by clustering. 
The design effect is calculated using the following formula: Design effect = 1+(m-1) ×ƿ where (m) is the average cluster size, and ƿ is the intracluster correlation coefficient (ICC). In our study the design effect is = 1+ (132/24–1) ×0.05. So, the design effect is approximately 1.225. This means the effective sample size is inflated by a factor of 1.225 due to clustering. To adjust the sample size for clustering, we multiply the initial sample size by the design effect. Thus, we would need approximately 162 samples to account for clustering effect. Considering 15% loss to follow up, the present study was conducted with a sample size of 190 individuals. Tehran is divided into 22 municipality districts, each containing several municipality neighborhood houses operating within the framework of policies and macrourban management programs. This study was conducted in health centers to achieve better access to the target group, particularly during the COVID-19 pandemic. A multistage cluster sampling approach was used, with the health centers of selected neighborhood houses as clusters. Six out of the 22 municipality districts were randomly selected, including districts 1 and 5 from the north, 19 and 21 from the south, and 11 and 13 from the center of Tehran. Ultimately, 24 municipality neighborhood houses (four randomly selected from each district) were included in the study. Using the convenience sampling method, 7–10 eligible elderly subjects were selected from each center. Different centers had varying numbers of older people. Consequently, standardizing the number of samples could lead to under-representation in some centers and over-representation in others. The sampling method employed in this study is cluster sampling, conducted using probability proportional to size (PPS) sampling. This approach implies that a greater number of samples were collected from clusters with larger populations. 
Following baseline data collection, allocation took place within neighborhood houses as units of randomization. In each selected district, four neighborhood houses were randomized into two equal arms of parallel groups. For simple randomization, each neighborhood house's name was written on a piece of paper and concealed in an envelope. The four neighborhood houses were then randomly allocated to the intervention groups by drawing envelopes. This process was repeated for each of the six selected districts.

Primary outcomes

The primary outcomes for this trial were changes in the self-reported frequency of toothbrushing behavior and in the constructs of the HAPA model, which were used as sociocognitive factors.

Secondary outcomes

The secondary outcome was improvement in oral hygiene status, measured by a decrease in the Simplified Oral Hygiene Index (OHI-S).

Clinical measure

The OHI-S comprises the Debris Index (DI) and the Calculus Index (CI). Six teeth in the permanent dentition (buccal surfaces of 3, 8, 14, and 24 and lingual surfaces of 19 and 30) were scored on a scale of 0 to 3.
The debris scores were subsequently summed and divided by the number of examined teeth for each individual to calculate the DI. The same process was used to obtain the CI. The sum of the DI and CI was defined as the OHI-S. In this study, the OHI-S was treated as a quantitative outcome variable.

The questionnaire was designed to assess oral health behavior, including toothbrushing, based on the HAPA constructs. Its validity and reliability were assessed before data collection. The HAPA questionnaire included the following constructs and questions. Each construct was measured with a single item (question) for each behavior, and the answers were rated on a 7-point Likert scale from 1 to 7. To assess the frequency of toothbrushing behavior, participants were asked, “How many times do you brush your teeth every day?”. Outcome expectancies were assessed by, for example, “If I brush my teeth regularly, my breath will be fresh,” scored on a 7-point Likert scale ranging from 1 (strongly disagree) to 7 (strongly agree).
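To make the index computation above concrete, here is a minimal sketch of the OHI-S calculation; the per-tooth scores are hypothetical examples, not study data.

```python
def component_index(scores):
    """Mean score (0-3) over the six examined index teeth (used for both DI and CI)."""
    return sum(scores) / len(scores)

# Hypothetical scores for the six index teeth (3, 8, 14, 24, 19, 30).
debris   = [2, 1, 1, 2, 3, 1]    # debris scores
calculus = [1, 0, 1, 1, 2, 0]    # calculus scores

di = component_index(debris)     # Debris Index
ci = component_index(calculus)   # Calculus Index
ohis = di + ci                   # OHI-S = DI + CI
print(round(di, 2), round(ci, 2), round(ohis, 2))
```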
Risk perceptions were measured using, for example, “If I do not brush my teeth frequently, the risk of caries will be,” scored on a 7-point Likert scale ranging from 1 (very unlikely) to 7 (very likely). Action self-efficacy was measured by, for example, “I am confident that I can brush my teeth twice a day in the future even if I’m tired,” scored on a 7-point Likert scale ranging from 1 (not at all true) to 7 (exactly true). Behavioral intention toward toothbrushing was measured using, for example, “I plan to brush my teeth twice a day in the coming weeks or months,” scored on a 7-point Likert scale ranging from 1 (strongly disagree) to 7 (strongly agree). Action planning was assessed with “I have made a detailed plan regarding…” followed by (a) “when to brush my teeth,” (b) “where to brush my teeth,” (c) “how to brush my teeth,” (d) “how often to brush my teeth,” and (e) “how much time to spend brushing my teeth”; responses were rated on a scale ranging from 1 (strongly disagree) to 7 (strongly agree). Coping planning was assessed with the stem item “I have made a detailed plan regarding…” followed by (a) “what to do if something interferes with my plans,” (b) “how to cope with possible setbacks,” and (c) “what to do if I forget.” The answers were rated on a 7-point Likert scale ranging from 1 (strongly disagree) to 7 (strongly agree). Maintenance self-efficacy was assessed with “I’m sure I can brush my teeth twice a day even if it takes a long time for it to become part of my daily routine,” rated on a 7-point Likert scale from “not at all true” to “definitely true” . Recovery self-efficacy was assessed with the sample item “I’m sure I can brush my teeth twice a day again regularly even if I have not done so for a month,” rated on a 7-point Likert scale from “not at all true” to “definitely true” . To implement the pilot study, a neighborhood house that was not included in the studied clusters was selected, and 30 elderly individuals participated.
The two examiners reviewed how to record the OHI-S scores (DI and CI) and jointly examined one volunteer health officer from the selected municipality neighborhood to observe each other’s performance and discuss agreement on coding. For intraexaminer calibration, each examiner examined 10 elderly individuals for the OHI-S (DI and CI) assessment. Because debris is removed by the explorer during the first examination, it was not possible to reassess the DI; only the CI was assessed, and the intraclass correlation coefficient (ICC) was calculated (ICC = 0.93). For interexaminer calibration, 30 elderly individuals were examined by both examiners. For the DI, one of the examiners collected debris with an explorer from each tooth, and both examiners separately recorded the scores. The DI agreement was determined by comparing the scores given by the two examiners (ICC for DI = 0.88). For the CI, they examined the elderly individuals separately (ICC for CI = 0.95). In total, the ICC for the OHI-S was calculated to be 0.89. In the pilot phase, toothbrushing education according to the constructs of the HAPA model was provided, and a skill checklist was completed for the research trainers (dentist and health officer) to confirm the skills of the training they would provide in the intervention phase. The results of the pilot study were discussed among the research team members, and minor revisions were made to the study protocol where necessary. A trained dentist was chosen for the control group (Group A) to ensure standardization for comparison with the trained health officer (Group B). The validity and reliability of the questionnaire were also assessed during the pilot study, before data collection.

Baseline

The questionnaire was completed by the participants as described in the pilot study ( n = 190).
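For illustration, the intraclass correlation used in the examiner calibration above can be computed as follows. This sketch assumes the two-way random-effects, absolute-agreement, single-rater form, ICC(2,1); the study does not state which ICC model was used, and the ratings below are hypothetical, not the study's calibration data.

```python
def icc_2_1(ratings):
    """ICC(2,1) for a subjects-by-raters table of scores (hypothetical model choice)."""
    n = len(ratings)           # number of subjects (rows)
    k = len(ratings[0])        # number of raters (columns)
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(row[j] for row in ratings) / n for j in range(k)]
    ssr = k * sum((m - grand) ** 2 for m in row_means)    # between-subjects
    ssc = n * sum((m - grand) ** 2 for m in col_means)    # between-raters
    sst = sum((x - grand) ** 2 for row in ratings for x in row)
    sse = sst - ssr - ssc                                 # residual
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical calculus scores: rows = subjects, columns = the two examiners.
scores = [[1, 2], [3, 3], [2, 2], [4, 5]]
print(round(icc_2_1(scores), 2))
```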
The questionnaire included (1) information on sociodemographic characteristics (age, sex, income, education, employment status, living status, and medical history), (2) information about toothbrushing frequencies, and (3) information about the constructs of the HAPA model for toothbrushing behavior. Oral examinations were performed to assess the OHI-S scores. The elderly individuals were examined in a room at the municipality neighborhood houses, seated in an ordinary chair under proper illumination from a headlamp, using a disposable mouth mirror and an explorer, with protective cross-infection control measures including the use of disposable gloves and masks.

Interventions

The 24 municipality neighborhood houses (clusters) were randomly assigned to two groups. Group A: elderly participants who received oral health education based on the constructs of the HAPA model from a dentist (control group), together with an educational pamphlet. Group B: participants who received the same educational content from a health officer at the health center of the municipality neighborhood, together with an educational pamphlet. Both the dentist and the health officer were trained and calibrated by the researchers, and the instructions used were identical for both groups. The educational content included general information on oral health behaviors and risk factors, oral hygiene practices, the importance of teeth and their care, the correct way to brush and floss, the importance of controlling the consumption of sugary substances, fluoride use, regular dental visits, and the adverse effects of smoking on oral health. The study also provided explanations of the relationship between general health and quality of life. Information was provided about the positive consequences of daily toothbrushing, and the elderly participants were encouraged to formulate their own potential pros and cons of regular toothbrushing. Additionally, effective toothbrushing was demonstrated using a dental model.
The elderly participants were asked to make concrete plans on when, where, and after which activity they would brush their teeth in the future, using the if-then formulation. Participants were also asked to identify barriers and possible solutions by making coping plans to increase adherence to their action plans. At the end of the educational session, participants received a pamphlet containing information about oral health and toothbrushing according to the HAPA model.

Reinforcement

Every two weeks after the beginning of the intervention, both groups received reinforcement: the instructions were repeated for each participant via phone calls, questions were answered, and problems were addressed. Additionally, WhatsApp messages were sent to reinforce the potential positive outcomes of oral health care. The reinforcement occurred at different times in the two groups because the intervention was delivered at a different time in each center.
The normality of the quantitative variables was assessed using the Kolmogorov–Smirnov test. An independent t-test was used to compare monthly income and mean age between the two intervention groups, and the Chi-square test was used to compare gender, educational status, and employment status between the two groups.
A marginal generalized estimating equations (GEE) model was used for intragroup and intergroup comparisons, and the Bonferroni post hoc test was used for pairwise comparisons. The statistical analysis was conducted using SPSS version 25. A p value less than 0.05 was considered to indicate statistical significance. This study was approved by the Research Ethics Committee of Tehran University of Medical Sciences (IR.TUMS.DENTISTRY.REC.1399.102). Before completing the questionnaires and oral examinations, the researcher explained the study purpose and obtained written or verbal informed consent from the elderly individuals for voluntary participation. All the information collected from the respondents during this research was kept confidential. All identifiable details of the participants will be separated from the coded data, and the identifiable details and data entered on the computer will be password protected and accessible only to the researchers.

Sociodemographic characteristics

The mean ages of participants in Groups A and B were 63.90 ± 3.78 and 63.71 ± 4.19 years, respectively. In both groups, there were fewer males than females. The baseline sociodemographic characteristics of the participants, including age, sex, income, educational status, and employment status, are presented in Table . The sociodemographic variables did not differ between the two intervention groups. The sociodemographic characteristics of the participants who dropped out were similar to those of the participants who remained in the study, with no statistically significant differences in age, gender, monthly income, educational status, or employment status ( p > 0.05).

HAPA model constructs

According to the GEE models for the HAPA constructs, the interaction between the group and time effects was not statistically significant. Consequently, the results are shown for models fitted with only the group and time variables as fixed effects.
The GEE model fit showed that the HAPA model constructs, including risk perception, outcome expectancies, action self-efficacy, intention, action planning, coping planning, maintenance self-efficacy, and recovery self-efficacy, did not differ significantly between the two groups. However, the time effect was statistically significant in both groups ( p < 0.05) (Table ). For all constructs, pairwise comparisons were conducted between the three time points: baseline to one-month follow-up (T 0 -T 1 ), baseline to three-month follow-up (T 0 -T 2 ), and between the two follow-ups (T 1 -T 2 ). The results showed that all the changes in the HAPA model constructs were statistically significant between baseline and the second follow-up (T 0 -T 2 ) and between the first and second follow-ups (T 1 -T 2 ) in both Groups A and B ( p < 0.001), with the exception of outcome expectancies.

Toothbrushing behavior and OHI-S

Between-group comparisons of the frequency of toothbrushing are presented in Table . No statistically significant differences were observed between Groups A and B at baseline or at the one-month or three-month follow-up. However, the frequency of toothbrushing increased after the interventions in both groups. Table displays the GEE model fitting results, which show that the frequency of toothbrushing did not differ significantly between the two groups ( p = 0.09), while the time effect was statistically significant in both groups ( p < 0.001). The GEE model also demonstrated that the OHI-S did not differ significantly between the two groups ( p = 0.56), but the time effect was statistically significant in both groups ( p < 0.001); that is, the intervention was effective in both groups ( p < 0.001). The mean differences in the Simplified Oral Hygiene Index indicated that the intervention in both groups effectively reduced the OHI-S at T 1 and T 2 compared with baseline (T 0 ). However, there were no significant differences between Groups A and B in this regard (Fig. ).
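The Bonferroni post hoc adjustment applied to the three pairwise time-point comparisons can be sketched as follows; the raw p-values are hypothetical stand-ins, since the study's actual values come from the GEE models fitted in SPSS.

```python
def bonferroni(p_values):
    """Multiply each p-value by the number of comparisons, capping at 1."""
    m = len(p_values)
    return {pair: min(1.0, p * m) for pair, p in p_values.items()}

# Hypothetical raw p-values for the three pairwise time-point comparisons.
raw = {"T0-T1": 0.020, "T0-T2": 0.0004, "T1-T2": 0.0008}
adjusted = bonferroni(raw)
significant = sorted(pair for pair, p in adjusted.items() if p < 0.05)
print(adjusted, significant)
```

With these illustrative inputs, only the T0-T2 and T1-T2 contrasts survive adjustment, mirroring the pattern reported for most HAPA constructs.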
The present randomized controlled trial aimed to compare the effectiveness of an oral health education program based on the HAPA model provided by a dentist versus a health officer among elderly individuals attending municipality health centers in Tehran, Iran. Toothbrushing behavior clearly increased at T 1 and T 2 compared with T 0 in both groups; however, our findings did not reveal any differences between the two intervention groups. This indicates that the frequency of toothbrushing increased effectively at the one-month and three-month follow-ups after both interventions in elderly individuals. Additionally, the oral hygiene status of the participants in both groups improved.
Weizi and colleagues showed that a HAPA theory-based mini-program significantly improved oral health behavior and oral hygiene outcomes in young adults treated with fixed orthodontic appliances over the 12 weeks of their study . Moreover, in a cluster randomized controlled trial, Scheerman et al. demonstrated that an oral health intervention resulted in significant improvements in toothbrushing behavior and clinical oral health indicators (CPI and VPI), as well as more positive social cognitions based on the HAPA model and OHRQoL, among Iranian adolescent students in both the short and the long term . The improvements observed in the present study may be partly due to the WhatsApp reminder messages sent to both groups, as well as the phone call reminders every two weeks. A systematic review showed that SMS reminders improve prospective memory and reinforce behavior change interventions by prompting recipients to engage in the behaviors they wish to change . The present findings showed that changes in the hypothesized determinants of action (the HAPA constructs) led to changes in the relevant behavior (increased toothbrushing frequency) and, as a consequence, in oral hygiene status. Our results also support the idea that it is necessary to target self-regulatory processes, such as those specified in the HAPA model, in addition to motivational variables . The present study also indicated that the interventions had a significant effect on the HAPA constructs, including risk perception, outcome expectancies, action self-efficacy, intention, action planning, coping planning, maintenance self-efficacy, and recovery self-efficacy, in both groups. All the HAPA model constructs were enhanced at T 1 and T 2 compared with T 0 in Groups A and B. The psychological determinants of oral health behavior and oral health outcomes have also been demonstrated in many contexts .
A recent systematic review and meta-analysis showed that intention, self-efficacy, social influence, and coping planning are important psychosocial factors of toothbrushing . In the majority of studies using the HAPA model, the targeted intervention involved flossing, the population consisted of students, adolescents, and dental patients, and all the studies used only a selection of the HAPA constructs . The present research employed a cluster randomized trial methodology and was the first to apply the HAPA model to target toothbrushing behavior in elderly individuals. The application of the HAPA, a model recognized for its effectiveness in behavioral change, could afford more reliable estimates. Blinding of the outcome measure assessment and data analysis could reduce the risk of bias. The interventions were provided by trained health officers, who form a stable workforce in municipality centers, facilitating the integration of oral health promotion into existing health programs. In addition, the reminder messages sent via WhatsApp and by phone calls to both groups may have played a supportive role in promoting the consistency of the training and the enhancement of behavior. Unfortunately, because of the onset of the COVID-19 pandemic, some of the baseline samples were lost to follow-up due to biological risk. Additionally, it was difficult to compare the results with those of previous studies because they were conducted in different age groups and targeted different behaviors. Most of the study participants were women. This finding was in accordance with the pattern of attendance at health centers, especially during working hours: men are more likely to hold a job after official retirement and are busy working during the day. However, this topic should be considered in future studies. It is recommended that similar studies with larger sample sizes and longer follow-up durations be designed.
In the present study, no difference was observed in the effectiveness of educational methods based on the HAPA model delivered by a dentist, as the gold standard, versus a health officer at the municipality center on the oral hygiene status of elderly individuals. The results indicate that trained health officers at municipality centers can serve as accessible and appropriate workers in oral health promotion programs to improve oral hygiene skills and enhance the self-efficacy of elderly people in oral health behaviors.
Exploratory study of serum protein biomarkers for sudden cardiac arrest using protein extension assay: A case-control study | a86955a4-86e8-4aec-a667-439dfc522d9c | 11849859 | Biochemistry[mh] | Sudden cardiac arrest (SCA) is a significant health burden worldwide, and the survival rate has remained low for decades . The annual incidence of SCA is estimated to be 62 per 100,000 in the USA, 50–90 per 100,000 in European countries, and 40–90 per 100,000 in Asian countries . Because of the high fatality rate, identifying high-risk populations as candidates for preventive interventions is crucial for reducing its burden of disease . Currently, no widely utilized methods exist in clinical practice for detecting these high-risk populations. Various biomarkers, including genomic, proteomic, clinical and, imaging biomarkers, have been investigated in previous studies . However, these studies have various limitations, including low accuracy, limited sample size, and targeting of specific disease groups . Compared to other biomarkers, protein biomarkers are more cost-effective, provide rapid results, and enable real-time monitoring of disease states and treatment responses . Previous proteomic biomarker studies for SCA occurrence are often limited by their focus on commonly used biomarkers like C-reactive protein (CRP) and N-terminal pro-B-type natriuretic peptide (NT-proBNP), and by often relying on samples collected long before or long after the cardiac event occurred . Recently, a study utilizing mass spectrometry proteomics analysis was conducted on 330 proteins using samples from SCA survivors and age- and gender-matched control groups, resulting in the proposal of 26 new protein biomarkers . However, many well-known proteins related to cardiovascular diseases were missed in the exploration, and the analysis included only survivors, with samples collected a median of 11 months after SCA. 
To address these limitations and improve the understanding of protein biomarkers in SCA, further research is needed to assess well-known cardiovascular-related proteins in more timely collected samples, ideally from both survivors and non-survivors, to enhance the accuracy and relevance of biomarker discovery. The aim of this study was to conduct an exploratory analysis to elucidate the association between well-recognized proteins related to cardiovascular, inflammatory, and immune diseases and the occurrence of SCA using the protein extension assay technique, and to evaluate their predictive power alongside traditional cardiovascular risk factors.

Study design

This case-control study is part of the Cardiac Arrest Pursuit Trial with Unique Registration and Epidemiologic Surveillance (CAPTURES) project in Korea . The project aims to identify risk factors for SCA and to develop preventive strategies against it, and has been ongoing since September 29, 2017. In this study, data from 17 participating hospitals collected from September 29, 2017 to April 30, 2022 were analysed.

Ethics statements

The study was approved by the ethics committees of all participating centers . All participants or their proxies provided written informed consent before taking part in the study, and the study complied with the tenets of the Declaration of Helsinki. The study is registered at ClinicalTrials.gov (NCT03700203). No minors were included in the study.

Populations

SCA patients aged 20–79 years who experienced cardiac arrest due to medical causes and were treated by emergency medical services before arrival at the emergency department (ED) were enrolled in the CAPTURES project. Patients with terminal illnesses, pregnancies, in hospice care, living alone, homeless, without reliable information sources, or with a ‘Do Not Resuscitate’ card were excluded.
Among the enrolled patients, only SCA patients aged ≤ 65 years whose initial rhythm in the ED was shockable were included in this study, because we wanted to focus primarily on relatively young SCA patients with a shockable rhythm and to exclude patients with a long lapse of time from the cardiac arrest to ED arrival. Community-based voluntary controls were enrolled from two centers representing metropolitan and non-metropolitan areas. All controls were recruited in collaboration with public health centers or community centers where the project was promoted. One or two controls matched for age, sex, and urbanization level of residence were recruited for each case.

Sample collection

Structured questionnaires, physical examinations, routine laboratory analyses, and blood sampling were conducted for the patients and controls. Blood samples (20 mL) were drawn and split into an EDTA tube and two serum-separating tubes (SSTs). The SSTs were centrifuged within 2 hours of sampling. After refrigerated storage at 2–8 °C, all blood samples were sent to an external laboratory (Seoul Clinical Laboratories, Seoul, Republic of Korea) for storage and future study. Blood samples were sent to the laboratory once daily on weekdays. For the patients, blood sample extraction was recommended during the initial management, but blood samples collected within 24 h of the ED visit were also included in the study. Lactate was included in the routine laboratory analysis for patients.

Protein analysis

We used three Olink target panels: the Cardiovascular II (version 5007) 96-Plex panel, the Cardiovascular III (version 6114) 96-Plex panel, and the Immuno-Oncology (version 3113) panel. The Cardiovascular panels cover proteins associated with biological functions linked to cardiovascular and inflammatory diseases, while the Immuno-Oncology panel covers proteins associated with cancer, the immune system, and systemic inflammation.
Serum protein levels were measured using proximity extension immunoassay (PEA) (Olink Proteomics, Uppsala, Sweden). The Olink PEA technology uses a dual-recognition DNA-coupled immunoassay that rapidly allows for protein identification with high sensitivity and specificity. Proteomic level assessments have been described in detail previously . Protein levels were measured on a relative scale and presented as normalised protein expression (NPX), which is an arbitrary unit on a log2 scale. A high NPX value corresponds to a high protein concentration. The levels of different proteins cannot be compared using NPX. However, using inter-plate controls (IPC), any systematic differences across different plates were adjusted; therefore, a consistent comparison of the same protein levels across different plates was possible. The IPC consists of a pool of 92 antibodies, each with unique DNA-tags, and is included in triplicate on each plate. The IPC serves as a synthetic sample, expected to give a high signal across all assays, and the median of the IPC triplicates is used to normalize each assay, correcting for potential variation between runs and plates. Except for the use of IPC in triplicate, the samples themselves were not measured in replicate. Each panel analyzed 92 proteins, totaling 276 proteins with 18 overlapping. NPX values from the panel with the fewest quality control flags were kept for overlaps. The limit of detection (LOD) of each protein was estimated based on the concentration in the negative controls in each sample plate. Proteins were excluded if more than 25% of the measurements were below the LOD. For the remaining proteins, the values below the LOD were replaced by the respective LOD. Since the samples were randomized before analysis, the placement of case and control samples within the plate was random. Statistical analysis All analyses were performed using the R environment for statistical computing, version 4.2.1. 
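The LOD rule described above (exclude a protein when more than 25% of its measurements fall below the limit of detection, and floor the remaining sub-LOD values at the LOD) can be sketched as follows. The paper's analyses were performed in R; this is an illustrative, dependency-free Python sketch with toy values, not the authors' code.

```python
def apply_lod_rule(npx, lod, max_below_frac=0.25):
    """npx: list of samples, each a list of per-protein NPX values.
    lod: per-protein limits of detection.
    Drops proteins with more than max_below_frac of values below LOD,
    then floors remaining sub-LOD values at the LOD."""
    n_samples = len(npx)
    n_proteins = len(lod)
    # keep[j] is True when protein j has an acceptable sub-LOD fraction
    keep = []
    for j in range(n_proteins):
        below = sum(1 for row in npx if row[j] < lod[j])
        keep.append(below / n_samples <= max_below_frac)
    # floor surviving values at the LOD, as in the paper
    filtered = [
        [max(row[j], lod[j]) for j in range(n_proteins) if keep[j]]
        for row in npx
    ]
    return filtered, keep

# toy example: 4 samples x 3 proteins, each with an LOD of 1.0
npx = [[0.5, 2.0, 3.0],
       [0.7, 2.5, 0.9],
       [0.4, 3.0, 3.5],
       [2.0, 2.2, 3.1]]
lod = [1.0, 1.0, 1.0]
filtered, keep = apply_lod_rule(npx, lod)
# protein 0 has 3/4 (>25%) sub-LOD values and is dropped;
# protein 2's single sub-LOD value (0.9) is floored to 1.0
```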
The association between proteins and SCA was assessed using a two-sided rank-based Spearman test. We labeled SCA as 1 and the control as 0 and calculated Spearman’s correlation coefficient for each protein. A power analysis was carried out; we had 0.95 power at the 0.05 significance level to detect a correlation of 0.591. Therefore, proteins with a correlation exceeding the cutoff (|Spearman’s correlation coefficient|>0.591) were extracted. Among them, we further extracted proteins with low post-cardiac arrest changes, because SCA causes systemic ischemia and inflammation, which affect the levels of various proteins. The procedure was performed in two steps. First, from the 40 SCA patients with confirmed lactate levels, we extracted proteins with no or weak correlation between lactate and protein levels (|Spearman’s correlation coefficient|<0.1). The lactate level is a sensitive marker of cellular hypoxia, including in cardiac arrest . Second, from the 20 SCA patients with confirmed arrest time and blood sampling performed within 60 min of SCA onset, we extracted proteins with no or weak correlation between onset-to-sampling time and protein level (|Spearman’s correlation coefficient|<0.1). A full list of proteins and the results of each extraction step are available in and . The distribution of extracted proteins according to SCA was plotted using boxplots, and a t-test was used to compare protein levels between groups. A heatmap was also used to visualize the distribution of biomarkers, and a hierarchical cluster analysis was performed. The heatmap reorders the rows and columns of the dataset to place data with similar profiles close to one another. Subsequently, ranges of similar values were assigned specific color codes, and each entry in the data matrix was displayed graphically as one specific colour according to its degree of expression. We also performed a Gene Ontology (GO) Slim summary to simplify the interpretation of the gene ontology analysis .
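The extraction rule above reduces to computing a rank correlation between each protein's NPX values and the binary case/control label, then applying the |ρ| > 0.591 cutoff (with the additional |ρ| < 0.1 filters against lactate and onset-to-sampling time). A minimal sketch of that Spearman computation follows; the paper used R, and the NPX values below are hypothetical.

```python
def _ranks(values):
    """Average ranks (1-based); tied values share the mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over the run of values tied with values[order[i]]
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of tied positions, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the ranks."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

labels = [1, 1, 1, 0, 0, 0]              # SCA = 1, control = 0
npx = [9.1, 8.7, 9.5, 3.2, 4.1, 3.8]     # hypothetical NPX values
rho = spearman(npx, labels)
# |rho| > 0.591 would flag this protein for extraction
```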
GO is a widely used bioinformatics tool that provides a standardised vocabulary for describing genes and their products . GO Slim is a subset of the full GO dataset, which includes a number of terms selected from each of the three main GO categories (biological process, molecular function, and cellular component). For exploratory analysis, we plotted the relationship between age, BNP, and protein levels in the SCA group and control groups using a scatterplot and smooth line with a fitted linear line since age is an important demographic factor and BNP is a well-known risk factor for cardiac arrest . To evaluate predictive performance of extracted proteins, we calculated area under the receiver operating characteristic curve (AUROC). Multivariable logistic regression models were constructed using extracted proteins, with six traditional risk factors (age, sex, diabetes, hypertension, myocardial infarction, stroke) included as independent variables. Multivariable logistic regression models were also constructed based on BNP, with traditional risk factors and extracted proteins added separately. DeLong’s test was utilized to assess whether there is a statistically significant difference between each ROC curve of the models .
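The AUROC used for model comparison above has a simple rank interpretation: it equals the probability that a randomly chosen case receives a higher model score than a randomly chosen control (the normalized Mann-Whitney statistic). A self-contained illustration of that equivalence follows; the paper computed AUROCs and DeLong's test in R, and the scores below are hypothetical.

```python
def auroc(scores, labels):
    """AUROC as the Mann-Whitney probability that a random positive
    (label 1) outscores a random negative (label 0); ties count 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

# hypothetical predicted risks from a logistic model
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
labels = [1, 1, 0, 1, 0, 0]
# one case (score 0.4) is outscored by one control (0.7),
# so 8 of the 9 case-control pairs are ranked correctly
```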
Demographic findings During the study period, a total of 1,228 SCA patients and 2,065 controls were enrolled in CAPTURES project. Among the SCA cases included, 60 patients aged ≤ 65 years with a shockable initial rhythm at the ED were identified.
Among the 60 SCA patients with a shockable rhythm, a random sample of 42 patients was analyzed with 42 matched controls. For the patient group, 42 cases were collected from 13 centers as follows: 5, 4, 7, 5, 2, 2, 1, 2, 5, 1, 4, 1, and 3 cases, respectively. For the control group, 26 and 10 cases were collected from two centers, respectively. The demographic characteristics of the 42 patients and 42 controls are shown in . Each group had 35 males (83.3%) with a median (IQR) age of 56 (60–61) years. The number of comorbidities was significantly higher in the cases . Among 42 cases, survival to admission and survival to discharge were 36 (85.7%) and 23 (54.8%), respectively. Coronary angiography was performed in 23 (54.8%) cases, and percutaneous coronary intervention was performed in 13 (31.0%) cases. Among the 35 patients with confirmed arrest time and blood sampling time, the median arrest-to-sampling time was 55 minutes (IQR: 35–105). Among the 40 patients with confirmed initial lactate levels, the median lactate level was 11.3 mmol/L (IQR: 10.1–14.7). Biomarker extraction and exploration Among the 258 distinct proteins, 12 proteins were excluded for analysis because more than 25% of the measurements were below the LOD. Of the remaining 246 proteins, 97 showed a strong correlation with SCA, exceeding the cutoff (|Spearman’s correlation coefficient|>0.591). Among these 97 proteins, 44 showed weak or no correlation with lactate levels, and 12 showed weak or no correlation with onset-to-sampling time. Two proteins (AXL receptor tyrosine kinase [AXL] and TIMP Metallopeptidase inhibitor 4 [TIMP-4]) met all the criteria for biomarker extraction ( and ). In the GO Slim summary, both proteins were related to the extracellular space in the cellular component category . Similarly, in the GO Slim summary for 97 proteins strongly correlated with sudden cardiac arrest, the top cellular component category was also related to the extracellular space . 
The distribution of extracted proteins according to SCA was plotted in . Both proteins had higher NPX levels in the SCA group compared to the control group (both p < 0.001). In the SCA group, AXL’s NPX values ranged from 7.7 to 10, while TIMP-4’s NPX values ranged from 2.6 to 4.9, with the difference between the maximum and minimum values being less than an NPX of 3 for both. The NPX values of the two proteins collected from each center are presented in . The results of hierarchical cluster analysis were plotted using a heatmap. The heatmap also showed that the levels of the two proteins were higher in patients than in the controls. Examining the largest cluster in the heatmap by group, 25 (59.5%) in the SCA group and 22 (52.3%) in the control group were in the same cluster . In the exploratory analysis, the values of these two proteins tended to be higher in patients than in controls for all age groups and BNP levels . In the case of BNP, the overall level was lower in controls than in patients, but the level of the two proteins was higher in patients than in controls when the BNP level was low in both groups . The AUROC (95% confidence interval [CI]) of the AXL model and the TIMP-4 model were 0.893 (0.820–0.967) and 0.867 (0.792–0.942), respectively. The predictive performance of the AXL model and the TIMP-4 model was similar (p for comparison = 0.593). However, the NPX values of AXL and TIMP-4 did not show a strong correlation (correlation coefficient [95% CI]: 0.543 [0.372–0.679]), and when both AXL and TIMP-4 were included in the model, the predictive performance was significantly higher than that of each single-protein model (AUROC [95% CI] 0.944 [0.895–0.994] for the AXL with TIMP-4 model; p for comparison = 0.026 vs. the AXL model and p for comparison = 0.031 vs. the TIMP-4 model). The AUROC of the baseline model using six traditional risk factors was 0.692 (95% CI, 0.578–0.806).
The addition of AXL, TIMP-4, or both showed significantly higher predictive power compared to the baseline model (AUROC [95% CI] 0.891 [0.817–0.964] for the baseline with AXL model, 0.910 [0.910–0.997] for the baseline with TIMP-4 model, and 0.954 [0.910–0.997] for the baseline with AXL and TIMP-4 model, respectively; all p < 0.01 compared to the baseline model). When both proteins were added to the model, there was a significant difference in AUROC compared to the baseline with AXL model ( p = 0.007), but there was no significant difference in AUROC compared to the baseline with TIMP-4 model ( p = 0.121) . The AUROC of the BNP model was 0.787 (95% CI, 0.688–0.885). While the addition of six traditional risk factors to BNP did not significantly enhance predictive power (AUROC [95% CI] 0.788 [0.689–0.888] for the BNP with six traditional risk factors model, p for comparison = 0.072), the inclusion of AXL or TIMP-4 significantly improved the predictive performance compared to the BNP model (AUROC [95% CI] 0.918 [0.853–0.983], p for comparison = 0.029, for the BNP with AXL model and 0.914 [0.850–0.978], p for comparison = 0.005, for the BNP with TIMP-4 model) .
In this exploratory study, differences in serum protein profiles of 42 SCA cases with medical causes, aged 20 to 65 years, and whose initial rhythm was shockable on admission to the ED, compared to 42 community-based age- and sex-matched controls, were evaluated using a PEA protein assay. Among 246 proteins that met the quality criteria, 97 showed a strong correlation, satisfying sufficient power in this study’s sample size. When extracting proteins unlikely to show post-cardiac arrest changes based on their levels in relation to lactate and sampling time, two proteins (AXL and TIMP-4) were identified. Both proteins demonstrated enhanced discrimination power when added to traditional risk factors in multivariable analysis. AXL is a cell surface receptor that is involved in signal transduction, from the extracellular matrix (ECM) into the cytoplasm, associated with cell proliferation, adhesion, migration and survival. AXL is an inhibitor of the innate immune response, and is associated with a variety of pathological processes including cancer and autoimmune disorders . AXL also drives cardiac remodelling by regulating endothelial cells, vascular smooth muscle cells, cardiomyocytes, and potentially, fibroblasts . A study using a rat model reported that AXL level increases in the early stages of left ventricular remodelling with pressure overload, with no further increase in heart failure .
TIMP-4 inhibits the activity of matrix metalloproteinases (MMPs). MMPs play a crucial role in extracellular matrix remodelling and are involved in various physiological processes including tissue development, wound healing, and the malignant conversion of tumour cells . TIMP-4 is the most abundant TIMP protein within the myocardium. A previous study reported that the TIMP-4 level increased soon after acute myocardial infarction (AMI) and was positively correlated with left ventricular volume changes . In animal model-based studies, an increase in TIMP-4 was observed in compensated left ventricular hypertrophy, but in heart failure, TIMP-4 level or activity had decreased . TIMP-4 was also negatively correlated with atrial fibrosis and ECM changes in the atria of rheumatic heart disease with atrial fibrillation . We found that the biomarker analysis results of both proteins were related to the ECM region . In addition, we found that both proteins were directly associated with cardiac remodelling. Cardiac remodelling is one of the main causes of cardiac arrhythmia, ventricular dysfunction, and sudden death . In particular, a previous study has reported an important relationship between cardiac remodelling and arrhythmia, whereby the acquired changes in cardiac structure or function can promote the occurrence of cardiac arrhythmia (arrhythmogenic cardiac remodelling) . The heart can be electrically remodelled by various stimuli in the absence of structural remodelling . Aging itself can cause functional cardiac changes before structural remodelling . In this study, we found that the levels of the two protein biomarkers identified were higher in SCA patients than in controls, under a low BNP level scenario . We also found a significant improvement in predictive performance for SCA when BNP was combined with the extracted proteins, compared to the BNP model .
These findings suggest that functional or molecular changes in the heart prior to prominent structural changes may affect the risk of cardiac arrest, and that the two biomarkers we discovered might help detect these changes. A recent study analyzed 330 proteins of 20 SCA survivors and 40 control participants using a TripleTOF® 6600 mass spectrometer with a data-independent acquisition technique, and reported 26 protein biomarkers associated with SCA, of which 20 differentiated SCA from coronary artery disease . In that study, the extracellular matrix was included among the top identified biological processes, which is consistent with our results. That study strengthened its validity by conducting replication analyses in an additional 29 cases and 57 controls. However, direct comparison with our study is limited because only nine proteins overlap with those analyzed in the current research, and neither AXL nor TIMP-4 was analyzed. In the GO Slim summary for 97 proteins strongly correlated with sudden cardiac arrest, we also found that extracellular components accounted for a significant proportion in the cellular component category. However, the top GO terms in the biological process category were related to inflammatory response and apoptosis-related pathways, while in the molecular function category, the top GO terms were identical protein binding, cytokine activity, and zinc ion binding. This may reflect various pathogenesis mechanisms associated with the occurrence of SCA. Further research targeting various proteins is still necessary. Limitations This study had several limitations. First, the case-control design was used to explore differences in blood test results between SCA patients and controls. Given the unexpected nature of SCA, a case-control design was employed to generate hypotheses more efficiently. Our findings need to be verified in larger cohorts.
Second, blood sampling was performed after SCA occurred, which means the samples could be influenced by post-cardiac arrest changes. We reduced this effect by using early post-SCA samples and additional analysis with lactate levels and arrest-to-sampling time. The timing of biomarker measurement is also a concern in previous studies, with samples collected months before or after SCA, leading to interpretation difficulties . We minimized the temporal gap to SCA, but retrospective proximity may still influence data, requiring caution. In addition, single-time sampling without repeats limited the identification of post-cardiac arrest effects. Third, only pre-specified proteins were analyzed, excluding other known or unknown proteins. Fourth, only Korean patients with shockable rhythm aged ≤ 65 years were included, requiring caution in interpreting and applying the results. Fifth, AXL and TIMP-4 may be influenced by confounding effects due to their association with other conditions related to SCA. While we included diabetes, hypertension, myocardial infarction, and stroke in our multivariable model, the small sample size limited our ability to adequately adjust for other potential comorbidities. Lastly, because this study was conducted with a retrospective design, it was inherently limited to identifying associations among the variables examined. As a result, it is not possible to draw definitive conclusions regarding causal relationships from the findings.
Using blood samples from 42 SCA patients and 42 controls, we evaluated the serum levels of 246 proteins, identifying AXL and TIMP-4 as potential SCA biomarkers. Both proteins showed a significant association with SCA and enhanced predictive power with traditional risk factors in multivariable analysis. Our findings suggest that these biomarkers, involved in cardiac remodelling and extracellular matrix processes, may aid early detection and risk assessment of SCA. However, the study’s limitations, including its case-control design, single-time sampling, and small sample size, necessitate validation in future studies.
Identifying patients with minimal post-cardiac arrest changes could be one potential approach. Selecting patients with witnessed cardiac arrest who have both a short time from arrest to return of spontaneous circulation and a short time to blood sampling could help minimize post-cardiac arrest changes for analysis. Alternatively, analyzing blood samples already collected from cohorts where SCA occurrence is monitored could provide an opportunity to investigate the relationship between identified biomarkers and SCA. Future research should also explore additional biomarkers and verify AXL and TIMP-4's utility in diverse populations to solidify their clinical role in SCA prevention and management.
S1 Table. Institutional Review Board (IRB) numbers of participating hospitals. (DOCX)
S2 Table. Full list of proteins in the analysis with protein selection criteria. (DOCX)
S3 Table. Full list of proteins in the analysis with correlation coefficients at each criterion. (DOCX)
S1 Fig. GOSlim summary of biological process, molecular function, and cellular component for the AXL and TIMP-4 proteins. AXL, AXL Receptor Tyrosine Kinase; TIMP-4, TIMP Metallopeptidase Inhibitor 4. (DOCX)
S2 Fig. GOSlim summary of biological process, molecular function, and cellular component for the 97 proteins with a strong correlation with sudden cardiac arrest, exceeding the cutoff (|Spearman's correlation coefficient| > 0.516). (DOCX)
S3 Fig. Distribution of AXL and TIMP-4 in patient groups by center. AXL, AXL Receptor Tyrosine Kinase; TIMP-4, TIMP Metallopeptidase Inhibitor 4. (DOCX) |
Real-World Treatment Intensity and Patterns in Patients With Myopic Choroidal Neovascularization: Common Data Model in Ophthalmology | a2de693b-b18d-43ec-97cb-e43e7bef366a | 10261705 | Ophthalmology[mh] | Myopia has emerged as a public concern in recent decades in East Asia, where its prevalence is higher than in other regions of the world. The coronavirus disease 2019 pandemic has exacerbated this trend by accelerating myopia progression, which eventually results in more people developing pathologic myopia (which usually occurs in patients with high myopia, diopters ≤ −6.0). Myopic choroidal neovascularization (mCNV), a vision-threatening complication of pathologic myopia that can lead to irreversible macular atrophy or fibrosis within 5 years of onset, is the most common cause of choroidal neovascularization (CNV) in the working-age group younger than 50 years. Historically, treatment had been restricted to photodynamic therapy (PDT) for subfoveal mCNV and laser photocoagulation for extrafoveal or juxtafoveal mCNV. However, anti-vascular endothelial growth factor (VEGF) drugs significantly improve visual outcomes in patients with mCNV, so anti-VEGF drugs have been the first-line option for mCNV treatment since 2009, and subsequent pivotal studies, the RADIANCE and MYRROR studies, confirmed the era of anti-VEGF drugs in the treatment of mCNV. However, little is known about this shift in real-world practice; some previous studies lacked data on switching patterns among treatment options, and others did not account for all treatment options, including non-reimbursed choices and off-label drugs. As in other countries, the Korean National Health Insurance Scheme (NHIS) did not cover any anti-VEGF drugs until 2016; therefore, the NHIS claims database is not suitable for research on mCNV, unlike exudative age-related macular degeneration (AMD), for which the NHIS has covered these treatments since 2007.
The Observational Health Data Sciences and Informatics (OHDSI) initiative, a global consortium established to accelerate observational data research, has introduced the Observational Medical Outcomes Partnership (OMOP) Common Data Model (CDM). The OMOP CDM makes it possible to capture data in the same manner across sites, which enhances the scalability of studies, and to analyze data in a rigorous way that enables reproducible research. Studies using the CDM have attracted academic attention in several fields of medicine because the CDM can generate real-world evidence. Recently, studies in ophthalmology have used OMOP CDM data to analyze the real-world incidence of endophthalmitis following anti-VEGF drugs and the real-world treatment intensities and pathways of macular edema following retinal vein occlusion (RVO). Unlike some diseases that require intensive chart review in addition to an operational definition to confirm the accuracy of diagnosis, mCNV can be investigated using the OMOP CDM. Therefore, in this research, we aimed to characterize the treatment intensity and patterns in patients with mCNV. Data source and eligible criteria for study population This was a retrospective, observational study using the OMOP CDM database (version 5.3.1) of Seoul National University Bundang Hospital (SNUBH), which included 2,006,478 patients (47.6% female) from April 2003 to December 2020. For the analysis, we created a treatment-naïve mCNV cohort consisting of patients exposed to any of three anti-VEGF drugs (ranibizumab, aflibercept, and bevacizumab) or either of two procedures (laser photocoagulation and PDT). The index date was defined as the date of the first exposure to these drugs or procedures. Of these, we included patients who had at least 365 days of observation before the index date as a wash-out period to ensure treatment-naïve profiles and at least 365 days after the index date to observe subsequent prescriptions.
The end date of the cohort was the end of observation for each patient or the end of the database on December 31, 2020. Then, we identified mCNV patients who had 1) a diagnosis of mCNV or 2) a diagnosis of CNV along with high myopia, defined by a diagnosis of high myopia, degenerative myopia, or pathologic myopia or by a measured refractive error (spherical equivalent ≤ −6.0 diopters), between 365 days before and 365 days after the index date. Lastly, we excluded all patients who had any other condition that might require anti-VEGF drugs, including exudative AMD, RVO, and diabetic macular edema (DME). depicts the schematic diagram for the definition of the treatment-naïve mCNV cohort, and shows the flow chart for the eligible patients. provides the standard concept codes used in the analyses. Study outcomes, statistical analysis We investigated the baseline characteristics of patients, including age, gender, race, medical history, and myopic status (measured in diopters). Visual acuity (presented in LogMAR) at baseline (the index date) and at 90, 365, and 730 days after the index date was also assessed, and a Generalized Estimating Equations (GEE) model was applied to handle these repeated visual acuity measurements, considering the effect of drug type on visual outcomes. Statistical significance was defined as P < 0.05. After that, we assessed the treatment intensity as follows. We first assessed the number of treatments in each patient and also calculated the number of treatments in the first and second years after the index date. We stratified the results by calendar year and by three periods (the pre-anti-VEGF era, from April 2003 to December 2005; the early anti-VEGF era, from January 2006 to November 2017; and the anti-VEGF reimbursement era, from December 2017 to December 2020) based on each patient's index date. We also confined the analysis to patients who completed at least 2 years of observation from the index date.
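As an illustration only, the index-date, wash-out, and per-year counting logic described above can be sketched as follows. The study itself ran against OMOP CDM tables via ATLAS, SQL, and R; every record, field name, and date below is invented for demonstration.

```python
from datetime import date, timedelta

# Invented records: (patient_id, treatment, exposure date).
EXPOSURES = [
    ("p1", "bevacizumab", date(2010, 3, 1)),
    ("p1", "bevacizumab", date(2010, 5, 10)),
    ("p1", "ranibizumab", date(2011, 6, 2)),
    ("p2", "PDT",         date(2004, 8, 20)),
]
# Invented observation periods: patient_id -> (start, end).
OBSERVATION = {
    "p1": (date(2008, 1, 1), date(2013, 1, 1)),
    "p2": (date(2004, 6, 1), date(2006, 1, 1)),  # <365 d before index: excluded
}

def index_date(pid):
    """The first exposure to any study treatment defines the index date."""
    return min(d for p, _, d in EXPOSURES if p == pid)

def eligible(pid):
    """Require >=365 days of observation before (wash-out) and after the index date."""
    start, end = OBSERVATION[pid]
    idx = index_date(pid)
    return (idx - start).days >= 365 and (end - idx).days >= 365

def treatments_in_year(pid, year):
    """Count treatments in post-index year 1 (days 0-364) or year 2 (days 365-729)."""
    idx = index_date(pid)
    lo = idx + timedelta(days=365 * (year - 1))
    hi = idx + timedelta(days=365 * year)
    return sum(1 for p, _, d in EXPOSURES if p == pid and lo <= d < hi)

cohort = [pid for pid in OBSERVATION if eligible(pid)]
print(cohort)                       # only p1 satisfies both 365-day windows
print(treatments_in_year("p1", 1))  # 2 injections in the first post-index year
print(treatments_in_year("p1", 2))  # 1 injection in the second post-index year
```

The same window logic extends directly to assigning each index date to one of the three eras.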
In addition, we stratified the included patients according to their initial treatments. Lastly, among the patients with at least 2 years of observation, we assessed the proportion of patients who did not have any prescriptions in the second year after their index date. Using logistic regression via GEE, we also studied the effect of previously known prognostic factors, namely “age”, “visual acuity at baseline”, and “type of drugs received” , plus a “gender” factor, on treatment demand in the second year after treatment initiation. The functions provided in ATLAS version 2.10.1, with modifications to the source code (using open-source R packages), were used. We also used R Studio version 3.6.3 and PostgreSQL version 8.0.2 in the analyses. Ethics statement Our study was conducted in accordance with the Declaration of Helsinki and adhered to Good Clinical Practice Guidelines. The Institutional Review Board (IRB) of SNUBH approved the present study and waived informed consent (IRB Number X-2112-727-902).
We included a total of 94 patients with mCNV, and of these, 74 patients completed at least 2 years of follow-up from the index date . A majority of patients were aged 50 years or older (67.38%), most were female (73.4%), and all patients were Korean. While accompanying visual system disorders were common (87.23%), less than 5% of patients had chronic diseases such as diabetes or hypertension, or other cardiovascular diseases. The mean myopic status at baseline was −5.79 ± 5.04 diopters . Mean visual acuity (LogMAR) was 0.291 at baseline, 0.243 after 90 days, 0.237 after 365 days, and 0.237 after 730 days ; however, the improvement in visual acuity was not statistically significant ( P = 0.203) , even when the type of initial drug was taken into account ( P > 0.05) . Treatment intensity in patients with mCNV The number of treatments tended to increase over time, and bevacizumab was the most frequently selected treatment for mCNV throughout the study period . In the first era (2003–2005), only one patient was included and only one PDT session was administered.
The average number of treatments per patient per year increased from 1.5–2.5 in the second era (2006–2017) to 3 in the third era (2017–2020) . Within each patient with mCNV, the number of treatments decreased dramatically in the second year of treatment compared with the first year, from 2.09 to 0.47, and the trend was consistent irrespective of treatment modality . Similar trends were observed in patients with at least a 2-year observation period . A large majority (77.03%) did not undergo any treatment in the second year . No significant prognostic factors for treatment demand in the second year were found, except initial drug type for “Bevacizumab” ( P = 0.003) and “Ranibizumab” ( P = 0.009) . However, the number of patients in the reference category “Aflibercept” was very small, and further studies are warranted. Treatment patterns in patients with mCNV We identified 10 unique treatment patterns among patients with mCNV throughout the study period . A vast majority received anti-VEGF drugs as the initial treatment (96.8%), and the most prevalent first-line treatment for patients with mCNV was bevacizumab (68.1%), far higher than the second most popular, ranibizumab (20.2%). Bevacizumab was also the most common choice of second-line anti-VEGF drug among all patients receiving more than one therapy, regardless of their first-line treatment, at 53.8%. Regarding the sub-analyses by period, from April 2003 to December 2005, only one case of mCNV was detected; the patient was initially treated with bevacizumab and then switched to PDT . Bevacizumab first-line users accounted for 72.6% during the period from January 2006 to November 2017 . The period from December 2017 to December 2020 showed an absolute preference for anti-VEGF treatments, without any session of laser photocoagulation or PDT.
Aflibercept replaced bevacizumab as the dominant first-line choice (55.6% vs 33.3%) in this period . Patients who remained on a single treatment accounted for a large percentage (86.2%), while the remaining 13.8% switched to a second-line therapy. No cases received third-line treatment. Of the patients who initiated treatment with bevacizumab (68.1%), a large proportion (92.2%) did not need alternative therapy, as only a small number of patients switched to ranibizumab (6.25%) or PDT (1.5%). Similar patterns were observed for the other therapies.
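As a rough illustration of how such treatment pathways can be derived from exposure records, the sketch below collapses each patient's chronologically ordered treatments into a sequence of distinct lines of therapy and tallies the resulting patterns. The records and names are invented; the study itself used modified ATLAS functions rather than this code.

```python
from collections import Counter

# Invented example data: patient -> exposures already sorted by date.
records = {
    "p1": ["bevacizumab", "bevacizumab", "bevacizumab"],
    "p2": ["bevacizumab", "bevacizumab", "ranibizumab"],
    "p3": ["ranibizumab", "ranibizumab"],
}

def pathway(exposures):
    """Collapse consecutive repeats so only switches create a new line of therapy."""
    path = []
    for treatment in exposures:
        if not path or path[-1] != treatment:
            path.append(treatment)
    return tuple(path)

# Tally unique patterns, e.g. monotherapy vs. first-line -> second-line switches.
patterns = Counter(pathway(seq) for seq in records.values())
print(patterns.most_common())
```

Here p1 and p3 are non-switching monotherapy patterns, while p2 represents a first-line-to-second-line switch.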
To the best of our knowledge, the present study is the first to describe real-world treatment intensity and patterns in treatment-naïve mCNV patients using the OMOP CDM. The findings indicate that patients with mCNV experienced one or two treatment modalities, and in over 60% of patients, bevacizumab was selected as the first-line or second-line treatment.
The total number of prescriptions tended to increase year by year, which is consistent with current knowledge: anti-VEGF agents have so far proven their efficacy and safety in the treatment of mCNV in the real world, and their superiority to other modalities was demonstrated by two significant clinical trials (RADIANCE and BRILLIANCE). In our study, the statistically insignificant improvement in visual acuity might be attributed to the incompleteness of the visual acuity data. Before 2005, when anti-VEGF drugs had not yet been introduced, extremely few patients were included in this study; mCNV patients were often not indicated for PDT or laser photocoagulation, so it is likely that most of them did not receive any treatment. Regarding the number of patients included in the anti-VEGF era, the figures were disproportionate, with 84 patients included in the period 2006–2017 (12 years) compared with only 9 patients in the period 2017–2020 (4 years). A backlog of patients waiting for a suitable treatment might reasonably explain this sudden surge in patient numbers after the introduction of anti-VEGF drugs. With respect to the mean number of prescriptions, there was a remarkable decline between the first and second years in our study (from 2.09 to 0.47), suggesting that the initial treatment might be adequate to deactivate the mCNV. This accords with the established guideline of pro re nata (PRN) treatment without a loading phase (one injection for the first episode and then as needed) in patients with mCNV, and with results from observational studies.
Two studies showed that the mean/median number of injections decreased from approximately three in the first year to just under 0.5 in the subsequent year, and in another 12-month observational study of ranibizumab, more than half (52.2%) of patients received just one injection during the study period and about 90% of patients received fewer than three injections. No prognostic factors were found for second-year treatment demand, raising the need for further studies on this matter. In addition, it is important to compare the number of prescriptions needed in mCNV with other retinal diseases that require anti-VEGF drugs, such as exudative AMD, DME, and RVO. In patients with exudative AMD and DME, loading doses (at least 3 injections) are usually necessary from diagnosis. In contrast, patients with mCNV tend to receive less frequent injections thanks to the omission of loading doses. In a CDM-based real-world study examining the treatment intensities of macular edema following RVO, the mean number of injections in the first year of treatment ranged from 2.45 to 3.12 (depending on the era), approximately one injection higher than our results. In addition, the findings from the present study are quite similar to those from the two landmark trials, RADIANCE and MYRROR. Interestingly, there was a decrease in the mean number of injections in the first year of treatment between the second and third eras. This might result from the increasing use of aflibercept, which may be superior in mCNV treatment: aflibercept is superior to ranibizumab in terms of final visual outcomes, and to bevacizumab in terms of lower treatment intensity over a 12-month period. In our study, bevacizumab, which has an affordable price as well as non-inferior efficacy compared with ranibizumab in mCNV treatment, was the most frequently applied treatment for mCNV in both the first and second lines of treatment.
It allows individualized and sufficient treatment without any restriction from the NHIS in Korea; therefore, physicians prefer bevacizumab for patients who require repeated treatments or who do not meet reimbursement criteria. For most of the included patients (86.2%), non-switching monotherapy was sufficient, which demonstrates the effectiveness of anti-VEGF drugs in the treatment of mCNV, especially in comparison with other diseases treated with anti-VEGF drugs. Drug switching, on which the literature is still in its infancy, is well described in our study. Our study has certain limitations. First, imperfect CDM mapping may result in some information biases. For example, disease laterality (left or right eye) was frequently neglected; however, the fellow eye does not always share the same problem as the affected eye. Second, the incompleteness of visual acuity mapping prevents us from answering whether under-treatment may drive an underestimation of treatment intensity, as there is an association between poor outcomes and less-than-needed treatment frequency. Third, loss to follow-up could lead to missed mCNV relapse events (although recurrence occurs mostly within the first year in treatment-naïve patients ), meaning that additional injections in those patients were not captured. Fourth, although a one-year wash-out period is sufficiently long to ensure a treatment-naïve profile, it may still be biased by the inability of our database to capture patients' treatment histories at other healthcare facilities. Finally, this study suffered from inadequate heterogeneity due to single-center sampling, limiting the generalizability of the findings. Despite these limitations, our study has unique strengths in addressing our research question.
First, our study is among the first to visualize the pathways of mCNV treatment, including switches between drugs and procedures across the different eras of an 18-year period. Second, using the CDM makes handling real-world data far less time-consuming, labor-intensive, and prone to human error. The CDM allows very flexible searching and querying and extends readily to multiple CDM databases. Also, the concepts, cohorts, and analyses in a CDM-based study can be reused at every step of later research. Third, an advantage of using electronic medical records over claims data is that we could assess off-label use and non-reimbursed options. In conclusion, there has been a shift to anti-VEGF drugs as the treatment of choice for mCNV over the last decades, in both first- and second-line treatment. Anti-VEGF drugs have also proven their effectiveness in real-world settings: non-switching pro re nata monotherapy is the main treatment regimen in most cases, and there is a sharp decrease in treatment intensity from the second year of treatment.
Quitting on TikTok: Effects of Message Themes, Frames, and Sources on Engagement with Vaping Cessation Videos | c54c75f7-0875-4345-8537-cf5f7a9a5f13 | 11606514 | Health Communication[mh] | Health campaigns have increasingly utilized social media in recent decades to reach youth and young adults . Social media engagement is broadly defined as any action where users interact, share, and create content within their networks . In health campaigns using social media, engagement has also become commonplace in campaign evaluations, serving as a proxy for message effectiveness . Engagement as Part of Behavioral Change The Integrated Behavioral Model posits that positive attitudes, perceived social norms, and personal agency regarding a behavior predict behavioral intentions, which subsequently influence actual behavior . People like social media posts for various reasons, such as socializing, giving feedback, sharing interests, and enjoyment; however, liking generally indicates a direct expression of positive sentiment . Furthermore, individuals tend to share social media content that aligns with their beliefs . Therefore, liking and sharing a post on social media may signal audience interest and positive attitudes toward the content, potentially serving as a “priming step” to behavior change . Based on the Integrated Behavioral Model, positive comments about promoted health behaviors suggest a favorable attitude toward adopting the behavior, whereas negative comments may reflect reluctance to embrace the recommended behavior. Engagement as Persuasive Cues The bandwagon effect is when people conform to the behavior and attitudes of others due to the belief that such behavior and attitudes are popular, desirable, or socially acceptable . In the context of social media communication, bandwagon cues, such as a large number of likes, shares, and positive comments, can trigger the bandwagon effect by signaling popularity and social acceptance . 
For example, one study found that news headlines on Facebook with many likes were rated as more credible than news with fewer likes. Health campaigns that received a greater number of positive comments were evaluated more favorably than campaigns associated with more negative comments and fewer positive comments . Moreover, high shares increased perceptions of message influence and preventive health behavioral intentions . Therefore, engagement with social media health campaigns not only reflects how audiences respond to a post but also influences how the post is processed. This study focuses on metrics including positive engagement (i.e., likes, shares, positive comments about quitting vaping) and negative engagement (i.e., negative comments about quitting vaping) to identify effective features for future vaping cessation social media campaigns. Specifically, we focus on examining the effect of message source and content features, including message themes and frames, on audience engagement with vaping cessation TikTok videos.
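The positive and negative engagement metrics this study focuses on can be operationalized as simple aggregates per video. A minimal sketch, in which the sentiment labels and all numbers are invented for demonstration:

```python
# Invented per-video data; comments are coded for sentiment toward quitting vaping.
video = {
    "likes": 530,
    "shares": 41,
    "comments": ["pos", "pos", "neg", "pos", "neg"],
}

# Positive engagement: likes + shares + positive comments about quitting vaping.
positive_engagement = (
    video["likes"]
    + video["shares"]
    + sum(1 for c in video["comments"] if c == "pos")
)
# Negative engagement: negative comments about quitting vaping.
negative_engagement = sum(1 for c in video["comments"] if c == "neg")

print(positive_engagement, negative_engagement)  # 574 2
```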
Previous research has identified the following common themes in vaping-related health messages: 1) physical health outcomes , 2) mental health outcomes , 3) harmful chemicals in vape products , 4) nicotine addiction , 5) the negative social image associated with vaping , and 6) financial costs of vaping . Themes addressing nicotine addiction, harmful chemicals, and negative health outcomes led to higher perceived message effectiveness among youth .
One study found that themes related to physical health outcomes were perceived as the most effective, surpassing themes on chemicals in vapes, mental health outcomes, and nicotine addiction. Additionally, nicotine addiction themes were less effective in eliciting negative affect compared to physical health effects and chemicals in vapes. Notably, these theme-based studies pertained to vaping prevention instead of vaping cessation. The current study explores what message themes receive more engagement with TikTok vaping cessation videos. The following research question was proposed: RQ1: What are the associations between the six pre-identified themes and both positive and negative engagement with vaping cessation TikTok videos? Health messages can be framed to emphasize either the benefits of a behavior (gain frame) or the consequences of not engaging in it (loss frame) . Studies suggest that loss-framed messages are more persuasive for detection behaviors like cancer screening, while gain-framed messages are more effective for promoting prevention behaviors such as exercise or quitting tobacco products . Research on gain and loss frames in the context of vaping prevention has yielded mixed results . However, no studies have specifically examined the effects of gain and loss frames on promoting vaping cessation . Despite the distinctions between cigarette cessation and vaping cessation concerning the products involved, a previous meta-analysis suggests that gain-framed messages were more likely than loss-framed messages to encourage smoking cessation . Ratio of Gain and Loss Frames Previous experimental studies have predominantly focused on comparing pure gain-framed and loss-framed messages . However, in real-life scenarios, the incorporation of both gain and loss frames in health messages, particularly within the context of TikTok videos, is common.
The Emotions-as-Frames model (EFM, , ) argues that loss-framed messages, emphasizing the negative consequences of not adopting recommended behaviors, tend to evoke negative emotions such as fear and guilt . Conversely, gain-framed messages are more likely to elicit positive emotions such as hope . Furthermore, EFM suggests positive emotions enhance the persuasive impact of gain framing, while negative emotions strengthen the influence of loss framing . Increasing the ratio of gain to loss frames in a message could intensify emotional responses. Given the documented advantage of gain frames in smoking cessation literature, we posit the following hypotheses: H1 : Vaping cessation TikTok videos with a higher ratio of gain frames elicit more positive social media engagement and less negative engagement than videos with a lower ratio of gain frames. H2 : Vaping cessation TikTok videos with a higher ratio of loss frames elicit less positive social media engagement and more negative engagement than videos with a lower ratio of loss frames.
A message source is the individual, group, or organization that the audience perceives as the communication originator . The characteristics of a message source can contribute to attitudinal and behavioral change through two psychological processes: internalization and identification . The internalization process can be manifested in terms of the expertise of message sources; formal experts, like healthcare professionals, can increase vaping risk perceptions among young adults . In addition, recent research has acknowledged the persuasive effects of informal experts, which are individuals who have firsthand experience (i.e., experiential expertise) with specific health issues . In vaping cessation, individuals who have successfully quit possess informal expertise, drawing on their firsthand experiences and knowledge of the quitting process. Identification is enhanced by source homophily, where similarities in beliefs, values, and social status between sender and recipient strengthen message impact . Although the literature on youth preferences for vaping cessation sources is limited, research shows youth smokers prefer messages from peers who smoke . When the recipient of a message perceives themselves to be relatable to the sender, the persuasive impact of the message tends to be stronger . Thus, current e-cigarette users might be effective message sources for vaping cessation campaigns.
Given the inability to determine the vaping and quitting status of TikTok video viewers and the lack of research on different message sources in vaping cessation videos, we have developed the following research questions to evaluate the influence of various message sources on engagement. RQ2 : Do videos featuring formal experts, informal experts, and current e-cigarette users receive greater positive engagement and less negative engagement than videos that do not incorporate these message sources? RQ3 : Which of the message sources (formal experts, informal experts, current e-cigarette users) generate the highest positive engagement and the least negative engagement in vaping cessation TikTok videos? Study Design and Data Collection Using an open-source TikTok scraping tool , we collected all publicly available TikTok videos containing the hashtags #quitvaping and/or #quitvape posted between January 1 st , 2022, and December 31 st , 2022. In total, we collected 1,709 public TikTok videos, including associated metadata such as the number of video diggs (i.e., likes), comments, and follower counts. The comments associated with the 1,709 TikTok videos were collected, resulting in a total of 47,879 comments. We randomly sampled 50% of the 1,709 videos ( N = 855) for the content analysis. The Institutional Review Board at a major university in the northeastern US exempted this study from review because it involved non-human subjects and used publicly available data. Sampling and Inclusion Criteria We first coded if the video was in English. Videos that were not in English were excluded from further analysis. Next, we determined the relevance of each video to vaping cessation. Only videos that explicitly mentioned quitting e-cigarettes were considered relevant to our study. For instance, videos that offered advice on quitting, shared personal experiences of quitting, or discussed the benefits of quitting were deemed relevant to quitting vaping. 
displays the sampling procedure used in this study. Intercoder Reliability To attain high coding reliability, two coders were first trained on 50 videos that were not included in the sampled video dataset. Discrepancies were discussed to resolve coding disagreements in three separate meetings. Next, two coders independently coded 10% of the sample data ( N = 86) for inter-coder reliability. Coding agreements were assessed with Cohen’s Kappa values, which were above 0.7 across all content variables, indicating a high level of intercoder reliability . The two trained coders then independently coded the rest of the videos. displays the inter-coder reliability. Video Coding Features – Predictor Variables The coding of message frames is contingent on message themes, as a frame can only be properly understood within the context of a specific theme. Therefore, we coded the presence/absence of six gain and/or loss-framed themes related to vaping from previous studies: 1) physical health outcomes; 2) mental health outcomes; 3) harmful chemicals in vape products; 4) nicotine addiction; 5) negative social image associated with vaping; and 6) financial costs of vaping. A video could contain both gain and loss-framed messages across six specific themes. Thus, a total of 12 gain/loss-framed themes were coded for each video. Presence of Six Message Themes The presence of each of the six themes was determined based on the inclusion of gain or loss-framed messages related to the coded theme. Ratio of Gain Frames We calculated the ratio of gain frames by dividing the number of gain-framed themes by the total number of present gain/loss-framed themes. Ratio of Loss Frames Similarly, we calculated the ratio of loss frames by dividing the number of loss-framed themes by the total number of present gain or loss-framed themes.
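The two ratio measures are simple proportions over the themes coded as present in a video. A minimal sketch of the computation, using hypothetical coded-video data rather than the study's actual dataset:

```python
# Sketch of the gain/loss frame ratio computation described above.
# The theme lists below are hypothetical examples of coding output.
def frame_ratios(gain_themes, loss_themes):
    """Return (gain ratio, loss ratio) over all present framed themes."""
    total = len(gain_themes) + len(loss_themes)
    if total == 0:
        return None, None  # no gain- or loss-framed themes present
    return len(gain_themes) / total, len(loss_themes) / total

# e.g., a video with a gain-framed "physical health" theme and
# loss-framed "addiction" and "chemicals" themes:
gain_ratio, loss_ratio = frame_ratios(
    ["physical health"], ["addiction", "chemicals"]
)
```

By construction the two ratios sum to 1 for any video with at least one framed theme, which is why the study enters them in separate models to avoid multicollinearity.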
Message Source A message source was categorized as a formal expert source (i.e., healthcare professionals) if the main character in a video introduced themselves as a healthcare professional or wore medical professional attire (e.g., white coats, scrub tops). In addition, a message source was determined as an informal expert (i.e., individuals who have successfully quit vaping) if the main character in the video indicated they had successfully quit vaping. Lastly, a message source was classified as a current user message source if the main character disclosed current e-cigarette use. Videos that did not contain any of the above three message sources were categorized as having non-expert and non-user sources. Video Engagement - Outcome Variables Numbers of Likes and Shares The number of likes and shares a video received was obtained during the scraping of the videos. Positive and Negative Comments About Quitting Vaping. To evaluate the sentiment of comments about quitting vaping, we conducted aspect-based sentiment analysis (ABSA) on all videos with at least one comment. In ABSA, “aspects” are attributes or components discussed in the text. We analyzed 47,879 comments using ABSApp , identifying 152 initial aspects. ABSApp provided examples of text strings for each aspect, which guided us in manually selecting six relevant terms for quitting vaping: quit, journey, choice, quitting, decisions, and decision. We excluded irrelevant aspects such as years, anyone, dude, dreams, and kids. We calculated aspect-based sentiment for each comment using an off-the-shelf LSA-T-DeBERTa model. LSA-T-DeBERTa demonstrates state-of-the-art performance across various natural language processing tasks by effectively capturing contextual information and semantic relationships within the text. The model achieves a macro-average performance score of 85% on multiple public datasets . The model provided probabilities for negative, neutral, and positive sentiments.
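The assignment step that follows from these probabilities — each comment goes to its highest-probability sentiment, and positive/negative comments are then tallied per video — can be sketched as below. The probability values are hypothetical stand-ins, not actual model outputs:

```python
from collections import Counter

# Hypothetical per-comment sentiment probabilities for one video's
# comments, standing in for the model's (negative/neutral/positive) output.
comments = [
    {"negative": 0.70, "neutral": 0.20, "positive": 0.10},
    {"negative": 0.05, "neutral": 0.15, "positive": 0.80},
    {"negative": 0.10, "neutral": 0.60, "positive": 0.30},
]

# Assign each comment to the sentiment category with the highest
# probability, then count positive and negative comments for the video.
labels = [max(p, key=p.get) for p in comments]
counts = Counter(labels)
positive_count = counts["positive"]
negative_count = counts["negative"]
```

These per-video counts are what enter the engagement models as the positive- and negative-comment outcome variables.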
For instance, “Nicotine has nothing to do with our anxiety, I quit back in February and I’m just as anxious and depressed as I was before” was categorized as negative to quitting vaping, “How did you quit?” as neutral, and “I want to quit so badly, not sure why I keep putting it off” as positive. Comments were assigned to the sentiment category with the highest probability. We then summed the number of positive and negative comments about quitting vaping for each video with at least one relevant comment. We validated the model’s predictions on aspect sentiment regarding quitting vaping by manually coding 15% of the examined comments. The validation metrics demonstrate good performance, with an accuracy of 81.08%. Details of the validation process and results are provided in the . Statistical Analyses Mixed-effect negative binomial models were utilized to test the hypotheses and research questions, with each engagement metric (likes, positive and negative comments regarding quitting vaping, and shares) treated as the outcome variable in a separate model. The models included the following predictors: 1) the presence or absence of each of the six message themes, 2) a four-level categorical variable indicating the type of message source, and 3) a continuous variable representing the ratio of gain/loss frames in the video. To avoid multicollinearity, the ratios of gain frames and loss frames were entered as predictors in separate models, along with the two other predictor variables in each of the negative binomial models. The analyses were conducted using R (Version 1.4.1106) and the R package glmmADMB. All models included random effects of TikTok users and were adjusted for variables that could affect video engagement, including TikTok account follower counts (per thousand), video length (in seconds), and the total numbers of gain and loss-framed themes in the video. Videos featuring at least one of the six identified themes were included in the negative binomial analysis of likes and shares.
Additionally, videos that mentioned at least one theme and received at least one comment were analyzed for positive and negative comments about quitting vaping.
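Because negative binomial models use a log link, each coefficient exponentiates into an incidence-rate ratio (IRR): a multiplicative change in the expected count per unit change in the predictor. A minimal illustration with an assumed coefficient (for exposition only, not a value estimated from the study data):

```python
import math

# Illustration of how an IRR from a log-link negative binomial model
# scales expected engagement counts. The coefficient is assumed here.
b_gain = math.log(2.0)  # assumed IRR of 2.0 for the gain-frame ratio

def expected_likes(baseline, gain_ratio):
    # log link: E[likes] = baseline * exp(b_gain * gain_ratio)
    return baseline * math.exp(b_gain * gain_ratio)

all_loss = expected_likes(1000, 0.0)  # gain ratio 0 -> baseline
all_gain = expected_likes(1000, 1.0)  # gain ratio 1 -> baseline * IRR
```

Under this reading, an IRR above 1 for a predictor means videos higher on that predictor are expected to receive proportionally more of the engagement outcome, holding the other covariates constant.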
Descriptive Analysis Results The 412 videos received over 83 million views on TikTok, with an average of 203,201 views per video ( SD = 677,793). Videos received a mean of 248 comments (SD = 924, Mdn = 28, IQR = 89), 21,185 likes (SD = 72,775, Mdn = 1,408, IQR = 5,119), and 368 shares (SD = 1,541, Mdn = 11, IQR = 76). The mean number of positive comments about quitting was 3 (SD = 7, Mdn = 1, IQR = 3), and the mean number of negative comments about quitting was 3 (SD = 7, Mdn = 1, IQR = 4). Message Themes and Frames presents the presence of twelve gain- and loss-framed themes in English-language vaping cessation videos ( N = 412). The most common theme was nicotine addiction, followed by physical health, mental health, harmful chemicals in vapes, financial impacts of vaping, and negative social perceptions of vaping. Exploratory inductive coding of the 135 videos without these six themes revealed that 56 (41%) featured individuals discussing their decision to quit vaping (see ). provides examples of gain and loss-framed messages for each theme. Among the 277 videos containing at least one of the identified themes, the average ratio of gain frames was 0.29 (SD = 0.37), while the average ratio of loss frames was 0.71 (SD = 0.37). Message Sources Among the coded videos, 10 (2.4%) videos featured formal experts.
Additional string-matching analyses using keywords like “doctor” and “MD” did not find additional formal expert videos . Furthermore, 54 (13.1%) videos showed informal experts, who indicated that they have successfully quit vaping, while 241 (58.5%) videos portrayed current e-cigarette user sources. Lastly, 107 (26.0%) videos included non-expert and non-user sources. Predicting Video Engagement with Message Themes, Frames, and Sources displays the results of mixed-effect negative binomial regression models. Effects of Six Message Themes on Video Engagement RQ1 examined the effects of six distinct message themes on video engagement. Negative binomial regression results revealed that the presence of the chemical theme was associated with both more negative (IRR = 2.74, p = .02, 95% CI = 1.15, 6.52) and positive comments (IRR = 2.15, p = .05, 95% CI = 1.01, 4.56) about quitting vaping. Additionally, the physical health theme was linked to more likes (IRR = 3.30, p = .01, 95% CI = 1.39, 7.86) and shares (IRR = 5.11, p = .003, 95% CI = 1.74, 15.05), while the addiction theme received more likes (IRR = 2.76, p = .05, 95% CI = 1.01, 7.50). Effects of Gain and Loss Frames on Video Engagement H1 proposed that a higher ratio of gain frames to the total number of gain and loss frames in a video would predict increased positive engagement and reduced negative engagement. The results suggest that videos with a higher ratio of gain frames elicited more likes (IRR = 2.79, p = .01, 95% CI = 1.23, 6.30), more positive comments about quitting vaping (IRR = 1.86, p = .04, 95% CI = 1.04, 3.33), and more shares (IRR = 3.51, p = .01, 95% CI = 1.35, 9.12). However, no significant association was found between negative comments and the ratio of gain frames (IRR = 1.40, p = .32, 95% CI = 0.72, 2.72). Therefore, H1 was partially supported. H2 proposed that a higher ratio of loss frames in a video would predict decreased positive engagement and increased negative engagement.
The results suggest that videos with a higher ratio of loss frames elicited fewer likes (IRR = 0.36, p = .01, 95% CI = 0.16, 0.81), fewer positive comments about quitting vaping (IRR = 0.54, p = .04, 95% CI = 0.30, 0.96), and fewer shares (IRR = 0.28, p = .01, 95% CI = 0.11, 0.74). Additionally, no significant association was found between negative comments and the ratio of loss frames (IRR = 0.71, p = .32, 95% CI = 0.37, 1.38). Therefore, H2 was partially supported. Effects of Message Sources on Video Engagement RQ2 investigated whether TikTok vaping cessation videos featuring formal experts (i.e., healthcare professionals), informal experts (i.e., individuals who have successfully quit vaping), and current user sources (i.e., individuals who currently use e-cigarettes) generate more positive engagement and less negative engagement compared to videos featuring non-expert and non-user sources. Findings from negative binomial regressions showed that non-expert and non-user sources received fewer likes (IRR = 0.45, p = .04, 95% CI = 0.21, 0.97) than current user sources. In addition, non-expert and non-user videos were associated with more negative comments about quitting vaping than videos featuring informal experts who have successfully quit vaping (IRR = 2.61, p = .03, 95% CI = 1.12, 6.07). RQ3 asked which of the three message sources (formal experts, informal experts, current user sources) generate the highest engagement compared to one another. The results indicated that informal expert sources received both fewer positive comments (IRR = 0.40, p = .005, 95% CI = 0.21, 0.76) and fewer negative comments (IRR = 0.31, p = .002, 95% CI = 0.15, 0.64) about vaping than current user sources. No other significant differences were observed in video engagement when comparing the three types of message sources.
The results suggest that videos with a higher ratio of gain frames elicited more likes (IRR = 2.79, p = .01, 95% CI = 1.23, 6.30), positive comments about quitting vaping (IRR = 1.86, p = .04, 95% CI = 1.04, 3.33), and more shares (IRR = 3.51, p = .01, 95% CI = 1.35, 9.12). However, no significant association was found between negative comments and the ratio of gain frames (IRR = 1.40, p = .32, 95% CI = 0.72, 2.72). Therefore, H1 was partially supported. H2 proposed that a higher ratio of loss frames in a video would predict decreased positive engagement and increased negative engagement. The results suggest that videos with a higher ratio of loss frames elicited fewer likes (IRR = 0.36, p = .01, 95% CI = 0.16, 0.81), fewer positive comments about quitting vaping (IRR = 0.54, p = .04, 95% CI = 0.30, 0.96), and fewer shares (IRR = 0.28, p = .01, 95% CI = 0.11, 0.74). Additionally, no significant association was found between negative comments and the ratio of loss frames (IRR = 0.71, p = .32, 95% CI = 0.37, 1.38). Therefore, H2 was partially supported. Effects of Message Sources on Video Engagement RQ2 investigated whether TikTok vaping cessation videos featuring formal experts (i.e., healthcare professionals), informal experts (i.e., individuals who have successfully quit vaping), and current user sources (i.e., individuals who currently use e-cigarettes) generate more positive engagement and less negative engagement compared to videos featuring non-expert and non-user sources. Findings from negative binomial regressions showed that non-expert and non-user sources received fewer likes (IRR = 0.45, p = .04, 95% CI = 0.21, 0.97) than current user sources. In addition, non-expert and non-user videos were associated with more negative comments about quitting vaping than informal experts who have successfully quit vaping (IRR = 2.61, p = .03, 95% CI = 1.12, 6.07).
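The incidence rate ratios above come from exponentiating log-scale coefficients of a count model such as negative binomial regression, with Wald-type confidence limits exponentiated the same way. A minimal sketch of that conversion; the coefficient and standard error below are hypothetical values back-calculated to approximately reproduce the gain-frame likes result (IRR = 2.79, 95% CI = 1.23, 6.30):

```python
import math

def irr_with_wald_ci(beta: float, se: float, z: float = 1.96):
    """Convert a log-scale coefficient from a count regression model
    (e.g., negative binomial) into an incidence rate ratio with an
    approximate 95% Wald confidence interval."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Hypothetical inputs: beta = 1.026, SE = 0.418 (back-calculated, approximate)
irr, lo, hi = irr_with_wald_ci(1.026, 0.418)
```

An IRR above 1 means the predictor is associated with a higher expected count (e.g., more likes); the association is conventionally treated as significant when the interval excludes 1.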
RQ3 asked which of the three message sources (formal experts, informal experts, current user sources) generate the highest engagement compared to one another. The results indicated that informal expert sources received both fewer positive comments (IRR = 0.40, p = .005, 95% CI = 0.21, 0.76) and fewer negative comments (IRR = 0.31, p = .002, 95% CI = 0.15, 0.64) about vaping than current user sources. No other significant differences were observed in video engagement when comparing the three types of message sources.
This study investigated how message themes, frames, and sources impact engagement with user-generated vaping cessation videos on TikTok. The primary themes in TikTok videos were physical health outcomes and nicotine addiction.
On average, the videos featured a higher ratio of loss-framed messages than gain-framed messages. Additionally, over half of the videos featured individuals who disclosed current e-cigarette use, followed by non-expert non-user sources, informal experts who successfully quit, and formal experts such as doctors. Engagement with Vaping Cessation TikTok Videos Themes and Video Engagement Nicotine addiction emerged as the most prevalent theme, correlating with higher positive engagement (likes). Physical health, the second most common theme, also showed a positive correlation with positive engagement (likes and shares). Given that likes often indicate positive audience sentiment , the correlation between likes and both nicotine addiction and physical health themes suggests potential effectiveness in future social media vaping cessation campaigns. Sharing health-related information on social media can be driven by a desire to spread knowledge and show care for others . Our findings suggest that people might regard physical health as significant enough to share within their networks. Future vaping cessation campaigns aimed at increasing awareness of and engagement with vaping cessation could emphasize the physical health effects of vaping. Incorporating the theme of harmful chemicals in vaping products was associated with more positive comments about quitting, consistent with previous research on its effectiveness in prevention messages . However, the theme of harmful chemicals also generated more negative comments about quitting. Previous research found that cigarette pack messages about toxic chemicals did not increase intentions to quit smoking, but increased awareness of chemicals and health harms . Further research is needed to understand the effects of the chemical theme in vaping cessation and the moderators that might affect the message effect.
Frames and Engagement Aligning with the detection/prevention behavioral classification in gain and loss framing effects , our study found that a higher ratio of gain frames in vaping cessation videos was associated with increased likes, shares, and positive comments about quitting vaping. The benefits of incorporating gain frames may be explained by the heuristic processing of social media posts . Individuals who rely on heuristic processing prefer positive information while avoiding negative information, consistent with the hedonic principle . As the effectiveness of gain frames in persuasion depends on the intensity of positive emotions evoked , future TikTok vaping cessation campaigns may benefit from incorporating more gain-framed messages to maximize engagement . However, our results indicate that gain frames were not associated with reduced negative comments about quitting vaping compared to loss frames. Future research should explore why negative comments arise in response to social media health campaigns, considering factors like message reactance and personal agency , to decrease negative engagement among audiences. Sources and Engagement When examining the effects of different message sources on video engagement, our study revealed an advantage in utilizing potentially relatable message sources who currently vape and informal expert sources. Vaping cessation videos featuring current users garnered more likes than those from non-expert, non-user sources. Additionally, videos featuring successful quitters received more positive comments compared to those featuring current users. Prior research has shown that “current teenaged smoker” and “successful teenaged quitter” were the top two preferred message sources for smoking cessation videos among youth . Our study suggests that both current user and informal expert sources may effectively influence the audience’s attitudes toward quitting vaping. 
Contrary to the hypothesis based on the internalization process of persuasion , our study found that formal expert sources such as doctors were not associated with more positive engagement. One possible explanation for the unexpected results could be the relatively small sample size of videos featuring formal expert sources ( N = 10). Further research is needed to evaluate the effectiveness of including formal experts, like healthcare professionals, in vaping cessation TikTok videos. Implications and Limitations of Using Engagement as Proxy Measures of Campaign Effectiveness Drawing on the Integrated Behavioral Model and the bandwagon effect , engagement metrics such as likes, shares, and comments may reflect audience perceptions of recommended behaviors, potentially precede behavioral change, and serve as persuasive cues in social media campaigns. For example, liking a brand on social media does not always result in purchasing the product . Therefore, while high engagement with health campaigns might signal positive sentiment, researchers have cautioned that such engagement does not always lead to meaningful attitude shifts or sustained behavior change . Moreover, engagement can also be influenced by factors unrelated to persuasion, such as entertainment value or peer influence . Research gaps include the aggregation of engagement types into a single score and a lack of focus on negative engagement, such as negative comments . Our study contributes to the literature by examining different engagement types and distinguishing positive and negative comments toward recommended health behaviors. However, a clearer theoretical understanding of the reasons and outcomes of engagement with social media health campaigns is still needed . Longitudinal and observational studies that link social media engagement to real-life health attitudes and behaviors could provide deeper insights. Our study has limitations. 
Given our specific focus on TikTok vaping cessation videos, the findings may not apply to other social media platforms. Due to the content analysis nature, we lacked data on audience vaping status and age, preventing the examination of causal links between video exposure and quitting behaviors. Additionally, we were unable to study specific persuasive outcomes, nor did we analyze audience emotional responses to the videos. Moreover, it is essential to recognize that video engagement does not guarantee video persuasiveness. Our study suggests that future TikTok vaping cessation campaigns could benefit from incorporating themes related to physical health, addiction, harmful chemicals, and gain-framed messages. Additionally, utilizing message sources such as current e-cigarette users and individuals who have successfully quit vaping might enhance campaign engagement. The effectiveness of featuring formal experts, such as healthcare professionals, in vaping cessation TikTok videos warrants further research.
Supplementary Material
Liver T1 mapping in Fontan patients and patients with biventricular congenital heart disease – insights into the effects of venous congestions on diffuse liver disease | 0fe2f85c-f79b-4708-b771-7831b634eb7f | 11811443 | Surgical Procedures, Operative[mh] | In patients with congenital heart disease (CHD), chronic venous congestion can lead to liver disease . The German National Register for CHD reports liver dysfunction as a major non-cardiac complication, observed in 6% of deceased patients with CHD . Liver fibrosis has been noted in patients with a biventricular circulation (BVC) and within this group particularly in patients with tetralogy of Fallot (ToF) who have right ventricular (RV) dysfunction . However, in patients with a single ventricle (SV), liver injury is a ubiquitous sequela of the Fontan circulation (FC) known as Fontan-associated liver disease (FALD) . FALD is characterised by hepatic congestion and hepatic fibrosis progressing to cirrhosis caused by the absence of a ventricle supporting the pulmonary circulation. Besides hepatic fibrosis and cirrhosis, approximately 10% of patients diagnosed with FALD may develop hepatocellular carcinoma within 20 years . Clinical symptoms are rare and late findings are common in patients with advanced disease, making early liver monitoring of important to detect alterations. Liver biopsy is the gold standard for investigating histological liver abnormalities; however, it is invasive and typically samples only very confined liver regions. Therefore, non-invasive, repeatable and reliable monitoring techniques are necessary. Magnetic resonance (MR) mapping is a modern, reproducible technique that quantifies diffuse changes in organs like the liver, heart, and pancreas . This imaging technique has a short history of application in the liver , and relatively few reports of liver mapping, especially in patients with CHD, exist . 
We hypothesised that this technique captures subclinical liver tissue alterations through numerical values, which relate to the cardiac status. We therefore aimed to assess the liver status in patients with various CHDs using MR mapping and to correlate the findings with data obtained from cardiovascular MR imaging. Ethical statement Informed consent was obtained from all participants or their parents or guardians as appropriate. The study was approved by the ethics committee of University Hospital Schleswig-Holstein (D 588/24). Patients and study design This retrospective study analysed MR images of patients with CHD who either had a BVC or FC. All patients are followed at the outpatient clinic of the University Hospital Schleswig-Holstein, and MR images were captured by a single examiner from June 2023 to December 2023. Magnetic resonance imaging MR images were obtained with a 1.5-Tesla scanner (MAGNETOM Aera, Siemens Healthcare, Erlangen, Germany). Liver T1 mapping was performed in the axial plane at the liver’s widest dimension using a modified look-locker inversion sequence (echo time: 1.12 (1.08–1.20) ms; repetition time: 281 (272–361) ms; slice thickness: 8 (5–16) mm; flip angle: 35°; field of view: 360 × 307 mm²; matrix size: 256 × 218). Ventricular mass and volumes were measured using electrocardiographic-gated steady-state free precession cine images. All images were acquired during breath-holding. Analysis of magnetic resonance imaging data Acquired images were analysed using cvi42 (Circle, Cardiovascular Imaging, Calgary, Canada). Native liver T1 values were manually measured at eight different regions inspired by the Couinaud liver anatomy classification on axial images . The liver segments were divided as follows: the right lobe into anterior and posterior sections and the left lobe into medial and lateral sections. Each section was further divided into regions adjacent to the inferior vena cava and close to the edge of the liver (Fig. ).
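The modified look-locker inversion (MOLLI) acquisition described above estimates T1 by fitting a three-parameter recovery curve to images acquired at several inversion times and then applying the Look-Locker correction. A minimal sketch of the underlying signal model (illustrative only; the scanner's fitting routine is not reproduced here, and the parameter values are hypothetical):

```python
import math

def molli_signal(t_inv: float, a: float, b: float, t1_star: float) -> float:
    """Three-parameter apparent inversion-recovery model fitted pixel-wise
    in MOLLI: S(t) = A - B * exp(-t / T1*)."""
    return a - b * math.exp(-t_inv / t1_star)

def look_locker_t1(a: float, b: float, t1_star: float) -> float:
    """Look-Locker correction converting the apparent T1* to T1:
    T1 = T1* * (B / A - 1)."""
    return t1_star * (b / a - 1.0)

# Hypothetical fitted parameters for a single liver voxel
t1 = look_locker_t1(a=100.0, b=190.0, t1_star=600.0)  # -> 540.0 ms
```

In practice A, B, and T1* are obtained per pixel by nonlinear least squares across the sampled inversion times, and the corrected T1 values form the map from which the regions of interest are read.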
A circular region of interest of limited size (< 50 mm²) was placed in each liver segment, avoiding major blood vessels and bile ducts. Cardiac parameters including end-diastolic volumes (EDV), end-systolic volumes (ESV), stroke volume (SV), end-diastolic myocardial mass (EDMM), and ejection fraction (EF) of both ventricles in patients with a BVC and of the single ventricle in patients with FC were measured from short-axis cine images. Volumetry was performed as described before , and volumes and mass were indexed to body surface area (EDVi, ESVi, SVi, EDMMi) using the Mosteller formula. RV enlargement in patients with BVC was defined as an EDV ratio of the RV to the left ventricle (LV) of > 1.3 . Data analysis Liver T1 values were compared between patients with BVC and patients with FC, and their association with cardiac parameters was examined. In patients with BVC, liver T1 values were compared based on the presence of right ventricular enlargement. In patients with FC, liver T1 values were analysed according to Fontan procedure type and dominance of the systemic ventricle. We also examined the association between liver T1 values and cardiac parameters and the number of years since completion of the Fontan procedure. Furthermore, liver T1 values between patients with BVC and those with FC were compared after age-matching. Statistical analysis Continuous variables were presented as medians with ranges. The Mann–Whitney U test was used for non-normally distributed data, while the Chi-square test was used for categorical variables. The Friedman test compared median liver T1 values across eight liver segments within the same patient, with Bonferroni’s multiple comparisons applied for post hoc analysis. Correlations were assessed using univariate regression analysis with Spearman’s rank correlation. Statistical significance was set at p < 0.05.
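The body-surface-area indexing and the RV-enlargement criterion described above can be sketched as follows; the height, weight, and ventricular volumes are hypothetical, chosen only for illustration:

```python
import math

def mosteller_bsa(height_cm: float, weight_kg: float) -> float:
    """Mosteller formula: BSA (m^2) = sqrt(height[cm] * weight[kg] / 3600)."""
    return math.sqrt(height_cm * weight_kg / 3600.0)

def index_to_bsa(value: float, bsa_m2: float) -> float:
    """Index a ventricular volume (ml) or mass (g) to BSA (ml/m^2, g/m^2)."""
    return value / bsa_m2

def rv_enlarged(rv_edv_ml: float, lv_edv_ml: float,
                threshold: float = 1.3) -> bool:
    """RV enlargement as defined in this study: RVEDV / LVEDV > 1.3."""
    return rv_edv_ml / lv_edv_ml > threshold

# Hypothetical patient: 180 cm, 72 kg, RVEDV 140 ml, LVEDV 100 ml
bsa = mosteller_bsa(180.0, 72.0)      # ~1.90 m^2
rv_edvi = index_to_bsa(140.0, bsa)    # indexed RVEDV in ml/m^2
enlarged = rv_enlarged(140.0, 100.0)  # True (ratio 1.4 > 1.3)
```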
All statistical analyses were performed using EZR (Saitama Medical Center, Jichi Medical University, Saitama, Japan), a graphical user interface for R (The R Foundation for Statistical Computing, Vienna, Austria). More precisely, it is a modified version of R commander with added biostatistical functions .
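The correlation statistic used in this study, Spearman's rank correlation, is the Pearson correlation computed on rank vectors with tied observations assigned their average rank. A minimal pure-Python sketch of the coefficient and of the Bonferroni-adjusted threshold used for the post hoc segment comparisons (significance testing and the other EZR procedures are omitted):

```python
def _average_ranks(values):
    """Rank observations from 1..n, averaging ranks over ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rank correlation: Pearson correlation of the rank vectors."""
    rx, ry = _average_ranks(x), _average_ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

def bonferroni_alpha(alpha: float, n_comparisons: float) -> float:
    """Bonferroni-adjusted per-comparison significance threshold."""
    return alpha / n_comparisons
```

For the eight liver segments, all pairwise post hoc comparisons would number 28, so the Bonferroni-adjusted threshold would be 0.05 / 28 per comparison.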
Patient demographics and baseline characteristics

In total, 104 patients (75 with BVC and 29 with FC) were included in the study. Patients with heterotaxy were excluded. In all BVC patients, the anatomical right ventricle was the subpulmonary ventricle. The median age was 22.4 (range 11.2–59.3) years for patients with BVC and 19.1 (range 10.8–40.4) years for patients with FC. Detailed data for each group are provided in Table .

Patients with BVC

In the 75 patients with BVC, liver T1 values did not correlate significantly with the RV parameters (EDVi, ESVi, SVi, EF, EDMMi; Table ). Liver T1 values varied significantly among the eight liver areas (p = 0.002), but these values did not differ between regions near the inferior vena cava and marginal liver segments (Fig. ). Liver T1 values did not differ significantly between patients with an enlarged RV (RVEDVi/LVEDVi ratio > 1.3) (n = 18) and the others (n = 56) (Table ); however, a moderate positive correlation was observed between RVEDVi and T1 values in the right liver lobe in patients presenting with an enlarged RV (Fig. ).

Patients with FC

Liver T1 values differed significantly among the eight segments (p < 0.001), with significantly higher T1 values observed close to the inferior vena cava in the right lobe but not in the left lobe (Fig. ). Patients with a right dominant ventricle (n = 16) had slightly larger ventricular volumes and lower EF than patients with a left dominant ventricle (n = 11), although no significant differences were observed in age, body surface area, years since Fontan completion, or the mode of Fontan operation.
No significant differences in liver T1 values were found between these two groups (Table ). Furthermore, T1 values did not differ between patients with an extracardiac conduit and those with a lateral tunnel (p > 0.2 for all liver segments). No correlation was found between the years since the Fontan operation and liver T1 values for any liver segment in the entire group of patients with FC (Fig. ).

Comparison between age-matched FC and BVC patients

Twenty-nine age-matched patients from each group were selected. No significant differences were observed in the systemic ventricular volumes (EDVi, ESVi, and SVi) between these two groups; however, EF and EDMMi were markedly lower among patients with FC (p < 0.05, Table ). Liver T1 values were significantly higher in patients with FC across all eight liver segments (p < 0.001) compared with those in the BVC group (Table ; Fig. ).
This study demonstrates that liver T1 values vary among different liver segments, regardless of ventricular physiology. A moderate positive correlation was found between RV volumes and liver T1 values in patients with an enlarged RV. Median liver T1 values were higher in patients with FC than in patients with BVC. Chronic systemic venous congestion, particularly in patients with ventricular dysfunction or without a sub-pulmonary ventricle (FC patients), raises concerns about progressive liver injury.
The pathology of the liver begins with fibrosis, which progresses to cirrhosis and can promote the development of hepatocellular carcinoma. Early detection of pathological liver changes is therefore crucial, ideally with non-invasive, easily repeatable, and reliable imaging. In recent years, ultrasound elastography has been used to assess liver fibrosis, but it has limitations, such as measurement difficulties in patients with ascites and poor reproducibility in patients with obesity or fatty liver. MR imaging, in contrast, offers advantages: it is normally not affected by these conditions and can be performed simultaneously with cardiovascular MR, making it highly relevant for patients with cardiac disease. Liver T1 mapping is a relatively new method for assessing liver fibrosis, yet several studies have already reported a strong correlation between liver T1 values and scoring systems for liver disease in the absence of congestive hepatopathy. Interestingly, even in healthy livers T1 values vary by segment.

For patients with CHD and a BVC, several reports of liver abnormalities exist, particularly in patients with tetralogy of Fallot (ToF). These studies examined liver status in the setting of chronic venous congestion due to RV dysfunction. Ravndal et al. used various imaging techniques and reported that 30% of patients with ToF had mild liver fibrosis. In contrast, Kazour et al. reported no difference in liver T1 values between patients with ToF and healthy controls. However, the link between liver abnormalities and the status of the RV was not sufficiently studied in these reports. In the current study, we therefore focused on the relationship between RV size and function and liver T1 values. We compared liver T1 values between patients with and without RV enlargement and found no differences between the two groups in any part of the liver.
However, in patients with RV enlargement, a moderate positive correlation between RVEDVi and liver T1 values was observed, particularly in the right liver lobe. Although no correlation was observed in the entire cohort of BVC patients, we speculate that the moderate correlation seen only in patients with RV enlargement reflects a population bias, in that the majority of BVC patients (56 patients, 75%) did not have RV enlargement. In addition, the fact that the correlation between RVEDVi and liver T1 remained only moderate even in patients with RV enlargement (median RV/LV EDV ratio = 1.44) suggests that the liver is not affected unless a patient has considerable RV enlargement. Notably, hepatocellular carcinoma in patients with ToF has predominantly been found in the right lobe, suggesting that the right lobe is more susceptible to venous congestion.

Several studies have reported that patients with FC have higher liver T1 values than healthy controls. In a previous study from our group that included 29 patients with FC, median T1 values were 735 (705–764) ms in the left lobe and 745 (715–784) ms in the right lobe. Similar liver T1 values have been reported in other studies that did not focus on the different liver lobes: Greidanus et al. reported median liver T1 values of 728 (714–744) ms in 20 Fontan patients, while Beigh et al. reported a mean T1 value of 727 ± 49 ms in 16 patients with a FC. There is no absolute reference for liver T1 values, as they vary depending on the vendor of the MR machine and the magnetic field strength. In our present study, using the same MR machine and post-processing software, the measured T1 values (700 ± 82 ms) were consistent with those described in the literature, suggesting that liver T1 mapping is reproducible. In previous studies, T1 values were assessed at specific liver segments or averaged from several segments in the parasagittal view for patients with FC.
In our previous study, we mapped the entire liver using detailed measurements based on the Couinaud classification. In the current study, however, we used a single axial image capturing the widest area of the liver, as a simple method for clinical practicality. Our results showed consistently higher liver T1 values in all segments of patients with FC, without exception, compared with those of patients with BVC. Furthermore, our study confirms differences in liver T1 values between areas near the inferior vena cava (IVC) and the liver margins in patients with FC, especially in the right lobe, as reported by Beigh et al. This may imply that the peripheral right lobe is more susceptible to venous congestion, leading to faster development of liver fibrosis. If this speculation is correct, measuring T1 values in other segments (e.g. near the centre of the left lobe) could lead to an underestimation of fibrosis. This should be kept in mind, especially when monitoring serial changes of liver T1 values in the same patient.

In the modern era, two major types of Fontan procedures remain: the extracardiac conduit and the lateral tunnel. It remains controversial which procedure is superior. Kisamori et al. reported that the extracardiac conduit resulted in worse liver outcomes compared with the lateral tunnel technique. In our study, liver T1 values did not differ between the two surgical techniques; the follow-up period after Fontan surgery was, however, short (4.8 years). Weixler et al. reported no significant difference in liver complications between these procedures, supporting our results. In addition, differences in ventricular physiology are always of interest when discussing the prognosis of patients with FC. Some studies discuss its association with prognosis in patients with FC, but the results remain controversial.
In a previous study, we reported that the difference in ventricular physiology was not a risk factor for Fontan-associated liver disease (FALD), a finding supported by the present study. This result also aligns with the report of Beigh et al. Previously, we identified a longer period after Fontan completion as a risk factor for advanced FALD. It was therefore assumed that there would be a correlation between liver T1 values and years since Fontan completion, but no such correlation was found for any liver segment in this study. Nevertheless, this result fits with another previous report from our group, which showed no differences in T1 values among patients with FALD of different severity. In contrast, Shiina et al. reported a correlation between age and liver T1 in 16 patients with FC with an average age of 31 years. This discrepancy may be due to the younger age of our patient cohort, which is, on average, more than 10 years younger.

In this study, we investigated the association between liver T1 values and cardiac parameters in patients with CHD by assessing liver T1 values in each defined liver segment. Elevated liver T1 values were found in patients with possible systemic venous congestion, including patients with BVC and RV dilation as well as patients with a FC. Furthermore, liver T1 values varied by segment: they were typically higher at the liver margins than at the centre, highlighting the importance of the measurement site, especially when following the same patient longitudinally. Liver T1 mapping can be a valuable addition to cardiac MRI studies for patients at risk of liver fibrosis due to potential systemic venous congestion.
Evaluation of the parasympathetic tone activity (PTA) for posttraumatic pain assessment in awake dogs before orthopaedic surgery - A prospective non-randomised clinical study | f9839727-b7ac-4cfe-a0f7-bc8c93677acc | 11853568 | Surgical Procedures, Operative[mh] | Adequate analgesia and recognition of pain after surgical procedures is an important and growing issue in veterinary medicine. Just as important as analgesia in animals after surgery is analgesia in trauma patients before surgery. Initially after car accidents or other trauma, dogs and cats are usually in shock and get stabilised with infusion and analgesics until their general condition permits surgical treatment of fractures . Patients must have an adequate analgesic protocol and must be evaluated regularly for pain severity during this time. Adequate analgesia must also be considered after surgical care, which can be reduced over time as pain subsides . Nevertheless, inadequate analgesia is still common in patients with acute pain . There are already many methods for assessing severity of pain and whether an animal needs analgesia based on the animal’s behaviour or facial expressions . The accuracy of pain evaluation relies on the examiner’s experience, leading to potentially inaccurate recognition of pain . This is rather due to the different behaviour of the animals than the examiners individual assessment. While some dogs may show increased vocalisation when in pain, other dogs do not make any vocalisation even in severe pain . To improve objectivity, it is helpful to use multidimensional established pain scales instead of unidimensional ones. These scoring systems include the behaviour of an animal in an attempt to estimate how much pain the patient is in . The functionality of these scales is difficult to validate because there is no adequate comparison or gold standard. There are many pain scales that have been evaluated for the different species, for acute and chronic pain, respectively. 
The Modified Glasgow Pain Scale (MGPS), which is used in this study, is validated for acute pain. In contrast, the Canine Acute Pain Scale of the Colorado State University (CSU-CAPS) has not yet been validated. In addition to analogue scales for pain evaluation by an examiner, there is the possibility of determining the Parasympathetic Tone Activity (PTA). An electronic device is used to draw conclusions about a painful stimulus and has been validated in unconscious patients. The principle is based on the breath-dependent heart rate variability (HRV), which results from the vegetative influence at the sinus node of the heart. It measures the balance between sympathetic and parasympathetic tone, which approximates nociception: nociception increases sympathetic tone with a corresponding decrease in parasympathetic tone. The heartbeat accelerates briefly in the inspiration phase and slows in the expiration phase. There are various theories as to the exact underlying mechanism, but it is primarily assumed to be mediated by the baroreflex. Baroreceptors are predominantly located in the aortic arch and carotid sinus, where they serve as proportional-differential sensors that do not measure blood pressure as an absolute value but perceive pressure fluctuations and can also measure the rate of pressure rise. When these baroreceptors register an increase in pressure caused by respiratory activity during expiration in the thorax, the sympathetic nervous system is inhibited and the parasympathetic nervous system is activated. The result of this process is a decrease in arterial peripheral resistance and heart rate. Other mechanisms, such as the stimulation of stretch receptors in the lungs, also appear to be involved. It is assumed that respiratory sinus arrhythmia leads to increased efficiency in gas exchange: during inspiration, perfusion is adapted to alveolar ventilation, while unnecessary heartbeats are suppressed during expiration to save energy.
To calculate the PTA, an electrocardiogram (ECG) is recorded and the distances between the individual R-spikes of each QRS complex are first measured. These R-R intervals are represented as frequencies by spectral analysis and can thus be divided into frequency ranges that make it possible to identify periodic patterns of HRV. This makes it possible to divide HRV into three main spectral components. The first is the 'very low frequency' (VLF) component, which is relatively irrelevant here and involves very low frequencies in the range of 20–50 mHz. The VLF component is subject to influences of thermoregulation, the level of circulating hormones such as catecholamines, and the renin–angiotensin system. Next, there are the low frequencies (LF) from 0.04 to 0.15 Hz, which reflect an activation of the sympathetic and parasympathetic nervous systems. Lastly, the high frequencies (HF) from 0.15 to 0.5 Hz are associated with a dominance of parasympathetic activity and are influenced by respiratory sinus arrhythmia. For the real-time calculation, the R-R intervals are resampled to 8 Hz and plotted in a 64-second window. The influence of the patient's baseline heart rate is removed by subtracting the mean M of the R-R intervals of the window at each sampling. The area under the curve (AUC) is then calculated in four sections, each covering a period of 16 s. To be independent of the influence of respiratory rate, the maxima and minima of the curves are determined, and the delineation of the outlines of the upper and lower areas is included in calculating the areas A1, A2, A3 and A4. The PTA is calculated from a formula that includes the minimum AUC of A1 to A4. The PTA is given as a value between zero and 100: a value of 100 corresponds to the highest proportion of parasympathetically influenced respiratory sinus arrhythmia, and a value of zero corresponds to the lowest parasympathetic proportion with a low HRV.
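The windowed-AUC step behind the index can be illustrated with a short sketch. This is not the manufacturer's algorithm: the filtering and envelope delineation of the commercial monitor are simplified away, and the 0–100 scaling constant below is an assumed placeholder; only the 8 Hz resampling rate, the 64-s window, the four 16-s sections, the mean subtraction, and the use of the minimum area follow the description above.

```python
import math

FS = 8           # resampling rate of the R-R series (Hz), as described
WINDOW_S = 64    # length of the moving window (s)
SECTION_S = 16   # the window is split into four 16-s sections

def section_aucs(rr_series):
    """Mean-centre one 64-s window of the resampled R-R series and return
    the four per-section areas A1..A4 (trapezoidal area of the rectified
    signal; the monitor's envelope delineation is simplified away)."""
    n = FS * WINDOW_S
    assert len(rr_series) == n, "expects exactly one 64-s window"
    mean = sum(rr_series) / n
    centred = [x - mean for x in rr_series]
    step = FS * SECTION_S
    areas = []
    for start in range(0, n, step):
        seg = centred[start:start + step]
        area = sum(abs(u) + abs(v) for u, v in zip(seg, seg[1:])) / (2 * FS)
        areas.append(area)
    return areas

def pta_like_index(rr_series, scale=12.8):
    """Map the minimum section area to 0-100. The linear scaling and the
    'scale' constant are illustrative assumptions, not the proprietary
    formula, which is only described as including the minimum AUC."""
    auc_min = min(section_aucs(rr_series))
    return max(0.0, min(100.0, 100.0 * auc_min / scale))

# Toy demo: a pure 0.25 Hz "respiratory" oscillation of the R-R intervals
demo = [0.02 * math.sin(2 * math.pi * 0.25 * t / FS) for t in range(FS * WINDOW_S)]
print(section_aucs(demo), pta_like_index(demo))
```

A flat R-R series (no respiratory sinus arrhythmia) yields zero areas and the lowest index, consistent with the description that low HRV corresponds to a low parasympathetic proportion.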
In dogs, values below 40 are defined as severe pain and values in the range of 40–50 as pain; values above 50 indicate a pain-free state. A PTA value is calculated every second, and the immediate PTA (PTAi) and the mean PTA (PTAm) are derived from this: the PTAi is calculated from the 54 previous values and displayed in yellow on the monitor, while the PTAm is the averaged PTA index calculated from the 176 previous values and displayed in orange. Several studies have shown that the PTA monitor can detect nociceptive stimuli in anaesthetised dogs and pigs during surgical procedures. The anaesthetist usually recognises such stimuli by an increase in heart rate, respiratory rate, and blood pressure. These studies indicate that, even when heart rate and blood pressure remain normal, it is possible to detect whether the animal is in a painful state by PTA, which can sense even weaker nociceptive stimuli. In the study by Mansour et al. (2017), a significant drop in the PTA index occurred one minute after the predefined time points TClamp, TCut and TPrePTA. At TClamp, clamps were placed in the skin; at TCut, the surgical incision was made; and TPrePTA was the point five minutes before a haemodynamic response occurred. A haemodynamic response was defined in that study as a 20% increase in heart rate and/or blood pressure. Thus, in anaesthetised dogs, the PTA is useful to detect pain even before a haemodynamic response occurs. The latest veterinary study by Mansour et al. (2021) compared the PTA with mean arterial blood pressure in anaesthetised horses. One group was undergoing elective surgery, and the other group surgery for colic. The authors of this study concluded that the animal's health status must influence the PTA index, since the horses in the 'colic group' had significantly lower values, which may be related to a predominance of sympathetic activity in situations of great stress.
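The PTAi and PTAm described above are trailing moving averages of the per-second PTA values (54 and 176 samples, respectively); a minimal sketch with a made-up trace:

```python
def rolling_mean(values, window):
    """Trailing moving average; None until enough samples have accrued."""
    out = []
    for i in range(len(values)):
        if i + 1 < window:
            out.append(None)
        else:
            out.append(sum(values[i + 1 - window:i + 1]) / window)
    return out

# One PTA value per second (toy trace); window sizes as described above.
pta_per_second = [55, 57, 54, 60, 58, 52, 50, 49, 53, 56] * 20
ptai = rolling_mean(pta_per_second, 54)    # "immediate" PTA
ptam = rolling_mean(pta_per_second, 176)   # averaged PTA
```

The two window lengths explain the monitor's behaviour: the PTAi reacts within about a minute, while the PTAm smooths roughly the preceding three minutes.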
Furthermore, fluctuations in mean arterial blood pressure seem to be closely associated with fluctuations in PTA values, regardless of the horse's health status. In human medicine, the PTA corresponds to the Analgesia Nociception Index (ANI), which is already frequently used to determine intraoperative and early postoperative haemodynamic changes due to pain. It seems difficult, however, to use the ANI to recognise pain in awake individuals. Gall et al. compared the ANI with the FLACC scale in children, where FLACC stands for 'Face, Legs, Activity, Cry and Consolability'. This pain scale is used in human medicine to evaluate pain in children with mild to severe cognitive impairment. One result of this study is that the ANI is more suitable as a screening tool for the detection of postoperative pain: patients in the control group (children without surgery) tended to have higher ANI values than the patients who underwent a surgical procedure, although, conversely, some patients in the control group had low ANI values. In contrast to human medicine, all previous veterinary studies have been conducted in anaesthetised animals. This study is therefore the first to use the monitor in awake patients, with the potential to demonstrate a new approach to pain assessment. This could help to identify patients in pain who are not classified as painful on conventional pain scales. Our hypothesis was that the awake animal is exposed to too many exogenous stimuli, impairing pain detection by the PTA monitor.

Prospective non-randomised clinical trial. This study included 18 dogs from the patient population of the Clinic for Small Animals (Surgery) of the Justus-Liebig-University in Giessen and nine dogs belonging to employees of the same clinic, examined in the context of a health check-up. The owners were informed, and their consent was obtained.
Dogs under one year or over 12 years of age, brachycephalic breeds, and dogs suffering from cardiac arrhythmias or chronic degenerative diseases of the nervous system were excluded. Patients who received anticholinergics or drugs other than opioids, NSAIDs, and maropitant as part of the stabilisation protocol after trauma were also excluded. A fractious temperament also made measurements impossible and led to exclusion from the study. The measurements of the control group were conducted as part of a voluntary health check-up. These dogs had no previous history of a painful condition and had not received any analgesic treatment in the preceding six months. All nine dogs received a general clinical examination, so only clinically healthy dogs were included in the control group. After that, the ECG leads of the monitor were connected to the patient: the red ECG electrode was clamped on the right and the yellow electrode on the left side of the chest, directly behind the olecranon, and the black electrode was clamped on the skin of the left knee crease. The dogs were free to sit, lie down, or remain standing. The electrodes were moistened slightly with alcohol until a good signal was obtained. Once the monitor obtained a good signal, the ECG was left in place for eight minutes and then removed. The dogs first had three minutes to get used to the situation, and the values of the remaining five minutes were included in the study.

The study group included dogs that had sustained trauma with one or more fractures requiring surgical treatment after initial stabilisation. As part of the initial stabilisation phase, patients were administered only an analgesic (methadone), an antiemetic (maropitant), and infusion therapy. Additional treatment included flow-by oxygen and, in a few cases, active warming. As in the control group, a general clinical examination was performed before the measurement on the awake animal and before the surgical intervention.
According to the American Society of Anesthesiologists (ASA), patients with moderate systemic disease are classified as ASA class III. Based on the trauma reported in the history, all patients were classified as ASA class III after initial stabilisation. Dogs after trauma, prior to surgical care, were evaluated using the Acute Pain Scale of Colorado State University (CSU-CAPS) and the Modified Glasgow Pain Scale (MGPS). This was done 20 min before the regular administration of methadone (Comfortan® 10 mg/ml, Dechra Veterinary Products Deutschland GmbH, Aulendorf). The evaluations were always performed by two persons per patient: first by the head of the study, who is familiar with the pain scales and the evaluation of pain, and then by various veterinary colleagues in surgery, veterinary assistants, or veterinary medicine students. It was left to the examiner to decide with which pain scale to begin the patient's assessment. The monitor was then connected in the same way as in the control group, and the dogs were free to sit, lie down, or remain standing. After acclimatising the patient to the situation, the monitor was left on the patient for three minutes. Then, 0.2 mg/kg methadone (Comfortan®) was administered intravenously, which was noted in the monitor as an event. After administration of the opioid, the PTA was recorded for an additional five minutes. Figure shows a screenshot of the monitor as it appears when the signal quality is good.

Data evaluation was performed in the Unit for Biomathematics and Data Processing of the Department of Veterinary Medicine at the Justus-Liebig-University in Giessen. Statistical analysis was performed using the statistical programme SAS 9.4 (SAS® Institute Inc., 2013). To compare the results of the different pain scales between the two groups of examiners, the results are presented in fourfold (2 × 2) tables, and the extent of agreement was examined. The degrees of agreement are expressed as percentages.
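The fourfold-table comparison and percent agreement between two examiners can be sketched as follows; the rater decisions are toy data, not the study's results.

```python
def fourfold_table(rater_a, rater_b):
    """2x2 contingency table of two raters' painful / not-painful calls."""
    table = {(x, y): 0 for x in (True, False) for y in (True, False)}
    for a_call, b_call in zip(rater_a, rater_b):
        table[(a_call, b_call)] += 1
    return table

def percent_agreement(rater_a, rater_b):
    """Share of cases in which both raters reached the same decision."""
    agree = sum(a_call == b_call for a_call, b_call in zip(rater_a, rater_b))
    return 100.0 * agree / len(rater_a)

# Toy decisions (True = painful) for eight patients, one list per rater
a = [True, True, False, True, False, False, True, True]
b = [True, False, False, True, False, True, True, True]
print(fourfold_table(a, b), percent_agreement(a, b))
```

Percent agreement is the sum of the two diagonal cells of the fourfold table divided by the total number of cases.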
The Spearman correlation coefficient (rs) was calculated to compare the MGPS scores of the different examiners. The PTAm values were averaged over a 30-s period prior to the administration of methadone and then compared with the averaged PTAm values over the five minutes following drug administration. The PTAm values of the control group were averaged over a five-minute period and then compared with the averaged values of the study group before methadone administration. To test differences between dependent samples, the paired samples were first tested for normal distribution, and paired t-tests were then performed. Results with a p-value of less than 0.05 were considered statistically significant. When comparing PTA scores with the pain scales, PTA scores of less than 50 were considered 'painful' for the statistical analysis, as the PTA index for adequate analgesia in dogs is 50–100. To examine whether the PTA monitor was suitable for predicting the evaluated scores of the different pain scales, a binary logistic regression was performed. The criteria were classified as y = 0, 'yes, the animal is in pain', and y = 1, 'no, the animal is not in pain'; the values of the PTA monitor served as the predictor. In addition, receiver operating characteristic (ROC) curves were created to compare the individual pain scales with the PTA values. ROC curves provide information about the diagnostic quality of a test; an AUC (area under the curve) value of 0.5 represents the bisector and indicates that a test has no diagnostic quality.

The study group consisted of 18 dogs, while the control group consisted of nine dogs. In the control group, the data of all nine animals could be evaluated. In the study group, two of the CSU-CAPS scores lay exactly on the border between 'painful' and 'not painful', so these patients were excluded, leaving a total of 16 patients for the comparison of the CSU-CAPS with the PTA monitor and the MGPS.
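As an illustration of the AUC concept (toy scores, not study data), the area under the ROC curve can be computed directly from its rank-based (Mann–Whitney) formulation, with 0.5 corresponding to the bisector, i.e. no diagnostic quality:

```python
def roc_auc(scores, labels):
    """ROC AUC via the rank-based (Mann-Whitney) formulation: the
    probability that a randomly chosen positive case (label 1) scores
    higher than a randomly chosen negative case (label 0); ties count 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Identical scores carry no information -> AUC 0.5 (the bisector)
print(roc_auc([50, 50, 50, 50], [1, 1, 0, 0]))  # 0.5
```

An AUC of 1.0 would mean the PTA perfectly separates the pain-scale classes; values near 0.5, as reported for this study, mean the predictor is uninformative.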
The animals in the control group were between one and twelve years old, with a mean age of 5.89 (± 4.121) years. The animals in the study group were between one and twelve years old, with a mean age of 3.65 (± 2.74) years. The weights of the animals in the control group ranged from 18 to 30 kg, with a mean of 24.89 (± 4.01) kg; the weights in the study group ranged from 11 to 42 kg, with a mean of 25.27 (± 9.08) kg. The control group consisted of six males (66.6%) and three females (33.3%); two of the females and two of the males were castrated. The study group comprised twelve male (60%) and eight female (40%) animals; of the males, seven were castrated, and of the females, two (Table ).

With the CSU-CAPS, examiner A (experienced) classified 14 of 16 animals (87.5%) as painful, and examiner A2 (inexperienced) classified 7 of 16 patients (43.75%) as painful. In 56.25% of the cases, both examiners (A and A2) reached the same decision when using the CSU-CAPS (Table ). When the MGPS was used, both examiners classified 10 of 18 patients (55.56%) as painful. In only two cases did the results of the two examiners differ, so that in 88.89% of the cases the two examiners reached the same decision with the MGPS (Table ). When comparing the number of points given to each patient, there was a highly positive correlation between the two examiners (Spearman correlation coefficient rs = 0.84). The average PTA value of the control group was 45.67 (± 13.64). The average PTA values of the study group were 56.16 (± 15.11) before and 51.05 (± 13.24) after methadone administration (Table ). Comparing the average values of the study group 30 s before methadone administration with the average values of the control group, there was no significant difference (p = 0.5403). If all PTA values above 50 represent a pain-free state, two of nine (22.22%) animals in the control group had their average PTA value above this limit.
If an average PTA value of less than 50 reflects pain, 7 of 18 dogs in the study group were below this threshold before methadone administration. According to a paired-sample t -test, the mean monitor values 30 s before and five minutes after medication administration differed significantly ( p = 0.0379). There was no statistical correlation between the PTA values and the score values of either pain scale for either examiner (Fig. ). In the binary logistic regression, the monitor was likewise not suitable for predicting the results of the pain scales. Figure shows the binary logistic regression for each pain scale. The x-axis shows the values of the predictor, in this case the PTA values from zero to 100. On the y-axis, a value of ‘zero’ means that the animal is not considered to be painful and a value of ‘one’ means that the animal is painful. The steeper the curve of the logistic regression, the better suited the predictor is for predicting an event. A somewhat steeper curve is apparent only for ‘Painscale A’, which corresponds to the evaluation of the CSU-CAPS by the experienced examiner. Otherwise, the logistic regression curves are very flat, which graphically illustrates the lack of predictive power of the PTA monitor. Finally, the ROC curves and corresponding AUC values for the comparison of the individual pain scales with the PTA values are shown in Fig. .

The main aim of this study was to evaluate whether the PTA monitor is suitable for detecting pain in the awake animal. The results indicate that the PTA monitor is not an effective tool for identifying pain in conscious animals, because many dogs in the control group were classified as painful. In the control group, the average PTA value was 45.67. In the manual of MDoloris Medical Systems, the range for pain in anaesthetized dogs is between 40 and 50, and below 40, the manufacturer speaks of extreme pain.
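The AUC used in the ROC analysis above has a direct rank interpretation that can be computed without fitting any regression: it is the probability that a randomly chosen painful dog shows a lower PTA value than a randomly chosen pain-free dog, with 0.5 meaning no diagnostic value. A minimal sketch with invented PTA values:

```python
# AUC from its rank interpretation: fraction of (painful, pain-free) pairs in
# which the painful dog has the lower PTA value; ties count one half.
pta_painful = [38.0, 44.0, 47.0, 41.0, 52.0]    # invented values
pta_painfree = [55.0, 61.0, 49.0, 58.0, 63.0]   # invented values

pairs = [(p, q) for p in pta_painful for q in pta_painfree]
concordant = sum(1.0 if p < q else 0.5 if p == q else 0.0 for p, q in pairs)
auc = concordant / len(pairs)
print(f"AUC = {auc:.2f}")  # 0.96 for these values
```

An AUC near 1 would mean PTA cleanly separates painful from pain-free dogs; values near 0.5, as reported for most scales here, mean the monitor ranks the two groups essentially at random.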
On average, the animals in the control group were below the manufacturer’s ‘pain value’. Looking at the details, only two out of nine animals in the control group were above the threshold of 50. Since the dogs in the control group were healthy, pain-free dogs, this most likely represents a misclassification of pain. In awake humans, there are no precise reference values in the literature that indicate a painful state. In the study of Issa et al. (2017), the PTA in awake, pain-free human patients was, on average, 82. Still, the values showed considerable scatter, and some patients showed values of around 40, although no painful stimulus occurred at that time. In Jess et al. (2016), the average values before any stimulation were 82.1 ± 10.7 and showed a high dispersion. The higher values observed in human medicine could be explained by the fact that the measurement procedure can be precisely explained to people before the study. Simply attaching a foreign object, such as an electrocardiogram, can be associated with enormous stress for a dog. Since stress is known to affect the sympathetic nervous system, this may explain why the PTA values of the control group were in such low ranges. Looking at the patients in the study group, the average PTA value before methadone administration was 56.16, which tends to be slightly higher than the PTA value of the control group. If the PTA were a reliable indicator of pain, this value would have to be considerably lower. If the threshold value is assumed to be 50, 12 of the 18 patients were above this threshold and would be pain-free according to the PTA index. In direct comparison, there was no significant difference ( p = 0.5403) between the PTA values of the awake patients before methadone administration and the control group. A significant difference would have indicated that the monitor could detect pain in the animal.
One reason for the slightly higher values could be the residual effect of the last administration of methadone four hours before the measurement. Methadone has a slightly depressant effect on the central nervous system (CNS), thus reducing stress and sympathetic tone. In addition, it could play a role that patients after trauma are more likely to have higher vagal tone or cardiac arrhythmias such as ventricular extrasystoles. Ventricular extrasystoles did not occur in the patients in the study group, but higher vagal tone may influence the PTA measurement. Another explanation for the tendency of the study group’s values to be slightly higher may be that these patients were accustomed to the connection of an electrocardiogram through handling during their inpatient stay and were, therefore, less stressed than the animals of the control group. Another striking finding is that the PTA values were significantly lower after the administration of methadone than before ( p = 0.0379). Several hypotheses can explain this. When administered intravenously, methadone acts immediately due to rapid redistribution from the blood plasma to the CNS, where it directly exerts its analgesic effect. Therefore, it can be assumed that the PTA values measured in the three minutes after injection were already influenced by the effect of methadone. Since parasympathetic activity is derived from the electrocardiogram, it is possible that the bradycardia induced by the opioid influences the PTA values. In addition, PTA depends on respiratory sinus arrhythmia, which may also be affected by the opioid, as opioids can cause respiratory depression. Müller (2021) notes in her study in anaesthetised dogs that the values of the PTA monitor three to five minutes after administering a drug, such as methadone, are not evaluable.
Consequently, in the present study, the values five to ten minutes after the methadone injection should have been compared with the values before the administration. However, this would have required leaving the monitor connected to the awake animal for a longer period, which is usually not tolerated by animals and makes the measurements impractical. Theoretically, the MGPS scores should be inversely related to the PTA scores: the higher the pain score, the lower the PTA score should be. However, looking at the results from both examiners, only 44.44% agreement between the MGPS and the PTA monitor was reached. The graphical representation comparing the assessments of both examiners in Fig. shows the lack of correlation between the results. Comparing the results of the CSU-CAPS with the values of the PTA monitor, we found agreement in 37.5% of the cases for the experienced examiner and in 56.2% of the cases for the inexperienced examiner. This roughly mirrors the results of many studies in human medicine. There was only 56.25% agreement between the two examiner groups when evaluating pain using the CSU-CAPS. In contrast, there was 88.89% agreement when evaluating with the MGPS. Seven out of 16 patients (43.75%) were classified as ‘pain-free’ according to the CSU-CAPS by the untrained examiner but were considered ‘painful’ by the trained examiner. It can be inferred that if inexperienced examiners used the CSU-CAPS, patients in pain would likely go unnoticed and would then not receive the necessary analgesia. Cerny (2011) compared the CSU-CAPS with a ‘Dynamic and Interactive Visual Analogue Scale’ (DIVAS) in cats and concluded that a large variability prevails between the different examiner groups as well as within the groups themselves. It seems that the MGPS can be better integrated into daily clinical practice, since it cannot always be guaranteed that pain evaluation is performed only by experienced examiners.
This theory is supported by a study from the Netherlands that investigated the use of the MGPS in clinical practice and confirmed its practicality for detecting pain. However, that study did not indicate whether patients classified as non-painful were indeed pain-free, because there is still no gold standard in pain evaluation against which pain scales can be measured. The high discrepancy between the two pain scales stems from the fact that, using the MGPS, the experienced examiner classified only 55.56% of the animals as painful, but 87.5% when using the CSU-CAPS. In the group of inexperienced examiners, the same percentage of patients was classified as painful when using the MGPS, but only 43.75% when using the CSU-CAPS. Thus, the inexperienced examiner arrived at the same assessment in 75% of the cases when using both pain scales on the same patient, but the experienced examiner only in 62.5%. This result suggests that, for both examiners, the MGPS may fail to detect some painful patients because they remain below the intervention level. When evaluated by the MGPS, only about half (55.56%) of the patients reached the intervention level and would thus receive an analgesic according to this pain scale. A closer look at the individual questions and the answer options reveals that a patient is more likely to reach the intervention level if it interacts with the examiner through vocalisation. However, this does not apply to all patients, as dogs do not show their pain only through vocalisation. Signs of pain can be very subtle and may only be reflected in a change in facial expression, which is why ‘grimace scales’ have become important in the context of pain evaluation. This leads to the assumption that if the MGPS alone is used, many painful patients will remain below the intervention level despite severe pain and will not receive an analgesic.
This is supported by the results of the Colorado State University pain scale, with which the experienced examiner concluded that the patient was in a painful state in 87.5% of cases. The CSU-CAPS includes many more parameters based on the patient’s facial expressions and gestures. This can be challenging for the inexperienced examiner and explains the high discrepancy between the two examiners. Unfortunately, there is still no gold standard against which to measure both pain scales. However, it is striking that by the MGPS, only 55.56% of patients were in pain, even though they were all one day post major trauma and the last analgesic administration was four hours earlier. Further or other pain scale methods should be considered, because analgesics are still underused in many cases. Simon et al. name this problem ‘oligoanalgesia’ and advocate placing more focus on pain evaluation during education and on training staff in clinical practice.

In human medicine, there are studies in awake patients that compare Analgesia Nociception Index (ANI) scores with unidimensional pain scales such as the Numeric Rating Scale (NRS). They concluded that there is a correlation between the two pain evaluation methods in the immediate postoperative period. This would be a helpful complementary tool for pain evaluation, especially for young children or cognitively impaired individuals who cannot self-assess their pain on an NRS. The Face-Legs-Activity-Cry-Consolability (FLACC) scale is often used for pain evaluation in young children, and the ANI measured immediately postoperatively in children correlates with the values on the FLACC scale. Ledowski et al. (2013) found a weak correlation between the ANI and the NRS, and concluded that sensitivity and specificity were low. The study by Charier et al. (2019) compared the ANI and the Variation Coefficient of Pupillary Diameter (VCPD) with a Visual Analogue Scale (VAS), pupillary diameter (PD), and pupillary light reflex (PLR). They concluded that the VCPD correlated better with the VAS than the ANI did in the postoperative period, and that the PD and PLR had no correlation with the VAS. They discussed many reasons for this lack of correlation, such as respiratory depression from anaesthesia, which could affect the ANI. This does not play a role in the dogs of the present study, as they were evaluated before general anaesthesia. Similarly, Charier et al. (2019) note that looking at the pupil has various limitations for evaluating pain, as the size of the pupil depends on both opioids and the lighting in the room; therefore, the VCPD is more appropriate than the static parameters PD and PLR. Only two studies that took place before general anaesthesia and evaluated healthy subjects are known to the author. In one study, 23 participants were given nociceptive stimuli in the form of small electrical pulses while the ANI was recorded, and they had to rank their pain on an NRS. In this study, only a very weak negative correlation between the ANI and the scores on the NRS was found. The authors also compared the ANI values with changes in ANI (ΔANI); the ΔANI had a slightly stronger negative correlation, but no clear negative correlation was seen. The study by Jess et al. (2016) differed in that patients were not continuously subjected to increasingly painful electrical stimuli but did not know whether the stimulus would be painful or non-painful. Nonetheless, that study came to the same conclusion, namely that the ANI has no negative correlation with an NRS and is likely to be strongly influenced by stress and emotion.

The present study has several limitations. It is possible that the administration of methadone four hours before the assessment with the pain scales and the PTA monitor influenced the results due to some residual effect.
To eliminate this factor, patients would have to be assessed with the pain scales and the PTA monitor before receiving any pain medication. We rejected this approach for ethical reasons, as patients need to receive an opioid as soon as possible after a major trauma. We assumed that the patients were painful due to their conditions and concluded that it may not be possible to identify some painful patients with the help of the MGPS. It is possible that some patients did not actually show any clear signs of pain due to the residual effect of methadone; however, this does not explain the fact that more patients were classified as painful by the CSU-CAPS. Additionally, the sample size was small in both the control and the study group, and the study was not blinded. Follow-up studies with a larger patient group could potentially increase statistical power. One possibility would be to evaluate the PTA in dogs in the immediate postoperative period, when patients are still lightly sedated from general anaesthesia and not stressed by environmental factors. However, the drugs administered during general anaesthesia would be an unavoidable factor affecting the measurement. To conclude, the PTA monitor is not an effective tool for detecting pain in awake dogs. The study also demonstrates that there is a high degree of variability between individual examiner groups when utilising the CSU-CAPS, which is not observed when using the MGPS. Finally, despite the use of established pain scales, it appears that some dogs cannot be identified as being in pain, especially when the examiner is not experienced in pain recognition. In summary, pain evaluation using multidimensional pain scales in the awake dog shows no correlation with the PTA, mirroring the results of studies from human medicine in awake subjects. Because pain recognition remains challenging, evaluating pain regularly and educating staff is important.
Implementing community-engaged pharmacogenomics in Indigenous communities (PMC10831049)

Pharmacogenomic research focuses on the genetic contribution to response to medications and characterizing genetic interindividual variation in drug-metabolizing enzymes and drug transporters, which can affect drug elimination and biotransformation. Varying allele frequencies in pharmacogenes across global populations have important clinical implications, yet consistent population grouping is lacking, inadequate, or inappropriate and continues to include many racial descriptors—a topic more fully discussed in the recent National Academies report on population descriptors in genomic research. The Pharmacogenomics Knowledgebase (PharmGKB) uses a biogeographic grouping system based on seven geographically defined groups, but this will have to be reassessed if data from Indigenous peoples are to be useful to distinct tribal groups, which are political, geographic, and cultural ethnic groups. Increasing American Indian and Alaska Native (AIAN) representation in pharmacogenomics research may lead to improved genotyping arrays that are inclusive of unique variants as well as variants that are more common in AIAN people, which may lead to improved predictions of genotype-phenotype associations. These data may lead to personalized drug therapy aimed to reduce health disparities and improve health outcomes in Indigenous people, although clearly these disparities are also impacted by health and social inequities that cannot be addressed by improved access to pharmacogenomics. Trial-and-error approaches traditionally used in medication management can be problematic for tribal members who may live far from healthcare facilities, and we advocate that pharmacogenomic-guided approaches be used as early as possible in prescribing practices to reduce risk of therapeutic failure.
While data are sparse, there are examples where pharmacogenomics research can impact the health of AIAN peoples, including the therapeutic areas of cancer, cardiovascular disease, smoking cessation, and transplantation. Most pharmacogenetic variation remains unknown across tribes and biogeographical regions of North America. An important class of drug-metabolizing enzymes is the cytochrome P450 (CYP) gene family, which plays a pivotal role in the biotransformation and elimination of xenobiotics, including pharmaceutical compounds. We summarize the known landscape of CYP pharmacogenetic variability in AIAN peoples in Table . Notably, frequencies of CYP variants are highly variable and population-specific. Novel genetic variants at relatively high frequency have been identified in several CYP genes in AIAN populations that may result in altered enzyme activity. There is a tendency to treat AIAN peoples as a homogenous group, but data from CYP pharmacogenes highlight the extensive heterogeneity within AIAN peoples. We highlight two examples where inclusion of AIAN participants in pharmacogenomics research has led to findings with important clinical implications. The first is the use of CYP3A5 pharmacogenetic-based dosing of tacrolimus, a primary immunosuppressant used in transplantation. Specific variants are designated by star alleles, such as CYP3A5*1 or *3 . CYP3A5 variant frequencies differ by population group, with the normal-function CYP3A5*1 and no-function CYP3A5*6 and CYP3A5*7 alleles enriched in populations of recent African ancestry, while the no-function CYP3A5*3 variant is more common in most other populations. Researchers have further identified the CYP3A5*3 variant at high frequency and the CYP3A5*6 and *7 variants at low frequencies in three AIAN communities in Montana and Alaska, similar to variant frequencies in self-identified AIAN kidney transplant recipients.
In transplantation, to avoid sub-therapeutic tacrolimus plasma concentrations and the potential for allograft rejection, individuals carrying a CYP3A5*1 allele require higher doses of tacrolimus compared to individuals carrying CYP3A5*3 , *6 , or *7 alleles, making the inclusion of AIAN populations in testing an important consideration. Another example of pharmacogene variation in AIAN populations is the identification of a common, novel, function-disrupting variant in CYP2C9 called M1L that predicts response to the anticoagulant warfarin. Further in vitro characterization found that CYP2C9 M1L conferred reduced catalytic activity, and an in vivo pharmacokinetic study suggested that M1L carriers exhibited slower drug elimination. For CYP2C9 pharmacogenetic-based warfarin dosing, patients with the M1L variant are at risk of adverse events when given a standard dose of warfarin and may require a lower starting dose at initiation of warfarin therapy. These examples highlight some benefits of pharmacogenomics research that is inclusive of AIAN participants and underscore the potential harms of not including a fuller spectrum of genetic variation in pharmacogenetic-guided drug therapy. The identification of novel variants requires time-intensive validation of genotype-phenotype function in in vitro or in silico models and validation in independent tribal populations. Additionally, drawing broad conclusions for all Indigenous peoples from basic and clinical research studies involving only a few AIAN populations is problematic, as communities may differ. Despite diverse views related to pharmacogenomic clinical utility and cost effectiveness, there is inherent value in doing pharmacogenomics (PGx) research with AIAN peoples. Characterizing pharmacogenetic variants in Indigenous populations is an important step toward establishing actionable precision drug dosing to improve clinical outcomes.
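The CYP3A5 logic above can be sketched as a simple diplotype-to-phenotype lookup. The allele functions follow the text (*1 normal function; *3, *6, *7 no function); the dosing notes are illustrative paraphrases of CPIC-style guidance, not clinical recommendations:

```python
# Minimal diplotype-to-phenotype sketch for CYP3A5 (illustrative only).
NO_FUNCTION = {"*3", "*6", "*7"}  # no-function star alleles named in the text

def cyp3a5_phenotype(allele1: str, allele2: str) -> str:
    """Classify a CYP3A5 diplotype as expresser or non-expresser."""
    functional = sum(a not in NO_FUNCTION for a in (allele1, allele2))
    if functional == 0:
        return "non-expresser: standard tacrolimus starting dose"
    return "expresser: consider a higher tacrolimus starting dose"

print(cyp3a5_phenotype("*1", "*3"))  # expresser
print(cyp3a5_phenotype("*3", "*6"))  # non-expresser
```

The point of the sketch is that a single normal-function allele is sufficient to change the recommendation, which is why population-specific allele frequencies, such as those measured in AIAN communities, matter for testing panels.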
The effect sizes of genome-wide association study loci also show great heterogeneity across different ancestries, and consequently, derived risk prediction scores may not translate well to diverse populations, which necessitates developing comprehensive variant information and improved analysis frameworks with Indigenous communities. Community-engaged research approaches, such as community-based participatory research (CBPR), encourage the building of partnerships between communities and researchers, in which research addresses community health priorities and helps build relationships and trust. While research may not offer clear or immediate benefits, it is important for researchers to try to establish direct or indirect benefits for AIAN peoples at the outset. These benefits could pertain to the individual research participant but could also emphasize the potential to improve health outcomes for future generations of AIAN peoples. CBPR approaches in pharmacogenomics research must prioritize community engagement and capacity building. For example, the Northwest-Alaska Pharmacogenomics Research Network established community-academic partnerships with three tribal community partners—and, in the example of CYP2C9 variation and the anticoagulant warfarin described above—used CBPR approaches that included maintaining tribal oversight, frequent discussions with tribal community advisory boards, and indirect benefits such as the training of Indigenous scholars and community members. Community-engaged research approaches are also being used by some private companies, potentially bringing flexibility and more sustainable funding mechanisms.
Essential considerations for increasing the inclusion of AIAN communities in precision medicine and pharmacogenomic research are: (1) ensuring tribal governance and oversight—particularly around issues of data sovereignty with respect to biospecimens and data ; (2) pursuing research that is inclusive of AIAN values and priorities ; (3) establishing productive research partnerships with sustained funding, which is challenging given current funding structures where grants are awarded for only finite periods of time; and (4) shifting toward long-term, community-engaged pharmacogenomics research. Various community-engaged approaches have been used to develop Indigenous reference genomes, genomic databases, and biobanks ; these resources are essential to ensuring Indigenous diversity is represented by improving imputation methods and providing equitable access to genomic research. For example, better reference genomes are needed to reflect the diversity of the world’s populations as well as to improve imputation and read mapping (which influences the apparent frequencies of rare variants or eliminates potential biases) . Importantly, Indigenous communities throughout the world are heterogeneous for novel variants, such that establishing “representative” or reference genomes may not be possible. Two initiatives—“Silent Genomes” in Canada ( www.bcchr.ca/silent-genomes-project ) and “Aotearoa Variome” in New Zealand ( www.genomics-aotearoa.org.nz )—aim to create Indigenous background variant libraries (IBVL) using a CBPR approach. Specifically, the Silent Genomes project is working to create an IBVL with the Indigenous people of Canada. The Aotearoa Variome is sequencing the genomes of New Zealanders, emphasizing Māori and Polynesian peoples. Biospecimen and data storage is also a primary concern for Indigenous people. 
The Native BioData Consortium, housed on tribal lands in South Dakota, recently formed to store Indigenous biospecimens and associated data ( https://nativebio.org ). These genomics initiatives are led and designed by Indigenous researchers, further emphasizing the importance of Indigenous knowledge and of tribal capacity building to ensure Indigenous governance over the data. Pharmacogenomic initiatives with Indigenous populations are global clinical research priorities, highlighting the need for comprehensive characterization of pharmacogenetic variation to guide precise medication management for Indigenous patients. The translation of pharmacogenomics research into clinical practice has generated much enthusiasm for the possibility of improving outcomes and personalizing treatments, yet this promise remains largely unfulfilled for Indigenous communities. Thus, it is imperative to prioritize engagement and collaborations between researchers and healthcare facilities serving Indigenous peoples to ensure inclusion and representation in pharmacogenetics-based precision medicine. Most pharmacogenomic implementation efforts have focused on patients served by large healthcare systems, and more research and resources need to be allocated to promoting clinical decision support, pharmacogenetic testing, and the return of clinically significant results in Indigenous communities. We are hopeful that the long-term health of AIAN communities can be improved and sustained with community-engaged approaches to pharmacogenomic-based precision medicine that are inclusive of traditional AIAN values and ethics. By using inclusive and community-driven approaches in pharmacogenomic research, we can diversify knowledge of pharmacogenomic variation and advance clinical implementation aimed at improving health and well-being in AIAN peoples.
Surfactin facilitates establishment of … (PMC11833321)

Microbes produce a plethora of small molecules with diverse activities, which are extensively exploited in modern society. Several of these natural products, often denoted secondary or specialized metabolites (SMs), have been pivotal in contemporary medicine and biotechnological industries. They serve as frontline therapy against infectious diseases, therapeutics for cancer, food additives, or crop protection agents. Besides the long-standing tradition of industrial exploitation, SMs are considered chemical mediators that modulate interactions within and between microbial species or even across kingdoms. For instance, defensive molecules might help producers protect their resources or niche from microbial competitors. Furthermore, some SMs function as signal molecules for coordinated growth (i.e. for quorum sensing) and cell differentiation. Among the diverse array of SM-producing microorganisms, the Bacillus subtilis species complex stands out as a prolific group with significant potential for SM production. This soil-dwelling bacterial species comprises several strains capable of synthesizing a wide range of SMs, including cyclic lipopeptides (LPs), polyketides, ribosomally synthesized and post-translationally modified peptides, and signaling molecules. Specifically, LPs are the most extensively studied class. They are synthesized by non-ribosomal peptide synthetases (NRPSs), which act as molecular assembly lines that catalyze the incorporation of amino acids into a growing peptide. In the B. subtilis species group, LPs are structurally categorized into three families: surfactins, iturins, and fengycins, based on their peptide core sequence. These molecules consist of seven (surfactins and iturins) or ten (fengycins) α-amino acids linked to β-amino (iturins) or β-hydroxy (surfactins and fengycins) fatty acids.
LPs exemplify multifunctional SMs, acting not only as antimicrobials by antagonizing other microorganisms but also playing pivotal roles in processes including motility, cellular differentiation, surface colonization, and signaling. Although significant progress has been made in understanding the mode of action, biosynthesis, regulation, and functionality of LPs, their functions in natural environments remain largely uncharacterized. Experimental studies addressing these questions are constrained by the immense biological and chemical diversity of soil microbiomes and by the community-level interactions modulating SM functions. Additionally, technical challenges in tracking and quantifying the in situ production of LPs and other classes of SMs pose further barriers to elucidating their natural role in soil. Most evidence supporting the multifaceted functions of LPs has been gathered under in vitro conditions using pure cultures. However, these controlled settings may not accurately reflect the complexity of soil environments and the actual dynamics of SM production in a broader ecological context. To address this limitation, several studies have adopted less complex systems that mimic natural biomes. One promising strategy is the use of synthetic bacterial communities (SynComs), which allow fundamental ecological questions to be tested in controlled yet more ecologically relevant conditions. For instance, Cairns et al. used a 62-strain SynCom to demonstrate how low antibiotic concentrations impact community composition and horizontal transfer of resistance genes, whereas Niu et al. built a seven-member community mimicking the core microbiome of maize, which was able to protect the host from a plant-pathogenic fungus.
Simultaneously, the development of soil-like matrices and artificial soils has provided a useful option for studying chemical ecology in highly controlled gnotobiotic systems compatible with analytical chemistry and microbiological methods. Thus, coupling artificial soil systems with simplified SynComs is a fast-growing approach to examining microbial interactions while maintaining some degree of ecological complexity. This study aims to explore the roles of LPs produced by a B. subtilis isolate during SynCom assembly and simultaneously dissect the impact of LPs on B. subtilis establishment success within SynComs. Utilizing an artificial soil-mimicking system, we assessed the impact of non-ribosomal peptides and bacillaene (a hybrid NRPS–polyketide product) ( sfp ), as well as specifically surfactin ( srfAC ) or plipastatin ( ppsC ), on the ability of B. subtilis to establish within a four-member SynCom. We demonstrated that surfactin production facilitates B. subtilis establishment success within the SynCom in a soil-mimicking environment. Regarding SynCom assembly, we found that the wild-type and non-producer strains had a comparable influence on SynCom composition over time. Moreover, we revealed that both the B. subtilis and the SynCom metabolomes were altered. Intriguingly, the importance of surfactin for the establishment of B. subtilis has been demonstrated in diverse SynCom systems of variable composition. Altogether, our work expands the knowledge about the role of surfactin production in microbial communities, suggesting a broad spectrum of action of this natural product.

Bacterial strains and culture media

All the strains used in this study are listed in . B. subtilis strains were routinely grown in lysogeny broth (LB-Lennox, Carl Roth, Karlsruhe, Germany; 10 g/L tryptone, 5 g/L yeast extract, and 5 g/L NaCl) supplemented with the appropriate antibiotic at 37°C with shaking at 220 rpm.
The strains composing the different synthetic communities were grown in 0.5× Trypticase Soy Broth (TSB; Sigma-Aldrich, St. Louis, Missouri, USA) for 24 h at 28°C with shaking at 220 rpm.

Bacillus subtilis establishment in the Dyrehaven synthetic community propagated in a soil-like matrix

The impact of introducing B. subtilis P5_B1 and its secondary-metabolite-deficient mutants into the SynCom was investigated using an artificial soil-mimicking microcosm . Spherical beads were created by dripping a polymer solution, comprising 9.6 g/L Phytagel™ and 2.4 g/L sodium alginate in distilled water, into a 2% CaCl2 cross-linker solution . After 2 h of soaking in 0.1× TSB as a nutrient solution, the beads were sieved to remove any residual medium. Twenty milliliters of beads were then transferred to 50 ml Falcon tubes. Cultures of B. subtilis P5_B1 and the four SynCom members were grown as described above. The members of the SynCom were mixed at different ODs, as the fast-growing strains (i.e. S. indicatrix and Chryseobacterium sp.) had to be included at low density to ensure SynCom stability. Specifically, Pedobacter sp. and Rhodococcus globerulus were adjusted to OD 2.0, whereas S. indicatrix and Chryseobacterium sp. were adjusted to OD 0.1 before mixing. Suspensions of B. subtilis P5_B1 and its mutants were standardized to OD 2.0. Next, bacterial inocula were prepared by mixing equal volumes of these adjusted cultures (the four members plus each B. subtilis strain, respectively), and 2 ml of this suspension was then inoculated into freshly prepared beads. The bead microcosms were incubated statically at room temperature. Concurrently, microcosms inoculated with each strain as a monoculture were set up as controls. At days 1, 3, 6, 9, 12, and 14, one gram of beads was transferred into a 15 ml Falcon tube, diluted in 0.9% NaCl, and vortexed for 10 min at maximum speed to disrupt the beads.
The suspensions were then used for cell number estimation via colony-forming unit (CFU) counting and flow cytometry. For colony counting, 100 μL of the sample was serially diluted, spread onto 0.1× TSA, and CFUs were estimated after 3 days. For the quantification of B. subtilis using flow cytometry, the samples were first passed through a Miracloth (Millipore) to remove any trace of beads and diluted 100-fold in 0.9% NaCl. Subsequently, 1 ml of each sample was transferred to an Eppendorf tube and assayed on a flow cytometer (MACSQuant VYB, Miltenyi Biotec). gfp-labeled B. subtilis was detected using the blue laser (488 nm) and filter B1 (525/50 nm); cells were detected down to 1 cell/ml. Controls with non-inoculated beads and 0.1× TSB were employed to identify background autofluorescence. Single events were gated in the GFP vs. SSC-A plot, where GFP-positive cells were identified for each sample.

WT: srfAC complementation assay

Overnight cultures of the strains of interest (OD 600 = 2.0; WT::mKate and srfAC::gfp) were premixed at a 1:1 ratio. The inoculum was prepared by mixing equal volumes of the premixed Bacillus suspension with each member of the SynCom. Subsequently, 2 ml of this mixture were inoculated into freshly prepared beads. Propagation of the microcosms and B. subtilis quantification were performed as described above.

Detection of secondary metabolites from artificial soil microcosms

To extract secondary metabolites from the bead samples, 1 g of beads was transferred into a 15 ml Falcon tube with 4 ml of isopropyl alcohol:ethyl acetate (1:3 v/v) containing 1% formic acid. The tubes were sonicated for 60 min and centrifuged at 13 400 rpm for 3 min. Then, the extracts were evaporated under N 2 overnight, re-suspended in 300 μL of methanol, and centrifuged at 13 400 rpm. The supernatants were transferred to an HPLC vial and subjected to ultrahigh-performance liquid chromatography–high-resolution mass spectrometry (UHPLC-HRMS) analysis.
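As an illustration of the plate-count quantification described above, colony counts from a 10-fold dilution series can be converted back to CFU per gram of beads. The sketch below is in Python rather than the R used elsewhere in this paper, and it assumes (not stated in the text) that 1 g of beads was dispersed in 9 ml of 0.9% NaCl before 100 μL was plated:

```python
def cfu_per_gram(colonies, dilution_step, plated_volume_ul=100,
                 sample_mass_g=1.0, diluent_volume_ml=9.0):
    """Back-calculate CFU/g of beads from a plate count.

    colonies:         colonies counted on one plate
    dilution_step:    number of 10-fold dilution steps applied (0 = undiluted)
    plated_volume_ul: volume spread on the plate (100 uL in this protocol)

    The initial suspension factor (assumed here: 1 g beads in 9 ml NaCl)
    converts CFU/ml of suspension into CFU/g of beads.
    """
    initial_dilution = (sample_mass_g + diluent_volume_ml) / sample_mass_g  # 10.0
    cfu_per_ml = colonies * 10 ** dilution_step * (1000 / plated_volume_ul)
    return cfu_per_ml * initial_dilution

# e.g. 37 colonies on the plate of the third 10-fold dilution:
# 37 * 10^3 * 10 * 10 = 3.7e6 CFU/g
```

Counts in the 30–300 colony range are usually considered reliable; plates outside that window would be taken from a different dilution step.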
The running conditions and the subsequent data analysis were performed as previously described .

Metatranscriptomic analysis

For RNA sequencing, the SynCom was propagated in the artificial soil matrix and challenged with either B. subtilis P5_B1 or the mutant impaired in NRP synthesis ( sfp mutant). A SynCom without B. subtilis inoculation served as the control group. On days 1 and 6, 4 g of beads from each treatment were snap-frozen in liquid nitrogen and stored at −80°C. RNA extraction was performed using the RNeasy PowerSoil Total RNA Kit (QIAGEN) following the manufacturer's instructions. After extraction, the samples were treated with the TURBO DNA-free kit (ThermoFisher) to degrade the remaining DNA. Library preparation and sequencing were carried out by Novogene Europe on a NovaSeq 6000 S4 flow cell with PE150 (Illumina). The reads were demultiplexed by the sequencing facility. Subsequently, reads were trimmed using Trimmomatic v.0.39 . Quality assessment was performed using FastQC, and reads were sorted with SortMeRNA v.4.2.0 to retain only the non-rRNA reads for the downstream analysis. Reads were then mapped onto the genomes of the strains (D764, D763, D757, D749, and B. subtilis P5_B1) using Bowtie 2 v.2.3.2 . Differential gene expression analysis was conducted using the R package DESeq2 , using the shrunken log2 fold change values for analysis. The P values of each gene were corrected using Benjamini and Hochberg's approach for controlling the false discovery rate (FDR). A gene was considered differentially expressed when the absolute log2 fold change was greater than 2 and the FDR was less than 0.05. For functional analysis, the protein-coding sequences were mapped to KEGG Ontology, Gene Ontology (GO) terms, and Clusters of Orthologous Genes (COGs) using eggNOG-mapper . Then, the eggNOG-mapper-annotated dataset was used for gene set enrichment and pathway analysis in GAGE .
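The DEG criterion above (Benjamini–Hochberg FDR < 0.05 and |log2 fold change| > 2) can be reproduced outside DESeq2. A minimal Python sketch of the step-up adjustment and the threshold filter, for illustration only (this is not the DESeq2 implementation, which additionally shrinks fold changes):

```python
def benjamini_hochberg(pvals):
    """Benjamini-Hochberg step-up adjusted P values."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_min = 1.0
    for rank in range(m, 0, -1):          # walk from the largest P value down
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * m / rank)
        adjusted[i] = running_min          # enforce monotonicity of adjusted P
    return adjusted

def is_deg(log2fc, pvals, lfc_cut=2.0, fdr_cut=0.05):
    """Apply the paper's cutoffs: |log2FC| > 2 and BH-adjusted P (FDR) < 0.05."""
    padj = benjamini_hochberg(pvals)
    return [abs(f) > lfc_cut and p < fdr_cut for f, p in zip(log2fc, padj)]
```

In practice the same filter is a one-liner on the DESeq2 results table in R; the sketch simply makes the two cutoffs explicit.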
Transcriptomic analysis was performed on three independent replicates for each sample.

Inhibition assay

The in vitro antagonistic effect of B. subtilis P5_B1 and its secondary metabolite-deficient mutants was assessed using double-layer agar plate inhibition assays against each SynCom member (target bacterium). All strains were cultured for 24 h in 0.1× TSB medium as described previously. The cultures underwent two washes with 0.9% NaCl followed by centrifugation at 10 000 rpm for 2 min, and the OD 600 was adjusted to 0.1. For the first layer, 10 ml of 0.1× TSA (1.5% agar) were poured into Petri dishes and allowed to dry for 30 min. Then, 100 μL of each target bacterium was added to 10 ml of 0.1× TSB containing 0.9% agar preheated to 45°C. This mixture was evenly spread on top of the 0.1× TSA and dried for an additional 30 min. Subsequently, 5 μL of each B. subtilis suspension was spotted on each plate. The plates were then incubated at room temperature, followed by examination of the inhibition zones in the lawn formed in the top layer. Similarly, we investigated the impact of exometabolites produced by SynCom members on the growth properties of the B. subtilis strains . Spent media from SynCom cultures were collected after 48 h of growth in 0.1× TSB at 25°C and 250 rpm, filtered through 0.22 μm filters, and stored at 4°C. Growth curves were generated in 96-well microtiter plates. Each well contained 180 μL of 0.1× TSB supplemented with 5% spent media from each SynCom strain and 20 μL of either B. subtilis WT or its mutants. Control wells contained only 0.1× TSB medium without spent media supplementation. Cultivation was carried out in a Synergy XHT multi-mode reader at 25°C with linear continuous shaking (3 mm), monitoring optical density at 600 nm every 5 min.

Competition assay

Overnight cultures of the SynCom members and the gfp-labeled B. subtilis strains (WT, sfp, and srfAC) were pelleted (8000 rpm, 2 min) and resuspended in 0.1× TSB at an OD 600 of 0.1.
Next, 200 μL of a SynCom member was inoculated into the first row of a 96-well microtiter plate. From there, the SynCom member was 10-fold diluted by transferring 20 μL of culture to the next row containing 180 μL of medium. This process was repeated for six dilution steps. Subsequently, 20 μL of the GFP-labeled B. subtilis variants was added to each well to establish the co-culture. Monocultures of both the SynCom member and the B. subtilis variants served as controls to calculate competitiveness in co-culture. Cultivation was carried out in a Synergy XHT multi-mode reader (BioTek Instruments, Winooski, VT, US) at 25°C with linear continuous shaking (3 mm), monitoring the optical density and GFP fluorescence (Ex: 482/20; Em: 528/20; Gain: 35) every 5 min. Kinetic parameters were estimated using the package GrowthCurver in R.

Bacillus subtilis specialized metabolite induction by synthetic community spent media

The WT strain was inoculated in the presence of culture spent media from the SynCom members. The spent media were obtained after 48 h of growth in 0.1× TSB and filtered through 0.22 μm filters. Ten percent (v/v) of spent medium was added to Erlenmeyer flasks containing potato dextrose broth (15 ml in 100 ml flasks), followed by inoculation with an overnight culture of P5_B1 (OD 600 = 0.1). After 48 h of incubation at 25°C and 220 rpm, the cultures were centrifuged, filtered, and subjected to HPLC analysis for surfactin detection. Surfactin could be detected down to 0.1 μg/ml using a purified standard.

Assessment of Bacillus subtilis establishment in diverse synthetic communities

To elucidate the role of surfactin in determining the establishment of B. subtilis within synthetic communities, we investigated whether P5_B1 can establish in various SynComs in a surfactin-dependent manner, using a methodology similar to the one described above for the competition assay.
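For reference, the per-row dilution factors produced by the plate scheme described for the competition assay (20 μL carried into 180 μL, repeated six times) form a simple geometric series; a short Python sketch:

```python
TRANSFER_UL = 20    # volume carried to the next row
RECEIVER_UL = 180   # fresh medium already in the next row

def row_dilution_factors(n_steps=6):
    """Dilution factor of the SynCom member in each plate row.

    Row 0 holds the undiluted culture; each 20-uL-into-180-uL transfer
    dilutes the culture 10-fold, giving factors 1, 10, 100, ..., 10^6.
    """
    step = (TRANSFER_UL + RECEIVER_UL) / TRANSFER_UL  # 200/20 = 10-fold
    return [step ** r for r in range(n_steps + 1)]
```

These factors are what the text refers to as the co-culture ratios (1, 0.1, 0.01, ...) of the SynCom member relative to B. subtilis.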
For this purpose, we selected five previously characterized bacterial SynComs, each with a distinct composition in terms of taxonomy and number of members, assembled for various objectives . In all cases, the SynCom members and the gfp-labeled B. subtilis strains (WT and srfAC) were cultured overnight in 0.5× TSB. Following two washes with 0.9% NaCl, the ODs were adjusted to 0.1 in 0.1× TSB. The SynCom members were mixed in a 1:1 ratio and then inoculated and diluted in a 96-well plate. Subsequently, 20 μL of the gfp-labeled B. subtilis variants were added to each well to create the co-culture . Monocultures of both the SynCom member and the B. subtilis variants were included as controls to determine competitiveness in the co-culture. Cultivation conditions and data analysis were conducted as described for the competition assay. Each experiment was performed with at least three independent replicates per treatment.

Statistical analysis

Data analysis and graphical representation were performed using R 4.1.0 and the package ggplot2 . Statistical differences in experiments with two groups were explored via Student's t-tests. For multiple comparisons (more than two treatments), one-way analysis of variance (ANOVA) and Tukey's honestly significant difference (HSD) test were performed. In all cases, normality and equality of variances were assessed using the Shapiro–Wilk and Levene tests, respectively. Statistical significance (α) was set at 0.05. A detailed description of the statistical analysis for each experiment is provided in the figure legends.

Description of the artificial soil system inoculated with the synthetic community

To assess the role of B. subtilis SMs in shaping bacterial community assembly under soil-like conditions, we previously customized a hydrogel matrix that supports the axenic growth of multiple bacterial strains and enables the quantification of specific B. subtilis LPs (i.e. surfactin and plipastatin) . We subsequently assembled a four-member bacterial SynCom obtained from the same sample site as B. subtilis P5_B1 . We selected these four isolates due to their shared origin with P5_B1, their stable co-existence in our hydrogel bead system, and their morphological distinctness, which allowed for straightforward quantification by plate count at detection limits around 10² CFU/g of beads. Although the relative abundance of each of the four strains fluctuated throughout the experiments, all four members were still detectable for up to three days of sampling .
At the end of the experiment, we observed a clear strain co-existence pattern in the SynCom, as previously reported: Stenotrophomonas indicatrix and Chryseobacterium sp. were the most dominant strains, R. globerulus was kept at low density, whereas Pedobacter sp. was below our detection limit after day 3 . Using this established experimental system, we explored the role of LPs in the successful establishment of B. subtilis , as well as in SynCom assembly and functionality. A schematic diagram illustrating the core experimental design and the scientific questions is presented in .

Surfactin production facilitates Bacillus subtilis P5_B1 establishment in a four-member synthetic community

To evaluate the contribution of specific LPs to P5_B1 establishment in the SynCom, we co-cultivated either the WT strain or the SM production-impaired mutants ( sfp , srfAC , and ppsC ) with the SynCom in the hydrogel matrix that mimics soil characteristics . Initially, we confirmed that P5_B1 and its mutant derivatives grew and produced the expected LPs when cultivated axenically in the soil-like system. All B. subtilis strains colonized the hydrogel system at comparable rates (ANOVA at day 14, P = .87), demonstrating a similar population dynamic pattern: a one-log increase within a day followed by a plateau of nearly 1 × 10⁷ CFU/g of hydrogel after three days of cultivation, which was maintained up to the final sampling time on day 14 . When introduced to the SynCom, the WT and the ppsC mutant (the latter producing surfactin but not plipastatin) successfully colonized the beads and maintained their populations at approximately 1 × 10⁷ CFU/g throughout the experiment, comparable to the titers obtained in axenic cultivation. In contrast, the population size of the B. subtilis genotypic variants impaired in non-ribosomal peptide ( sfp ) or solely surfactin ( srfAC ) production sharply declined during the first six days.
By the end of the experiment, the cell titers had decreased to around three log-fold below the initial population levels (ANOVA, P < .01) . Following up on these observations, we investigated whether the WT strain could rescue the srfAC mutant by co-inoculating a mixture of both strains into the SynCom. In this co-culture, the WT strain remained more competitive than the srfAC mutant. However, the presence of the WT strain, and presumably its surfactin production capability, evidently rescued the srfAC mutant, as its decline was less pronounced compared to when it was introduced alone into the SynCom . Subsequently, we investigated the potential contribution of individual SynCom members to the decline of the surfactin-deficient strains using a pair-wise competition assay in planktonic cultures. Here, varying ratios of each SynCom member and B. subtilis were assessed, and the reduction of growth (i.e. area under the curve) relative to the monoculture was measured. B. subtilis populations experienced a significant reduction when co-cultured with S. indicatrix D763 and Chryseobacterium sp. D764 at the highest ratios (1, 0.1, and 0.01 of the tested strain relative to the B. subtilis cultures), irrespective of the capability of B. subtilis to produce surfactin. However, in co-cultures where the SynCom members were diluted further (below 0.01 relative to B. subtilis ), the B. subtilis strains lacking surfactin production were outcompeted by S. indicatrix D763 and Chryseobacterium sp. D764. Overall, B. subtilis WT showed greater competitiveness against these SynCom members, maintaining higher growth at higher dilution ratios compared to the sfp and srfAC mutants. In contrast, the less competitive strains in the bead systems, R. globerulus D757 and Pedobacter sp. D749, only impacted B. subtilis growth at the highest co-culture ratio, with strains lacking surfactin production exhibiting growth comparable to the WT .
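The competitiveness metric used above (reduction in area under the growth curve relative to the monoculture) can be computed directly from the 5-min OD600 readings. A hedged Python sketch of only the AUC comparison (GrowthCurver itself fits a logistic model in R; this is not that implementation):

```python
def auc(od_series, dt_min=5.0):
    """Trapezoidal area under an OD600 time series sampled every dt_min minutes."""
    return sum((a + b) / 2 * dt_min for a, b in zip(od_series, od_series[1:]))

def growth_reduction(od_cocult, od_mono, dt_min=5.0):
    """Fractional growth reduction of B. subtilis in co-culture vs. monoculture.

    0.0 means no reduction; 1.0 means growth fully suppressed.
    """
    return 1.0 - auc(od_cocult, dt_min) / auc(od_mono, dt_min)
```

A strain whose co-culture AUC is half its monoculture AUC would thus score a growth reduction of 0.5 against that SynCom member.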
Bacillus subtilis secondary metabolites do not have a major impact on synthetic community assembly

Motivated by our observation that SM production, specifically surfactin, plays a crucial role in B. subtilis establishment success, we investigated whether these SMs impact the SynCom composition over time. To do this, we evaluated the abundance of SynCom members (CFU) using NMDS and PERMANOVA . Regardless of the B. subtilis strain introduced, the SynCom followed similar assembly dynamics as described above: S. indicatrix and Chryseobacterium sp. dominated the community, whereas R. globerulus and Pedobacter sp. were less abundant ( and ). Estimation of the growth rates and carrying capacity of each SynCom member in 0.1× TSB revealed that S. indicatrix , the most dominant strain, grew significantly faster and reached the highest cell density, whereas Pedobacter sp. grew at the slowest rate . This could explain the observed SynCom composition in the hydrogel system, which was dominated by the fastest-growing and most productive strains. A fixed-effect PERMANOVA using sampling time, B. subtilis variant, and their interaction (how sampling time and B. subtilis variant jointly influence community composition) confirmed that the main driver of SynCom composition was the sampling time (PERMANOVA, R² = 0.49, P = .001), with minor effects of the B. subtilis strain introduced (PERMANOVA, R² = 0.06, P = .037) and the interaction (PERMANOVA, R² = 0.18, P = .005). Overall, the results suggested that introducing either the WT or its SM-impaired mutants did not have a major impact on SynCom assembly, with the differences mainly explained by the sampling time ( and ). We investigated whether antagonistic activity between the SynCom members and B. subtilis could explain these observations. Using an in vitro inhibition test, we found that the less competitive strains, Pedobacter sp. D749 and R. globerulus D757, were both susceptible to B. subtilis .
Specifically, the antagonistic activity against Pedobacter sp. D749 was linked to NRP production, particularly surfactin, whereas R. globerulus was inhibited by all the variants. This suggests that other classes of SMs beyond NRPs, produced by B. subtilis , may contribute to the inhibition of these two species. Nevertheless, the SynCom-dominant strains, S. indicatrix D763 and Chryseobacterium sp. D764, displayed no growth reduction by B. subtilis and its SMs, as evidenced by the absence of inhibition halos .

Bacillus subtilis and synthetic community metabolomes are both altered during the establishment experiments

To explore the role of B. subtilis secondary metabolites in shaping the SynCom metabolome, and how surfactin production was modulated in co-cultivation, we profiled both the SynCom and B. subtilis metabolomes at day 14 of the experiment using liquid chromatography–mass spectrometry (LC–MS). A targeted approach revealed that the production of surfactin was significantly increased when the WT was grown in the presence of the SynCom compared with WT production in axenic cultures ( t -test, P = .0317) . This finding was further validated in vitro by supplementing P5_B1 cultures with cell-free supernatants from each of the SynCom members or from all strains together. Here, the spent media from both the monocultures and the SynCom induced surfactin production, with the highest increase observed when P5_B1 was supplemented with R. globerulus supernatant . Although most of the molecular features ( m/z ) detected in our system remained unidentified, the molecular network clearly showed the presence of the B. subtilis LPs plipastatin and surfactin and their analogs. Moreover, the presence of ornithine lipids (OLs) was observed in the dataset . These metabolites derive from the outer membrane of Gram-negative bacteria, where they act as surrogates of phospholipids under phosphate-limited conditions .
The abundances of these lipids ( m/z between 597 and 671) increased in the SynCom alone, indicating that this conversion of phospholipids to OLs occurs in the absence of B. subtilis . Ecologically, OLs have been linked to stress response . When surfactin producers (the WT or the ppsC mutant) were introduced into the system, the presence of OLs was strongly reduced. In contrast, with the sfp and srfAC mutants, OLs remained at levels comparable to the SynCom alone . We corroborated this observation by conducting an experiment with the SynCom in the presence of pure surfactin. Here, the same group of compounds ( m/z features) was altered in the surfactin-supplemented SynCom culture as in the co-cultures with surfactin-producing B. subtilis , although these features were abundant in the control samples (i.e. without B. subtilis ) .

Less competitive strains of the synthetic community were the species most transcriptionally affected by Bacillus subtilis specialized metabolites

To dissect the mechanism of how surfactin facilitates B. subtilis establishment within the SynCom, a meta-transcriptomic approach was conducted, comparing the transcriptional profiles of the SynCom challenged with the WT versus the sfp mutant. In total, 430 and 490 genes were differentially expressed in the SynCom after 1 and 6 days, respectively, when inoculated with the WT compared with the sample seeded with the sfp mutant. On both sampling days, the less competitive strains, Pedobacter sp. D749 and R. globerulus D757, had the highest numbers of differentially expressed genes (DEGs) in the system, accounting for around 83% of DEGs at day 1 and 95% of those at the last sampling point . Subsequently, we explored the distribution of clusters of orthologous groups (COG categories) among the DEGs to discover which processes within the SynCom are potentially affected by the introduction of either the WT or the sfp mutant. Here, many DEGs were not annotated or were classified as COG S, unknown function.
However, cell wall/membrane/envelope biogenesis (COG M) and amino acid transport and metabolism (COG E) were the most abundant functional categories among the genes downregulated in the SynCom with WT strain added relative to the SynCom in the presence of sfp mutant . We explored the functions and enrichment pathways of DEGs for the less competitive strains ( Pedobacter sp. D749 and R. globerulus D757). The GO enrichment analysis revealed that both strains responded transcriptionally differently in the presence of the WT strains. Whereas the enriched biological processes in R. globerulus D757 were related to defense mechanisms or response to other organisms, upregulated processes in Pedobacter sp. were linked to amino acid transport, specifically histidine . Surfactin-facilitated establishment of Bacillus subtilis is conserved across diverse synthetic communities To survey if surfactin is important for establishment of B. subtilis P5_B1 within diverse microbial communities, we assessed the abundance of WT and surfactin-deficient mutant in five previously published and characterized SynComs . These SynComs varied in composition, reflecting different functionalities and ecological niches. Overall, the co-culture experiments revealed that the ability of B. subtilis to establish within the SynComs depended on surfactin production, SynCom composition (number of members), and the inoculation ratio. In most SynComs, except for the Kolter Lab’s SynCom which was broadly invaded, both the WT and the srfAC mutant displayed reduced growth at a high inoculation ratio of SynCom (10:1, 1:1, 1:10). However, the WT, which produces surfactin, generally reached higher population densities compared to the surfactin-deficient mutant across most SynComs. 
To assess the role of B. subtilis SMs in shaping bacterial community assembly under soil-like conditions, we previously customized a hydrogel matrix that supports the axenic growth of multiple bacterial strains and enables the quantification of specific B. subtilis LPs (i.e. surfactin and plipastatin) . We subsequently assembled a four-membered bacterial SynCom obtained from the same sample site as B. subtilis P5_B1 . We selected these four isolates because of their shared origin with P5_B1, their stable co-existence in our hydrogel bead system, and their morphological distinctness, which allowed for straightforward quantification by plate count at a detection limit of around 10² CFU/g of beads. Although the relative abundance of each of the four strains fluctuated throughout the experiments, all four members were still detectable up to day 3 of sampling . At the end of the experiment, we observed a clear strain co-existence pattern in the SynCom, as previously reported: Stenotrophomonas indicatrix and Chryseobacterium sp. were the most dominant strains, R. globerulus was kept at low density, whereas Pedobacter sp. was below our detection limit after day 3 . Using this established experimental system, we explored the role of LPs in the successful establishment of B. subtilis , as well as in SynCom assembly and functionality. A schematic diagram illustrating the core experimental design and the scientific questions is presented in .
Bacillus subtilis P5_B1 establishment in a four-member synthetic community

To evaluate the contribution of specific LPs to P5_B1 establishment in the SynCom, we co-cultivated either the WT strain or the SM production-impaired mutants ( sfp , srfAC and ppsC ) with the SynCom in the hydrogel matrix that mimics soil characteristics . Initially, we confirmed that P5_B1 and its mutant derivatives grew and produced the expected LPs when cultivated axenically in the soil-like system. All B. subtilis strains colonized the hydrogel system at comparable rates (ANOVA at day 14, P = .87), demonstrating a similar population dynamic pattern: a one-log increase within a day followed by a plateau of nearly 1×10⁷ CFU/g of hydrogel after three days of cultivation, which was maintained up to the final sampling time on day 14 . When introduced to the SynCom, the WT and the ppsC mutant (which produce surfactin but not plipastatin) successfully colonized the beads and maintained their populations at approximately 1×10⁷ CFU/g throughout the experiment, comparable to the titers obtained in axenic cultivation. In contrast, the population size of the B. subtilis genotypic variants impaired in non-ribosomal peptide ( sfp ) or solely in surfactin ( srfAC ) production declined sharply during the first six days. By the end of the experiment, the cell titers had decreased to around three log-fold below the initial population levels (ANOVA, P < .01) . Following up on these observations, we investigated whether the WT strain could rescue the srfAC mutant by co-inoculating a mixture of both strains into the SynCom. In this co-culture, the WT strain remained more competitive than the srfAC mutant. However, the presence of the WT strain, and presumably its surfactin production capability, evidently rescued the srfAC mutant, as its decline was less pronounced compared to when it was introduced alone into the SynCom .
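The colonization dynamics described above (a roughly one-log increase within a day followed by a plateau near 1×10⁷ CFU/g) have the shape of logistic growth. As a minimal sketch, assuming an illustrative starting titre of 10⁶ CFU/g, a carrying capacity of 10⁷ CFU/g, and an arbitrary rate constant (none of these are fitted values from the study), the closed-form logistic curve reproduces that pattern:

```python
import numpy as np

def logistic_growth(n0, k, r, t):
    """Closed-form logistic growth: N(t) = K / (1 + ((K - N0)/N0) * exp(-r*t))."""
    return k / (1.0 + ((k - n0) / n0) * np.exp(-r * t))

# Illustrative parameters (assumed, not the study's data): start at 1e6 CFU/g,
# carrying capacity 1e7 CFU/g, and a rate chosen so the population gains
# nearly one log within the first day and plateaus by day 3.
days = np.linspace(0, 14, 141)
cfu = logistic_growth(n0=1e6, k=1e7, r=4.0, t=days)

print(f"day 0:  {cfu[0]:.2e} CFU/g")
print(f"day 3:  {logistic_growth(1e6, 1e7, 4.0, 3):.2e} CFU/g")
print(f"day 14: {logistic_growth(1e6, 1e7, 4.0, 14):.2e} CFU/g")
```

Fitting this curve to plate-count time series (e.g. with scipy.optimize.curve_fit) is one conventional way to extract the growth rate r and carrying capacity K compared later in the text.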
Subsequently, we investigated the potential contribution of individual SynCom members to the decline of the surfactin-deficient strains using a pair-wise competition assay in planktonic cultures. Here, varying ratios of each SynCom member and B. subtilis were assessed, and the reduction in growth (i.e. area under the growth curve) relative to the monoculture was measured. B. subtilis populations experienced a significant reduction when co-cultured with S. indicatrix D763 and Chryseobacterium sp. D764 at the highest ratios (1, 0.1 and 0.01 of the tested strain relative to the B. subtilis cultures), irrespective of the capability of B. subtilis to produce surfactin. However, in co-cultures where the SynCom members were diluted further (beyond 0.01 relative to B. subtilis ), B. subtilis strains lacking surfactin production were outcompeted by S. indicatrix D763 and Chryseobacterium sp. D764. Overall, B. subtilis WT showed greater competitiveness against these SynCom members, maintaining higher growth at higher dilution ratios compared to the sfp and srfAC mutants. In contrast, the less competitive strains in the bead system, R. globerulus D757 and Pedobacter sp. D749, only impacted B. subtilis growth at the highest co-culture ratio, with strains lacking surfactin production exhibiting growth comparable to the WT .

Bacillus subtilis secondary metabolites do not have a major impact on synthetic community assembly

Motivated by our observation that SM production, specifically surfactin, plays a crucial role in B. subtilis establishment success, we investigated whether these SMs impact the SynCom composition over time. To do this, we evaluated the abundance of SynCom members (CFU) using NMDS and PERMANOVA . Regardless of the B. subtilis strain introduced, the SynCom followed similar assembly dynamics as described above: S. indicatrix and Chryseobacterium sp. dominated the community, whereas R. globerulus and Pedobacter sp. were less abundant ( and ).
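Composition comparisons of the kind reported here (NMDS ordination plus PERMANOVA on CFU counts) are typically run with the vegan package in R. As a language-agnostic sketch, a minimal one-factor PERMANOVA (Anderson's pseudo-F statistic with a permutation p-value) can be hand-rolled on Bray-Curtis dissimilarities; the count table below is invented for illustration and is not the study's data:

```python
import numpy as np

def bray_curtis(x):
    """Pairwise Bray-Curtis dissimilarity for a samples-by-taxa count matrix."""
    n = x.shape[0]
    d = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                d[i, j] = np.abs(x[i] - x[j]).sum() / (x[i] + x[j]).sum()
    return d

def permanova(d, groups, n_perm=999, seed=0):
    """One-factor PERMANOVA (Anderson 2001): pseudo-F from a distance
    matrix plus a permutation p-value."""
    rng = np.random.default_rng(seed)
    groups = np.asarray(groups)
    n, labels = len(groups), np.unique(groups)
    ss_total = (d ** 2).sum() / (2 * n)

    def pseudo_f(g):
        ss_within = 0.0
        for lab in labels:
            idx = np.where(g == lab)[0]
            sub = d[np.ix_(idx, idx)]
            ss_within += (sub ** 2).sum() / (2 * len(idx))
        ss_between = ss_total - ss_within
        a = len(labels)
        return (ss_between / (a - 1)) / (ss_within / (n - a))

    f_obs = pseudo_f(groups)
    hits = sum(pseudo_f(rng.permutation(groups)) >= f_obs for _ in range(n_perm))
    return f_obs, (hits + 1) / (n_perm + 1)

# Toy CFU table: 3 replicates each for two sampling days (illustrative numbers).
counts = np.array([[900, 80, 10, 5], [850, 90, 12, 6], [880, 70, 9, 4],
                   [400, 500, 40, 1], [420, 480, 35, 2], [390, 520, 45, 1]],
                  dtype=float)
f, p = permanova(bray_curtis(counts), ["d1"] * 3 + ["d5"] * 3)
print(f"pseudo-F = {f:.2f}, p = {p:.3f}")
```

Note that with only three replicates per group the smallest achievable permutation p-value is limited by the number of distinct group assignments, which is why larger designs (or more factors, as in the fixed-effect model reported in the text) need dedicated software.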
Estimation of the growth rates and carrying capacities of each SynCom member in 0.1× TSB revealed that S. indicatrix , the most dominant strain, grew significantly faster and reached the highest cell density, whereas Pedobacter sp. grew at the slowest rate . This could explain the observed SynCom composition in the hydrogel system, which was dominated by the fastest-growing and most productive strains. A fixed-effect PERMANOVA using sampling time, B. subtilis variant and their interaction (how sampling time and B. subtilis variant jointly influence community composition) confirmed that the main driver of SynCom composition was sampling time (PERMANOVA, R² = 0.49, P = .001), with a minor effect of the B. subtilis strain introduced (PERMANOVA, R² = 0.06, P = .037) and of the interaction (PERMANOVA, R² = 0.18, P = .005). Overall, the results suggested that introducing either the WT or its SM-impaired mutants did not have a major impact on SynCom assembly, with the differences mainly explained by sampling time ( and ). We next investigated whether antagonistic activity between the SynCom members and B. subtilis could explain our observations. Using an in vitro inhibition test, we found that the less competitive strains, Pedobacter sp. D749 and R. globerulus D757, were both susceptible to B. subtilis . Specifically, the antagonistic activity against Pedobacter sp. D749 was linked to NRP production, particularly surfactin, whereas R. globerulus was inhibited by all the variants. This suggests that other classes of SMs beyond NRPs produced by B. subtilis may contribute to the inhibition of these two species. Nevertheless, the SynCom-abundant strains, S. indicatrix D763 and Chryseobacterium sp. D764, displayed no growth reduction by B. subtilis and its SMs, as evidenced by the absence of inhibition halos .

Bacillus subtilis and synthetic community metabolome are both altered during the establishment experiments

To explore the role of B.
subtilis secondary metabolites in shaping the SynCom metabolome, and how surfactin production was modulated in co-cultivation, we profiled both the SynCom and B. subtilis metabolomes at day 14 of the experiment using liquid chromatography-mass spectrometry (LC–MS). A targeted approach revealed that the production of surfactin was significantly increased when the WT was grown in the presence of the SynCom compared with WT production in axenic cultures ( t -test, P = .0317) . This finding was further validated in vitro by supplementing P5_B1 cultures with cell-free supernatants from each of the SynCom members or from all strains together. Here, the spent media from both the monocultures and the SynCom induced surfactin production, with the highest increase observed when P5_B1 was supplemented with R. globerulus supernatant . Although most of the molecular features ( m/z ) detected in our system remained unidentified, the molecular network clearly shows the presence of the B. subtilis LPs plipastatin and surfactin and their analogs. Moreover, the presence of ornithine lipids (OLs) was observed in the dataset . These metabolites are derived from the outer membrane of Gram-negative bacterial cells as surrogates of phospholipids under phosphate-limited conditions . The abundances of these lipids ( m/z between 597 and 671) increased in the SynCom alone, indicating that this conversion of phospholipids to OLs occurs in the absence of B. subtilis . Ecologically, OLs have been linked to stress response . When surfactin producers (WT or the ppsC mutant) were introduced into the system, the presence of OLs was strongly reduced. In contrast, with the sfp and srfAC mutants, OLs remained at levels comparable to the SynCom alone . We corroborated this observation by conducting an experiment with the SynCom in the presence of pure surfactin. Here, the same group of compounds ( m/z features) was altered in the surfactin-supplemented SynCom culture as in the presence of surfactin-producing B.
subtilis co-cultures, although these were abundant in the control samples (i.e., without B. subtilis ) .

Less competitive strains of the synthetic community were the most transcriptionally affected species by Bacillus subtilis specialized metabolites

To dissect the mechanism by which surfactin facilitates B. subtilis establishment within the SynCom, a meta-transcriptomic approach was conducted, comparing the transcriptional profiles of the SynCom challenged with the WT and with the sfp mutant. In total, 430 and 490 genes were differentially expressed in the SynCom after 1 and 5 days, respectively, when inoculated with the WT compared with the sample seeded with the sfp mutant. On both sampling days, the less competitive strains, Pedobacter sp. D749 and R. globerulus D757, had the highest numbers of differentially expressed genes (DEGs) in the system, accounting for around 83% of DEGs at day 1 and 95% of those at the last sampling point . Subsequently, we explored the distribution of clusters of orthologous groups (COG categories) among the DEGs to discover which processes within the SynCom are potentially affected by the introduction of either the WT or the sfp mutant. Here, many DEGs were not annotated or were classified as COG S (function unknown). However, cell wall/membrane/envelope biogenesis (COG M) and amino acid transport and metabolism (COG E) were the most abundant functional categories among the genes downregulated in the SynCom with the WT strain added relative to the SynCom in the presence of the sfp mutant . We then explored the functions and enriched pathways of the DEGs of the less competitive strains ( Pedobacter sp. D749 and R. globerulus D757). The GO enrichment analysis revealed that the two strains responded differently at the transcriptional level in the presence of the WT strain. Whereas the enriched biological processes in R. globerulus D757 were related to defense mechanisms or response to other organisms, the upregulated processes in Pedobacter sp. were linked to amino acid transport, specifically of histidine .
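GO (or COG) enrichment of the kind applied here to the DEG sets of the less competitive strains is commonly scored with a hypergeometric test: given a strain's annotated genome, how surprising is the observed overlap between the DEG list and a term's gene set? All counts below are illustrative assumptions, not the study's numbers:

```python
from scipy.stats import hypergeom

# Hypothetical GO-term enrichment for one strain's upregulated genes
# (every count here is invented for illustration):
genome_genes = 4000   # annotated genes in the strain (population size M)
term_genes = 60       # genes carrying the GO term of interest (successes n)
deg_genes = 250       # differentially expressed genes (draws N)
deg_in_term = 12      # DEGs that carry the GO term (observed overlap)

# P(X >= deg_in_term) under random sampling without replacement;
# sf(k-1, ...) gives the upper tail including k itself.
p_enrich = hypergeom.sf(deg_in_term - 1, genome_genes, term_genes, deg_genes)
print(f"enrichment p-value: {p_enrich:.4g}")
```

In practice such p-values are computed per term and corrected for multiple testing (e.g. Benjamini-Hochberg) before calling a process enriched.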
Surfactin-facilitated establishment of Bacillus subtilis is conserved across diverse synthetic communities

To survey whether surfactin is important for the establishment of B. subtilis P5_B1 within diverse microbial communities, we assessed the abundance of the WT and the surfactin-deficient mutant in five previously published and characterized SynComs . These SynComs varied in composition, reflecting different functionalities and ecological niches. Overall, the co-culture experiments revealed that the ability of B. subtilis to establish within the SynComs depended on surfactin production, SynCom composition (number of members), and the inoculation ratio. In most SynComs, except for the Kolter Lab’s SynCom, which was broadly invaded, both the WT and the srfAC mutant displayed reduced growth at high inoculation ratios of SynCom (10:1, 1:1, 1:10). However, the WT, which produces surfactin, generally reached higher population densities than the surfactin-deficient mutant across most SynComs. Although the difference between the WT and the srfAC mutant was less pronounced in these shaken cultures compared with the tests performed on the alginate bead microcosm, this could be due to the lack of the spatial structure present in the surface-attached communities or to differences in oxygen diffusion between the two experimental setups. When B. subtilis was inoculated at high ratios relative to the SynComs, the growth dynamics resembled those observed in axenic cultures of both the WT and the srfAC mutant .

Discussion

Secondary metabolites have traditionally been studied for their antimicrobial or anticancer properties. However, several of these natural products exert multifaceted functions, influencing the physiology of the producing microorganism and modulating interactions with other organisms . Understanding the role of these compounds in natural habitats (e.g. in soil) is crucial for optimizing their use and biotechnological applications.
However, this has been challenging due to the chemical and biological complexity of such habitats and the limitations of quantifying SMs in situ . Therefore, this study aimed to elucidate the contribution of cyclic LPs, particularly surfactin and plipastatin, to the establishment and functional dynamics of both B. subtilis and the SynCom members in a soil-mimicking environment. Our key findings demonstrate that surfactin production facilitates the establishment success of B. subtilis across multiple SynComs. Whereas surfactin was crucial for B. subtilis competitiveness, its production did not markedly alter the overall composition of the SynCom. Additionally, the metabolomic and transcriptomic analyses revealed that surfactin modulates the metabolic landscapes of both the producer and the SynCom. Together, our results support past observations and the long-standing hypothesis that bacteria lacking secondary metabolite production are less competitive than SM-producing wild types . We experimentally demonstrated the contribution of surfactin to B. subtilis success when inoculated in the presence of a SynCom using a reductionist approach: a four-member bacterial SynCom propagated in microcosms based on an artificial hydrogel matrix . One of the biggest methodological challenges in studying SM-driven microbial interactions is to mimic the environmental conditions. Consequently, the need for developing model systems of intermediate complexity for elucidating the ecological role of these molecules and shedding light on microbiome assembly-related questions has been widely stated . This is because classic axenic in vitro assays do not resemble crucial aspects of microbial niches, whereas natural samples are far too complex to dissect the underlying processes at the molecular level. Our SynCom is not intended to represent the natural sample site, i.e.
Dyrehaven soil community, from which all strains used in this study were isolated; rather, it represents a reproducible, trackable, and easy-to-set-up bacterial assemblage useful for testing the role of SMs in SynCom assembly and, together with the soil-mimicking matrix, might help to overcome the bottlenecks imposed by soil complexity in terms of microbial diversity and SM quantification. The described system aligns conceptually with recent approaches that used transparent microcosms mimicking the complexity of natural environments while also allowing hypotheses to be tested with statistical power in a controlled setup . Throughout the present work, we revealed the crucial role of surfactin in the establishment and persistence of B. subtilis within a set of diverse SynComs. Surfactin is by far one of the most-studied LPs and appears to confer a competitive advantage to B. subtilis under different conditions and environments. The relevance of this multifunctional SM has been demonstrated in biofilm formation , swarming and sliding motility , root and phyllosphere colonization , and the triggering of induced systemic resistance (ISR) in plants . Although it is not frequently highlighted as a primary function of surfactin, its contribution to the fitness of producers has been shown under different environmental conditions. For instance, Luo et al. demonstrated that a B. subtilis strain impaired in surfactin production did not colonize rice sheaths inoculated with Rhizoctonia solani , while the WT increased its population size over time . Similarly, Zeriouh and colleagues showed that an srfAB mutant (of Bacillus amyloliquefaciens UMAF6614) displayed reduced persistence in the melon phylloplane . In soil, similar observations were made, where surfactin-impaired mutants of B. subtilis were unable to colonize Arabidopsis thaliana roots .
In all these examples, the underlying mechanism links surfactin production with the triggering of Bacillus biofilm formation, surface spreading, and colonization. Even though further experiments are needed to fully understand how surfactin enhances B. subtilis establishment in the SynComs, we hypothesize that surfactin-mediated niche colonization (spreading and biofilm formation) and alterations of the SynCom chemical landscape might play important roles in the observed phenomenon. B. subtilis P5_B1 is a strong biofilm producer both in vitro and on plant roots in laboratory settings . We have shown here and previously that P5_B1 produces surfactin in the microcosms at levels presumably required for the timing of biofilm formation (~15 μg/g of beads) , which may aid its attachment to the hydrogel beads, creating niches where B. subtilis could minimize competition for resources with other SynCom members. Furthermore, the surfactin-induced modulation of the overall SynCom chemical landscape could lead to niche differentiation. By reshaping community chemodiversity, surfactin may help to create distinct ecological niches. This differentiation could be essential for reducing competition and allowing the coexistence of the surfactin-producing strain within the community. Alternatively, surfactin production could help B. subtilis cope with a potential oxygen depletion induced by SynCom growth. Such a function of surfactin has recently been demonstrated, where surfactin production mediated B. subtilis survival via membrane depolarization and increased oxygen diffusion under low oxygen concentrations . We observed that the WT and the SM-mutant strains had hardly any influence on the composition and dynamics of the SynCom, but surfactin production altered the chemical diversity of the SynCom, in addition to the sensitivity of minor SynCom members to B. subtilis SMs. Several studies have highlighted that isolates of the B.
subtilis species complex are not strong competitors of the indigenous soil microbiota and, as a consequence, they did not shift the composition of the rhizosphere bacterial community to a considerable degree, or mainly influenced specific groups of the rhizosphere microbial community . However, application of B. subtilis and its close relatives in the rhizosphere improves plant health and resilience, and SM production contributes to these properties. Beyond the impact of the examined LPs on B. subtilis growth dynamics and SynCom composition, we found that surfactin production was stimulated in the presence of the SynCom or of specific SynCom members compared to B. subtilis monocultures. This observation supports the well-established notion that microbial interactions play a crucial role in modulating the production of bioactive secondary metabolites . Several studies have elegantly demonstrated the enhanced production of various natural products and its consequences for the producers (reviewed in ). For example, Andric et al. showed that Bacillus velezensis , a member of the B. subtilis complex, increases the production of bacillaene and surfactin upon sensing metabolic cues produced by Pseudomonas sessilinigenes CMR12a, leading to enhanced antibacterial activity by B. velezensis . The increased surfactin production observed under our experimental conditions likely provides benefits to B. subtilis during community-level interactions. Beyond its antagonistic activity, particularly against closely related species, surfactin production is linked to multiple beneficial Bacillus phenotypes, potentially serving as defensive responses upon detection of bacterial competitors. For instance, phenotypes such as increased biofilm formation , enhanced motility , induction of sporulation , and secondary metabolite production have been proposed as defensive mechanisms after sensing competitors . However, the underlying mechanisms regulating B.
subtilis SM production in response to their neighbors’ activity remain largely unknown. The so-called “competition sensing” hypothesis provides an ecological framework, suggesting that microbes have evolved the ability to sense hazard signals coupled with a stress response that enables a “counterpunch” by upregulating the production of antibiotics and toxins . Similarly, the SynCom-secreted metabolome was modulated by surfactin production. Here, we observed that primarily OLs were downregulated when the SynCom was exposed to surfactin. In sum, soil bacteria are well known for their potential to synthesize a plethora of SMs with a wide diversity of activities. Our understanding of the ecological roles of these metabolites under natural conditions has only begun to be unlocked. Our observations, gathered in an experimental system of intermediate ecological complexity, revealed the role of surfactin in the ecology of the producer and how this SM impacts the metabolism of its interacting partners. Thus, we hypothesize that the production of multimodal secondary metabolites by B. subtilis is a refined strategy that contributes to fitness and persistence in natural habitats where competition can be fierce.
To study the utility of HER2 and Ki-67 as immunohistochemical prognostic markers in comparison to histopathological parameters and tumour, node and metastasis staging in colorectal carcinoma

Colorectal carcinoma (CRC) is one of the most prevalent malignancies globally, and it is the fourth leading cause of cancer-related death. CRC incidence rates vary greatly, with roughly 60% of cases identified in wealthy nations. The switch to a more Western diet has been linked to an increase in cancer rates in emerging nations . In 2020, CRC accounted for 10% of global cancer incidence and 9.4% of cancer-related deaths, trailing only lung cancer, which accounted for 18% of fatalities. Based on projections of population growth, ageing, and human development, the global number of newly diagnosed CRC cases is expected to reach 3.2 million by 2040 . The majority of Indian studies show that these tumours occur in people aged 45 to 84, with a male preponderance. The most likely locations for carcinomas are the rectum and sigmoid colon. Carcinoma is a malignant neoplasm of epithelial origin, i.e. a cancer originating from the internal and external linings of the body. All cancers of the large intestine (colorectal cancer) originate between the cecum and the anus. Colon cancer, which extends from the cecum to the sigmoid (about 15 cm above the anal margin), and rectal cancer, which extends from the recto-sigmoid to the anus, are the two types of colorectal cancer. CRC originates as a benign polyp on the inner lining of the colon or rectum .
Owing to unfavourable stressors such as obesity, a calorie-rich diet devoid of fibre, minimal physical activity, cigarette smoking, and alcohol consumption, in addition to an ageing population in high-income nations, epithelial cells are exposed to luminal contents for a lengthened transit time. As a consequence, the colorectal epithelium is more susceptible to the impact of mutagenic compounds, thereby increasing cancer risk . Patients may present with a variety of signs and symptoms, including occult or overt rectal bleeding, changes in bowel habit, anaemia, or abdominal discomfort. Individuals over the age of 45 should have a colonoscopy if they experience rectal bleeding. CT colonography and MRI of the abdomen and pelvis are complementary imaging methods for the diagnosis of colorectal cancer. MRI is used for locoregional staging in rectal cancer. PET-CT imaging is also being used. The tumour, node and metastasis (TNM) and American Joint Committee on Cancer (AJCC, 8th edition) classifications are used to predict the prognosis of newly diagnosed colorectal cancer (CRC). Tumour extent, lymph node status, tumour grade, and evaluation of lymphatic and venous invasion remain the major morphological prognostic variables. Tumour budding and tumour border configuration are important additional histological markers, although they are not considered critical in prognosis . The availability of monoclonal antibodies against the vascular endothelial and epidermal growth factor receptors has recently boosted the therapeutic arsenal. In mCRC, preclinical and clinical trials of anti-HER2 targeted therapy have shown promising results. Because of the high mortality rate in advanced metastatic cancer, improved detection procedures appear necessary. Immunohistochemistry (IHC) exploits the selective binding of antibodies to antigens in biological tissues to detect antigens in the cells of a tissue section. The antibody-antigen binding can be visualized in a variety of ways.
In histology, immunohistochemistry is used to identify the presence of a particular protein marker that can help with tumour categorization and diagnosis. HER2, also known as ErbB-2, is a receptor tyrosine-protein kinase. The prognostic biomarker human epidermal growth factor receptor 2 (HER2) is utilised to characterize tumoural tissues. This transmembrane receptor protein is present on all normal cells and is involved in a wide range of biological processes, such as cell proliferation, apoptosis, differentiation, and cell migration . HER2 protein overexpression or gene amplification has been associated with higher stage, positive lymph node status, and a tendency towards poor overall survival. Ki-67 serves as a prognostic predictor in many well-known cancers. Its expression has been established to be inextricably bound to cell growth. Ki-67 expression and proliferative activity appear to rise dramatically, and Ki-67 is essential in the majority of cell cycle stages. Elevated expression of Ki-67 in CRC is associated with a lower survival rate, carcinogenesis, and cancer cell metastasis, suggesting that Ki-67 might be used as a prognostic biomarker in CRC patients . The current study proposes to investigate the status of Ki-67 and HER2 expression in colorectal carcinomas in connection with prognostic parameters such as histological type, grade, tumour size, and lymph node status. The objectives were: a) to confirm already diagnosed colorectal carcinoma by histopathological examination; b) to determine the histological grade of CRC based on histopathological prognostic markers; c) to determine the stage of colorectal carcinoma by TNM classification based on the American Joint Committee on Cancer (AJCC); d) to assess HER2 and Ki-67 expression in tumour tissues of the colon and rectum by immunohistochemistry; e) to compare Ki-67 and HER2 expression with histopathological prognostic markers and TNM staging.
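Ki-67 immunoreactivity is usually summarized as a labeling index, the percentage of tumour nuclei staining positive. A minimal sketch is shown below; the 50% high/low cutoff is an assumption for illustration only, since the study does not state its threshold:

```python
def ki67_labeling_index(positive_nuclei, total_nuclei):
    """Ki-67 labeling index: percentage of counted tumour nuclei that stain positive."""
    if total_nuclei <= 0:
        raise ValueError("total_nuclei must be > 0")
    return 100.0 * positive_nuclei / total_nuclei

def classify_ki67(index, cutoff=50.0):
    """Dichotomize the index as high/low; the 50% cutoff is a hypothetical
    illustration -- published studies use varying thresholds."""
    return "high" if index >= cutoff else "low"

# Hypothetical count from one hot-spot field: 620 positive of 1000 nuclei.
idx = ki67_labeling_index(positive_nuclei=620, total_nuclei=1000)
print(f"Ki-67 index: {idx:.1f}% -> {classify_ki67(idx)}")  # 62.0% -> high
```

Counting is normally performed in the areas of highest staining density ("hot spots"), and the resulting index can then be cross-tabulated against grade, stage, and nodal status as proposed in the objectives.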
Review Questions : how do HER2 and Ki-67 expression levels correlate with histopathological parameters in colorectal carcinoma? What is the comparative prognostic value of HER2 and Ki-67 immunohistochemical markers versus TNM staging in colorectal carcinoma? Can HER2 and Ki-67 immunohistochemical analysis provide additional prognostic information beyond traditional histopathological parameters in colorectal carcinoma? How do HER2 and Ki-67 expression patterns vary across different stages of colorectal carcinoma according to TNM staging? What are the implications of incorporating HER2 and Ki-67 immunohistochemistry into the prognostic assessment of colorectal carcinoma alongside TNM staging and histopathological features? Study Design : the current study is an observational, cross-sectional, retrospective and prospective study lasting two years (June 2022 to June 2024), conducted in the Histopathology and Immunohistochemistry division of the Department of Pathology, Jawaharlal Nehru Medical College, Sawangi (Meghe), in coordination with the Department of General Surgery, Acharya Vinoba Bhave Rural Hospital, Sawangi (Meghe). Approval will be obtained from the Institutional Ethics Committee, and informed consent will be taken from the patients participating in this study. Inclusion Criteria : a) cases already diagnosed as colorectal carcinoma on histopathology; b) all operated cases of colorectal carcinoma; c) primary cases of colorectal carcinoma without any history of previous treatment; d) all patients with colorectal carcinoma arising de novo . Exclusion Criteria : a) all benign lesions of the colon and rectum; b) all already treated cases of colorectal carcinoma; c) all patients in whom the colorectal carcinoma is arising as a result of recurrence; d) patients with no histological confirmation of colorectal carcinoma.
Participants, intervention, comparison and outcomes (PICO) : the information for PICO (participants, intervention, comparison, and outcomes) is provided below. Population : patients diagnosed with colorectal carcinoma (CRC). Intervention : immunohistochemical analysis of HER2 and Ki-67 expression levels. Comparison : comparison with histopathological parameters and TNM staging. Outcomes : evaluation of the utility of HER2 and Ki-67 as prognostic markers in CRC and comparison with traditional histopathological parameters and TNM staging in predicting disease prognosis. Information sources : the search will use sensitive topic-based strategies designed for each database. The search will be carried out in the following databases: PubMed, Embase, CINAHL, ResearchGate, AJOL, Google Scholar, Web of Science, Scopus and Cochrane Library. Only observational studies will be included. Search strategy : (("Colorectal Neoplasms"[MeSH] OR "Colorectal Carcinoma"[Text Word]) AND ("HER2"[MeSH] OR "HER2 Positive"[Text Word]) AND ("Ki-67 Antigen"[MeSH] OR "Ki-67"[Text Word]) AND ("Immunohistochemistry"[MeSH] OR "IHC"[Text Word]) AND ("Prognostic Markers"[MeSH] OR "Prognostic Factors"[MeSH] OR "Prognosis"[MeSH]) AND ("Histopathological Parameters"[MeSH] OR "Histopathology"[MeSH]) AND ("TNM Staging"[MeSH] OR "TNM Classification"[MeSH])). Sample size : the sample size calculation for a study estimating a population prevalence was described by Daniel in 1999. The calculation is intended to determine an adequate sample size to estimate the population prevalence with good precision. The sample size will be calculated according to the formula suggested by Daniel: n = Z²(α/2) × p × (1 − p) / d², where Z(α/2) is the standard normal value at a 5% level of significance (95% confidence interval) = 1.96, p is the prevalence of colorectal carcinoma = 0.2854, d is the desired margin of error = 7% = 0.07, and n is the sample size.
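Daniel's prevalence-based sample size formula stated above can be written as a small helper. The worked example below uses the textbook values p = 0.5 (maximum variability) and d = 0.05 absolute precision at a 95% confidence level; these are illustrative assumptions, not the study's parameters:

```python
import math

def daniel_sample_size(p, d, confidence_z=1.96):
    """Daniel (1999) sample size for estimating a prevalence:
    n = Z^2 * p * (1 - p) / d^2"""
    return (confidence_z ** 2) * p * (1 - p) / (d ** 2)

# Classic worked example (assumed values, not the study's):
n = daniel_sample_size(p=0.5, d=0.05)
print(f"required sample size: {math.ceil(n)}")  # 385
```

Rounding is always upward (ceiling), since a fractional participant cannot be recruited and rounding down would fall short of the target precision.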
n = 1.96² × 0.2854 × (1 − 0.2854) / 0.07² = 63.85, i.e., approximately 60-65 patients are needed in each study group. Study Reference: Consensus document for management of colorectal carcinoma by ICMR. Formula Reference: Daniel (1999). Statistical formulas: Kappa statistics, test statistics. Software Used: SPSS version 27.0. Level of study: Level III. Sample Allocation: random selection of patients. Study Design: prospective cross-sectional study. Approach to present study : the approach for the current study is detailed in the figure provided below. This figure outlines the step-by-step approach used in conducting the research and presenting the findings. It encompasses various stages of the study, such as determining the tumour stage using TNM and evaluating the expression of Ki-67 and HER2. This approach ensures transparency and clarity in how the study was conducted, facilitating reproducibility and reliability of the results. Grossing techniques for colon and rectum specimens : surgery is the mainstay of treatment of colorectal carcinoma. TNM staging of the tumour is a major prognostic factor which helps decide further management. Derivation of the TNM stage is entirely dependent upon a meticulous examination and appropriate sampling of the surgical specimen by the pathologist. 1) Unopened specimens in formalin were received along with proper clinical history. The specimens were checked for identification; 2) the nature of the surgical procedure was noted; 3) the length of the entire specimen was recorded; 4) palpation of the tumour was carried out on the outer aspect of the specimen; 5) the quality of total mesorectal excision was assessed before application of ink or opening of the APR and AR specimens; 6) both aspects of the specimen were photographed for record purposes; 7) the tumour site was examined for perforation before inking; 8) the non-peritonealised surface was painted with ink, with special reinforcement of the NPS related to the tumour.
It is advised not to paint the serosa; 9) upon being inked, the specimen should be opened from the anterior aspect, starting from either end of the tumour to 1 cm above and below the tumour; 10) the distances of both longitudinal resection margins from the tumour are noted; 11) the location of the tumour was recorded in relation to the anterior peritoneal reflection in the rectosigmoid, AR, and APR specimens; 12) the entire specimen was fixed in an appropriate volume of formalin over the course of 48 hours; 13) upon adequate fixation, the longitudinal mucosal resection margins were sampled; 14) the size of the tumour was documented in two dimensions; 15) appropriate parts of the tumour were sampled as described above and submitted for microscopy; 16) all lymph nodes were dissected and submitted; 17) the rest of the bowel segment was examined for any abnormality; 18) mesorectum/peri-colonic fat was sampled. Sections to be taken: a) four or five sections of the tumour, all inclusive of serosa and/or CRM; b) all lymph nodes dissected off the specimen and submitted according to the level of the tumour; c) longitudinal mucosal resection margins; d) adjacent mucosa; e) sample from any other grossly abnormal area. Materials : a) the study will include approximately 60 resected specimens from confirmed and planned colectomy specimens received in the Department of General Pathology, J.N.M.C; b) formalin-fixed, paraffin-embedded blocks of tumour masses from resected colectomy specimens; c) 10% formalin; d) grossing instruments (grossing tray, knife, scalpel, measuring tape, plain forceps, toothed forceps); e) automated tissue processing assembly; f) haematoxylin and eosin stain; g) HER2 and Ki-67 markers; h) glass slides (Blue Star®), dimensions 7.5 × 2.5 centimeters; i) binocular research microscope. Staining Protocol: haematoxylin and eosin staining : a) colorectal carcinoma sections are deparaffinized in xylene, three 10-minute changes; b) dewaxing of the sections is performed.
Sections are rehydrated through descending grades of alcohol; c) bring sections to water; d) in a jar, stain for 10 minutes with Harris hematoxylin; e) wash for 2-3 minutes under running water; f) for a few seconds, differentiate in 1% acid alcohol (1% HCl in 70% alcohol); g) alkaline water for 5 minutes; h) stain in 1% aqueous eosin for 1 minute; i) dehydrate through 90% alcohol; j) mount in Dibutylphthalate Polystyrene Xylene (DPX). Procedure for immunohistochemistry as given by the manufacturer (PathnSitu) : a) 3 μm sections are incubated for 1 hour at 60-70 °C on charged slides; b) deparaffinize in xylene, two changes of 15 minutes each; c) hydrate through descending grades of alcohol as follows: 1) absolute alcohol, 2 changes, five minutes each; 2) 90% alcohol, 5 minutes; 3) 70% alcohol, 5 minutes; 4) wash in distilled water, two changes, 2 minutes each; 5) antigen retrieval for 15-20 minutes in MERS, with the pH of the retrieval buffer (6, 8, or 9.5) chosen as per the marker; 6) wash in distilled water, two changes, 2 minutes each; 7) wash in PBS/TBS for 2 minutes; 8) endogenous peroxidase activity is blocked by applying H2O2 to the section for 5 minutes, followed by two 2-minute washes in the wash buffer; 9) apply the HER2/Ki-67 primary antibody for 30 minutes in a moist chamber, then wash twice in the wash buffer for 2 minutes each; 10) apply the Polyexcel Target Binder reagent for 12 minutes, then wash in two changes of buffer for two minutes each; 11) incubate with Polyexcel HRP for 12 minutes, then wash with buffer for 2 minutes, two changes; 12) add working DAB chromogen (1 ml DAB buffer + 1 drop DAB chromogen, mixed thoroughly) and leave for 2-5 minutes before washing in distilled water; 13) counterstain with hematoxylin for 30 seconds, then rinse with water; 14) dehydrate (70%, 90%, and absolute alcohol), clear (xylene), and mount as standard.
Methodology of interpretation. Interpretation based on histologic grade : a variety of colorectal cancer grading systems have been proposed; however, there is no single generally acknowledged and regularly utilised grading system. Most tumours are classified into three or four grades: a) Grade 1 (G1) - well-differentiated; b) Grade 2 (G2) - moderately differentiated; c) Grade 3 (G3) - poorly differentiated; d) Grade 4 (G4) - undifferentiated. Despite high interobserver variability, multivariate analysis has repeatedly demonstrated that histologic grade is a stage-independent prognostic factor. High tumour grade, in particular, has been shown to be an adverse prognostic factor. The use of a two-tiered grading system for colorectal cancer is recommended in view of its established prognostic value, relative simplicity, and consistency. The following grading standards are proposed based only on gland formation: a) low grade: ≥ 50% gland formation; b) high grade: < 50% gland formation. Statistical analysis : it will be carried out with the chi-square test by analyzing the relationship of Ki-67 and HER2 protein expression in colorectal carcinoma. Multiple linear regression analysis will be performed to determine the relative elements that contribute to metastasis. A value of P < 0.05 will be considered to indicate statistical significance. Scope : due to its high prevalence and mortality rate, colorectal cancer (CRC) has evolved into a global public health issue. Immunohistochemical analysis and protein markers such as HER2 and Ki-67 are now being used to enhance the identification of individuals who are more likely to have a poor clinical outcome and hence benefit from early detection. Limitations : a) inter-observer and intra-observer variability; b) technical errors during processing can influence the interpretation of immunostaining. Observation and results : they will be collected and combined over the period of two years and will be analyzed statistically.
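The two-tiered grading rule described above reduces to a single gland-formation threshold, which can be sketched as follows (the function name is a hypothetical illustration, not part of the protocol):

```python
def two_tier_grade(gland_formation_pct: float) -> str:
    """Two-tiered colorectal carcinoma grade based solely on the
    percentage of gland formation (>= 50% -> low grade)."""
    return "low grade" if gland_formation_pct >= 50 else "high grade"

print(two_tier_grade(70), "/", two_tier_grade(30))  # -> low grade / high grade
```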
Trial registration number: this study is registered with the Clinical Trials Registry - India (CTRI) and the trial number is CTRI/2023/05/053165.
Conclusions will be drawn from the results of this IHC study employing HER2 and Ki-67 to identify HER2 and Ki-67 protein expression and to correlate it with histological prognostic indicators and TNM staging. |
Determination of drug-related problems in the hematology service: a prospective interventional study | 616c2d36-f038-4463-8e09-797be9432646 | 11067252 | Internal Medicine[mh] | Hematological malignancies include a variety of diseases such as Hodgkin lymphoma, non-Hodgkin lymphoma, leukemias, and multiple myeloma. New treatment strategies have been developed for all these diseases, and patient survival times have increased. Hematological cancer patients require combination therapy using a variety of antineoplastic agents and supportive care medications. Polypharmacy, the use of multiple medications, is common in this patient group. Polypharmacy increases the risk of drug-related problems (DRPs). DRPs are defined as events or situations involving medication that interfere with desired health outcomes. DRPs include inappropriate dosage and method of administration, drug-drug interactions, drug omissions and monitoring deficiencies, and adverse drug reactions. These problems may prevent drug therapy goals from being achieved or may harm the patient. They also cause prolonged hospital stays, readmissions, and increased mortality. Within a multidisciplinary team, clinical pharmacists can detect and prevent DRPs early through comprehensive medication review. Clinical pharmacy services are relatively new in Turkey. Although there have been postgraduate programs (master's degree, doctorate) related to clinical pharmacy for years, a clinical pharmacy specialty program has existed only since 2018. Only graduates of the clinical specialty program can work in public hospitals. Therefore, the number of clinical pharmacists actively working in hospitals is relatively low. The contributions of clinical pharmacists in identifying and preventing DRPs have been demonstrated in many clinical departments. However, studies on determining DRPs in patients with hematological malignancy are limited.
In a study conducted in an onco-hematology and bone marrow transplant unit in Brazil, the frequency of DRPs was found to be 135 (9%). A total of 135 interventions were performed by the pharmacist, and 90% were accepted. In a study conducted in France, 552 (12.6%) DRPs were found. Medication problems were mostly related to anti-infective agents, and oncologists' acceptance of interventions was high (96%). In a study conducted in Korea, a total of 1187 DRPs were identified in 438 (23.9%) of 1836 hospitalized patients with hematological malignancy. Pharmacists' interventions were accepted at a rate of 88.3%. In a study examining the clinical and economic impact of pharmacist interventions in an outpatient hematology-oncology department in France, a total of 1970 pharmacist interventions were performed, corresponding to an average of 3.5 pharmacist interventions/patient, and the total cost savings was €175,563. The clinical pharmacist's cost-benefit ratio was €3.7 for every €1 invested. To the best of our knowledge, no study has examined the determination of DRPs by a clinical pharmacist in a hematology service in Turkey. Therefore, this study aims to determine drug-related problems identified by a clinical pharmacist within the multidisciplinary team in patients with a diagnosis of hematological malignancy hospitalized in the hematology service of a university hospital in Turkey. Study design : this study was conducted prospectively between December 2022 and May 2023 in the hematology service of Suleyman Demirel University Research and Application Hospital in Isparta, Turkey. All patients over the age of 18 who were hospitalized in the hematology service for more than 24 h were included in the study. Only the first hospitalization of each patient was evaluated. Informed consent was obtained from all participants before they participated in the study.
Ethics Committee approval was obtained from the Suleyman Demirel University Faculty of Medicine Clinical Research Ethics Committee (Approval No: 274, Date: 28.09.2022). Setting : the service where the research was conducted had 15 beds, staffed by two physicians and assistant physicians. There was no stem cell transplant unit in the hospital. Isparta is a small city with a population of 449,777. The hospital and patient population of this study were smaller than those of hospitals in Turkey's metropolitan cities. Sample size : the sample size was calculated based on the approximate number of patients admitted to the hematology service during the previous 6 months. With the Raosoft sample size calculator, the minimum sample size was found to be 123, with a population size of 180, 5% margin of error, 95% confidence interval, and 50% distribution rate. Data collection : the clinical pharmacist in the study was an academic who did not routinely work in this hospital and was present at the hospital for this study. The clinical pharmacist performed comprehensive medication reviews of the patients and provided interventions. The patients' socio-demographic characteristics, history, diagnosis, comorbidities, medications used, laboratory test results, and interventions were recorded in the data collection form by the clinical pharmacist. The patients' data were obtained from the hospital database, patient files, and the patients themselves. In general, interventions were made through verbal communication. UpToDate® and Sanford Guide to Antimicrobial Therapy Mobile® software were used for the interventions. The Lexicomp Drug Interactions® tool, accessed via UpToDate®, was used to identify drug-drug interactions. According to Lexicomp Drug Interactions®, drug interactions fall into five categories: A - no known interaction; B - no action required; C - monitor therapy; D - consider therapy modification; X - avoid combination.
The presence of at least one interaction at risk level C, D, or X was defined as a potential drug-drug interaction, as these levels are clinically significant. Polypharmacy was defined as the use of 5 or more medications. DRPs were determined using the Pharmaceutical Care Network Europe (PCNE) classification, version 9.1, Turkish version. PCNE 9.1 has 3 primary domains for problems, 9 primary domains for causes, 5 primary domains for planned interventions, 3 primary domains for acceptance level (of interventions), and 4 primary domains for status of the problem. Problems include treatment effectiveness and safety, while causes include drug selection, drug form, dose selection, and treatment duration. Statistical analysis : statistical analysis was performed using SPSS 20. Continuous variables were expressed as median (interquartile range), and categorical variables were expressed as frequency and percentage. The normality of the data was analysed with the Kolmogorov-Smirnov test. The Mann-Whitney U test was used to compare continuous independent variables, and the chi-square test was used for categorical variables. The Pearson chi-square (> 25), continuity correction (5-25), and Fisher's exact test (< 5) were used according to the number of cases. Multiple logistic regression analysis was performed to determine the best predictor(s) of the presence of DRPs. Any variable whose univariable test had a p value < 0.10 was accepted as a candidate for the multivariable model, along with all variables of known clinical importance. Odds ratios, 95% confidence intervals, and Wald statistics for each independent variable were also calculated. A p-value smaller than 0.05 was considered statistically significant.
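The screening definitions described above (the Lexicomp risk categories and the polypharmacy cut-off) can be sketched as simple rules; the function names and the example patient below are hypothetical illustrations, not part of the study's software.

```python
# Lexicomp levels C (monitor), D (consider modification), X (avoid)
CLINICALLY_SIGNIFICANT = {"C", "D", "X"}

def has_potential_ddi(risk_levels: list[str]) -> bool:
    """True if at least one interaction is rated C, D, or X."""
    return any(level in CLINICALLY_SIGNIFICANT for level in risk_levels)

def is_polypharmacy(n_medications: int) -> bool:
    """Polypharmacy: use of 5 or more medications."""
    return n_medications >= 5

# Hypothetical patient on 6 medications with one category C interaction
print(has_potential_ddi(["B", "C"]), is_polypharmacy(6))  # -> True True
```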
This study included 140 patients.
More than half (55%) of the patients were male, and the median age was 65 (55-74) years. The median length of hospital stay was 8 (5-14) days. The median number of medications used by the patients was 6 (4-7). Polypharmacy was present in 67% of the patients. Older age, longer hospital stay, presence of acute lymphoblastic leukemia, presence of comorbidities, higher number of medications used, and the polypharmacy rate were statistically significantly higher in the DRP group than in the non-DRP group ( p < 0.05). Table shows the socio-demographic and clinical characteristics of the patients. At least one DRP was detected in 69 (49.3%) patients, and the total number of DRPs was 152. Possible or actual adverse drug events (96.7%) were the most common DRPs. The most important cause of DRPs was drug choice (94.7%), and the highest frequency within its subcategories was the combination of inappropriate drugs (93.4%). At least one C-risk potential drug-drug interaction was detected in 43 (30.7%) patients, at least one D-risk in 11 (7.9%) patients, and at least one X-risk in 6 (4.3%) patients. The clinical pharmacist performed 104 (68.4%) interventions directed at the prescriber, of which 100 (96.15%) were accepted and fully implemented. In total, 120 DRPs (78.9%) were resolved, and 28 DRPs (18.4%) were not possible or necessary to resolve. Table shows the classification of DRPs. Table shows some examples of interventions performed by the clinical pharmacist. Anticancer drugs such as venetoclax, lenalidomide, and dasatinib were involved in examples of potential drug-drug interactions. Table shows the adverse effects that occurred. Drug-related nephrotoxicity was the most common adverse effect. Table shows the results of the multivariate logistic regression analysis: the factors most predictive of the presence of DRPs. Polypharmacy and length of hospitalization were the most determinant factors in differentiating the groups with and without DRPs.
After adjustment for other factors, the likelihood of the presence of DRPs was a statistically significant 7.921-fold (95% CI: 3.033-20.689) higher in patients with polypharmacy compared to patients without polypharmacy ( p < 0.001). In addition, each 5-day increase in the duration of hospitalization was associated with a statistically significant increase in the likelihood of the presence of DRPs (OR = 1.476, 95% CI: 1.125-1.938, p = 0.005). In our study, 152 DRPs were identified and 120 DRPs were completely resolved. This reveals the importance of involving the clinical pharmacist in a multidisciplinary team. The most common DRPs in our study were possible or actual adverse drug events. Since the patient population consisted largely of elderly cancer patients, they were exposed to polypharmacy and drug-drug interactions. Additionally, this was not surprising, since the risk of exposure to possible or actual adverse drug events was high due to the anticancer medications they used. Adverse drug events varied across studies. While this rate was 28.6% in the study conducted by Kim et al. in a hematology service, it was 78.6% in the study conducted by Umar et al. in an oncology service. Since Kim et al.'s study was retrospective, the rate of possible or actual adverse effects may have been underestimated. Additionally, although both studies used the PCNE classification system, Kim et al.'s study did not mention which drug-drug interaction tool was used or which risk level was considered clinically significant. In our study, most of the causes of DRPs were related to drug selection and its subgroup, inappropriate combination of drugs. Drug-drug interaction rates in previous studies were 14.3%, 7.4%, 13.6%, and 73.2%, respectively. Differences in these rates may be due to polypharmacy rates, differences in healthcare services, and different drug-drug interaction software.
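As an internal consistency check, a Wald confidence interval is symmetric around the odds ratio on the log scale, so a reported interval can be re-derived from its own bounds; the helper below is a hypothetical illustration using only the adjusted odds ratio and confidence interval quoted above.

```python
import math

def implied_wald_ci(odds_ratio: float, lower: float, upper: float, z: float = 1.96):
    """Re-derive a Wald CI from the reported bounds: the standard error
    is recovered from the width of the interval on the log scale."""
    beta = math.log(odds_ratio)                         # log odds ratio
    se = (math.log(upper) - math.log(lower)) / (2 * z)  # implied standard error
    return math.exp(beta - z * se), math.exp(beta + z * se)

low, high = implied_wald_ci(7.921, 3.033, 20.689)
print(round(low, 2), round(high, 2))  # close to the reported 3.033 and 20.689
```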
Most of the potential drug-drug interactions in our study were at risk level C (monitor therapy). Therefore, in some drug-drug interactions that required monitoring, only the physician was informed, while in others an intervention was recommended to the prescriber. Drug-drug interactions were mostly related to supportive medications. In our study, anticancer drugs such as venetoclax, lenalidomide, bortezomib, and dasatinib had potential drug-drug interactions. Venetoclax had a potential drug-drug interaction with verapamil-trandolapril at risk level D. Verapamil-trandolapril is a CYP3A4 inhibitor, and concomitant use with venetoclax increases the concentration of venetoclax; it is recommended that the dose of venetoclax be reduced by 50%. There was also a potential drug-drug interaction at risk level X (avoid combination) between dasatinib and pantoprazole, as concomitant use of these two agents decreases the concentration of dasatinib. Bortezomib had potential drug-drug interactions at risk level C with antihypertensive drugs and with drugs used in the treatment of benign prostatic hyperplasia, such as tamsulosin. Bortezomib may have a blood pressure-lowering effect, so if used concomitantly with an antihypertensive drug or another drug that can lower blood pressure, the patient should be monitored for hypotension. In our study, there was also a potential drug-drug interaction between bortezomib and diltiazem at risk level C. Diltiazem, as a CYP3A4 inhibitor, may increase the bortezomib concentration. The bortezomib prescribing information emphasizes that in this case the patient should be monitored for toxicity and the dose reduced if necessary. In our study, there was a potential drug-drug interaction between lenalidomide and dexamethasone. When lenalidomide and dexamethasone are used together, venous thromboembolism prophylaxis should be considered, as the thrombogenic activity of lenalidomide may be increased.
Additionally, potential drug-drug interactions with antiemetics and opioid analgesics were frequently observed in our study. Identifying and monitoring drug-drug interactions, and intervening when necessary, are very important in cancer patients, and clinical pharmacists have important roles in this regard. Dose selection was the second most important cause of DRPs in our study. Renal dosage adjustment of drugs is particularly important in patients who develop acute kidney injury. Even if drugs are started at the correct dose, drug doses should be monitored and adjusted when necessary in cases of hepatic or renal dysfunction. In our study, antimicrobials were among the drugs that required dosage adjustment according to renal function. This was because, although infectious disease physicians started antimicrobials at the correct dose, these doses were sometimes not followed up later. Drug-induced nephrotoxicity was a common adverse event in our study, similar to other studies. Venetoclax-related hyperuricemia, hyperkalemia, and neutropenia were also observed in some patients. In a study investigating the incidence of venetoclax-related toxicity in British Columbia, hyperkalemia and hyperphosphatemia were observed in 9 patients (27%), and hyperuricemia was observed in 7 patients (21%). In the study by Koehler et al., venetoclax-related hyperkalemia (31%) and hyperuricemia (5%) were observed. In our study, one acute lymphoblastic leukemia patient had vincristine-induced neuropathy. Vincristine-induced neuropathy is a common side effect, with an incidence between 30 and 40%. The acceptance rate of the clinical pharmacist's interventions was high. In general, interventions regarding renal and hepatic dosing were accepted. The clinical pharmacist did not intervene in some cases that required monitoring (for example, category C drug interactions) and only informed the physician.
These cases were evaluated as problems that were not possible or necessary to resolve. One of the strengths of the study is that the acceptance rate of the interventions was higher than in other studies. Additionally, our study was the first study in Turkey to reveal DRPs in detail in this vulnerable patient population in the hematology service. One of the limitations of our study is that it was conducted in a single center and with a small number of patients. In addition, the clinical pharmacist in the study was an academician who did not work full-time in the hospital but was present at certain times of the day; this may have caused some DRPs not to be detected. In our study, a high frequency of DRPs and of possible or actual adverse drug events was detected in patients. Older age, longer hospital stay, presence of acute lymphoblastic leukemia, presence of comorbidities, higher number of medications used, and the polypharmacy rate were statistically significantly higher in the DRP group than in the non-DRP group. According to the results of multiple logistic regression analysis, polypharmacy and length of hospital stay were the most determining factors in distinguishing between the groups with and without DRPs. The most common DRP was related to possible or actual adverse drug events. The most common cause of DRPs was drug selection and its subgroup, inappropriate combination of drugs. Our study also shows the importance of including a clinical pharmacist in a multidisciplinary team for identifying and preventing DRPs in the hematology service.
Rethreading the needle: A novel molecular index of soil health (MISH) using microbial functional genes to predict soil health management | 83b119d5-1db9-4d6b-a768-76088c547652 | 11611206 | Microbiology[mh] | Jenkinson described the microbial community as the “eye of the needle through which all nutrients pass”. This pioneering work launched a new research emphasis to link soil microbes with key soil functions important for plant productivity and ecosystem health. With rapid advances in sequencing technologies using marker genes, numerous studies have linked shifts in the soil microbial community with land use and fundamental functions of healthy soils , including nutrient cycling , aggregate stability , carbon sequestration , and plant health and crop productivity through plant growth-promoting bacteria . Additionally, the microbiome provides defenses against environmental stresses like disease , drought , and flooding . These marker gene studies quantified microbial community composition metrics (e.g., alpha and beta diversity) and/or relative abundances of specific taxa. While information about the actions and interactions of microbes holds great promise to support future developments in sustainable agriculture , interpretations are hampered. A significant limitation of taxonomy is a decoupling of microbial function from composition such that the composition may change in response to external perturbations while function does not . This is an example of functional redundancy, or the ability of a broad range of taxa to perform similar metabolic functions . Consequently, shifts in community composition do not provide answers as to why or how the community function changes. This is exacerbated in large-scale studies in which community composition is more likely to differ by region even when function is similar . Conversely, microbiomes are capable of changing function without changing composition, as was shown in Bowles et al. 
in which enzyme activities changed more dramatically than the soil taxonomic community under different nutrient sources and rates. These caveats highlight the importance of characterizing function rather than taxonomy for a more complete understanding of microbial responses to the environment. Soil enzymes are critical catalysts responsible for biochemical reactions necessary to support soil life and numerous ecosystem functions. Enzymes that are known to differ across land use management strategies and disease states include (but are not limited to) carbohydrate hydrolases (e.g., β-glucosidase, β-N-acetylglucosaminidase, chitinase, catalase, invertase, etc.), sulfur cycling enzymes (e.g., arylsulfatase), phosphorus cycling enzymes (e.g., phosphomonoesterases, phosphodiesterase), and nitrogen cycling enzymes (e.g., amidohydrolases and enzymes involved in ammonia oxidation, protein decomposition, denitrification, and nitrogen fixation) . These enzymes have been studied for decades and are commonly measured because they are closely linked with nutrient cycling and mineralization. However, most soil enzyme assays are conducted using bench-scale approaches where a limited number of enzymes are evaluated. Furthermore, microbes can produce at least 2500 different enzymes . Thus, a broader and non-specific approach is necessary to effectively capture the diverse microbial functions that collectively enhance soil health. The appropriate methodology for assessment of functional gene profiles is not without controversy with recommendations ranging from biochemical to molecular techniques. Although biochemical enzymatic and targeted molecular techniques (e.g., gene-specific quantitative PCR) can provide information on how microbial communities respond to management and climate , these approaches require a known substrate and an individual assay for each target enzyme or gene .
For more inclusive molecular techniques, two widely available options are whole genome sequencing and metagenome prediction tools (e.g., PICRUSt2 , Tax4Fun ) using phylogenetic reconstruction. Sun et al. compared metagenome prediction tools (PICRUSt, PICRUSt2, Tax4Fun) with whole genome sequencing for a variety of sample types. Overall, they found strong Spearman correlations (r > 0.622) between gene relative abundances; however, significant differences between groups were more consistent with samples of less complexity (e.g., human metagenomes) versus higher complexity (e.g., soil metagenomes). Furthermore, Rodriguez and Konstantinidis estimated that soil samples require a 100-fold or more greater sequencing depth to achieve 95% coverage for a single soil (50 Gbp) versus a single human (0.5 Gbp) metagenome sample. To achieve this level of coverage, a single soil sample would require upwards of four Illumina MiSeq (~15 Gbp) sequencing runs or one Oxford Nanopore (~48 Gbp) sequencing run per sample. Assuming a consumable cost of $1,000 per Oxford Nanopore sequencing run, metagenome sequencing of 500 samples at 95% coverage would cost approximately $500,000. In contrast, marker gene (e.g., 16S rRNA) amplicon sequencing costs approximately $20–50 per sample . Assuming a consumable cost of $30 per sample, a 500-sample study using amplicon sequencing and phylogenetic reconstruction would cost $15,000. For wide-scale surveys, such as the one conducted here, whole genome sequencing efforts for complex soil samples are currently cost-prohibitive and unfeasible. As described above and outlined in Manter et al. , many studies have identified specific taxa or key functional genes that respond to management practices and are associated with healthy soils.
More recently, machine learning techniques have been used to identify specific taxa as predictors of traditional soil health indicators , but we are unaware of any studies that have developed a microbial functional index that represents their collective contribution to soil health. Thus, our objective was to develop and test a new soil health index based on the molecular characterization of microbial functional capacity. To tackle this complex goal, we sequenced over 500 soil samples previously used as part of a national soil health assessment and posed two interdependent questions: 1) Can the relative abundance of enzymes as estimated using PICRUSt2 predict individual soil health indicators using a random forest modeling approach? 2) Can we use this model to develop a new molecular index of soil health (MISH) that is sensitive to management at a national scale? Gene abundances derived from PICRUSt2 phylogenetic reconstruction have also been shown to correlate significantly with qPCR-based estimates, although such comparisons may be influenced by primer specificity. Since our goal was to assess the entire microbial functional capacity in soil samples and develop an untargeted molecular index of soil health, we utilized 16S rRNA amplicon sequencing and gene abundance estimates from PICRUSt2 as both the most comprehensive and cost-effective approach currently amenable to a large-scale national survey of soils.

Sample collection and DNA sequencing

Details on soil collection, management histories, geography, and soil health measurements are provided in . Briefly, subsamples from the 536 soil samples (0–15 cm) collected from 26 states in the U.S. representing annual cropland (n = 335), perennial cropland (n = 91), and rangeland (n = 110) systems were frozen and shipped to the U.S. Department of Agriculture, Agricultural Research Service, Fort Collins, CO.
DNA extraction, PCR amplification, and library preparation were conducted following protocols commonly used in our laboratory . Briefly, DNA was extracted from 0.25 g subsamples using the Qiagen DNeasy Powersoil Pro Kit (Qiagen, Germantown, MD) using a 10-min vortex lysis step and a fully automated Qiagen QIAcube robot. DNA quality was assessed using a Nanodrop 1000 (Thermo Scientific, Waltham, MA) and quantified fluorometrically with the Invitrogen dsDNA HS Assay Kit on a Qubit 2.0 (Life Technologies, Carlsbad, CA). The V3-V4 hypervariable region of the 16S rRNA gene was amplified and prepared for sequencing using the Illumina MiSeq Reagent Kit v3 using the following primers: forward 5′- TCGTCGGCAGCGTCAGATGTGTATAAGAGACAG CCTACGGGNGGCWGCAG-3′ and reverse 5′- GTCTCGTGGGCTCGGAGATGTGTATAAGAGACAG GACTACHVGGGTATCTAATCC-3′ with Illumina adapter sequences denoted in italics and underlined. The master mix consisted of 2 μL sample genomic DNA, 10 μL of 2X Maxima SYBR Green (Thermo Scientific, Waltham, MA, USA), and 2 μL each (10 μM) of forward and reverse primers for a total reaction volume of 20 μL. The PCR thermal cycling conditions were as follows: 95°C for 5 min; 30 cycles of 95°C for 40 s, 55°C for 120 s, and 72°C for 60 s; and a final extension at 72°C for 7 min. The resulting amplicons were purified using an in-house preparation of solid phase reversible immobilization (SPRI) magnetic beads. Samples were barcoded using Illumina Nextera XT index sequences added by a second PCR amplification. The master mix (50 μL) consisted of 5 μL of first-round PCR product, 25 μL of 2X Maxima SYBR Green (Thermo Scientific, Waltham, MA, USA), 10 μL water, and 5 μL each of forward and reverse indices. PCR reactions were amplified at 95°C for 3 min; 8 cycles of 95°C for 30 s, 55°C for 30 s, and 72°C for 30 s; followed by a final extension at 72°C for 5 min.
Following amplification, the PCR product was cleaned using SPRI beads and quantified using a Qubit fluorometer (Thermo Scientific, Waltham, MA, USA). Final library size and purity were verified using a TapeStation system (Agilent Technologies, Santa Clara, CA, USA) and the Kapa Biosystems kit (Sigma Aldrich, St Louis, MO, USA). The final pooled sample was diluted to 4 nM with ddH 2 O, denatured with 0.2 N NaOH, and diluted to a final loading concentration of 15 pM with HT1 buffer. Sequencing was performed on an Illumina MiSeq using the v3 600 cycle kit (Illumina, San Diego, USA) with a 25% PhiX spike-in control. DNA sequence processing consisted of primer removal from demultiplexed raw fastq files using Cutadapt v3.2 and inference of amplicon sequence variants using the default pipeline in DADA2 . All sequence variants were classified using the default NCBI-linked 16S rRNA reference database available from Emu v3.0.0 ( https://github.com/treangenlab/emu ) using minimap2 v2.22 .

Functional profiling

The bacterial community functional profiles were created using the metagenome prediction pipeline, PICRUSt2 . The full pipeline (picrust2_pipeline.py) was run with the representative sequences and biom tables for each sequencing run with the “--stratified” (to create stratified tables at all steps) and “--skip_norm” (to skip normalizing sequence abundances by predicted marker gene copy numbers) parameters. Additionally, hidden state prediction (hsp.py) was used to predict 16S copy numbers with the “-n” parameter so that Nearest-Sequenced Taxon Index (NSTI) values were calculated. The stratified metagenome output (pred_metagenome_contrib.tsv.gz) and the predicted 16S copy numbers (marker_predicted_and_nsti.tsv.gz) were imported into R. The predicted 16S copy numbers were used to correct bacterial abundances, and relative abundances were calculated using the general equation taxon_relative_abundance / 100 * genome_function_count / genome_16S_count.
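The copy-number correction above reduces to a one-line calculation per taxon–gene pair. A minimal Python sketch (the study's pipeline used R; the function and variable names here are illustrative, not from the study's code):

```python
# Minimal sketch (Python; the original analysis was done in R) of the
# copy-number-corrected relative abundance equation above.
def gene_relative_abundance(taxon_rel_abund_pct, genome_function_count,
                            genome_16s_count):
    """Per-taxon contribution to a functional gene's relative abundance.

    taxon_rel_abund_pct   -- taxon relative abundance in percent (0-100)
    genome_function_count -- predicted copies of the gene in the taxon's genome
    genome_16s_count      -- predicted 16S rRNA copies in the taxon's genome
    """
    return taxon_rel_abund_pct / 100 * genome_function_count / genome_16s_count

# A taxon at 5% abundance carrying 2 gene copies and 4 predicted 16S copies
# contributes 5/100 * 2/4 = 0.025 to that gene's relative abundance.
contribution = gene_relative_abundance(5.0, 2, 4)
```

Summing these contributions across all taxa in a sample yields the per-gene (EC) relative abundances used as model features.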
Functional gene relative abundances were calculated for each gene associated with an enzyme commission number (EC), then converted to a feature table of ECs per sample and merged with metadata.

Previous indicator ratings and management indices used for model development and testing

We developed and tested our new molecular-based index against the three soil health metrics from our previous national assessment: 1) individual soil health indicator ratings; 2) an overall soil health index; and 3) our Soil Health Management Index. The first two were developed using a structural equation model (SEM) that accounted for differences in climate and texture, thus enabling comparison at a national level, and are described in detail in Deel et al. . Briefly, indicator ratings were calculated using the embedded SEM regressions to predict indicator values at each location based on clay content and climate zone, and the residuals (observed–predicted) were converted into a rating using the empirical distribution function in R . Soil health indicator ratings were calculated for two physical properties (wet aggregate stability [AggStab], available water capacity [WaterCap]) and four biological properties (soil organic matter [SOM], active carbon [ActiveC], autoclaved-citrate extractable protein [ACE], and soil respiration [Resp]). Details of all protocols for the soil health indicators are provided by Schindelbeck and Moebius-Clune . These indicators are among those that have been evaluated for use in “standardized, rapid, and quantitative assessments of soil health based on relevance to key soil processes [and] response to management” . In addition, this SEM combined the contribution of each soil health indicator into a single latent variable of soil health, which we refer to as SEMWISE (Structural Equation Model for Well-Informed Soil Evaluation) .
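The residual-to-rating conversion described above (R's ecdf applied to SEM residuals) can be sketched in a few lines. This is an illustrative Python translation with made-up residual values, not the study's code:

```python
# Illustrative Python translation of the residual-to-rating step
# (the study used R's ecdf). The residual values below are made up.
def ecdf_ratings(residuals):
    """Convert SEM residuals (observed - predicted) to 0-100 percentile ratings."""
    n = len(residuals)
    sorted_r = sorted(residuals)
    # Rating = fraction of residuals <= x (right-continuous empirical CDF), x 100
    return [100 * sum(1 for s in sorted_r if s <= x) / n for x in residuals]

ratings = ecdf_ratings([-0.8, -0.1, 0.0, 0.4, 1.2])
# The most positive residual (a site performing best relative to its
# climate/texture expectation) receives the highest rating, 100.
```

Because ratings are percentiles of the sampled population rather than raw values, sites can be compared across climates and textures.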
Similar to the individual ratings, we used the empirical distribution function in R to transform these values into an overall SEMWISE rating that ranged from 0–100. The overall ratings were then grouped into five equidistant bins (very low: 0–20, low: 20–40, med: 40–60, high: 60–80, and very high: 80–100) to reflect soil health status across the samples. Our Soil Health Management Index (SHMI) was designed to translate the influence of multiple management practices into a single index based on three soil health principles . These principles are increasing plant biodiversity, minimizing soil disturbance, and maximizing living roots and soil cover. The SHMI score is then grouped into five bins, with lower values representing management systems of low soil health (e.g., monocultures with intensive tillage practices) and higher values approaching a management system represented by all soil health principles (e.g., perennial grazing lands with a diversity of plant species or cover crops with no-till and/or diverse crop rotations). Distilling management into a single index allows for the comparison of management across a wide range of agricultural systems that differ in their management histories and captures the influence of multiple practices (e.g., cover cropping and tillage).

Molecular index development and testing

All analyses were performed in R v4.4.0 . We used Extreme Gradient Boosting decision trees (XGBoost) to model the relationship between microbial functional gene profiles (i.e., EC relative abundances) and the SEMWISE-derived soil health indicator ratings. XGBoost has been shown to perform well on microbiome data . Each feature (EC relative abundance) was first scaled between 0 and 1 using the vegan package , and only features present in more than one-third of the samples were included in the XGBoost model. The dataset was then randomly split into training (80%) and test (20%) sets stratified by climate zone using the rsample package .
Models were run 25 times using independent splits to account for lucky and unlucky splits. For each run, model tuning was based on three-fold cross-validation of training data combined with Bayesian optimization to select the best hyperparameters (eta, gamma, max_depth, min_child_weight, lambda, alpha) using AUC as the evaluation criterion. To compare accuracies of all model types, R 2 values between observed and predicted values for all 25 models and all indicators were graphed as a distribution. A linear model between observed and predicted values was generated for each model using the appropriate test datasets . R packages used for modeling include xgboost , caTools , and caret . For each soil health indicator rating, the top enzymes were selected (i.e., enzymes present in 13 or more of the 25 independent model runs, ranked by their average gain across all model runs where the enzyme was present) to create a molecular index of soil health (MISH). Any enzyme (normalized relative abundance) that exhibited a negative Spearman correlation with the indicator of interest was first inverted (subtracted from 1), and then a weighted mean was calculated using the scaled relative abundances with average gains as weights to create individual MISH indicator ratings. An overall MISH rating was created similarly using enzymes from the previous step. The number of enzymes to select from each rating for the overall MISH rating was determined by creating the MISH score with variable numbers of enzymes (from 10 to 100), running a regression between the MISH indicator rating and the SEMWISE indicator rating, and observing when the R 2 and average AIC reached a maximum. If a feature was common between two or more soil health indicators, the maximum gain was used for weighting.
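The inversion-and-weighting step can be written compactly. The following Python sketch (the study used R) uses hypothetical gains and abundances; the EC numbers and values are for illustration only:

```python
# Illustrative sketch (Python; hypothetical values, not the study's code) of the
# MISH aggregation: invert negatively correlated enzymes, then take the
# gain-weighted mean of scaled (0-1) abundances.
def mish_rating(scaled_abundances, gains, negatively_correlated):
    """Gain-weighted mean of scaled enzyme relative abundances.

    scaled_abundances     -- dict of EC number -> scaled relative abundance (0-1)
    gains                 -- dict of EC number -> average XGBoost gain (weight)
    negatively_correlated -- set of ECs with negative Spearman correlation,
                             inverted (1 - x) before averaging
    """
    num = den = 0.0
    for ec, x in scaled_abundances.items():
        if ec in negatively_correlated:
            x = 1.0 - x
        num += gains[ec] * x
        den += gains[ec]
    return num / den

# Hypothetical three-enzyme example: 2*0.8 + 1*0.6 + 1*(1-0.3) = 2.9, / 4 = 0.725
score = mish_rating(
    {"EC 1.5.5.2": 0.8, "EC 1.17.4.1": 0.6, "EC 3.2.1.4": 0.3},
    {"EC 1.5.5.2": 2.0, "EC 1.17.4.1": 1.0, "EC 3.2.1.4": 1.0},
    negatively_correlated={"EC 3.2.1.4"},
)
```

Weighting by average gain means enzymes that contributed most to model accuracy dominate the rating, while the inversion keeps all contributions oriented so that higher values indicate better soil health.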
The ability of the MISH ratings to capture differences associated with soil health indicators or management was assessed by comparing the distribution of ratings across indicator bins (very low: 0–20, low: 20–40, med: 40–60, high: 60–80, and very high: 80–100) using the non-parametric Kruskal-Wallis test with pairwise comparisons using Wilcoxon rank sum tests with FDR adjustment. All bins were constructed to account for climate and textural differences to enable comparisons across regions, soil types, and individual management practices. The top enzymes for each indicator were assigned to KEGG pathways , which provides information on the function of each enzyme. All enzyme names and classifications were extracted from the ExplorEnz database, which is the approved International Union of Biochemistry and Molecular Biology Enzyme nomenclature and classification list .
Data quality and coverage

A total of eight MiSeq runs were conducted resulting in 7,332,013 high-quality sequencing reads, approximately 2 Gbp of sequence data, and an average sequencing depth of 17,262 reads per sample. For the entire dataset, we observed a total of 6,733 bacterial species and a total of 2,433 bacterial enzymes after phylogenetic reconstruction with PICRUSt2. A collector’s curve analysis showed that approximately 450 and 50 samples were required to reach 95% coverage of the total taxonomic and enzymatic richness, respectively . Furthermore, the enzyme collector’s curve flattened while the species curve did not, indicating that we had likely captured the full enzyme community but not all species. In a typical sample, an average of 280 species and 1,704 enzymes were present, representing 4% and 70% of the total potential richness, respectively. In addition, there were 1,547 enzymes present in over 80% of samples, while most species were present in less than 20% of samples . Since species are often unique to smaller subsets of samples, as shown here, more samples are required to create an accurate model. Furthermore, it is uncertain whether unique species perform the same functions (i.e., functional redundancy). By using an enzyme approach, we circumvent the need to characterize functional redundancy, and we utilize the universally present enzymes to create a widely applicable model for measuring soil health. However, PICRUSt2 also has potential limitations. First, due to its DNA-based nature, it is not a direct measure of enzyme presence or activity but rather a measure of functional gene abundance or capacity. This limitation is true for any DNA-based marker gene or metagenome sequencing project, as activity will ultimately depend upon gene expression and protein activity.
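The collector's-curve analysis described above accumulates richness as samples are added in random order and reads off the sample count needed to reach a coverage target. A small Python sketch on toy data (the function names and data are illustrative, not the study's code):

```python
import random

# Illustrative sketch (Python; toy data, not the study's code) of a collector's
# curve: mean cumulative feature richness as samples are added in random order.
def collectors_curve(sample_feature_sets, n_permutations=100, seed=0):
    """Return mean cumulative richness after 1..n samples."""
    rng = random.Random(seed)
    n = len(sample_feature_sets)
    totals = [0.0] * n
    for _ in range(n_permutations):
        order = sample_feature_sets[:]
        rng.shuffle(order)
        seen = set()
        for k, feats in enumerate(order):
            seen |= feats
            totals[k] += len(seen)
    return [t / n_permutations for t in totals]

def samples_for_coverage(curve, total_richness, coverage=0.95):
    """Smallest sample count whose mean richness reaches the coverage target."""
    target = coverage * total_richness
    for k, richness in enumerate(curve, start=1):
        if richness >= target:
            return k
    return None

# Toy example: four samples, each contributing one unique feature, so >= 95%
# coverage of the 4 features requires all 4 samples.
curve = collectors_curve([{"a"}, {"b"}, {"c"}, {"d"}])
needed = samples_for_coverage(curve, total_richness=4)  # -> 4
```

A curve that flattens well before the last sample (as the enzyme curve did here) means the target coverage is reached with far fewer samples than were collected.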
Second, there is some uncertainty in applying a phylogenetic approach using a single marker gene (i.e., 16S rRNA) ; however, previous studies have shown that PICRUSt2 can be highly correlated with metagenome sequencing at a fraction of the cost. Overall, previous studies and the data shown in highlight the robustness of using enzymes to develop a soil health index due to their ubiquity across a variety of soils and the cost-effectiveness of the approach.

Microbial functional data predicts soil health indicators

Because our goal was to develop a comprehensive index, we ran each model 25 times and compiled the results; random forest variable importance measures are biased , and repeating the model increases confidence in the enzymes identified as important to soil health indicators. A compilation of the 25 models predicting soil health indicator ratings from the PICRUSt2 functional genes (enzyme relative abundances) was developed for each of the six soil health indicators. Linear regression p-values of predicted versus observed values for the test sets were significant (p < 0.001) for all soil health indicators. Mean adjusted R 2 values for each soil health indicator rating ranged from 0.221 to 0.337, and root mean square errors (RMSE) ranged from 0.239 to 0.251 . ACE protein (0.337) and SOM (0.310) measurements had the highest mean R 2 values, with WaterCap (0.221) and Resp (0.223) the lowest. For the 25 independent model runs, the average number of enzymes retained in the models ranged from a low of 359 (AggStab) to a high of 554 (SOM) . There was significant variation in the number of enzymes retained in each random forest model depending on the train/test data split. For example, the number of enzymes ranged from 8 to 1164 for ACE protein . The enzymes selected also varied between each model run, with a range of 0–7 enzymes present in all 25 model runs for each indicator .
Due to the 25 model repetitions and the feature selection implemented by XGBoost, many enzymes were not present in all models. To compile these results, a list of potentially “important” enzymes was created, including enzymes that were present in more than half of the model runs and had the highest impact on model accuracy or gain. These enzymes included ones both positively and negatively correlated with the indicators, and their relationships with the indicators may not necessarily be linear. Most enzymes were not included in the models, with approximately 15–23% of the total number of identified enzymes included in any single model run. The enzymes with the top ten average gains for each soil health indicator tended to be unique to each indicator ( and ), and they spanned a range of KEGG pathways ( and ) and enzyme classes . The top 50 enzymes for each indicator were compiled. Since some of the top enzymes were common between indicators, this resulted in a final list of 235 unique enzymes. Of these 235 enzymes, 22.6% were associated with “Carbohydrate metabolism”, 20.9% with “Amino acid metabolism”, 9.8% with “Energy metabolism” or “Metabolism of cofactors and vitamins”, 9.4% with “Xenobiotics biodegradation and metabolism”, 6.8% with “Biosynthesis of other secondary metabolites”, and 6.4% with “Lipid metabolism” or “Nucleotide metabolism”; all other KEGG pathways were associated with less than 5% of the enzymes. For each indicator, the distribution of enzymes and their mapped KEGG pathways differed ( and ). The pathways represented by the 50 important enzymes for the SOM rating were the most diverse , likely because organic matter is complex and requires many bacterial enzymes to break it down. Carbohydrate metabolism was among the most common pathways for all six indicators. Many of these enzymes (a full list is shown in ) are dehydrogenases, which are known to oxidize SOM as part of the microbial respiration pathway .
This may explain why the carbohydrate metabolism pathway, closely followed by energy metabolism, have the highest proportion of enzymes in the Resp rating . Carbohydrate metabolism was also the most abundant pathway in the ACE and AggStab ratings . Amino acid metabolism was the next most abundant pathway of the top enzymes. Manipulating amino acid metabolism has been shown to improve crop nitrogen (N) use efficiency through regulating N uptake, assimilation, and remobilization efficiencies . Amino acids are a key mobilizable source of N for plants in which the N is made available by extracellular microbial enzymes through deamination and the release of ammonium N . This influx of N can then influence soil aggregation, either by increasing or decreasing its stability. Additionally, amino acid metabolism has been shown to maintain energetic balance by coordinating with carbohydrate metabolism , the most abundant pathway. Amino acid metabolism had the highest relative abundance in the ActiveC and WaterCap ratings . Active C has been shown to be associated with soil N availability , and a low C:N ratio is needed to store and maintain N in the soil organic matter . Water availability has been shown to affect amino acid composition , and the associated enzymes identified here may be targeted in future studies to better understand the role that microbes play in this relationship. Two enzymes that were within the top predictive enzymes of several indicator ratings are notable. EC 1.5.5.2, a proline dehydrogenase involved in amino acid metabolism, increased with ACE, Active C, SOM, and WaterCap ratings . Additionally, EC 1.17.4.1, a reductase involved in DNA repair, significantly increased with ACE, Active C, and SOM ratings . Both enzymes are likely constitutive, or always present in the soil and have consequently been ignored in studies relating microbial enzymes and soil health. 
However, their positive correlation with several soil health indicators warrants further investigation. The top enzymes showed both positive and negative correlations with the soil health indicators . It is beyond the scope of this paper to determine if the positive enzymes are responsible for building soil health or responding to the higher levels of C and N (e.g., SOM) increasing microbial growth and survival and thus enzyme abundances. However, these enzymes are the most consistent and important features for predicting the various soil health indicators and may be key for developing indices for predicting soil health from a single low-cost 16S rRNA amplicon analysis. Molecular index of soil health Rather than supplying a single machine learning model as the tool for measuring soil health, we chose to develop a comprehensive molecular index that incorporated results from multiple (25) models. Random forest variable importance measures are biased such that the split during tree generation can change which features are identified as most important. By running 25 models, our goal was to account for this bias and identify enzymes that are consistently important to soil health. These results could then be combined into a final, simplified index that includes few, but important, enzymes, and still has accurate prediction of soil health. This would additionally allow for the index to be readily applied across other datasets. To compile important enzymes into a molecular index of soil health (MISH), the optimal number of enzymes to incorporate was first selected using average R 2 and Akaike Information Criterion (AIC). Although all regressions between the MISH indicator rating and SEMWISE indicator ratings were significant, average R 2 and AIC appeared to reach a maximum at 50 enzymes . We chose to create individual indicator indices as well as an overall MISH index to determine which is more predictive across a variety of climates. 
Therefore, the top 50 important enzymes from each soil health indicator rating were compiled into individual indicator ratings and an overall MISH rating in which all enzymes were combined, resulting in a total of 235 unique enzymes. For all six SEMWISE indicator ratings, MISH indicator ratings were significantly different between bins based on a non-parametric Kruskal-Wallis test (p < 0.001) . MISH indicator ratings tended to significantly increase (p < 0.05) with successive SEMWISE indicator bins based on Wilcoxon rank sum tests with FDR adjustment. For each indicator, a single MISH indicator rating derived from only 50 commonly occurring enzymes was sufficient to predict the six soil health indicators from a wide range of agricultural systems across the U.S. Similar to the individual ratings, we compared the MISH overall rating to binned SEMWISE ratings . The MISH overall ratings were significantly higher with each successive SEMWISE bin in the very low, low, med, and high categories but not the very high bin. One of the difficulties in conducting national-scale assessments of soil health is due to differing combinations of management practices that may co-exist in time and space. For example, two sites may both have cover crops but one is under no-till and the other conventional tillage and/or sites may differ in the diversity of crops. This complexity makes it difficult to compare ratings across sites. To address these complexities, we previously introduced a soil health management index (SHMI) that combines soil health management practices into a single index . The SHMI bins represent combinations of practices that manage soil health through the principles to minimize soil disturbance, increase plant diversity, and provide continuous soil cover and living roots . This binning procedure resulted in different land uses typically assigned to specific bins. 
For example, the very high bin is represented by rangeland, perennial cropland dominated in the high bin, and annual cropland was spread among the very low to high bins . Typical annual cropland management systems for each SHMI bin are as follows: conventionally tilled, monoculture cropping systems (very low); no-till monoculture cropping systems (low); conventionally tilled with cover crops or diversified crop rotations (medium); and no-till plus cover crops and/or diversified crop rotations (high). The SHMI scores were calculated based on a two- to three-year management history and some of the samples experienced a transition in management over that period (i.e., annual cropland converted to perennial cropland or green dots in the low SHMI bin), which resulted in some of the inconsistency in SHMI rankings across land use types. Across the entire national dataset, MISH overall ratings significantly increased (p ≤ 0.05) with each successive SHMI bin, except the very high bin . The overall congruence between the two measures suggests that as soil health management practice adoption increases, the MISH overall score increases. This trend is similar to those seen for the individual indicator ratings , suggesting that individual indicator ratings are not more accurate, and an overall index is suitable for comparison across locations. In its current form, the SHMI rating does not address the length of time a management practice has been in place. Soil health indicators can vary in their response times to management changes. For example, converting from conventional to no-tillage can take up to 20 years to reach a new SOM equilibrium . Efforts are currently underway to improve our SHMI rating to account for time, which should further improve the relationship between MISH and SHMI at the national scale. 
A total of eight MiSeq runs were conducted resulting in 7,332,013 high-quality sequencing reads, approximately 2 Gbp of sequence data, and an average sequencing depth of 17,262 reads per sample. For the entire dataset, we observed a total of 6,733 bacterial species and a total of 2,433 bacterial enzymes after phylogenetic reconstruction with PICRUSt2. A collector’s curve analysis showed that approximately 450 and 50 samples were required to reach 95% coverage of the total taxonomic and enzymatic richness, respectively . Furthermore, the enzyme collector’s curve flattened while the species curve did not, indicating that we had likely captured the full enzyme community but not species. In a typical sample, an average of 280 species and 1,704 enzymes were present, representing 4% and 70% of the total potential richness. In addition, there were 1,547 enzymes present in over 80% of samples, while most species were present in less than 20% of samples . Since species are often unique to smaller subsets of samples, as shown here, more samples are required to create an accurate model. Furthermore, it is uncertain whether unique species perform the same functions (i.e., functional redundancy). By using an enzyme approach, we circumvent the need to characterize functional redundancy, and we utilize the universally present enzymes to create a widely applicable model for measuring soil health. However, PICRUSt2 also has potential limitations. First, due to its DNA-based nature, it is not a direct measure of enzyme presence or activity but rather a measure of functional gene abundance or capacity. This limitation is true for any DNA-based marker gene or metagenome sequencing project as activity will ultimately depend upon gene expression and protein activity. 
Second, there is some uncertainty in applying a phylogenetic approach using a single marker gene (i.e., 16S rRNA); however, previous studies have shown that PICRUSt2 can be highly correlated with metagenome sequencing, at a fraction of the price. Overall, previous studies and the data shown here highlight the robustness of using enzymes to develop a soil health index, given their ubiquity across a variety of soils and their cost-effectiveness. Since the goal is to develop a comprehensive index, we ran each model 25 times and compiled the results. This is because random forest variable importance measures are biased, and repeating the model increases confidence in the enzymes identified as important to soil health indicators. A compilation of the 25 models predicting soil health indicator ratings from the PICRUSt2 functional genes (enzyme relative abundances) was developed for each of the six soil health indicators. Linear regression p-values of predicted versus observed values for the test sets were significant (p < 0.001) for all soil health indicators. Mean adjusted R² values for each soil health indicator rating ranged from 0.221 to 0.337 and root mean square errors (RMSE) ranged from 0.239 to 0.251. ACE protein (0.337) and SOM (0.310) measurements had the highest mean R² values, with WaterCap (0.221) and Resp (0.223) the lowest. For the 25 independent model runs, the average number of enzymes retained in the models ranged from a low of 359 (AggStab) to a high of 554 (SOM). There was significant variation in the number of enzymes retained in each random forest model depending on the train/test data split. For example, the number of enzymes ranged from 8 to 1164 for ACE protein. The enzymes selected also varied between model runs, with 0–7 enzymes present in all 25 model runs for each indicator. Due to the 25 model repetitions and the feature selection implemented by XGBoost, many enzymes were not present in all models.
To compile these results, a list of potentially “important” enzymes was created, including enzymes that were present in more than half of the model runs and had the highest impact on model accuracy, or gain. These enzymes were both positively and negatively correlated with the indicators, and the relationships are not necessarily linear. Most enzymes were not included in the models, with approximately 15–23% of the total number of identified enzymes included in any single model run. The enzymes with the top ten average gains for each soil health indicator tended to be unique to each indicator ( and ), and they spanned a range of KEGG pathways ( and ) and enzyme classes. The top 50 enzymes for each indicator were compiled. Since some of the top enzymes were common between indicators, this resulted in a final list of 235 unique enzymes. Of these 235 enzymes, 22.6% were associated with “Carbohydrate metabolism”, 20.9% with “Amino acid metabolism”, 9.8% with “Energy Metabolism” or “Metabolism of cofactors and vitamins”, 9.4% with “Xenobiotics biodegradation and metabolism”, 6.8% with “Biosynthesis of other secondary metabolites”, and 6.4% with “Lipid metabolism” or “Nucleotide metabolism”; all other KEGG pathways were associated with less than 5% of the enzymes. For each indicator, the distribution of enzymes and their mapped KEGG pathways differed ( and ). The pathways represented by the 50 important enzymes for the SOM rating were the most diverse, likely because organic matter is complex and requires many bacterial enzymes to break it down. Carbohydrate metabolism was among the most common pathways for all six indicators. Many of these enzymes (a full list is shown in ) are dehydrogenases, which are known to oxidize SOM as part of the microbial respiration pathway. This may explain why the carbohydrate metabolism pathway, closely followed by energy metabolism, has the highest proportion of enzymes in the Resp rating.
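The compilation rule just described — retain enzymes present in more than half of the 25 runs and rank them by average gain — can be sketched in a few lines. The per-run importance dictionaries below are hypothetical stand-ins for the XGBoost gain output, not the study's data:

```python
from collections import defaultdict

def compile_important(run_importances, top_k=2):
    """run_importances: list of {enzyme: gain} dicts, one per model run.
    Keep enzymes present in more than half the runs; rank them by mean gain
    over the runs in which they appear; return the top_k."""
    counts = defaultdict(int)
    gains = defaultdict(float)
    for run in run_importances:
        for enzyme, gain in run.items():
            counts[enzyme] += 1
            gains[enzyme] += gain
    threshold = len(run_importances) / 2
    kept = [(gains[e] / counts[e], e) for e in counts if counts[e] > threshold]
    return [e for _, e in sorted(kept, reverse=True)][:top_k]

# Three illustrative runs (real studies would use 25):
runs = [
    {"EC 1.5.5.2": 0.30, "EC 1.17.4.1": 0.10},
    {"EC 1.5.5.2": 0.20, "EC 3.2.1.4": 0.50},
    {"EC 1.5.5.2": 0.25, "EC 1.17.4.1": 0.30},
]
print(compile_important(runs))  # ['EC 1.5.5.2', 'EC 1.17.4.1']
```

Note that EC 3.2.1.4 is dropped despite its high single-run gain because it appears in only one of three runs — exactly the stability filter the repeated-run design is meant to provide.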
Carbohydrate metabolism was also the most abundant pathway in the ACE and AggStab ratings. Amino acid metabolism was the next most abundant pathway among the top enzymes. Manipulating amino acid metabolism has been shown to improve crop nitrogen (N) use efficiency by regulating N uptake, assimilation, and remobilization efficiencies. Amino acids are a key mobilizable source of N for plants; the N is made available by extracellular microbial enzymes through deamination and the release of ammonium N. This influx of N can then influence soil aggregation, either increasing or decreasing its stability. Additionally, amino acid metabolism has been shown to maintain energetic balance by coordinating with carbohydrate metabolism, the most abundant pathway. Amino acid metabolism had the highest relative abundance in the ActiveC and WaterCap ratings. Active C has been shown to be associated with soil N availability, and a low C:N ratio is needed to store and maintain N in the soil organic matter. Water availability has been shown to affect amino acid composition, and the associated enzymes identified here may be targeted in future studies to better understand the role that microbes play in this relationship. Two enzymes that were among the top predictive enzymes of several indicator ratings are notable. EC 1.5.5.2, a proline dehydrogenase involved in amino acid metabolism, increased with the ACE, Active C, SOM, and WaterCap ratings. Additionally, EC 1.17.4.1, a reductase involved in DNA repair, significantly increased with the ACE, Active C, and SOM ratings. Both enzymes are likely constitutive (i.e., always present in the soil) and have consequently been ignored in studies relating microbial enzymes and soil health. However, their positive correlation with several soil health indicators warrants further investigation. The top enzymes showed both positive and negative correlations with the soil health indicators.
It is beyond the scope of this paper to determine whether the positively correlated enzymes are responsible for building soil health or are merely responding to higher levels of C and N (e.g., SOM), which increase microbial growth and survival and thus enzyme abundances. However, these enzymes are the most consistent and important features for predicting the various soil health indicators and may be key for developing indices that predict soil health from a single low-cost 16S rRNA amplicon analysis. Rather than supplying a single machine learning model as the tool for measuring soil health, we chose to develop a comprehensive molecular index that incorporated results from multiple (25) models. Random forest variable importance measures are biased such that the split during tree generation can change which features are identified as most important. By running 25 models, our goal was to account for this bias and identify enzymes that are consistently important to soil health. These results could then be combined into a final, simplified index that includes few, but important, enzymes while still predicting soil health accurately. This would additionally allow the index to be readily applied across other datasets. To compile important enzymes into a molecular index of soil health (MISH), the optimal number of enzymes to incorporate was first selected using average R² and the Akaike Information Criterion (AIC). Although all regressions between the MISH indicator ratings and SEMWISE indicator ratings were significant, average R² and AIC appeared to reach an optimum at 50 enzymes. We chose to create individual indicator indices as well as an overall MISH index to determine which is more predictive across a variety of climates. Therefore, the top 50 important enzymes from each soil health indicator rating were compiled into individual indicator ratings and an overall MISH rating in which all enzymes were combined, resulting in a total of 235 unique enzymes.
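Compiling the per-indicator top-50 lists into the overall MISH enzyme set amounts to a union of lists. A toy illustration follows; only EC 1.5.5.2 and EC 1.17.4.1 are named in the text, and all list memberships here are invented. It shows how overlapping top lists collapse to a smaller unique set, analogous to the paper's 6 × 50 = 300 entries collapsing to 235:

```python
# Hypothetical top-enzyme lists per indicator (memberships are illustrative).
top_enzymes = {
    "ACE":     ["EC 1.5.5.2", "EC 1.17.4.1", "EC 3.2.1.4"],
    "SOM":     ["EC 1.5.5.2", "EC 1.17.4.1", "EC 2.7.1.1"],
    "ActiveC": ["EC 1.5.5.2", "EC 4.1.1.31", "EC 2.7.1.1"],
}

def compile_mish_enzymes(per_indicator):
    """Order-insensitive union of the per-indicator top-enzyme lists."""
    unique = set()
    for enzymes in per_indicator.values():
        unique.update(enzymes)
    return sorted(unique)

mish_enzymes = compile_mish_enzymes(top_enzymes)
print(len(mish_enzymes))  # 9 entries across three lists collapse to 5 unique enzymes
```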
For all six SEMWISE indicator ratings, MISH indicator ratings were significantly different between bins based on a non-parametric Kruskal-Wallis test (p < 0.001). MISH indicator ratings tended to increase significantly (p < 0.05) with successive SEMWISE indicator bins based on Wilcoxon rank sum tests with FDR adjustment. For each indicator, a single MISH indicator rating derived from only 50 commonly occurring enzymes was sufficient to predict the six soil health indicators from a wide range of agricultural systems across the U.S. Similar to the individual ratings, we compared the MISH overall rating to binned SEMWISE ratings. The MISH overall ratings were significantly higher with each successive SEMWISE bin in the very low, low, medium, and high categories, but not the very high bin. One of the difficulties in conducting national-scale assessments of soil health is the differing combinations of management practices that may co-exist in time and space. For example, two sites may both use cover crops, but one under no-till and the other under conventional tillage; sites may also differ in the diversity of crops. This complexity makes it difficult to compare ratings across sites. To address these complexities, we previously introduced a soil health management index (SHMI) that combines soil health management practices into a single index. The SHMI bins represent combinations of practices that manage soil health through the principles of minimizing soil disturbance, increasing plant diversity, and providing continuous soil cover and living roots. This binning procedure resulted in different land uses typically being assigned to specific bins. For example, the very high bin is dominated by rangeland, perennial cropland dominates the high bin, and annual cropland is spread among the very low to high bins.
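The pairwise Wilcoxon comparisons between successive bins described above yield a family of p-values that must be corrected for multiple testing. The text specifies only "FDR adjustment"; assuming the common Benjamini-Hochberg procedure, a self-contained sketch is:

```python
def benjamini_hochberg(pvals):
    """Benjamini-Hochberg FDR-adjusted p-values (monotone step-up)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_min = 1.0
    for rank in range(m, 0, -1):  # walk from the largest p-value to the smallest
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * m / rank)
        adjusted[i] = running_min
    return adjusted

print(benjamini_hochberg([0.01, 0.04, 0.03, 0.005]))  # ≈ [0.02, 0.04, 0.04, 0.02]
```

An adjusted p-value below 0.05 then corresponds to significance at a 5% false discovery rate, matching the p < 0.05 threshold reported for the bin-to-bin increases.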
Typical annual cropland management systems for each SHMI bin are as follows: conventionally tilled, monoculture cropping systems (very low); no-till monoculture cropping systems (low); conventionally tilled with cover crops or diversified crop rotations (medium); and no-till plus cover crops and/or diversified crop rotations (high). The SHMI scores were calculated based on a two- to three-year management history and some of the samples experienced a transition in management over that period (i.e., annual cropland converted to perennial cropland or green dots in the low SHMI bin), which resulted in some of the inconsistency in SHMI rankings across land use types. Across the entire national dataset, MISH overall ratings significantly increased (p ≤ 0.05) with each successive SHMI bin, except the very high bin . The overall congruence between the two measures suggests that as soil health management practice adoption increases, the MISH overall score increases. This trend is similar to those seen for the individual indicator ratings , suggesting that individual indicator ratings are not more accurate, and an overall index is suitable for comparison across locations. In its current form, the SHMI rating does not address the length of time a management practice has been in place. Soil health indicators can vary in their response times to management changes. For example, converting from conventional to no-tillage can take up to 20 years to reach a new SOM equilibrium . Efforts are currently underway to improve our SHMI rating to account for time, which should further improve the relationship between MISH and SHMI at the national scale. In this study, we used PICRUSt2 to estimate enzyme or functional gene relative abundances and developed individual scores and an overall molecular index of soil health (MISH). 
Enzymes were first selected using XGBoost modeling to identify the most important enzymes for predicting known soil health indicators (ACE, ActiveC, AggStab, Resp, SOM, and WaterCap). From these models, individual MISH ratings were constructed for each indicator, as well as an overall MISH rating from the most important enzymes associated with each of the six indicators. The individual MISH indices were positively correlated and showed good agreement with the soil health indices across the 536 samples from this national assessment of U.S. agricultural systems. An overall MISH index was also positively correlated with overall measures of soil health (SEMWISE) and management practices (SHMI). Additionally, since the MISH index was created using indicator data that were corrected for clay content and climate zone, and is based on enzymes present in all samples, it is suitable across multiple regions and agricultural systems. By leveraging the power of phylogenetic reconstruction using PICRUSt2, this assay involves a single 16S rRNA amplicon sequencing approach that is relatively low cost and easily employed in molecular biology laboratories. This new, molecular-based index correlates with soil health indicators and management. It is a quick, easy, and inexpensive way to measure and compare microbial contributions to soil health, and will be particularly useful for surveys, meta-analyses, and long-term studies.

S1 Fig Construction of SEMWISE indicator ratings. (TIF)

S2 Fig Number of common enzymes shared across increasing numbers of models for each soil health indicator. For example, for the ACE rating, ~1600 common enzymes were retained across two models, but fewer than 400 enzymes were retained in common across all ten models. (TIF)

S3 Fig Criteria for selecting the number of enzymes in each MISH indicator rating. No. of enzymes = the number of enzymes selected (based on presence in >13 of the random forest models and highest average gain) and used to create the MISH score. Adjusted R² and average Akaike Information Criteria were calculated from a regression between the MISH indicator rating with the selected number of enzymes and the SEMWISE indicator rating. (TIF)

S1 Table Enzymes with the top 50 highest average gains for each soil health indicator. (XLSX)

S2 Table The KEGG pathways of all enzymes with high gains. (XLSX)

S3 Table The enzyme classes of all enzymes with high gains. (XLSX)
Knowledge and responsibility in CBCT practice among general and specialized Israeli dentists – a questionnaire based study

Cone Beam Computed Tomography (CBCT) use is widely expanding. CBCT technology provides volumetric data with high resolution and accuracy. It is used for a variety of indications such as dental implants, impacted teeth, endodontic pathologies, bone lesions, the temporomandibular joint, airway studies and more. The Field Of View (FOV) of the CBCT scan performed is often larger than the Region Of Interest (ROI) indicated in the referral request, and may include incidental findings, shown to occur in the head and neck region in 17–93% of CBCT scans. These incidental findings may include pathologies in the sinuses or airway, odontogenic or non-odontogenic cysts or tumors, or soft-tissue calcifications, with 0.3–1.4% of the cases suspected to be malignant. CBCT scan output is diverse and may include (1) Cross Sectional Images (CSI), perpendicular to a line representing the center of a jaw, localized to a specific region requested by the referring clinician (ROI); (2) Digital Imaging and Communications in Medicine (DICOM) files with a viewer software, which creates a multiplanar reconstruction of the entire scanned volume allowing the visualization of data in multiple orientations such as axial, coronal and sagittal, as well as reformatted panoramic images, CSI, etc.; (3) a radiologic interpretation report, describing the radiographic findings demonstrated in the scan within and outside the ROI; or a combination of these (Fig. ). There are international guidelines relating to the responsibility and training of the referring, the operating and the interpreting dentists who use CBCT. Various organizations around the world, such as the ADA (American Dental Association) and the AAOMR (American Academy of Oral and Maxillofacial Radiology) in the U.S.
as well as the IRMER (Ionising Radiation (Medical Exposure) Regulations) in the United Kingdom, the RCDSO (Royal College of Dental Surgeons of Ontario) in Ontario, Canada, and the IAEA (International Atomic Energy Agency), have all issued guidelines stating that the entire scanned volume must be interpreted by a trained medical professional (an oral and maxillofacial radiology specialist or another trained specialist) and that a detailed radiological report covering areas inside and outside the ROI must be produced. Nonetheless, in many countries around the globe, issues regarding the medico-legal aspects, ownership, interpretation and licensing of CBCT are currently only partly formulated and regulated. Since the introduction of CBCT technology, many dental practitioners have not received CBCT education or training during their undergraduate dental studies. A position paper by the European Academy of DentoMaxilloFacial Radiology (EADMFR) describes the training requirements for the justification, acquisition and interpretation of dental CBCT. A recent survey among predominantly Turkish pediatric dentists showed that 36% had no knowledge of CBCT. Similarly, another study described general knowledge about CBCT as 66.7% among endodontists and 56% among General Practitioners (GPs). In yet another study, 63% of endodontists stated they had not undergone any training or continuing education course in CBCT. A previous study of CBCT use among Israeli dentists, conducted in 2012, showed that although dentists were confident that they review the entire scanned volume in 77% of cases, they requested DICOM files in only 56% of cases. The conclusion of that study was that these dentists were not aware of the additional information present outside the data presented in the CSI. The aim of this study was to assess the clinical practice and general knowledge of CBCT among general and specialized dentists, and the influence of the dentists’ background on their CBCT output requests.
Our hypothesis was that there is still a CBCT knowledge gap among most practicing dentists. We used an anonymous survey of 20 multiple-choice questions, distributed via e-mail lists of known dentists in Israel and during 3 local conferences at the beginning of 2020. The questionnaire focused on: (1) Demographics; (2) Clinical practice (including indications for CBCT, requested output and the definition of the entire volume, training, encountering incidental findings, and referral (termed “referred to another specialist for consultation”)); (3) Knowledge (radiation dose, terminology, FOV vs. ROI); and (4) Legal issues (responsibility for the incidental findings which may be encountered in the scan, the need for an interpretation service (termed “radiologic interpretation CBCT report by a radiologist”), the need for CBCT education, and CBCT pre-purchase requirements) (Appendix ). The needed sample size for our survey was calculated using the Raosoft software. Assuming a current estimate of 5,622 active local dentists and 12% active specialists, a sample size of 360 participants was calculated with a 95% confidence interval and an assumed 50% response rate. Data were tabulated into an Excel spreadsheet and analyzed using SPSS statistics software version 23.0 (IBM, Chicago, IL, USA). Statistical significance was considered as p < 0.05. Numerical variables were presented as means and standard deviations; categorical variables were presented as frequencies and percentages. The associations between nominal variables were tested using the Pearson Chi-squared test and likelihood ratio. The t-test was used to examine the associations between nominal-dichotomous variables and numeric variables, and ANOVA was used to examine the association between a nominal variable with 3 or more options and a numeric variable. The study was approved by the institutional review board (0726-18-HMO).
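The Raosoft calculator implements the standard sample-size formula for estimating a proportion with a finite-population correction. A sketch reproducing the reported figure, assuming the calculator's usual defaults of a 5% margin of error and a 50% response distribution (not stated in the text):

```python
import math

def survey_sample_size(population, z=1.96, margin=0.05, p=0.5):
    """Cochran sample size for a proportion, finite-population corrected."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2    # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)         # finite-population correction
    return math.ceil(n)

print(survey_sample_size(5622))  # 360, matching the reported sample size
```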
Demographics

The questionnaire was presented to 1,020 clinicians, of whom 387 participated (38% response rate; 124 during conferences and 263 via e-mail). Respondents were GPs and specialists aged 44.2 ± 12.0 [range 25–81] years, with a male-to-female ratio of 61% (236) to 39% (151). Their average professional dental experience was 16.8 ± 12.6 [range 1–57] years. Among specialists, the average professional experience as a specialist was 9.5 ± 11.1 [range 1–50] years. Most respondents (58.1%) were specialists, and the remainder (41.9%) were GPs. Among all respondents, 83.2% graduated from dental schools in Israel: most (68.5%) were graduates of the Hebrew University of Jerusalem and 14.7% were graduates of Tel Aviv University, while 16.8% were graduates of universities in other countries (in descending order: Eastern Europe, Jordan, Western Europe, USA, South America, Australia and South Africa). Most dentists (49.9%) practice in private clinics only, 31% practice in both private and public or corporate dental clinics, 14.7% in public clinics only, and 3.1% in corporate clinics only. Most (55.3%) dentists worked in the greater Tel Aviv metropolitan region, 27.1% in the Jerusalem area, 11.1% in the north and 6.2% in the south.

Clinical practice

Of all respondents, 93% stated that they refer patients for CBCT scans. The indications for referral were dental implant planning (27%), impacted 3rd molars (24%), bone pathology (16%), endodontic assessment (14%), other impacted teeth (13%), orthognathic surgery (5%), and 1% other indications (TMJ, sialography, congenital abnormalities, trauma and periodontal) (Fig. ). As for the requested CBCT output, respondents were asked to choose which output they usually requested. The options were: CSI, DICOM files, an interpretation report, or a combination of these.
Most dentists (44.2%) requested CSI together with DICOM files, 31.3% requested only CSI, 9.3% requested a radiologic report with or without CSI or DICOM files, and 7.7% requested DICOM files only. Respondents were asked to specify how often they request DICOM files as an output: 31.5% do not request DICOM files at all, while 26.6% always request them. When comparing GPs to specialists, specialists requested DICOM files significantly more frequently (p = 0.007, Table ). Regarding the way the DICOM files were used, 46.4% of respondents stated that they read DICOM files with viewer software, 17.1% use implant planning software, 26.2% use both, and 10% use other software (such as orthodontic-related software). This preference did not correlate with age or the type of training the dentists received. Respondents were further asked to state how frequently they “review the full scanned volume”, without explaining precisely the meaning of the phrase. For simplicity, we combined the three partial options (a quarter, half and most cases) into one new category termed “partially” (Table ). About a third of the respondents (33.8%) stated they always review the entire scanned volume, while 13.5% stated they never review the entire volume (Table ). More than half of the respondents (52.6%) stated that they sometimes reviewed the entire scanned volume (the three partial options) (Table ). This result did not correlate with age or the type of training they received (data not shown). Combining the data on reviewing the entire scanned volume with the respondents’ requests for DICOM files revealed two interesting results. On the one hand, among respondents who stated they always review the entire scanned volume, 41.3% never request DICOM files, only 41.8% always request DICOM files, and 21.1% sometimes request DICOM files (Table ).
Remarkably, on the other hand, among respondents who stated they never review the entire imaging volume, 11.2% always ask for DICOM files, 19.2% never request DICOM files, and 10.6% sometimes ask for DICOM files (Table ). Most respondents (61.2%) stated they had received CBCT training. Receiving CBCT training correlated significantly with younger age (p = 0.001), fewer years of experience (p = 0.001), and being a specialist rather than a GP (66.6% vs. 53.7%, p = 0.01). For 50.6% of the dentists who had received training, the training included both reading CSI and using DICOM-viewer software. In 78.9% of cases, this training was academic. Respondents who stated that they received academic training were significantly younger (p = 0.015), with no significant difference between GPs and specialists. Regarding the frequency of encountering incidental findings, most dentists (85.9%) reported that they have, at least once, encountered incidental findings within a CBCT scan. In 52.1% of cases the findings were in both soft tissue and bone (jaw), in 45.9% of cases only in the jaw, and in 1% of cases in soft tissue alone. When such findings were encountered, almost all respondents (93.9%) stated they referred the patient to a specialist for consultation; only 6.1% reported that they do not refer to a specialist at all. When asked why they did not refer for a consultation, in 85.1% of cases the respondents were confident in their ability to independently address and manage the incidental findings, and in 7% of cases the reason was that the patient was unwilling to pay for the consultation. Of all referrals, 40.8% were to Oral and Maxillofacial Surgeons, followed by 25.8% to Oral Medicine specialists, 14% to Orthodontists, 13.4% to Ear, Nose and Throat (ENT) specialists, and 2.6% to other specialists (Periodontists and Radiologists).
About half the respondents (47.5%) reported that in most cases they were satisfied with the "response of the consultant" (e.g. periodontist, endodontist etc.). Specifically, 24.5% of dentists were always satisfied, whereas 1.8% reported that the answer was not satisfactory in any case. Almost all respondents (94.5%) replied that they are interested in having a service providing a "radiologic interpretation CBCT report by a radiologist", regardless of age or type of CBCT training received. Knowledge assessment Knowledge questions covered CBCT radiation dose and terminology: 70.3% and 61.2% of respondents answered correctly regarding radiation dose and the terminology of cross-sectional images, respectively. Most (88.8%) were aware that not all of the scanned volume is demonstrated in the CSI. As for practical issues, 45.2% answered the following question correctly: "If a patient is referred for imaging of a localized site for implant planning, would the contralateral side of the jaw necessarily be included?" Most respondents (71.8%) were aware that the FOV and ROI may differ. Overall, most respondents did relatively well in the knowledge test (68.1% average score). Correct answers correlated with younger age for all questions (p < 0.001), and with fewer years of practice (p < 0.001) (Fig. ). When comparing GPs to specialists, there was no significant difference in average scores or in the number of correct answers to specific questions, except for the question regarding the difference between FOV and ROI (76% of specialists vs. 66% of GPs answered correctly, p = 0.03). Legal issues and responsibility Regarding awareness of the possible existence of incidental findings within the scanned volume, almost all respondents (95.5%) stated that they were aware of their existence in CBCT, regardless of their age or type of CBCT training received.
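The GP-versus-specialist proportion comparisons above (e.g. 76% vs. 66% correct on the FOV/ROI question) are presumably chi-square tests on a 2×2 table. A stdlib sketch of that statistic follows; the cell counts are back-calculated from the reported percentages and group sizes (58.1% of 387 ≈ 225 specialists, 162 GPs), so they are approximate illustrations, not the study's raw data:

```python
import math

def chi2_2x2(a, b, c, d):
    """Chi-square statistic and p-value (df = 1) for a 2x2 table
    [[a, b], [c, d]], without continuity correction."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # Survival function of the chi-square distribution with df = 1.
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Approximate counts: ~171/225 specialists vs. ~107/162 GPs answered the
# FOV/ROI question correctly (correct, incorrect per group).
chi2, p = chi2_2x2(171, 54, 107, 55)
# chi2 ≈ 4.6, p ≈ 0.03 — consistent with the reported p = 0.03.
```

With these illustrative counts the sketch reproduces the reported significance level, which suggests the back-calculation is roughly faithful.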
Furthermore, the respondents were asked who they believe should be responsible for reviewing the entire scanned volume. While 41.7% replied it should be the referring clinician, 57.1% thought it should be a trained radiologist, and in 6.6% of cases the answer was both the referring clinician and a trained radiologist. Most (70.7%) respondents answered that they would be willing to participate in a CBCT continuing education course, regardless of their age or type of CBCT training previously received. Finally, almost all (93.4%) respondents believed a CBCT course should be a mandatory pre-purchasing requirement. This study evaluated CBCT use, knowledge and medico-legal issues among Israeli dentists, and the influence of their background on these parameters. The results show widespread use of CBCT for common indications, including dental implants, impacted third molars and bone pathology, as widely reported in the literature . Conventional (plain radiography) imaging education has always been taught in dental graduate studies, whereas volumetric imaging has only gradually been employed. A survey of US, UK, and Australian dental schools showed that the majority of responding schools do not include instruction in higher-level use of CBCT for undergraduate students . In the current study, about a third of the respondents stated that they request only CSI as the output, and they do not use DICOM files at all.
In addition, among respondents who stated they always review the entire scanned volume, 41.3% never ask for DICOM files, and only 41.8% always ask for DICOM files. The CSI, which represent specific slices from within the ROI, do not represent the entire three-dimensional CBCT data available; thus, important findings may be overlooked . Indeed, various parameters may affect the selected CSI dimensions, including the distance between two slices (often called "steps"), the bucco-lingual width, and the superior-inferior height. These determine the portion of the scanned volume represented within the CSI. Incidental findings in the scanned volume may therefore not be included in the selected CSI: they may lie between two slices in the case of wide "steps", or beyond the bucco-lingual width or the superior-inferior height (such as a sialolith in the mandible, or antral pathologies, respectively). Moreover, information from a few CSI may differ from that of a series of many CSI . As DICOM files are necessary for reviewing the entire scanned volume, it seems that many of the study respondents did not understand what reviewing the scanned volume entails. These observations suggest a gap in understanding of the limitations of CSI and of their inability to present the entire scanned volume for review. In addition, only 33.8% of respondents stated they always review the entire scanned volume, although almost all of them (95.5%) stated that they were aware of the existence of incidental findings in CBCT, and most of them (85.9%) reported having encountered incidental findings themselves within a CBCT scan at least once. This practice did not correlate with age or the type of CBCT training they received. Many organizations worldwide, including the AAOMR , ADA , EADMFR , IRMER , and RCDSO , agree that all volumetric data should be reviewed and interpreted, but it is not always clear who is responsible for the task.
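The geometric point above — that selected CSI can sample only a small portion of the scanned volume — can be made concrete with a back-of-the-envelope estimate. All dimensions below (slice thickness, inter-slice "step", slice height, volume height) are hypothetical illustrative values, not measurements from any scanner or protocol:

```python
def csi_coverage(slice_thickness_mm, step_mm, slice_height_mm, volume_height_mm):
    """Approximate fraction of the scanned volume represented by a CSI series.

    Assumes parallel cross-sectional slices taken every `step_mm` along the
    arch, each `slice_thickness_mm` thick and `slice_height_mm` tall, within
    a volume of height `volume_height_mm`.
    """
    along_arch = min(slice_thickness_mm / step_mm, 1.0)
    vertical = min(slice_height_mm / volume_height_mm, 1.0)
    return along_arch * vertical

# 1 mm slices every 3 mm, 40 mm tall slices in an 80 mm field of view:
# roughly one sixth of the volume is represented in the CSI.
fraction = csi_coverage(1.0, 3.0, 40.0, 80.0)
```

Even with these generous assumptions, five sixths of the volume goes unseen, which is the mechanism by which a sialolith between two slices or an antral finding above the slice height escapes review.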
In the current survey, 41.7% of the respondents believed it is the responsibility of the referring clinician, while 57.1% thought it should be a trained radiologist, and 6.6% thought it should be both. Furthermore, every country has its own regulations and accepted practice regarding the interpretation of CBCT scans; hence, in many countries, interpretation by maxillofacial radiologists is not standard. The authors believe that GPs should therefore be knowledgeable about the use and interpretation of CBCT. GPs should be able to examine the entire scanned volume, recognize abnormal findings and, when needed, refer the scan to a maxillofacial radiologist for consultation. Alternatively, it may be common practice that all CBCT scans are reviewed by a maxillofacial radiologist, who would generate a report with all the radiographic findings, which would then be correlated clinically by the clinician. Some international guidelines distinguish between small and medium-large FOVs with respect to the identity of the interpreter generating the report. For instance, a trained GP or DMFR specialist should make the radiological report for CBCT scans that are 8 × 8 cm or smaller; for larger CBCT scans, a DMFR specialist or a medical radiologist should make the report . Requesting and saving the DICOM files in the patient's records is important for several reasons. First, for reviewing the entire scanned volume, within and outside the ROI, in search of incidental findings of clinical significance. Second, for avoiding unnecessary repetition of imaging, and the increased patient radiation dose this entails, should future treatment involve regions outside the ROI. Third, for follow-up of pathology (volume assessment and monitoring). Sometimes, dental imaging centers provide the referring clinician with CSI only, and the entire scanned volume (DICOM files) is not stored and can become unavailable if needed in the future.
The major limitation is the large amount of memory required for back-up. Unless national cloud storage is available, it is recommended that the DICOM files be kept either in the patient's records within the clinic or by the patient. Our questionnaire included two questions that may seem similar: one referred to satisfaction with the answer the respondents got "from a consultant" following referral due to an incidental finding in the scan, while the second asked about the possibility of the respondents having "a radiological interpretation" available to them. We included both questions since, currently, there is no national practice requiring the routine generation of a radiological interpretation by a maxillofacial radiologist. Thus, clinicians may refer a scan, or an image from a scan, for consultation to various kinds of experts: maxillofacial surgeons, oral medicine specialists, ENT specialists, maxillofacial radiologists or medical radiologists. Therefore, there was a need first to inquire about satisfaction with the answer they got from the "consultation" about the findings. Most participating dentists showed fair knowledge in the knowledge test, with an average score of 68.1% correct answers. A main knowledge gap was that many were unaware that there is often a difference between the region of interest (ROI) they request in the referral and the actual scanned volume, depending on the protocol and the CBCT machine (differences in FOV and ROI size) . For example, the ROI could be a CBCT of the lower right molar, but the actual scanned volume could include the left mandible, depending on the CBCT machine and protocol being used. Correct answers in the knowledge test correlated significantly with younger age. This is not surprising, as we found a significant correlation between receiving CBCT training and younger age. Most (61.2%) respondents stated they had received CBCT training.
In our study, this training was academic in 78.9% of cases. Respondents who stated that they received academic training were significantly younger. We did not inquire whether training was at the undergraduate or graduate level. Limitations of the current study include potential bias, as most participants were specialists, who represent only 10% of the dentists in Israel, and most respondents were graduates of the Hebrew University of Jerusalem (one of the two dental schools in Israel). In addition, since this questionnaire was distributed electronically, participants who responded may be more technologically oriented than those who declined. Moreover, this was a voluntary survey, entitled "CBCT use questionnaire"; it is reasonable to assume that clinicians who are not familiar or comfortable with CBCT technology were less willing to participate. All of the above may have biased our results toward an optimistic view that does not reflect the true status. Therefore, future work should aim to reach all relevant practicing dentists to obtain a more representative sample. Strengths of the study include the detailed assessment of CBCT practice, including CBCT output, as well as of actual in-depth knowledge of CBCT among dentists, providing useful information for continuing education courses and for academic undergraduate curriculum development. Based on the results of the study, more CBCT continuing education courses are needed, with emphasis on clinical practice and CBCT output. In addition, as GPs acquire and utilize CBCT technology, CBCT must be included in undergraduate dental education to ensure that future practitioners are properly trained for safe and up-to-date professional use of this imaging modality. Below is the link to the electronic supplementary material. Supplementary Material 1
Findings from precision oncology in the clinic: rare, novel variants are a significant contributor to scaling molecular diagnostics | f3f4db40-aed7-4308-bb78-276ced810405 | 8962530 | Pathology[mh] | Next generation sequencing (NGS) in clinical pathology laboratories for the management of patients with cancer is now routine. A number of factors have converged to allow the adoption of these technologies including the declining costs of sequencing, the replacement of narrowly focussed gene and single exon tests with assays using improved sequencing technologies that allow broader and more detailed genomic changes to be assayed. However, the use of such genomic tests has led to a significant increase in the number of variants that a laboratory must analyse to determine pathogenicity and potential diagnostic, prognostic or therapeutic use. This increasing volume of variants to be analysed has exposed a bottleneck within molecular laboratories, namely—the expert curation of variants and their integration into a clinical report. Depending on jurisdiction, curation of variants is performed by either pathologists, medical scientists or genetic counsellors following international guidelines . This in-house expertise represents a scarce workforce that is difficult to scale in line with variant volumes. To address this shortcoming several commercial solutions have been established that range from a complete testing service through to curation of individual variants . Nevertheless, the variant curation bottleneck is likely to become an increasing problem and has been estimated that it will contribute to over half the cost of testing by 2026 . We hypothesise that without some form of scalable artificial intelligence or other automated solution for variant analysis, the curation burden will become unsustainable. To test this hypothesis, we have examined the generation of variants over six years of genomic testing within our institution. 
Our aims were to (1) document the number and type of variants generated over time, (2) identify which genes require the most curation effort, (3) assess the benefit of commonly used publicly available variant databases, and (4) compare commercial solutions to reduce the curation burden. All sequenced variants were uploaded to an in-house tertiary analysis decision support software system called PathOS for filtering, analysis and reporting. Detailed descriptions of the laboratory processes have been published previously . Reported variants were manually curated using the ACMG or AMP guidelines to establish variant actionability in a patient's clinical context. Curated variants with enriched expert annotations were deposited within a common database, enabling subsequent patients presenting with the same variants to be matched to the existing variant annotations so that only novel variants need be curated. The patient's clinical context is also stored with curated variants to inform decisions on whether the same variant appearing in a different clinical context warrants reusing the stored curation or whether a new, distinct, and perhaps adapted, curation of the variant and context is required. For details of the pipelines and curation workflows, please refer to the Supplementary Methods section. Patient samples were aggregated into somatic, haematology and germline sets depending on the sequencing panels used. Clinically reported variants in this study are from 453 distinct cancer-associated genes (see Additional file : Figure S1). The genes were further broken down into overlapping categories of 63 germline genes, 401 somatic genes and 109 haematology-assayed genes. These genes were categorised as either tumour suppressor or oncogene based on The Cancer Gene Census .
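The reuse mechanism described above — matching a new patient's variant and clinical context against previously stored expert curations, and flagging only unmatched variants for de novo curation — can be sketched as a keyed lookup. This is a hypothetical simplification for illustration, not the actual PathOS schema; the variant key shown is an illustrative genomic HGVS string:

```python
# (genomic variant key, clinical context) -> stored expert classification
curated_db = {}

def store_curation(hgvs_g, context, classification):
    """Deposit an expert curation for later reuse."""
    curated_db[(hgvs_g, context)] = classification

def triage_variant(hgvs_g, context):
    """Reuse a stored curation if this variant was already curated in the
    same clinical context; otherwise flag it for de novo manual curation."""
    stored = curated_db.get((hgvs_g, context))
    if stored is not None:
        return stored, "reused"
    return None, "needs manual curation"

# Illustrative entry (coordinate shown for example purposes only).
store_curation("chr7:g.140453136A>T", "melanoma", "pathogenic")
```

Because the key includes the clinical context, the same variant arriving with a different tumour stream falls through to manual curation, mirroring the paper's point that a context change may warrant a distinct, adapted curation.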
Analysis of variants from germline, somatic and haematology assays Between October 2013 and May 2019, a period of almost six years, we performed next generation sequencing assays on samples from a cohort of hospital (n = 32,670) and external (n = 15,365) patients covering a broad range of tumour streams. This yielded 24,168,398 variants, of which 23,255 were clinically reported from 95,954 patient samples from 48,036 patients using a heterogeneous set of cancer assays (see Fig. ). The assays were targeted cancer gene panels with genomic capture regions ranging from highly targeted panels of four genes through to comprehensive cancer panels of up to 701 genes. Ten different panels were employed, covering varying regions of the genome using hybrid capture or amplicon technologies (see Table ), comprising hereditary cancer germline panels, somatic panels and haematology panels for solid and blood cancers respectively. A detailed breakdown by assay is provided in Additional file : Table S1. Of the 23,255 clinically reported variants, 17,240 (74.1%) were identified in subsequent assays and reused in reports. The remainder, 6015 (25.9%), were only observed in a single patient sample. Curation workload growth The total number of variants curated over the study is shown in Fig. , with a significant increase following the introduction of hybrid capture assays in 2017. The solid line shows all curated variants (reported, benign and variants of unknown significance (VUS)) compared to the pale lines of reported variants (69.1% of the total). The number of new variants requiring curation per sample per month increased from 3.38 to 3.73 between January 2017 and May 2019 (see Fig. ). Over this period, curations from somatic hybrid capture assays rose significantly from 0.90 to 2.55 per sample per month, until they accounted for 68% of the monthly curation burden.
There was also more variability in the average number of variants per month for somatic hybrid capture assays, as shown by the larger 95% confidence intervals (see Fig. ). Low overlap between in-house and public databases We compared the presence of reported variants with a number of common public genomic knowledgebases. Of the 8214 unique clinically reported variants within our in-house database, 28.6% (n = 2356) were not present within key public cancer variant resources: COSMIC (size = 11,453,569 coding mutations), ClinVar (size = 789,593 variants), VICC (incorporating CiVIC , size = 2528 variants) and the GA4GH Beacon network (see Fig. ). The highest number of in-house (PathOS) variant matches was to COSMIC, 4049 (49.2%), followed by ClinVar with 2888 (35.1%), but only 581 (7.1%) matched VICC variants. Variant matches to resources on the Beacon Network numbered 2127 (25.9%). Our clinically reported variants include prognostic and diagnostic variants in addition to variants with a clear therapeutic option, which is the focus of VICC. Further, the variants within PathOS but not present in VICC are enriched for tumour suppressor genes (TSGs), as these variants are often loss-of-function variants (see Additional file : Figure S2 and Figure S3). We then examined the variants (n = 2356) not found in external knowledgebases to more closely identify their characteristics. The majority (87.6%; n = 2041) were non-recurrent, that is, only reported in a single patient (see Fig. ). Somatic assays contributed 65.5% (n = 1543), haematology assays 24.8% (n = 585), and germline assays 9.7% (n = 228). Variants without external knowledgebase data were curated de novo and stored in our internal database, where they provided little benefit for future patients because a large proportion did not recur in other cancer patients over the study period.
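The knowledgebase comparison above is, at heart, a set-intersection computation over normalised variant keys: the per-resource match rates and the "in none of the resources" fraction. A minimal sketch with toy variant sets (the keys, resource names and counts below are illustrative, not the study's data):

```python
def overlap_report(inhouse, resources):
    """Percentage of in-house variants found in each external resource,
    plus the percentage found in none of them."""
    hits = {
        name: 100 * len(inhouse & kb) / len(inhouse)
        for name, kb in resources.items()
    }
    union = set().union(*resources.values())
    in_none = 100 * len(inhouse - union) / len(inhouse)
    return hits, in_none

# Toy data: four in-house variant keys checked against two mock resources.
inhouse = {"v1", "v2", "v3", "v4"}
hits, in_none = overlap_report(
    inhouse, {"cosmic": {"v1", "v2"}, "clinvar": {"v2"}}
)
# hits == {"cosmic": 50.0, "clinvar": 25.0}; in_none == 50.0
```

The per-resource percentages can sum to more than 100% because a variant can match several resources at once, which is why the "in none" figure must be computed against the union rather than by subtraction.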
Of the in-house only variants, 43.2% (n = 1017) were from somatic assays, of missense consequence and classified as VUS (see Additional file : Figure S4). Analysis of gene type shows that a large number of the variants were missense VUS from oncogenes (n = 239), tumour suppressor genes (n = 290), or genes not listed in the Cancer Gene Census (n = 381) (see Additional file : Figure S5). A gene-level analysis of the in-house only curated variants reflects the mix of genes in our custom targeted gene panels (see Fig. ). Key genes associated with haematological cancers contribute significant numbers of in-house only variants. In particular, the tumour suppressor TET2 is implicated in haematological malignancies, and 134 unique TET2 variants were reported, none of which were seen in external databases. Other genes frequently mutated in haematological malignancy included ASXL1 , RUNX1 and WT1 . This may be attributed to the large number of haematology assays within PathOS and the underrepresentation of haematological genes within the compared public resources. Commercial systems may increase misclassification risk A subset of novel in-house only curated somatic and germline variants (n = 307) was submitted to a commercial tertiary analysis platform (CTAP) for annotation and pathogenicity assessment. The CTAP used only ACMG classifications for both germline and somatic variants. Although this framework is not a relevant categorisation for somatic variants, these were compared to our in-house classifications, which were mapped to ACMG categories. The subset comprised four pathogenicity classes using the ACMG classifications ('benign' n = 2, 'VUS' n = 249, 'likely pathogenic' n = 18 and 'pathogenic' n = 38). Although 81.1% (n = 249) of variants were concordant for pathogenicity, 18.9% (n = 58) were discordant (see Table ).
Discordant classifications included 29 variants classified as 'VUS' by CTAP but 'pathogenic' by PathOS, and 17 classified as 'VUS' by CTAP but 'likely pathogenic' by PathOS (see Additional file : Table S2). Of these 29 discordant classifications, 17 were non-synonymous, 11 nonsense non-synonymous and one within a splice site; 15 were substitution variants and 14 were insertions. A particular example is chr1:g.45799193dup (HGVSc: NM_001128425.1:c.240dup, HGVSp: NP_001121897.1:p.(V81Cfs*12)), classified as pathogenic by PathOS because the frameshift introduces a premature stop codon leading to loss of function in the tumour suppressor MUTYH, but annotated as VUS by CTAP. Another example is chr16:g.23641608T>A (HGVSc: NM_024675.3:c.1867A>T, HGVSp: NP_078951.2:p.(Lys623*)), which we predicted to truncate the PALB2 protein by approximately 46%, resulting in loss of significant functional domains. The literature suggests that ovarian, breast and other malignancies with loss of HR proteins, including PALB2, show clinical sensitivity to PARP inhibitors and platinum agents . CTAP classified this variant as VUS, which may lead to potential therapeutic approaches being missed for the patient. Comparison of gene distributions by tumour stream From the 10,965 somatic assay patients, 3939 variants were curated according to the clinical context reported with the patient sample. The top ten clinical contexts with the most variants show that these are dominated by VUS classifications (see Additional file : Figure S6). To examine concordance at the gene level between databases in specific clinical contexts, we compared the top 20 genes across melanoma, colorectal and haematological malignancies in our in-house knowledgebase (PathOS) to COSMIC and ICGC by matching the primary tumour site (see Additional file : Figure S7).
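The reasoning behind the two discordant examples — a truncating variant in a tumour suppressor is strong loss-of-function evidence — can be caricatured as a single triage rule. This is a deliberately oversimplified sketch in the spirit of the ACMG PVS1 criterion, not the full ACMG/AMP algorithm; the gene list and consequence terms are illustrative only:

```python
# Illustrative subset of tumour suppressor genes; a real system would draw
# on a curated resource such as the Cancer Gene Census.
TUMOUR_SUPPRESSORS = {"MUTYH", "PALB2", "TP53", "TET2"}
TRUNCATING = {"frameshift", "nonsense", "canonical_splice"}

def lof_triage(gene, consequence):
    """Flag truncating variants in tumour suppressor genes as likely
    loss of function; everything else defaults to VUS pending curation."""
    if gene in TUMOUR_SUPPRESSORS and consequence in TRUNCATING:
        return "likely pathogenic (loss of function)"
    return "VUS pending curation"

# MUTYH p.(V81Cfs*12) is a frameshift; PALB2 p.(Lys623*) is a nonsense
# variant: both would be flagged rather than left as VUS.
```

A curator applies many more criteria (domain loss, nonsense-mediated decay, population frequency, literature evidence), but even this crude rule captures why leaving such variants at VUS risks missing a therapeutic option.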
Over a period of six years, between October 2013 and May 2019, we performed next-generation sequencing assays on samples from a cohort of hospital (n = 32,670) and external (n = 15,365) patients covering a broad range of tumour streams. This yielded 24,168,398 variants, of which 23,255 were clinically reported, from 95,954 patient samples taken from 48,036 patients using a heterogeneous set of cancer assays (see Fig. ). The assays were targeted cancer gene panels ranging from highly targeted panels of four genes through to comprehensive cancer panels of up to 701 genes. Ten different panels were employed, covering varying regions of the genome using hybrid capture or amplicon technologies (see Table ), comprising hereditary cancer germline panels, somatic panels and haematology panels for solid cancers and blood cancers respectively. A detailed breakdown by assay is provided in Additional file : Table S1. Of the 23,255 clinically reported variants, 17,240 (74.1%) were identified in subsequent assays and reused in reports; the remainder, 6015 (25.9%), were observed in only a single patient sample. The total number of variants curated over the study is shown in Fig. , which shows the marked increase that followed the introduction of hybrid capture assays in 2017.
The solid line shows all curated variants (reported, benign and variants of unknown significance (VUS)) compared with the pale lines of reported variants (69.1% of the total). The number of new variants requiring curation per sample per month increased from 3.38 to 3.73 between January 2017 and May 2019 (see Fig. ). Over this period, curations of somatic hybrid capture assays rose significantly, from 0.90 to 2.55 samples per month, until they accounted for 68% of the curation burden per month. There was also more variability in the average number of variants per month for somatic hybrid capture assays, as shown by the larger 95% confidence intervals (see Fig. ).

We compared the presence of reported variants with a number of common public genomic knowledgebases. Of the 8214 unique clinically reported variants within our in-house database, 28.6% (n = 2356) were not present within key public cancer variant resources: COSMIC (size = 11,453,569 coding mutations), ClinVar (size = 789,593 variants), VICC (incorporating CIViC , size = 2528 variants) and the GA4GH Beacon network (see Fig. ). The highest number of in-house (PathOS) variant matches was to COSMIC, 4049 (49.2%), followed by ClinVar with 2888 (35.1%), but only 581 (7.1%) matched VICC variants. Variant matches to resources on the Beacon Network were 2127 (25.9%). Our clinically reported variants include prognostic and diagnostic variants in addition to variants with a clear therapeutic option, which is the focus of VICC. Further, the variants within PathOS but not present in VICC are enriched for tumour suppressor genes (TSGs), as these variants are often loss-of-function variants (see Additional file : Figure S2 and Figure S3).

We then examined the variants (n = 2356) not found in external knowledgebases to more closely identify their characteristics. The majority (87.6%; n = 2041) were non-recurrent, that is, only reported in a single patient (see Fig. ).
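The knowledgebase comparison described above reduces, at its core, to set membership tests on normalised variant identifiers. The sketch below is a minimal illustration, not the study's code: the resource contents are invented, and only the MUTYH and PALB2 identifiers come from the worked examples elsewhere in the text.

```python
# Minimal sketch of matching clinically reported variants (normalised
# HGVSg strings) against external resources by set intersection.
# Resource contents here are invented for illustration.
reported = {"chr7:g.140453136A>T", "chr1:g.45799193dup", "chr16:g.23641608T>A"}

resources = {
    "COSMIC": {"chr7:g.140453136A>T"},
    "ClinVar": {"chr7:g.140453136A>T", "chr1:g.45799193dup"},
}

matched_anywhere = set()
for name, catalogue in resources.items():
    overlap = reported & catalogue
    matched_anywhere |= overlap
    print(f"{name}: {len(overlap)}/{len(reported)} matched")

in_house_only = reported - matched_anywhere  # variants needing de novo curation
```

In the study, matching was done on HGVSg for ClinVar, VICC and the Beacons, and on HGVSp position for COSMIC; a production version would normalise each nomenclature before intersecting.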
Somatic assays contributed 65.5% (n = 1543), haematology assays 24.8% (n = 585) and germline assays 9.7% (n = 228). These variants, for which no external knowledgebase data were available, were curated de novo and stored in our internal database, where they provided little benefit for future patients because a large proportion did not recur in other cancer patients over the study period. Of the in-house only variants, 43.2% (n = 1017) were from somatic assays, of missense consequence and classified as VUS (see Additional file : Figure S4). Analysis by gene type shows that a large number of the variants were missense VUS from oncogenes (n = 239), tumour suppressor genes (n = 290) or genes not listed in the Cancer Gene Census (n = 381) (see Additional file : Figure S5). A gene-level analysis of the in-house only curated variants reflects the mix of genes in our custom targeted gene panels (see Fig. ). Key genes associated with haematological cancers contribute significant numbers of in-house only variants. In particular, the tumour suppressor TET2 is implicated in haematological malignancies, and 134 unique TET2 variants were reported, none of which were seen in external databases. Other genes frequently mutated in haematological malignancy included ASXL1 , RUNX1 and WT1 . This may be attributed to the large number of haematology assays within PathOS and the underrepresentation of haematological genes within the compared public resources.

Commercial systems may increase misclassification risk

A subset of novel in-house only curated somatic and germline variants (n = 307) was submitted to a commercial tertiary analysis platform (CTAP) for annotation and pathogenicity assessment. The CTAP used only ACMG classifications for both germline and somatic variants. Although this framework is not a relevant categorisation for somatic variants, these were compared to our in-house classifications, which were mapped to ACMG categories.
The subset comprised four pathogenicity classes under the ACMG classifications ('benign' n = 2, 'VUS' n = 249, 'likely pathogenic' n = 18 and 'pathogenic' n = 38). Although 81.1% (n = 249) of variants were concordant for pathogenicity, 18.9% (n = 58) were discordant (see Table ). Discordant classifications included 29 variants classified as 'VUS' by CTAP but 'pathogenic' by PathOS and 17 variants classified as 'VUS' by CTAP but 'likely pathogenic' by PathOS (see Additional file : Table S2). Of the 29 discordant classifications, 17 were non-synonymous, 11 were nonsense and one was within a splice site; 15 were substitution variants and 14 were insertions. A particular example is chr1:g.45799193dup (HGVSc: NM_001128425.1:c.240dup, HGVSp: NP_001121897.1:p.(V81Cfs*12)), classified as pathogenic by PathOS because the frameshift introduces a stop codon leading to loss of function in the tumour suppressor MUTYH, but annotated as VUS by CTAP. Another example is chr16:g.23641608T > A (HGVSc: NM_024675.3:c.1867A > T, HGVSp: NP_078951.2:p.(Lys623*)), which we predicted to truncate the PALB2 protein by approximately 46%, resulting in loss of significant functional domains. The literature suggests that ovarian, breast and other malignancies with a loss of homologous recombination (HR) proteins, including PALB2, confer clinical sensitivity to PARP inhibitors and platinum agents . CTAP classified this variant as VUS, which could lead to potential therapeutic approaches for the patient being missed.

Comparison of gene distributions by tumour stream

From the 10,965 somatic assay patients, 3939 variants were curated according to the clinical context reported with the patient sample. The top ten clinical contexts with the most variants show that these variants are dominated by VUS classifications (see Additional file : Figure S6).
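The concordance figures in the CTAP comparison above boil down to a cross-tabulation of paired classifications. A minimal sketch follows; the pairs are invented, and only the class labels come from the text.

```python
from collections import Counter

# Paired (CTAP, PathOS) pathogenicity classes; the pairs are illustrative,
# not taken from the study data.
pairs = [
    ("VUS", "VUS"),
    ("VUS", "pathogenic"),
    ("pathogenic", "pathogenic"),
    ("VUS", "likely pathogenic"),
]

table = Counter(pairs)  # cross-tabulation of classification pairs
concordant = sum(n for (ctap, pathos), n in table.items() if ctap == pathos)
discordant = len(pairs) - concordant
print(f"concordant: {concordant}/{len(pairs)} "
      f"({100 * concordant / len(pairs):.1f}%)")  # → concordant: 2/4 (50.0%)
```

The off-diagonal cells of `table` are exactly the discordant categories reported in Additional file Table S2 (e.g. CTAP 'VUS' versus PathOS 'pathogenic').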
To examine concordance at the gene level between databases in specific clinical contexts, we compared the top 20 genes across melanoma, colorectal and haematological malignancies in our in-house knowledgebase (PathOS) with COSMIC and ICGC by matching the primary tumour site (see Additional file : Figure S7). The patient gene counts were positively correlated for the melanoma (ICGC: Pearson's r = 0.80, p < 0.01; COSMIC: r = 0.81, p < 0.01) and colorectal (ICGC: r = 0.74, p < 0.01; COSMIC: r = 0.81, p < 0.01) cohorts (see Additional file : Table S3). In contrast, the haematology stream showed a marked difference in gene distributions: it did not show a significant association with ICGC but did show a weak correlation with COSMIC (r = 0.63, p < 0.01). This may be attributed to the custom gene panels of the PMCC haematology assays and the differing ranges of blood cancers incorporated into the ICGC and COSMIC analyses.

This study conducted a longitudinal examination of clinically reported variants to assess the current and future curation workload and burden. The curation burden has become a key limitation to the scalability of genomic testing, as current practices rely on the time and expertise of skilled genomic scientists to manually process the variants observed through NGS. Scalability covers the number of patients assayed, the size of the genomic regions observed per assay, or both. This analysis has shown a long-term upward trend in patient numbers as well as in the size of the genomic regions assayed. Both factors have resulted in an increasing number of curated variants over the study period. The in-house caching of expert-curated variants should ideally mean that fewer variants need curating over time, as fewer and fewer novel variants are seen for each assay type. This is indeed the case for all the assay groups except somatic assays, for which we show that novel variants are growing in number over time.
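The gene-level concordance analysis above — Pearson's correlation between per-gene patient counts on a log2 scale, as in Additional file Table S3 — can be sketched in pure Python. The counts below are invented for illustration.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative per-gene patient counts (in-house vs an external resource),
# log2-transformed before correlating, as in the study.
in_house = [120, 45, 30, 8, 4]
external = [900, 300, 150, 40, 10]
r = pearson_r([math.log2(c) for c in in_house],
              [math.log2(c) for c in external])
```

The significance test (p < 0.01 in the text) would additionally require the t distribution, which the study obtained from R.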
The germline assay group is primarily used for screening a limited number of hereditary cancer genes. This, together with multiple rich publicly available databases built over many years of testing, yields fewer reportable variants per patient. In contrast, the somatic and haematology assays are primarily clinician-requested assays for patients presenting with cancer. The rapid adoption of clinical testing of somatic cancer-implicated genes has contributed significantly to the curation effort required for these assays. The grouping of assays into germline, somatic and haematology reflects the differing curation requirements between the groups. Both the somatic and haematology groups must also allow for tumour purity and clonality, and so analyse a greater number of variants at much lower allele frequencies while also distinguishing between germline and somatic variants. Each group has distinct but overlapping gene sets with their own pathways and mechanisms (see Additional file : Figure S1). The specialisation of genetic scientists into these groups adds further pressure on the availability of trained curators.

Ideally, a set of global genomic variant knowledgebases would reduce the duplication of curation effort across laboratories (whose data are frequently unshared) while also harmonising classifications across knowledgebases . Although this goal has not yet been realised , there are active efforts by the Global Alliance for Genomics and Health (GA4GH) to create such resources . A meta-knowledgebase has been developed by the Variant Interpretation for Cancer Consortium (VICC) that has aggregated and harmonised six different cancer variant interpretation knowledgebases, including CIViC, to collect actionable clinical interpretations for cancer-associated variants . An alternative model is the web-accessible Beacon Project , which allows aggregation of evidence for a given variant from over 100 variant resources .
From a clinical utility perspective, different annotation resources can be ranked according to the curation value they offer (see Additional file : Figure S8). Manually curated resources such as CIViC often provide the most reliable annotations and the highest clinical value, if from a trusted curator; however, because of the effort required to accurately curate knowledge about a variant, these resources are limited in size. Observational resources, e.g. ClinVar and COSMIC, provide greater variant numbers but significantly less detail and less clinical benefit . There are also increasing numbers of national-level curation databases, which aggregate variants from multiple laboratories under a common framework, as well as gene- and disease-specific databases such as ENIGMA and IARC TP53 . These initiatives often provide a staging database that feeds into the larger consortium databases such as ClinVar and COSMIC.

We examined the extent to which public knowledgebases (COSMIC, ClinVar, VICC or GA4GH Beacon) and a commercial package could assist with expert curation by matching in-house clinically reported variants with external resources. We showed that, at best, 71.4% of our variants were also catalogued externally. The overlap between our in-house variants and the external knowledgebases varied widely: COSMIC (49.2%), followed by ClinVar (35.1%), while only 7.1% matched VICC variants. The low number of variants matching in VICC is likely due to the therapeutic focus of the VICC knowledgebase in contrast to the other data sources. As a molecular diagnostic laboratory, we need to report prognostic and diagnostic variants in addition to therapeutically actionable variants. These external data sources provide some assistance to our internal curation effort but by no means replace the work needed to create a complete and trusted in-house curation entry that complies with laboratory SOPs and accreditation standards.
Consistency of external knowledgebases is also a problem when incorporating external variants into in-house reports. A recent study has highlighted the difficulties in achieving consistent classifications of variants across commercial knowledgebases and also reflected the variability in ascribing clinical actionability to variants . Similar variability was found between N-of-One, IBM Watson for Genomics and OncoKB in a study by Katsoulakis et al. . These issues will limit some of the benefits of public knowledgebases until there is shared trust in the data and a common framework for variant sharing .

Analysis of the 28.6% of in-house only variants shows that they are mostly seen in a single patient and are enriched, relative to the set of reported variants, for indels and tumour suppressor genes. This characterisation is not unexpected, as they often represent loss-of-function (LOF) variants in tumour suppressor genes, which can commonly be disrupted by indel and splice junction variants but are non-recurrent in other patients. In contrast, gain-of-function (GOF) variants are typically focussed at a hotspot locus and, if actionable, are well documented in therapeutically focussed public knowledgebases.

This study has shown that the widespread use of variant knowledgebases by laboratories has limitations for the scalability of clinical diagnostic sequencing. This is the case even with a trusted in-house variant database built up over many years, or with public genomic resources, which are not yet comprehensive enough or sufficiently standardised to augment or replace in-house curated resources. Even when observed variants are matched with public resources, effort is needed to take external variants and apply laboratory SOPs and accreditation standards prior to reporting and storing them as a trusted in-house entry.
Further, there will always be classes of variants, such as loss-of-function variants, that do not commonly recur and often will not find their way into public resources. These variants still require expert analysis of their consequences within a patient's clinical context, even though the clinical information about them may be scarce. Sophisticated computational algorithms arguably have the greatest potential to relieve the variant curation bottleneck. A large number of pathogenicity prediction algorithms are currently available, but these tools need to be applied with caution because of their high false positive rates and confounding in the data used to train some of the algorithms . This is recognised by the ACMG guidelines for germline variants and the AMP guidelines for somatic variant curation, which specify that pathogenicity predictors must only be applied as supporting evidence in variant classification . A detailed comparison of pathogenicity prediction tools may be found in Suybeng et al. . Machine-learning approaches, such as natural language processing to train curation models from the medical literature and deep-learning methods for variants, may provide greater value in increasing the throughput of clinical variant interpretation, and perhaps provide the greatest hope of relieving the curation bottleneck .

This study demonstrates the challenges faced by clinical cancer genomics laboratories in efficiently delivering clinical genomic reports in the face of an increasing variant curation workload. Our work highlights that, particularly for somatic analysis, increasing the genomic coverage for clinical reporting can increase the curation workload, and a large percentage of the newly identified variants will be absent from variant resources and require greater curation effort.
Further, particular classes of variants, such as loss-of-function variants in tumour suppressor genes and private patient mutations, do not appear recurrently in patients, and their curation has little chance of reuse for subsequent patients. Although this study is from a large public cancer hospital, it is anticipated that genetic analysis in complex diseases other than cancer will involve many of the same issues and limitations described here. As personalised medicine is more widely adopted, with greater sample numbers and larger genomic regions interrogated, we will have to rely more on developments in computational methods facilitating more automated approaches.

This section covers details of additional methods used in the study. PathOS annotates variants from sequencing pipelines and presents them within a web application for pathogenicity classification prior to the generation of a clinical report. For this study we consider only SNVs and short indels. Although PathOS contains copy number variant data derived from off-target alignments , these data were captured for only a subset of the cohort and so were not included in the analysis. To identify relevant variants, sequenced patient samples are aligned to the GRCh37 reference genome and variants are called using GATK best-practice pipelines combined with in-house variant calling software . The called variants are normalised and 3' shifted using custom software and Mutalyzer , and annotated using Variant Effect Predictor and other sources to enrich the annotation for variant curation and pathogenicity classification. A single curated RefSeq transcript is selected as representative of the variant locus and used as the basis for consistent calling of variants within gene coding regions. All sequenced samples are quality assessed using FastQC, and variants are filtered for common sequencing artefacts.
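The 3' shifting step mentioned above aligns an indel to its rightmost equivalent position, as HGVS nomenclature requires. The study used Mutalyzer and custom software; the illustrative function below handles only the simple case of a deletion within a single reference sequence.

```python
def shift_3prime_deletion(ref: str, pos: int, length: int) -> int:
    """Return the rightmost (3'-most) 0-based start position at which
    deleting `length` bases from `ref` yields the same resulting
    sequence as deleting them at `pos`.

    Sliding the deletion window one base right leaves the result
    unchanged exactly when the base entering the window equals the
    base leaving it.
    """
    while pos + length < len(ref) and ref[pos + length] == ref[pos]:
        pos += 1
    return pos

# Deleting any single T from the TTTT run gives the same sequence;
# HGVS requires the 3'-most representation.
print(shift_3prime_deletion("ACGTTTTA", 3, 1))  # → 6 (the last T)
```

The same rotation test generalises to multi-base deletions, e.g. a two-base "AC" deletion in a dinucleotide repeat shifts to the final repeat unit.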
Curated variants are classified for pathogenicity according to ACMG or AMP guidelines by experienced molecular postdoctoral scientists specialising in cancer biology, using laboratory SOPs that adhere to accredited standards including ISO 15189. Of the variants analysed, curated and stored in PathOS, 69.1% (n = 23,255) were clinically reported (see Fig. ). The burden of curation remains just as high for the non-reported variants (typically VUS, likely benign and benign), to ensure that they are not false negatives for diagnostic reporting.

In this analysis, the PathOS variant data were compared against four publicly available variant databases: COSMIC, ClinVar, GA4GH Beacons and VICC. Variants in normalised HGVSg nomenclature were compared with those present within the VICC knowledgebase, queried on 1st September 2019 . ClinVar variants were downloaded on 6th September 2019 and matched on HGVSg. COSMIC variants were downloaded on 11th July 2019 and matched on HGVSp position and reference allele, but not the alternate allele, to maximise matching. The Beacon Network was also queried for the presence of PathOS reported variants via its web service on 1st October 2019, using HGVSg position. If a variant was identified from ClinVar/COSMIC/VICC/Beacons but not identified through matching the data downloads from the individual resources, the matching variants were consolidated. Beacons serving computationally derived datasets such as CADD , or aggregators of computationally derived datasets (i.e. dbNSFP ), were filtered out, as they overestimate the presence of a variant without the ability to assess its validity. A subset of PathOS variants with no matches in the previously described public resources was submitted to a well-known commercial tertiary analysis platform to assess the value such resources can provide in variant annotation and pathogenicity assessment.
Amplicon-based assays (n = 36) were used throughout the study period, with genomic coverage ranging from 21.9 kilobases (Kb) to 158.2 Kb. Hybrid capture assays (n = 8) with larger genomic coverage (421.8–2994 Kb), covering 90 to 701 genes (see Table , Additional file : Figure S9), are replacing the amplicon assays over time. When creating Table , duplicate patient samples (n = 80) occurring in multiple analysis groups and assays were excluded from the patient and average counts to prevent biasing the analysis.

Data analysis and linear modelling were conducted using R 3.5.1. Mann-Kendall tests were conducted using the R package Kendall. Beta coefficients between models were compared by computing a Z-score to test for equality, rejecting equality at the 5% level of significance if the coefficients differed. To forecast the number of curations per sample per month, Holt-Winters exponential smoothing with a trend component and without a seasonal component was applied using the HoltWinters function in the R stats package . A two-sample test of equality of proportions with continuity correction was applied to compare percentages using the R function prop.test . Analysis of variance between groups was conducted using the R stats package. Comparisons between mean values were performed with a two-tailed Student's t-test. A P value of less than 0.05 was considered statistically significant.

Additional file 1. Table S1 : Excel worksheet of panel attributes. Additional file 2. Table S2 : 24 variants with discordant pathogenicity classifications between CTAP and PathOS. Table S3 : Correlation analysis of variant recurrence at the gene level between PathOS and publicly available datasets from ICGC and COSMIC, using Pearson's correlation coefficient and log2 scale. Figure S1 : Breakdown of the numbers of genes in common across each analysis group and assay.
(a) All genes surveyed in amplicon assays; (b) all genes surveyed in hyb-capture assays; (c) genes containing clinically reported variants only, across all assays. (d–f) The genes in common between assays (amplicon and hyb-capture) by analysis type. (g–i) The genes containing reported variants in common between assays by analysis type. Figure S2 : A gene-level comparison of consequence between PathOS and VICC. The first column shows the top 20 genes in PathOS. The top row shows the genes coloured by ONC/TSG classification, and the black diamond shows the number of distinct variants seen for each gene. The oncogenes have few distinct variants, while TSGs and ONC/TSGs have many variants occurring in the gene, highlighting the focal nature of oncogene mutations. The third column shows variants seen in PathOS after removing corresponding VICC variants, showing that common oncogenes appear in VICC but far fewer TSGs and ONC/TSGs do. Figure S3 : These graphs compare the top 20 variant loci between PathOS and VICC. The third graph shows the top 20 variant loci in PathOS after removing matching VICC variants; the predominance of TSG genes becomes apparent. Figure S4 : Breakdown of novel variants not matching public cancer variant annotation resources by analysis type (n = 2356). Each variant is classified by functional consequence and coloured by pathogenicity level. Note the high number of somatic, missense, VUS variants. Figure S5 : Breakdown of PathOS-only variants not matching public cancer variant annotation resources by analysis type (somatic, haematological and germline) (n = 2356). Each variant is classified by functional consequence, coloured by pathogenicity level and separated by oncogene or TSG classification. Figure S6 : Barplot of somatic solid variants curated, by clinical contexts with > 100 variants.
CUP = cancer of unknown primary, NSCLC = non-small cell lung cancer, MEL = melanoma, OVCA = ovarian cancer, PCA = prostate cancer, CRC = colorectal cancer, TCC = urothelial carcinoma, SARC = sarcoma. Figure S7 : Comparison of patient counts by gene for reported variants between the in-house database (PathOS), COSMIC and ICGC for the patient clinical contexts of melanoma, colorectal and haematological malignancies. Figure S8 : Variant interpretation resources are not all considered equal from a somatic variant curation perspective; resources with higher curation offer more value than observational or computationally derived resources. Figure S9 : Patient samples analysed per month. A large increase in germline analysis can be observed when hyb-capture assays were implemented in 2017. There is also a steady increase over time in somatic molecular haematology (Mol_haem) samples across both assays. The number of somatic solid samples has remained relatively consistent since 2014, but an increasing number of samples have been analysed with hyb-capture assays since 2017. Plotted values are calculated using a three-month rolling average.
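The workload forecast described in the statistical analysis used Holt-Winters exponential smoothing with a trend component and no seasonal component — that is, Holt's linear method. The study used R's HoltWinters; the pure-Python sketch below implements the same recurrences, with illustrative smoothing parameters (R estimates them by minimising squared one-step errors).

```python
def holt_forecast(series, alpha, beta, horizon):
    """Holt's linear-trend exponential smoothing (no seasonal component).

    Fits a level and a trend to `series` with smoothing parameters
    alpha and beta, then returns `horizon` out-of-sample forecasts.
    """
    level = series[0]
    trend = series[1] - series[0]  # simple initialisation
    for y in series[1:]:
        prev_level = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return [level + (h + 1) * trend for h in range(horizon)]

# On a perfectly linear series the method recovers the line exactly.
print(holt_forecast([1.0, 2.0, 3.0, 4.0, 5.0], 0.5, 0.5, 2))  # → [6.0, 7.0]
```

Applied to the monthly curations-per-sample series, the forecast extrapolates the fitted level plus h times the fitted trend for each future month h.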
Editorial: Non-coding genome and endocrinology: from bench to bedside

RP: Writing – original draft. KN: Writing – original draft, Writing – review & editing. GC: Supervision, Writing – review & editing.
EducAR: implementing a multicomponent strategy to improve therapeutic adherence in rheumatoid arthritis

Treatment adherence is a problem in rheumatoid arthritis; therefore, EULAR has recently issued points to consider; however, implementation may be challenging. A group of professionals and patients developed a website that serves both as a patient education tool and as a guide to implementing the EULAR points to consider. The tool was tested in a cluster clinical trial and followed by a discussion with the implementation teams. The results showed that the tool needed additional tuning to be truly implemented. Implementation of best practices is not easy; a thorough understanding of the reasons for not using a seemingly useful tool is key.

Rheumatoid arthritis (RA) is a systemic autoimmune disease characterised by a form of erosive arthritis that causes severe disability. Numerous currently available treatments have proven effective in controlling and preventing the disease’s complications. However, it is estimated that between 20% and 50% of RA patients are not adherent to their treatment. In RA, non-adherence to treatment has been associated with increased disease activity and a higher degree of disability, which in turn leads to higher healthcare expenditure, with an increase in both direct and indirect costs. In sum, the lack of therapeutic adherence in RA constitutes a problem of great magnitude that requires the development of effective interventions. Numerous factors involved in adherence to treatment have been identified, some modifiable and others not. To facilitate their study, the WHO proposes classifying them into five groups: socioeconomic, health system-related, disease-related, medication-related and patient-related factors.
When developing interventions to improve adherence, it is necessary to acknowledge that adherence is a complex phenomenon that cannot be explained by a single factor but results from the interaction of several. The results of the “Adherence in RA” (ADHIERA) study, a multilevel analysis conducted in Spain on predictors of adherence in patients with RA, showed that non-adherence is influenced by psychological, communicational and logistic factors to a greater extent than by the sociodemographic and clinical characteristics of the patients. In 2020, the European Alliance of Associations for Rheumatology (EULAR) published points to consider (PtC) for detecting, preventing and managing non-adherence in rheumatic diseases, based on a series of systematic reviews. The PtC highlight the need for a multifaceted and tailored approach to non-adherence. Multicomponent interventions, including patient education components, have the largest effect on patient adherence. In 2015, EULAR had already published PtC for effective patient education, which proposed not only informing but also empowering the patient to participate in decision-making in the context of a planned and interactive learning process. In fact, the patient’s involvement in the decision-making process is critical for adherence to medication. Evidence-based recommendations are useless if not implemented in clinical practice. Implementation is a complex process involving many steps in cycles that involve relevant stakeholders, a team, analysis of the context and evaluation, among other components. It is possible that many of the PtC suggested to improve adherence may not be easily implemented in our setting, especially since implementation takes leadership and resources.
The aim of this work was threefold: (1) to cocreate a multicomponent intervention strategy to improve adherence to pharmacological and non-pharmacological treatments in RA based on the best available evidence; (2) to evaluate its implementation through an intervention study and (3) to analyse barriers and facilitators for the implementation of this strategy in a qualitative study.

Development of the multicomponent intervention strategy

A nominal group meeting was held with a multidisciplinary panel including rheumatologists, psychologists, nurses, RA patients, a hospital pharmacist and a graphic designer, together with two implementation researchers. All participants received prior information on existing interventions, the EULAR PtC for adherence and the results of the ADHIERA study. The objective of this meeting, moderated by a methodologist, was to identify how to translate the PtC into concrete, implementable actions. All processes were made transparent and commented on a Miro board accessible to all. The proposals obtained at the meeting were voted on anonymously in a Delphi survey for prioritisation. The development team then designed a proposal based on a website with two subsites, as suggested by the panel, which was fine-tuned through email iterations and during a second meeting. The time spent on the development of the platform was 8 months.

Cluster randomised intervention study

The efficacy of the multicomponent strategy designed to improve adherence was evaluated in a 6-month cluster randomised intervention study. We invited centres that had already participated in a study of adherence, reasoning that their motivation to change behaviour would facilitate demonstrating the effect we were seeking. Fifteen centres were randomised to receive access to the intervention or not, using the RAND function in Excel.
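The randomisation step — sorting centres by Excel's RAND() and splitting them into two arms — has a direct equivalent in Python. A sketch follows; the centre names, the seed and the 8/7 split are illustrative, since the paper does not state the allocation ratio.

```python
import random

def randomise_centres(centres, seed=42):
    """Shuffle centres by a random sort key (analogous to sorting on an
    Excel RAND column) and split them into two arms."""
    rng = random.Random(seed)
    order = sorted(centres, key=lambda _: rng.random())
    cut = (len(order) + 1) // 2  # 8/7 split for 15 centres (assumption)
    return {"intervention": order[:cut], "control": order[cut:]}

arms = randomise_centres([f"centre_{i:02d}" for i in range(1, 16)])
print(len(arms["intervention"]), len(arms["control"]))  # → 8 7
```

Fixing the seed makes the allocation reproducible, which a trial audit trail would require; Excel's RAND offers no such guarantee.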
After an informative session with the intervention centres and a 3-month window for the centres to implement the intervention as they preferred, all centres began recruiting consecutive patients with <2 years of RA. All centres were instructed to continue care as usual; in addition, the intervention group had access to the educational videos and aids included on the website, giving patients access to the information platform. The outcome variable was adherence at 6 months, defined as a score >80% on both the Compliance Questionnaire on Rheumatology and the Reported Adherence to Medication scale. Secondary outcomes were adherence to healthy habits, such as exercise (Exercise Attitude Questionnaire-18 ) and the Mediterranean diet (Mediterranean Diet Adherence Screener questionnaire ), disease activity (Disease Activity Score 28 with erythrocyte sedimentation rate, DAS28-ESR), cardiovascular risk factors (body mass index, blood pressure, glycated haemoglobin, cholesterol and smoking) and the degree of satisfaction with the medical care received (Arthritis Treatment Satisfaction Questionnaire). The effect of the intervention on adherence was analysed by logistic regression using 6-month adherence as the dependent variable and the study group (intervention or control) as the exposure variable. Crude models were adjusted for baseline adherence, and potential confounders were studied. The efficacy in terms of the secondary outcomes was analysed by creating change variables (6 months minus baseline) and using Student’s t-tests or Mann-Whitney U tests, according to the normality of these difference variables. Missing data were not imputed.

Analysis of implementation barriers and facilitators

A focus group explored the level of implementation, the strategies used, and the barriers and facilitators to implementing the multicomponent strategy. Rheumatologists and nurses from the centres in the intervention group of the efficacy study participated.
A single focus group meeting was held to discuss the difficulties encountered in implementing the tool. The meeting was conducted according to a pre-established guide via Zoom, was recorded, and lasted 1 hour, but it could be extended if the discourse was not saturated. Two rheumatologists with implementation expertise facilitated the meeting, one of whom was taking notes and cross-checking them with the group. Participants discussed the dissemination strategies in their departments and how the tool was used. They also discussed potential causes of low implementation, the most useful components, difficulties in using the tool, whether they had received feedback from colleagues and patients and any aspect that could be improved in the tool and the implementation process. The content of the discourse was transcribed, transferred into bullet points, organised into trees and codes using Word processor tools (headings and subheadings), inductively, and cross-checked with the notes. Once synthesised and organised, it was cross-checked with the group. The implementation rate was defined as the percentage of uptake, that is, the number of rheumatologists and nurses who used the tool divided by the total number in their department.
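The adherence and secondary-outcome analyses described in the methods can be sketched as follows. This is a minimal, hypothetical illustration with simulated data, not the study's actual statistical code: the crude odds ratio for the intervention is computed from the 2×2 adherence table, and a change score is tested with Student's t-test or the Mann-Whitney U test depending on normality.

```python
# Hypothetical sketch of the adherence analysis; all data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
group = rng.integers(0, 2, 141)                 # 1 = intervention centre
# Simulated 6-month adherence, using the observed rates (47% vs 67%)
adherent_6m = rng.random(141) < np.where(group == 1, 0.47, 0.67)

# Crude odds ratio of adherence for intervention vs control
a = np.sum((group == 1) & adherent_6m)          # intervention, adherent
b = np.sum((group == 1) & ~adherent_6m)         # intervention, non-adherent
c = np.sum((group == 0) & adherent_6m)          # control, adherent
d = np.sum((group == 0) & ~adherent_6m)         # control, non-adherent
crude_or = (a * d) / (b * c)

# Secondary outcome: change variable = 6 months minus baseline,
# tested with Student's t-test if roughly normal, Mann-Whitney U otherwise
change = rng.normal(-0.3, 1.0, 141)             # e.g. simulated DAS28 change
if stats.shapiro(change).pvalue > 0.05:
    result = stats.ttest_ind(change[group == 1], change[group == 0])
else:
    result = stats.mannwhitneyu(change[group == 1], change[group == 0])
```

In the study itself, the adjusted model additionally conditioned on baseline adherence and candidate confounders; a full logistic regression would be fitted on the real trial data rather than a crude 2×2 table.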
Multicomponent intervention strategy Considering the resources available, the strategy chosen was a website, www.proyectoeducar.es , with two clearly separated sites, one for patients and another for healthcare professionals . Both are freely accessible, but before the clinical trial, we did not disseminate the website, and the physician’s part of the website was password-protected. The website offers educational resources for individuals with RA and healthcare professionals, covering aspects that can improve adherence, as suggested in the PtC paper, such as self-management tips, calendars and advice to enhance the doctor–patient relationship and ask the right adherence questions. The tool includes decision-making tools developed by the graphic designer based on the information contained in the summary of product characteristics and Cochrane reviews. It also contains calendars and diaries. For healthcare professionals, it includes short videos on how (and how not) to show empathy, increase patient confidence, ask open-ended questions, handle relatives, dispel fears and deal with difficult patients and time management, as well as checklists and guides for the clinical interview. Cluster randomised intervention study The sample consisted of 141 patients with RA (67 in the control group and 74 in the intervention group). Most were women (76%) with a median age of 56 years and a time since diagnosis of 12 months. Median joint counts were 0 and 2 for swollen and painful joints, respectively. Seropositivity was 75% for both rheumatoid factor and anti-CCP antibodies. In relation to treatment, 77% were receiving first-line conventional synthetic disease-modifying drugs (csDMARDs), 41% corticosteroids, 29% biologic DMARDs and 28% non-steroidal anti-inflammatory drugs (NSAIDs). A baseline comparison of the study groups demonstrated inefficient randomisation with significant differences in disease activity, comorbidity, cardiovascular risk factors and concomitant treatments.
The control group had higher disease activity with higher swollen joint count (median 1 vs 0; p=0.026), and visual analogue scale (VAS) score (4 vs 2; p=0.004), as well as higher frequency of biologic treatment (39% vs 20%; p=0.016) and greater cardiovascular (34% vs 19%; p=0.034), respiratory (16% vs 5%; p=0.034) and digestive (19% vs 4%; p=0.006) comorbidity. At 6-month follow-up, an increase in adherence was observed in both study groups, although of greater magnitude in the control group (from 48% to 67%) than in the intervention group (from 42% to 47%). In addition, there was a decrease in ESR (from 17 to 12 in the control group and from 15 to 11 in the intervention group) and in the count of painful joints in the intervention group and swollen joints in the control group (medians from 1 to 0 in both cases) . The analysis of the efficacy of the intervention on adherence is shown in . The crude model showed that adherence at follow-up decreased with the intervention (OR=0.4; p=0.025) and increased in those patients who were adherent at baseline (OR=4.25; p<0.0001), were receiving biological treatment (OR=2.25; p=0.046) and had respiratory comorbidity (OR=4.95; p=0.043). In the multivariate model, the main determinant of adherence at follow-up was baseline adherence (OR=3.92; p=0.001), while the intervention was associated with a decrease in adherence (OR=0.41; p=0.040). Regarding the efficacy of the intervention on the secondary outcome measures, the only observed differences were the greater decrease in the number of painful joints in the control group than in the intervention group (difference of 1.63 vs 0.42; p=0.004) and the greater decrease in triglyceride concentration in the intervention group (difference of 8.81 vs −7.53; p=0.030) . Analysis of implementation barriers and facilitators Nine participants attended the focus group, representing all but one of the centres where EducAR was implemented.
Despite all the intervention team members being invited to participate, the group was attended by two rheumatologists and seven nurses. Notwithstanding a high degree of acceptability, the implementation rate was low overall, ranging from 10% to 66% of the members of the rheumatology departments in the intervention group. The reasons given for the poor implementation were lack of time, redundancy with other existing materials, inadequate focus (exclusively for nurses), specialisation of rheumatologists with little interest in patients with recent onset arthritis and consideration of the standard of care as already adequate. The materials most used were the videos and the treatment information sheets or decision-making aids. Overall, the aids related to summary information, especially on medications, the printable materials (treatment cards and calendar) and the effective communication videos were considered very useful. In addition, the web format was considered a facilitating element for the young population. Those who used the materials were very satisfied with them and found them very useful. The main difficulties encountered were related to the difficulty of older patients in accessing the internet, the lack of perceived need in the case of patients already diagnosed, despite all being below 2 years of disease duration, and the absence of some important resources, such as a video on how to use MTX. Other aspects that may influence the implementation fidelity are the delegation of responsibility, for example, believing that EducAR is designed exclusively for nurses, and the lack of motivation to change their current management, which they consider adequate. Interestingly, when confronted with medication adherence results in their centres, the health professionals were surprised at how low they were. 
We tried to bridge the gap between research evidence in adherence to pharmacological and non-pharmacological treatments in RA and actual practice by combining tailored interventions with user-centred design methodologies. However, the strategy chosen for implementation failed, and adherence at 6 months was only determined by baseline adherence, not by being assigned to the intervention or the control group. Analysing the barriers and facilitators to implementing the EducAR strategy has provided us with valuable insights that allow us to significantly improve both the usability and implementation aspects, potentially enhancing therapeutic adherence in rheumatology in the future. The research team decided to use a web-based approach for the strategy because they thought the web would be the most adaptable format and have the broadest outreach. However, discrete strategies may not work as effectively as multifaceted ones and have limited sustainability. To underpin our implementation strategy, the barriers analysed after the implementation provided us with very clear messages: (1) physicians see educational websites as mainly for nurses; (2) they may undervalue the power of physician–patient communication to generate desired behaviours and patient satisfaction; (3) they do not see the need to change their behaviour given their time constraints. Patient education is a core role of nurses. However, nurses are not available in all rheumatology departments, and physicians can enhance patient education without extending visit times by implementing tailored approaches, individualising feedback and using teaching aids, just like the ones proposed in EducAR. Furthermore, it has been proven that one of the critical steps in reaching optimal treatment adherence is involving the patient in a shared decision-making process, a process in which it is difficult for the nurse to be involved. The materials designed for EducAR are freely available and can be used to compare therapeutic options with the patient.
A QR code has been prepared to provide access to the website and allow patients to compare the options at home. In one centre in the project where the rheumatologists downloaded the decision aids, these were used widely, and both patients and the healthcare team were very satisfied. Time constraints are one of the most common reasons for not implementing PtC in clinical practice. Many physicians believe open questions and effective communication are time-consuming. However, it has been shown that training in effective communication, as was the objective of the educational videos in EducAR, can lead to greater patient satisfaction without extending the duration of the visit, ultimately improving the efficiency of medical encounters. Finally, readiness is critical for health professionals to change behaviour. If rheumatologists think that their patients are adherent enough and that they communicate well, then there is no need to introduce any change. Feedback and measurement are key. A survey that evaluated patient–physician communication and treatment goal understanding in 502 RA patients and 216 physicians found that the perception of short- and long-term treatment goals differs between patients with RA and the physicians treating them, highlighting the importance of aligning treatment goals through effective communication for improved patient satisfaction and treatment outcomes. We learnt that implementation cannot be achieved with a 1-hour standardisation webinar. It needs dedicated follow-up and adaptation in each centre until fidelity can be ensured. The website can be used as a placeholder for an educational programme covering adherence, shared decision-making, patient education and effective communication. Follow-up visits (or virtual meetings) can be planned, thus becoming a true implementation plan with proper evaluation and analysis of the context (eg, already used materials or strategies that can be as useful as the ones included in the programme).
Our take-home message is that a discrete implementation strategy such as the EducAR website, even if it has been cocreated by its end users and is highly acceptable, cannot improve adherence in the short term without an implementation plan. Using the website as the foundation, we must establish a plan that includes (1) feedback on the reality of the patient’s adherence and rheumatologist and nurse communication styles, (2) reassurance that training in effective communication does not necessarily increase visit time and (3) an educational programme with follow-up. Finally, as with any implementation plan, it must include periodic evaluation and adaptation. We will now focus on developing an educational programme and using the website for outreach. This will include testimonials from patients and healthcare professionals highlighting the most useful parts and improving the web with the suggestions from the focus group, such as creating offline versions and adding videos for methotrexate.
“The leading role of pathology in assessing the somatic molecular alterations of cancer: Position Paper of the European Society of Pathology”: letter to the Editor | d8837273-c7e0-4678-897a-a343c187cb4f | 7969541 | Pathology[mh] | |
Mechanism of Formononetin in Improving Energy Metabolism and Alleviating Neuronal Injury in | 5681d076-c779-44d9-87ff-6c0318460a6d | 11850093 | Biochemistry[mh] | Introduction Ischaemic stroke, characterised by its high morbidity, recurrence rate and mortality, presents a significant and challenging medical condition. Clinically, drug thrombolysis serves as the primary approach to swiftly restore blood supply to ischaemic brain tissue. However, the rapid restoration of blood flow can trigger compensatory mechanisms that worsen brain tissue damage, leading to cerebral ischaemia–reperfusion injury (CIRI), profoundly impacting patients' quality of life . Despite considerable advancements in the diagnosis and treatment of CIRI in modern medicine, challenges such as postoperative rehabilitation difficulties and adverse drug reactions persist . Hence, there exists an urgent necessity to explore and develop novel treatment modalities and medications to more effectively tackle the challenges in CIRI management. The distinctive benefits of natural products in ameliorating CIRI have garnered widespread attention . A randomised controlled clinical study demonstrated that saffron extract effectively enhances the antioxidant capacity of ischaemic stroke patients following thrombolysis, thereby reducing neurological deficits . Another clinical investigation revealed that ginkgolide B regulates brain energy metabolism and tissue oxygenation, consequently improving brain injury . Furthermore, a meta‐analysis highlighted the neuroprotective potential of curcumin in CIRI, attributed to its antioxidant and anti‐inflammatory properties . Understanding the mechanisms underlying the efficacy of natural products in alleviating CIRI holds significant importance for their broad application. Energy metabolism dysfunction and oxidative stress injury play pivotal roles in exacerbating CIRI . 
Following cerebral ischaemia, local brain tissue experiences a blockade in oxygen and glucose supply, depleting the energy metabolite adenosine triphosphate (ATP) and triggering an escalation in the ischaemic injury cascade. Upon restoration of blood supply to ischaemic brain tissue, various pathological processes, including oxidative stress and inflammatory reactions, exacerbate mitochondrial damage, thereby worsening energy metabolism dysfunction. Studies have validated that reinstating ATP synthesis can effectively enhance neuronal viability and facilitate recovery from CIRI . Furthermore, a meta‐analysis has underscored the significant elevation of oxidative stress levels in stroke patients, with antioxidant therapy demonstrating substantial efficacy in reducing the infarcted brain area and mitigating poor outcomes . Enhancing energy metabolism to bolster neuronal repair stands out as a crucial mechanism in the treatment of CIRI utilising natural products. Ginsenosides have shown efficacy in regulating energy metabolism in CIRI rats, enhancing mitochondrial activity and stimulating ATP production, thereby exerting neuroprotective effects . Formononetin (FMN), a prominent member of the isoflavone family, is commonly found in traditional Chinese medicine, such as Astragalus membranaceus (Fisch.) Bge. It exhibits pharmacological properties including antioxidation, anti‐infection, anti‐apoptosis and enhanced blood circulation, holding significant promise in the prevention and treatment of neurological disorders like stroke . However, the therapeutic effects and mechanisms of FMN on CIRI remain underexplored. Here, we aim to elucidate the therapeutic potential of FMN on CIRI and further investigate its underlying mechanisms using metabolomics techniques. Initially, we induced a CIRI rat model via middle cerebral artery occlusion reperfusion (MCAO/R) to assess the beneficial effects of FMN on CIRI.
Subsequently, metabolomics techniques were employed to explore the impact of FMN intervention on brain tissue metabolites in CIRI rats. Based on the metabolomics results, our focus centred on validating FMN's effects on nicotinate and nicotinamide metabolism, alanine, aspartate and glutamate metabolism, as well as its influence on neuronal injury and repair. Methods 2.1 Animals and Reagents We procured SPF‐grade healthy male SD rats, aged 6–8 weeks and weighing around 230 g, from Beijing SPF Biotechnology Co. Ltd., bearing the animal licence number SCXK (Beijing) 2019–0010. Each cage accommodated five rats, and they were maintained under standard SPF‐grade conditions with regular feeding. Approval for all animal experiments was obtained from the Ethical Review Committee of Animal Experiments in Yunnan University of Chinese Medicine (Approval No.: R‐062023LH265), dated March 07, 2023. Details regarding the materials and kits utilised in this experiment are provided in the . 2.2 Modelling, Grouping and Administration We established a rat model of cerebral ischaemia–reperfusion injury using the suture occlusion method . Initially, rats underwent an 8‐h fasting period with access to water ad libitum before modelling. Subsequently, they were intraperitoneally anaesthetised with pentobarbital sodium at a dose of 50 mg/kg. Following anaesthesia, rats were positioned supinely, securely fixed and the surgical area was disinfected. A midline incision was made in the neck to expose the left common carotid artery, external carotid artery and internal carotid artery. The external carotid artery was ligated at its bifurcation with the internal carotid artery. A small incision was then made in the left common carotid artery, and a 0.26‐mm‐diameter nylon suture was inserted and utilised to occlude the common carotid artery. The incision was sutured, leaving the suture ends protruding outside the body. 
Reperfusion was initiated 2 h postsurgery by withdrawing the suture to the level of the common carotid artery. The Sham‐operated group underwent identical surgical procedures, excluding ligation and suture insertion. We collected tissue samples and measured relevant indicators following a 5‐day reperfusion period. Ninety SD rats were randomly allocated into six groups: Sham‐operated group (Sham), CIRI group, Ginaton group (GNT), low‐dose FMN group (L‐FMN), medium‐dose FMN group (M‐FMN) and high‐dose FMN group (H‐FMN). CIRI models were induced in all groups except the Sham group. Postmodelling, the Sham and CIRI groups received 0.01 mL/g of physiological saline via gavage, while the GNT group was administered 21.6 mg/kg of Ginaton via the same route. The L‐FMN, M‐FMN and H‐FMN groups were orally administered 15, 30 and 60 mg/kg of FMN, respectively, once daily for five consecutive days. The dosage of GNT and FMN was set according to previous studies . After model establishment and drug administration, six rats from each group were used for 2,3,5‐triphenyl tetrazolium chloride (TTC) staining, while the cerebral cortex tissue from the ischaemic side of the brains from the remaining nine rats was divided into four parts for further experiments, including pathological staining, untargeted metabolomics analysis, detection of SOD, MDA, ROS and ATP in brain tissue, RT‐qPCR, Western blot and immunofluorescence assays (Figure ). 2.3 Neurological Function Evaluation We utilised the Longa score and asymmetry score to evaluate neurological dysfunction in rats. The Longa score, a scale ranging from 0 to 4, delineates the severity of deficits: 0 signifies no observable neurological impairment, 1 denotes an inability to extend the contralateral forepaw, 2 signifies circling towards the contralateral side, 3 indicates falling towards the contralateral side and 4 represents loss of consciousness and spontaneous ambulation. 
Meanwhile, the asymmetry score, determined by the frequency of forelimb contact with the cage when the tail is elevated, complements this assessment. The calculation formula is as follows: Asymmetry score = (Left − Right) / (Both + Left + Right) × 100% 2.4 Cerebral Infarction Area Assessment We evaluated the infarct area in CIRI rats based on prior research . In this procedure, six rats were randomly chosen from each group, anaesthetised and their brain tissues were dissected and subsequently stored in a − 20°C refrigerator for 30 min. Subsequently, the brains were embedded in a brain‐slicing mould and sliced coronally into 2‐mm‐thick sections. These sections were then immersed in a 2% TTC staining solution and incubated in darkness at 37°C for 30 min, with rotation every 5 min to ensure consistent staining. Following staining, the brain sections underwent photography and documentation. The infarct area in each rat group was precisely quantified using Image J software. 2.5 Pathological Staining Brain tissue sections underwent staining with haematoxylin and eosin (HE) , Nissl and terminal deoxynucleotidyl transferase dUTP nick end labelling (TUNEL) staining methods, as per established protocols. Briefly, cerebral cortex tissue from the same ischaemic region was obtained, fixed with 4% paraformaldehyde, dehydrated with alcohol and embedded in paraffin. The tissue was then cut into 5‐μm sections, which were dewaxed with xylene and washed with water for 20 min. Subsequently, the sections were stained with haematoxylin and eosin or Nissl stain using 0.1% toluidine blue. For TUNEL staining, 2‐μm sections were prepared and stained following the instructions provided in the TUNEL kit. Subsequently, we observed the sections under a microscope. The extent of pathological damage in the HE‐stained sections was evaluated using the degenerative cell index (DCI), computed as the ratio of degenerative cells to the total cell count .
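The asymmetry score and the degenerative cell index defined above are simple ratios. The helper below is a minimal illustration of both formulas; the example counts are hypothetical.

```python
def asymmetry_score(left, right, both):
    """Asymmetry score (%) from forelimb wall-contact counts:
    (Left - Right) / (Both + Left + Right) * 100."""
    total = both + left + right
    return (left - right) / total * 100 if total else 0.0

def degenerative_cell_index(degenerative_cells, total_cells):
    """DCI: ratio of degenerative cells to the total cell count."""
    return degenerative_cells / total_cells

# Hypothetical counts: a rat favouring the left forelimb scores positive
score = asymmetry_score(left=12, right=4, both=8)   # (12-4)/24*100 = 33.3%
dci = degenerative_cell_index(30, 120)              # 0.25
```

A score near 0% indicates symmetric forelimb use; larger absolute values indicate greater unilateral impairment.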
Quantitative analysis of Nissl bodies and TUNEL‐positive areas was conducted using Image J software. 2.6 Untargeted Metabolomics Analysis We collected brain tissue samples from the ischaemic side, which were then ground using liquid nitrogen and diluted with water at a mass‐to‐volume ratio of 1:3 to create a tissue suspension. Subsequently, we added methanol containing internal standards at a volume ratio of 1:4, mixed the solution, allowed it to stand, centrifuged it and transferred the supernatant to GC vials. The supernatant was then transformed into a dry powder using a concentration instrument. This dry powder was combined with a methoxyamine pyridine solution, allowed to stand for a defined period and then mixed with N‐methyl‐N‐(trimethylsilyl) trifluoroacetamide, followed by another period of standing. Finally, we added an external standard solution, mixed it and analysed it using the instrument. We prepared quality control (QC) samples by mixing equal volumes of each sample. Further details on sample processing, GC/MS detection and data analysis are available in the . 2.7 SOD , MDA , ROS , ATP Detection in Brain Tissue We collected cerebral cortex tissue from the ischaemic side and homogenised the tissue. We normalised the total protein concentration of the samples using the BCA method. Following this, we detected the activity of SOD and measured the levels of MDA, ROS and ATP in the brain tissue, adhering to the instructions provided in the kit. 2.8 RT ‐ qPCR Detection Following the extraction of total RNA from cerebral cortex tissue of the ischaemic side, we added TRIzol reagent to preserve RNA integrity. Subsequently, we measured the purity and concentration of total RNA and performed reverse transcription to generate complementary DNA (cDNA). The samples were then loaded into a 96‐well plate for PCR amplification. 
We recorded the cycle threshold (Ct value) for each reaction tube, indicating the number of cycles required for the fluorescent signal to reach a predefined threshold. Utilising the 2^−ΔΔCt method, we calculated the relative expression levels of each target mRNA compared to Actb . Primer sequences are available in the (Table ).

2.9 Western Blot Assay

We retrieved cerebral cortex tissue of the ischaemic side from the −80°C freezer, minced it and placed it in a 1.5‐mL EP tube. RIPA lysis buffer was then added, and the tissue underwent homogenisation and lysis for 30 min. Following centrifugation, we collected the supernatant and determined the concentration of total protein using the BCA method. The protein was subsequently separated via SDS‐PAGE electrophoresis and transferred to a PVDF membrane. After blocking the membrane with 5% skim milk and washing it with TBST, we added the primary antibody against the target protein, allowing it to incubate overnight at 4°C. The next day, we washed the membrane five times with TBST and applied the appropriate secondary antibody for a 2‐h incubation on a rocker at 37°C. Following another round of washing with TBST, we developed the membrane using ECL and exposed it. Analysis was conducted using Image J software, with β‐actin serving as an internal control for the calculation of relative expression levels of the target protein.

2.10 Immunofluorescence Assay

We processed the fixed cerebral cortex tissue of the ischaemic side through gradient ethanol dehydration, clearing, wax immersion and embedding for slicing. After dewaxing the sections with xylene and ethanol, we performed antigen retrieval and blocked them with goat serum for 30 min. Ki67 antibodies were then applied and left to incubate overnight in a dark, humid chamber at 4°C. Following this, we added fluorescent labels and allowed them to incubate for 1 h at 37°C in a humid chamber.
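Returning to the RT‐qPCR analysis in 2.8, the 2^−ΔΔCt calculation reduces to a few lines. A sketch with hypothetical Ct values (not the study's data), using Actb as the reference gene:

```python
def ddct_fold_change(ct_target_treated: float, ct_ref_treated: float,
                     ct_target_control: float, ct_ref_control: float) -> float:
    """Relative expression by the 2^-ddCt method.

    dCt  = Ct(target) - Ct(reference, e.g. Actb)
    ddCt = dCt(treated) - dCt(control)
    fold = 2 ** -ddCt
    """
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    return 2.0 ** -(d_ct_treated - d_ct_control)


# Hypothetical example: the target crosses threshold 2 cycles earlier in
# treated tissue, i.e. roughly 4-fold up-regulation relative to control.
print(ddct_fold_change(24.0, 18.0, 26.0, 18.0))  # -> 4.0
```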
Nuclei were stained with DAPI in a dark setting, and the slides were mounted with antifade mounting medium for observation. We quantified the positive regions using Image Pro Plus 6.0 software.

2.11 Statistical Analysis

We conducted statistical analysis using SPSS Pro, and we presented all data as mean ± SD. When data adhered to a normal distribution and exhibited homogeneity of variance among groups, we employed a t‐test or one‐way ANOVA. Alternatively, if the data did not meet the criteria for normal distribution, we utilised a rank‐sum test for analysis. We considered a p‐value less than 0.05 as statistically significant.

Animals and Reagents

We procured SPF‐grade healthy male SD rats, aged 6–8 weeks and weighing around 230 g, from Beijing SPF Biotechnology Co. Ltd., bearing the animal licence number SCXK (Beijing) 2019–0010. Each cage accommodated five rats, and they were maintained under standard SPF‐grade conditions with regular feeding. Approval for all animal experiments was obtained from the Ethical Review Committee of Animal Experiments in Yunnan University of Chinese Medicine (Approval No.: R‐062023LH265), dated March 07, 2023. Details regarding the materials and kits utilised in this experiment are provided in the .

Modelling, Grouping and Administration

We established a rat model of cerebral ischaemia–reperfusion injury using the suture occlusion method . Initially, rats underwent an 8‐h fasting period with access to water ad libitum before modelling. Subsequently, they were intraperitoneally anaesthetised with pentobarbital sodium at a dose of 50 mg/kg. Following anaesthesia, rats were positioned supinely, securely fixed and the surgical area was disinfected. A midline incision was made in the neck to expose the left common carotid artery, external carotid artery and internal carotid artery. The external carotid artery was ligated at its bifurcation with the internal carotid artery.
A small incision was then made in the left common carotid artery, and a 0.26‐mm‐diameter nylon suture was inserted and utilised to occlude the common carotid artery. The incision was sutured, leaving the suture ends protruding outside the body. Reperfusion was initiated 2 h postsurgery by withdrawing the suture to the level of the common carotid artery. The Sham‐operated group underwent identical surgical procedures, excluding ligation and suture insertion. We collected tissue samples and measured relevant indicators following a 5‐day reperfusion period. Ninety SD rats were randomly allocated into six groups: Sham‐operated group (Sham), CIRI group, Ginaton group (GNT), low‐dose FMN group (L‐FMN), medium‐dose FMN group (M‐FMN) and high‐dose FMN group (H‐FMN). CIRI models were induced in all groups except the Sham group. Postmodelling, the Sham and CIRI groups received 0.01 mL/g of physiological saline via gavage, while the GNT group was administered 21.6 mg/kg of Ginaton via the same route. The L‐FMN, M‐FMN and H‐FMN groups were orally administered 15, 30 and 60 mg/kg of FMN, respectively, once daily for five consecutive days. The dosage of GNT and FMN was set according to previous studies . After model establishment and drug administration, six rats from each group were used for 2,3,5‐triphenyl tetrazolium chloride (TTC) staining, while the cerebral cortex tissue from the ischaemic side of the brains from the remaining nine rats was divided into four parts for further experiments, including pathological staining, untargeted metabolomics analysis, detection of SOD, MDA, ROS and ATP in brain tissue, RT‐qPCR, Western blot and immunofluorescence assays (Figure ).

Neurological Function Evaluation

We utilised the Longa score and asymmetry score to evaluate neurological dysfunction in rats.
The Longa score, a scale ranging from 0 to 4, delineates the severity of deficits: 0 signifies no observable neurological impairment, 1 denotes an inability to extend the contralateral forepaw, 2 signifies circling towards the contralateral side, 3 indicates falling towards the contralateral side and 4 represents loss of consciousness and spontaneous ambulation. Meanwhile, the asymmetry score, determined by the frequency of forelimb contact with the cage when the tail is elevated, complements this assessment.

Results

3.1 The Therapeutic Effect of FMN on CIRI Rats

Throughout the evaluation of FMN's therapeutic effect on CIRI rats, we utilised the Longa score and asymmetry score to gauge neurological dysfunction (Figure ). We employed TTC staining to assess cerebral infarction and HE along with Nissl staining to evaluate brain tissue pathology. Our findings from the Longa score and asymmetry score indicated that CIRI rats displayed notably higher scores compared to the Sham group, signifying neurological impairment. However, intervention with FMN significantly mitigated these deficits in CIRI rats. TTC staining illustrated a significant increase in the infarct area in CIRI rats compared to the Sham group, yet after 5 days of FMN intervention, this area notably decreased (Figure ). HE staining unveiled pathological changes in the brain tissue of CIRI rats, including increased glial cells, loose intercellular substance and nuclear pyknosis, in contrast to the Sham group. FMN intervention effectively attenuated these damages (Figure ).
Nissl staining demonstrated a significant reduction in the number of Nissl bodies in CIRI rats compared to the Sham group, which was reversed by FMN intervention (Figure ). The therapeutic efficacy of different doses of FMN suggested a dose‐dependent response in treating CIRI. Consequently, we proceeded with further analysis focusing on the H‐FMN group.

3.2 The Effects of FMN on the Metabolic Pathways in the Brain Tissue of CIRI Rats

Principal component analysis (PCA) of nontargeted metabolomics in rat brain tissue samples delineated clear separations among the Sham, CIRI and H‐FMN groups, with clustered data within each group (Figure ). This observation indicates notable differences in metabolite levels within brain tissue among the three groups. Utilising the PLS‐DA model to identify differential metabolites and validating the model via permutation testing, we obtained R² values of (0.00, 0.79) and Q² values of (0.00, −0.80) for CIRI versus Sham, and R² values of (0.00, 0.78) and Q² values of (0.00, −0.84) for H‐FMN versus CIRI. These findings signify the robust fitting and predictive capacity of the statistical model (Figure ). Subsequently, we screened the differential metabolites among the groups based on the criteria of p < 0.05, VIP > 1, and fold change greater than 1.5 or less than 0.67. Differential metabolites were visualised by volcano plots (Figure ). Comprehensive information on the identified differential metabolites is available in the (differential metabolites of CIRI vs. Sham are shown in Table and differential metabolites of H‐FMN vs. CIRI groups are shown in Table ). We conducted KEGG pathway enrichment analysis on the identified differential metabolites using MetaboAnalyst 6.0. The selection criteria for key pathways were p < 0.05 and pathway impact > 0.05. 'Impact' is used to evaluate the importance and contribution of a metabolic pathway. Additionally, 'Hits' reflect the number of metabolites detected within that pathway.
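The two screening steps described in this subsection (differential-metabolite filtering and key-pathway selection) amount to plain threshold filters. A minimal sketch, with made-up records rather than the study's tables:

```python
def is_differential(p: float, vip: float, fold_change: float) -> bool:
    """Criteria from the text: p < 0.05, VIP > 1, FC > 1.5 or FC < 0.67."""
    return p < 0.05 and vip > 1 and (fold_change > 1.5 or fold_change < 0.67)


def is_key_pathway(p: float, impact: float) -> bool:
    """Key-pathway criteria: p < 0.05 and pathway impact > 0.05."""
    return p < 0.05 and impact > 0.05


# Illustrative records only (not values from the study):
metabolites = [
    {"name": "nicotinamide", "p": 0.01, "vip": 1.8, "fc": 0.5},  # down in CIRI
    {"name": "unchanged",    "p": 0.40, "vip": 0.6, "fc": 1.1},
]
hits = [m["name"] for m in metabolites if is_differential(m["p"], m["vip"], m["fc"])]
print(hits)  # -> ['nicotinamide']
```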
The outcomes revealed nicotinate and nicotinamide metabolism (map00760) and alanine, aspartate and glutamate metabolism (map00250) as key pathways for CIRI versus Sham, and H‐FMN versus CIRI. Notably, these pathways overlapped, suggesting their significance as key metabolic pathways for FMN‐mediated improvement in CIRI (Figure ). Hence, we proceeded to validate the key factors within these pathways.

3.3 Effects of FMN on Antioxidative‐Related Pathway and Oxidative Stress Injury in Brain Tissue

In nicotinate and nicotinamide metabolism, our screening process identified nicotinamide (NAM), L‐aspartic acid (L‐Asp), fumaric acid (FA) and gamma‐aminobutyric acid (GABA) as key metabolites, which were consistent with previous reports . Relative to the Sham group, these metabolite levels significantly decreased in the CIRI group, whereas FMN intervention elevated their levels (Figure ). Examination of the distribution and expression of these metabolites in nicotinate and nicotinamide metabolism revealed that L‐Asp, FA and GABA serve as upstream or downstream products of alanine, aspartate and glutamate metabolism, thereby corroborating FMN's role in enhancing this metabolic pathway (Figure ). NAM, a central metabolite in nicotinate and nicotinamide metabolism, is well‐known for its antioxidant properties . Given FMN's ability to increase NAM levels, we assessed FMN's antioxidant effects in CIRI. Our findings demonstrated that FMN significantly boosted SOD activity in ischaemic brain tissue and lowered MDA and ROS levels (Figure ). Furthermore, to evaluate FMN's efficacy in mitigating cell damage induced by oxidative stress, we performed TUNEL staining, which revealed FMN's substantial amelioration of cellular damage caused by oxidative stress (Figure ).

3.4 Effects of FMN on Key Energy Metabolism and Brain Tissue Repair

Building upon our preceding research, we directed our focus towards validating alanine, aspartate and glutamate metabolism.
Specifically, L‐Asp, FA, GABA and L‐glutamic acid (L‐Glu) emerged as key metabolites in our screenings. Our findings unveiled a significant reversal in the reduction of these metabolites following FMN intervention (Figure ). Examination of the distribution and expression of these metabolites within alanine, aspartate and glutamate metabolism highlighted their role as upstream products of the tricarboxylic acid (TCA) cycle, thereby influencing energy metabolism in brain tissue. Adenylosuccinate lyase (ADSL) and glutamic acid decarboxylase (GAD) emerged as pivotal enzymes governing the metabolism of these metabolites (Figure ). Consequently, we proceeded to assess the impact of FMN on the gene and protein expression of ADSL and GAD through RT‐qPCR and Western blot techniques. Our results demonstrated that FMN notably upregulated the gene and protein expression of ADSL and GAD (Figure ). Furthermore, we evaluated changes in ATP levels in rat brain tissue, revealing that FMN intervention bolstered ATP content in the ischaemic brain tissue of CIRI rats, thereby enhancing their energy metabolism and exerting neurotrophic effects (Figure ). This assertion was corroborated by Ki67 immunofluorescence results, which illustrated FMN's significant promotion of neuronal cell proliferation (Figure ).

Discussion

CIRI poses a considerable challenge for the outcomes of ischaemic stroke .
In this study, we assessed neural functional impairment in rats utilising the Longa score and asymmetry score . The Longa score focuses on evaluating neurological deficits and recovery, while the asymmetry score emphasises motor asymmetry assessment. Our findings indicate that FMN effectively mitigates neurological deficits in CIRI rats. Previous research has highlighted the cascade reaction during cerebral ischaemia–reperfusion, exacerbating brain injury and resulting in enlarged infarct areas, increased cellular damage, elevated glial cell counts and reduced neuron numbers . However, our staining results with TTC, HE and Nissl reveal that FMN significantly reverses these pathological conditions in CIRI rats. This reversal encompasses reductions in enlarged infarct areas, decreased glial cell counts and increased Nissl bodies, suggesting FMN's potential as a therapeutic agent for CIRI. Furthermore, our comparison with Ginaton as a positive control demonstrates that high‐dose FMN yields comparable efficacy to Ginaton in CIRI treatment. Previous studies have shown that the molecular mechanisms by which FMN improves cerebral ischaemia–reperfusion injury (CIRI) include regulating the JAK2/STAT3 signalling pathway , the PARP‐1/PARG/Iduna signalling pathway and the PI3K/Akt signalling pathway , inhibiting endoplasmic reticulum stress and apoptosis and enhancing cerebrovascular neovascularisation . However, no studies have yet elucidated the metabolic regulatory mechanisms by which FMN improves CIRI. Therefore, we conducted a preliminary exploration of the metabolic mechanism of FMN on CIRI based on untargeted metabolomics. We found that FMN mitigated brain injury and conferred neuroprotective effects by modulating nicotinate and nicotinamide metabolism, as well as alanine, aspartate and glutamate metabolism. Prior studies have underscored the crucial inhibitory role of nicotinate and nicotinamide metabolism in oxidative stress–induced damage . 
Our findings indicate that FMN upregulates the levels of NAM, a core metabolite in nicotinate and nicotinamide metabolism. NAM, the amide form of niacin, serves as an essential precursor of nicotinamide adenine dinucleotide and is pivotal for energy metabolism and cellular function. Research has confirmed NAM's ability to inhibit oxidative stress in mouse models of Parkinson's disease, thereby exerting neuroprotective effects . During cerebral ischaemia–reperfusion injury, an imbalance between oxidation and antioxidation within neurons leads to the generation of excessive oxygen free radicals . These radicals attack reperfused neurons and healthy neighbouring neurons, exacerbating neural functional impairment . Supplementation with NAM effectively boosts the body's antioxidant capacity, enhances superoxide dismutase (SOD) activity and reduces levels of malondialdehyde (MDA) and reactive oxygen species (ROS), thus mitigating oxidative stress–induced damage . Our study demonstrates that FMN upregulates NAM levels. Subsequent assessments of oxidative stress factors indicate that FMN intervention enhances the antioxidant capacity of CIRI rats and reduces oxidative stress–induced neuronal damage. Furthermore, L‐Asp, FA and GABA are vital metabolites in nicotinate and nicotinamide metabolism, serving as crucial upstream and downstream metabolites in alanine, aspartate and glutamate metabolism. Within the alanine, aspartate and glutamate metabolism pathway, L‐Asp, recognised for its antifatigue properties, undergoes metabolisation into FA by ADSL. Previous investigations have highlighted FA's antioxidant attributes, its facilitation of DNA damage repair and its role in mitigating cellular injury. Crucially, FA is integral to energy metabolism within the TCA cycle . Our research findings demonstrate that FMN effectively reverses the decline in L‐Asp and FA levels while upregulating ADSL expression.
This suggests FMN's capacity to enhance energy metabolism in CIRI rats by modulating L‐Asp and FA. Notably, our results are substantiated by the significant increase in ATP content observed in ischaemic brain tissue following FMN intervention. Furthermore, the crucial roles of L‐Glu and GABA in neurological diseases have gained widespread recognition, as they collaborate to regulate nervous system function . L‐Glu, serving as a metabolic precursor of GABA, undergoes decarboxylation by GAD to generate GABA. A portion of the L‐Glu and GABA present in the synaptic cleft undergoes further conversion into glutamine (Gln) and L‐Glu through the reuptake mechanism facilitated by glial cells. This metabolic loop involving L‐Glu/GABA‐Gln maintains the balance between L‐Glu‐mediated neural excitation and GABA‐mediated neural inhibition, thereby conferring neuroprotective effects . Moreover, GABA can also participate in the TCA cycle as a precursor of succinate, thereby regulating the body's energy metabolism . Our findings indicate that FMN enhances the levels of L‐Glu and GABA while upregulating GAD expression, suggesting FMN's neuroprotective effects on CIRI rats through modulation of L‐Glu and GABA. This conclusion is further supported by the immunofluorescence results of Ki67. Of course, the metabolism of an organism is a dynamic process, and the metabolic changes observed in our research represent merely a temporal snapshot. While this may not fully encapsulate the overall metabolic dynamics of disease progression, the metabolic differences within this temporal snapshot offer us a glimpse into significant changes. In future research, the application of spatial metabolomics and metabolic flux technology will facilitate the observation of the holistic dynamic changes of metabolites during disease progression, thereby further elucidating the pharmacological mechanisms of FMN in a comprehensive manner. 
Furthermore, transcriptomics offers a comprehensive analysis of gene expression profiles in cells under specific conditions, and combining the results of transcriptomics is expected to provide new insights into the molecular mechanisms by which FMN exerts neuroprotective effects. These in‐depth mechanistic studies will facilitate the discovery of specific targets for FMN on CIRI and provide essential basic research data for the development of FMN‐related clinical medications.

Conclusion

In summary, FMN exhibits considerable therapeutic promise as a potential treatment for CIRI. Mechanistic studies utilising nontargeted metabolomics indicate that FMN mitigates oxidative stress damage and promotes the restoration of energy metabolism in CIRI rats by modulating nicotinate and nicotinamide metabolism, as well as alanine, aspartate and glutamate metabolism, thus conferring neuroprotective effects (Figure ). This study provides new insights into the metabolic mechanism of FMN on CIRI and offers candidates for the development of neurorepair drugs for CIRI from the perspective of energy metabolism.

Jianwen Zhao: funding acquisition (equal), investigation (equal), writing – original draft (equal). Yanwei Zhang: data curation (equal), investigation (equal). Shuquan Lv: formal analysis (equal), validation (equal). Feng Wang: formal analysis (equal), validation (equal). Ting Shan: formal analysis (equal), validation (equal). Jian Wang: investigation (equal), visualization (equal). Zeng Liu: investigation (equal), visualization (equal). Limin Zhang: writing – review and editing (equal). Huantian Cui: conceptualization (lead). Junbiao Tian: conceptualization (equal), writing – review and editing (equal).

The authors declare no conflicts of interest.

Data S1.
The past and future of industrial hygiene in Japan

Industrial hygiene in Japan has generally been considered to have emerged in the mid-to-late 1950s. Of course, even before the 1950s, the importance of ensuring workers’ health had been recognized, mainly in the medical field; however, it was not until the “Hepburn Sandal Incident” that industrial hygiene research, which incorporated technology and information from the science and engineering fields, was launched in earnest under the leadership of the Japanese government. The Hepburn Sandal Incident was a major outbreak of occupational disease in Japan during the mid-to-late 1950s, triggered by the success of an American romantic movie. The movie “Roman Holiday”, released in Japan in 1954, was a huge hit, and the sandals worn by the lead actress (Audrey Hepburn) in the movie immediately became widely popular among young Japanese women. At this time, most footwear used in Japan, including sandals, was produced by small-scale manufacturers with only several employees each. Unfortunately, at a time when laws and regulations to protect workers’ health were absent, most workers at sandal manufacturers were exposed to, and unprotected against, toxic solvents such as benzene used in the production processes. Benzene, which today requires extremely strict control due to its high carcinogenic potential, was not regulated in Japan at that time. Therefore, workers in sandal manufacturing workshops, many of whom were young women, were exposed to high concentrations of benzene vapor on a daily basis, which produced a large number of victims in a short period of time. The Japanese government responded promptly and promulgated the Ordinance on Prevention of Organic Solvent Poisoning in 1960 to prevent incidents of benzene poisoning, which had frequently occurred among small-scale footwear manufacturers.
The ordinance was subsequently incorporated into the Industrial Safety and Health Law (1972) and has continued to significantly impact Japanese industrial hygiene from 1960 to the present considering that the ordinance specifies the methods for measuring organic solvent concentrations and ventilation requirements for workplaces involved with organic solvents. On the other hand, the major early administrative measure in Japan for occupational dust exposure was the enactment of the Pneumoconiosis Law in 1960. Unlike the Ordinance on Prevention of Organic Solvent Poisoning mentioned earlier, the Pneumoconiosis Law regulates workers’ health care and does not provide for working environment control. Thus, it had no significant and direct impact on industrial hygiene research in Japan. The Pneumoconiosis Law was amended several times thereafter; however, even 20 years after its enactment, it made no significant contribution to the reduction of pneumoconiosis. In 1978, the Japanese government enacted the Ordinance on Prevention of Hazards Due to Dust , which mandated the wetting and sealing of dust sources, installation of various types of ventilators, wearing of personal protective equipment, and working environment measurements. This ordinance contributed to the promotion of research on methods of measuring dust concentration, particle size, and chemical composition, as well as research on techniques to protect workers from dust, such as designing effective ventilation systems and the development of high-performance dust masks. The ordinance can be considered successful given that it promoted a decrease in the number of newly diagnosed pneumoconiosis cases from 6,842 in 1980 to 124 in 2020. Indeed, industrial hygiene in Japan has reduced the number of occupational diseases in conjunction with various government regulations; however, it must be noted that the needs for industrial hygiene have gradually changed as society has evolved. 
Since the mid-20th century, the share of the tertiary sector in Japanese industry has steadily expanded. According to the Japanese Census, the share of tertiary workers in 1950 was 29.6%, whereas that in 2019 was 71.2%. In line with this, ensuring the health of office workers, caregivers, delivery service providers, and hospitality workers, such as by preventing low back pain, muscle fatigue, eye strain, and passive smoking, emerged as an important issue for industrial hygiene, increasing the presence of ergonomics, aerosol science, and chemical engineering. In this context, the relative presence of conventional industrial hygiene declined over time, and the “Osaka Occupational Cholangiocarcinoma Disaster (2012)” occurred. The “Osaka Occupational Cholangiocarcinoma Disaster” was an industrial disease outbreak at a small printing factory in Osaka City, in which 17 employees developed cholangiocarcinoma, among whom 9 died. Subsequent investigations found that the primary cause of their cholangiocarcinoma was exposure to dichloropropane (DCP), which was used to clean the printing presses. However, no legal restrictions on the use of DCP were in place at this time. This industrial disease prompted Japanese labor administrators and industrial hygienists to recognize that conventional control of chemical substances through legal restrictions alone was insufficient to protect workers’ health. The Japanese government immediately designated DCP as a regulated substance while developing a new law on risk assessment for chemicals. Currently, DCP is classified as a “special organic solvent” under the Ordinance on Prevention of Hazards Due to Specified Chemical Substances , which requires particularly strict control measures for use.
In 2016, the Industrial Safety and Health Law was amended to require discretionary risk assessment, in which chemical users have discretion in the frequency and method of their assessment, for 640 chemicals, including approximately 520 chemicals that have yet to be legally regulated. Since 2016, chemicals subject to risk assessment have been added continuously, with risk assessment being mandatory for 674 chemicals as of January 2023. In the future, the Japanese government looks to increase the number of substances subject to risk assessment, which is expected to reach approximately 3,000 substances within a few years. In addition, the government intends to require personal exposure measurement using a personal sampler in addition to conventional working environment measurements based on area sampling. Along with these, the government also plans to essentially abolish the Ordinance on Prevention of Organic Solvent Poisoning , the Ordinance on Prevention of Hazards Due to Specified Chemical Substances , and other ordinances that have been the cornerstones of Japanese industrial hygiene, although no definite date has yet been finalized as of January 2023. As mentioned earlier, these ordinances specify not only the measurement procedures of the substances concerned but also countermeasures against their exposure. For example, when a local ventilation system (LEV) is applied to prevent exposure to regulated organic solvents, the current ordinance specifies the type of exhaust hood to be applied and the exhaust flow velocity. Therefore, after the abolishment of the ordinance, the users of the organic solvent will be responsible for selecting exposure control methods, including the LEV at their own discretion. However, it may be difficult for most users to select appropriate control methods independently. 
Currently, the Ministry of Health, Labor and Welfare, Japan (MHLW) is preparing “Recommended Case Studies for Reducing Chemical Exposure” through the National Institute of Occupational Safety and Health, Japan (JNIOSH), which, once completed and released, will be of great benefit to the many industrial hygienists who are struggling with countermeasures against hazardous substances. One of the serious problems facing the Japanese industrial hygiene system in the near future will be the shortage of young, professionally trained industrial hygienists. In fact, until just a few years ago, three Japanese universities, namely Kitasato University, the University of Occupational and Environmental Health, Japan (UOEH), and Waseda University, offered specialized industrial hygiene courses, but now only the School of Health Sciences at UOEH remains. Furthermore, even JNIOSH, which is meant to be the national center of occupational safety and health research in Japan, appears likely to abolish its research branch on industrial ventilation within a few years. As such, the future of industrial hygiene in Japan will perhaps be directed not by experts from universities or public research institutes but primarily by engineers from ventilator or protective equipment manufacturers, or by publicly licensed professionals, such as certified consultants, occupational hygienists, industrial physicians, official health supervisors, and environmental measurement specialists who are in charge of health and safety practices in the workplace.
The utility of cerebrospinal fluid–derived cell-free DNA in molecular diagnostics for the PIK3CA-related megalencephaly-capillary malformation (MCAP) syndrome

The PIK3CA -related megalencephaly-capillary malformation (MCAP) syndrome (MIM #602501) is a multisystem overgrowth disorder caused by mosaic gain-of-function (activating) variants in PIK3CA (MIM #171834). The most common features of MCAP include diffuse or focal brain overgrowth (i.e., megalencephaly [MEG] or hemimegalencephaly [HMEG]), cortical abnormalities (predominantly polymicrogyria and focal cortical dysplasia), vascular malformations, digital anomalies (cutaneous syndactyly, polydactyly), and other skin and connective tissue abnormalities . Activating PIK3CA variants cause a wide range of overgrowth phenotypes, collectively termed PIK3CA -related overgrowth spectrum (PROS). Given that these disorders are caused by mosaic variants, the yield from molecular diagnostic testing is higher when affected or lesional tissues are available for testing . Previous studies have shown that affected tissues (e.g., skin fibroblast) have a higher diagnostic yield than peripheral blood or saliva . Therefore, obtaining affected tissues in PROS is important for establishing an accurate molecular diagnosis. Among the neurological phenotypes that fall under PROS, PIK3CA mutational hotspots (notably variants c.1624G > A p.Glu542Lys, c.1633G > A p.Glu545Lys, c.3140A > T p.His1047Leu, and c.3140A > G p.His1047Arg) can be associated with more severe brain phenotypes such as focal cortical dysplasia (FCD), HMEG, and dysplastic megalencephaly (DMEG) with or without severe segmental body overgrowth, whereas less-activating somatic variants cause MCAP that is most often characterized by diffuse megalencephaly and polymicrogyria (PMG) . Among the common comorbidities associated with these PIK3CA -related brain phenotypes is epilepsy.
About 30% of individuals with MCAP have seizures and refractory epilepsy is not uncommon, especially in those with cortical dysplasia . Establishing a molecular diagnosis early in individuals with PIK3CA -related brain phenotypes is not only helpful to better understand the disorder and its prognosis, but it can also have important therapeutic implications, especially as PI3K-AKT-MTOR pathway inhibitors are beginning to show promising results in treating epilepsy and neuropsychiatric disorders in association with other disorders within this pathway such as the tuberous sclerosis complex (TSC) . Select MTOR, AKT, or PI3K inhibitors have been proposed as therapeutic options for treating refractory epilepsy in children with activating mutations of this pathway. However, for individuals not eligible for epilepsy surgery, establishing the molecular cause to determine whether these molecularly targeted therapies can be used poses an important diagnostic challenge . Therefore, alternative approaches for detecting mosaic variants are warranted, especially for those with brain-restricted mosaic variants. Free-floating cell-free DNA (cfDNA) has recently become a standard source for cancer genomic profiling and prenatal diagnostics . cfDNA from cerebrospinal fluid (CSF) has also recently emerged as an alternative source for molecular diagnostics in brain tumors . Detection of somatic cancer variants in CSF-derived cfDNA has further encouraged researchers to investigate whether mosaic variants underlying other developmental brain disorders can be detected in CSF cfDNA. Two recent studies have shown that known somatic variants in the brain were detectable in CSF-derived cfDNA in individuals with HMEG, FCD, ganglioglioma, and subcortical band heterotopia (SBH) . These studies show that CSF-derived cfDNA can serve as a “proxy” tissue source to resected brain tissues for sequencing. 
Therefore, utilizing CSF-derived cfDNA has emerging potential in achieving a molecular diagnosis for individuals with mosaic brain malformations and other developmental brain disorders. Cell-free DNA obtained by minimally invasive procedures, such as lumbar puncture, can facilitate an earlier molecular diagnosis as well as consideration of medical management options (i.e., PI3K-AKT3-MTOR pathway inhibitors) prior to more invasive surgical resection. It can also offer a potential biomarker to monitor disease activity and treatment response. Here, we report the first report of an individual with MCAP syndrome secondary to a mosaic PIK3CA variant that was successfully detected in CSF-derived cfDNA, confirming his diagnosis. Case Report This boy was delivered at term following an uncomplicated pregnancy. Shortly after birth, he was identified to have macrocephaly, right-sided asymmetric overgrowth, and abnormal skin pigmentation with extensive deep purple-red vascular markings that were widely distributed over his body. He was initially clinically misdiagnosed with Klippel–Trenaunay syndrome (KTS). He also had thrombocytopenia requiring a platelet transfusion at 2 d of life. His neurological exam showed diffuse hypotonia. Cranial magnetic resonance imaging (MRI) obtained soon after birth showed asymmetric brain overgrowth and cerebellar tonsillar herniation ( , panels). At later follow-up, he had progressive hydrocephalus requiring ventriculoperitoneal shunt placement at 6 mo of age. He later underwent laser treatment for the capillary malformation on his upper lip and right cheek. History is also notable for intestinal lymphangiectasia leading to episodes of diarrhea and nutritional deficiency during his early childhood. Developmentally, he had moderate speech delays with major delays in his gross motor skills. Early physical examinations showed apparent macrocephaly and right-sided asymmetric overgrowth of the face and extremities. 
Measurements performed at 3 yr of age showed that his right ear was 6 cm (97th percentile) while the left was 4.7 cm (50th percentile). His right hand measured 9.9 cm (third percentile) from middle fingertip to wrist, whereas his left hand measured 9.3 cm (∼1st percentile) . His head appeared megalencephalic with a prominent venous pattern over the scalp. His occipitofrontal circumference (OFC) at 3 yr of age was 58.5 cm (+5.8 SD), and later grew to 65.5 cm (+7 SD) at 19 yr of age. He also had pinpoint elevated capillary malformations that ranged in size from 5 × 5 mm to 1 × 1 cm all over the scalp. After laser ablation, he had residual vascular staining on his right cheek, right upper lip, and more extensive irregular vascular patterns over the left arm ( A). In his late childhood, he developed recurrent lymphedema, protein-losing enteropathy, and pleural effusions ( K). At age 19 yr, he was admitted to the pediatric intensive care unit (PICU) because of capillary leak syndrome with systemic inflammation. He was later found to have atypical lymphocytes in pleural and peritoneal fluid and increased fluorodeoxyglucose (FDG) uptake in bilateral cervical, mediastinal, abdominal, and pelvic lymph nodes on entire body positron emission tomography (PET) scan. Excisional biopsy of cervical lymph nodes showed sheets of atypical cells with large, vesicular nuclei with prominent nucleoli and scanty cytoplasm. Immunocytochemically, cells stained positive for CD20 and CD45. Flow cytometry immunotyping confirmed the diagnosis of diffuse large B-cell lymphoma (DLBCL). Involved sites included cervical, mediastinal, pelvic lymph nodes, and spleen with bowel wall thickening and pleural effusion (stage IIIb). He underwent lumbar puncture (LP) four times for staging and during chemotherapy with additional CSF collected for molecular diagnostics as well. There were no atypical lymphocytes found in any of the four CSF samples. 
He completed chemotherapy (R-CHOP [rituximab, cyclophosphamide, hydroxydaunomycin, vincristine, and prednisone]) without further recurrence. Cranial MRI obtained during this period showed asymmetric megalencephaly with mildly abnormal cortical gyral pattern, asymmetric dysplastic ventricles, and cerebellar tonsillar ectopia ( F,G,J). He had only three seizures, occurring over a short period during chemotherapy. The semiology of his seizures included tonic seizure of the left upper extremity with eye deviation to the left, followed by bilateral tonic seizure, apnea, and desaturation. Seizures lasted ∼1–2 min with postictal confusion for several minutes. Seizures responded well to levetiracetam, and he therefore did not require epilepsy surgery. We previously published his clinical features as part of a large clinical-molecular series prior to his molecular diagnostic workup using cfDNA (case LR14-300) . Molecular Analysis The proband underwent molecular diagnostic testing around the time when his lymphoma was diagnosed. Sequencing was performed using a clinically validated targeted multigene panel (the megaplex) performed in a College of American Pathologists (CAP)-accredited, Clinical Laboratory Improvement Amendments of 1988 (CLIA)-certified laboratory as previously reported . A mosaic variant in PIK3CA (NM_006218.2: c.3139C > T, p.His1047Tyr) was detected at a variant allele fraction (VAF) of 2% (variant [var]/reference [ref] reads: 8/394) in peripheral blood and 37.31% (var/ref reads: 673/1131) in cultured skin fibroblasts. The variant was identified in CSF-derived cfDNA at a VAF of 3.08% (var/ref reads: 14/440) . A summary of the child's molecular findings in all samples is shown in . This variant has been previously published in association with MCAP syndrome and is listed as pathogenic in ClinVar ( https://www.ncbi.nlm.nih.gov/clinvar/variation/39705/ ).
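As an arithmetic aside (an illustrative sketch, not part of the validated clinical assay; the function name is hypothetical), each reported VAF follows directly from the variant/reference read counts, since VAF = variant reads / (variant reads + reference reads):

```python
# Illustrative check of the reported variant allele fractions (VAFs) for the
# PIK3CA c.3139C>T (p.His1047Tyr) variant across the three sampled tissues.
def vaf_percent(var_reads: int, ref_reads: int) -> float:
    """VAF as a percentage: variant reads over total reads at the site."""
    return 100.0 * var_reads / (var_reads + ref_reads)

samples = {
    "peripheral blood":          (8, 394),    # reported ~2%
    "cultured skin fibroblasts": (673, 1131), # reported 37.31%
    "CSF-derived cfDNA":         (14, 440),   # reported 3.08%
}

for tissue, (var, ref) in samples.items():
    print(f"{tissue}: VAF = {vaf_percent(var, ref):.2f}%")
# peripheral blood: VAF = 1.99%
# cultured skin fibroblasts: VAF = 37.31%
# CSF-derived cfDNA: VAF = 3.08%
```

The sharply higher fraction in cultured fibroblasts versus peripheral blood illustrates why lesional or proxy tissues are favored for detecting mosaic variants.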
Sequencing and detection of genetic variants underlying mosaic and tissue-restricted disorders typically rely on the availability of affected (or lesional) tissues. Here, we demonstrate the utility of CSF-derived cfDNA-based molecular diagnosis in PIK3CA -related MCAP syndrome. This case report has several helpful clinical implications. First, it demonstrates the utility of sequencing cfDNA from CSF to achieve a molecular diagnosis in the absence of affected or lesional brain tissues, which is particularly useful for individuals who have isolated or tissue-restricted mosaicism. Second, establishing a molecular diagnosis prior to undergoing invasive epilepsy surgery could potentially shift the paradigm of current testing and treatment strategies, especially as MTOR inhibitors are becoming more widely used .
The PIK3CA p.His1047Tyr variant identified in this proband lies within the most commonly mutated codon within the kinase domain of the gene and has been reported multiple times as a disease-causing variant . It has been previously identified in individuals with CLOVES (congenital lipomatous asymmetric overgrowth of the trunk, lymphatic, capillary, venous, and combined-type vascular malformations, epidermal nevi, skeletal and spinal anomalies) (MIM #612918) and MCAP syndromes . Missense variants in this codon have been shown to cause PI3K-AKT-MTOR pathway hyperactivation and were also reported in various types of cancer tissues . It is uncertain whether individuals with PIK3CA -related overgrowth syndrome (PROS) are at risk for specific types of cancer. An association with Wilms’ tumor has been anecdotally suggested but not proven . The cancer risk in PROS in general and in MCAP in particular, however, continues to be unknown, and there are no data suggesting an association between DLBCL and MCAP, with only one other individual with MCAP and leukemia diagnosed in adolescence previously reported . In the individual reported here, DNA extracted from lymphomatous tissue showed a 1.3% VAF (var/ref = 8/597) for the PIK3CA variant, which was similar to the level detected in the peripheral blood before the occurrence of DLBCL, and several additional somatic variants were detected in the lymphoma at a much higher VAF. Therefore, we conclude that this PIK3CA variant is unlikely to be the cause of his DLBCL. Moreover, earlier studies have shown activation of PI3K-AKT3-MTOR in cases with DLBCL, but only a small subset harbored variants in PIK3CA . Further, data from large cancer genomic databases suggest that somatic PIK3CA variants are found in hematopoietic and lymphoid cancer, including DLBCL (cBioPortal for Cancer Genomics [ https://www.cbioportal.org ] and COSMIC genomic mutation [ https://cancer.sanger.ac.uk/cosmic ]).
Notably, a high burden of somatic variants was seen in the lymphomatous tissue but not in the CSF cfDNA sample. For example, copy-number gain of Chromosome 1q including DNMT3A, and copy-number loss of Chromosome 9 including GNAQ and Chromosome 1p including MTOR and EPHB2, were seen in the lymphoma tissue but were absent in the CSF cfDNA sample. The lack of overlapping genomic findings provides further support for the nonlymphomatous origin of the PIK3CA variant in the CSF cfDNA sample in this individual. Cell-free DNA is now widely used for genomic profiling in cancer . Cell-free DNA refers to DNA present in body fluids after cell death . Plasma cfDNA in healthy individuals is mostly derived from blood cells. In individuals with cancer, the amount of plasma cfDNA increases because of high rates of apoptosis and necrosis of cancer cells . Therefore, plasma cfDNA has gained prominence in cancer diagnosis, treatment, and monitoring (a.k.a. liquid biopsy) . Body fluids can also contain cfDNA from noncancerous tissues. For example, cfDNA from cyst fluids of lymphatic malformations has been identified as a more reliable source than plasma to diagnose PIK3CA -associated lymphatic malformations . Altogether, these lines of evidence suggest that cfDNA from various body fluids, such as CSF, in direct contact with pathological tissues can be utilized as a “proxy” for molecular diagnostics. Indeed, two recent studies have shown some early yet promising evidence . In one study, CSF from an epilepsy cohort (FCD, ganglioglioma, SBH, and other tumors) contained significantly more cfDNA, which demonstrated brain-specific methylation patterns, than CSF from individuals without epilepsy (502 copies/mL vs. 61 copies/mL). Variants in several genes ( LIS1, TSC1 , and BRAF ) were detectable in CSF-derived cfDNA with VAFs ranging from 3.20% to 9.40% .
The second study included individuals with HMEG, ganglioglioma, malformation of cortical development with oligodendroglial hyperplasia in epilepsy (MOGHE), and FCD; mosaic variants in PIK3CA, BRAF , and SLC35A2 were identified in CSF-derived cfDNA with VAFs ranging from 0.136% to 1.45%. Compared with VAFs in paired brain tissues (ranging from 1.00% to 24.00%), levels of detectable mosaicism in CSF-derived cfDNA were lower . However, there seemed to be no correlation between VAFs from cfDNA and those from the affected brain tissues in studies of both brain tumors and malformations . To have a meaningful clinical impact on the diagnosis and treatment of individuals with mosaic brain disorders (i.e., pharmaceutical vs. surgical approaches), a molecular diagnosis ideally needs to precede invasive brain surgery. Mosaic variants usually cannot be reliably detected from peripheral blood given not only the low mosaicism level but also the clonality of blood cells . Therefore, CSF-derived cfDNA could potentially provide a more reliable surrogate for brain-limited mosaicism. Hence, CSF-derived cfDNA-based molecular diagnostics via lumbar puncture may provide a practical and novel method for variant identification. Notably, this individual also had cutaneous capillary malformations and body overgrowth, which constitute additional lesional tissues for sequencing and variant detection . However, these features are highly variable among affected individuals . All in all, novel molecular diagnostic approaches using minimally invasive procedures can have therapeutic implications for affected individuals. Syndromes caused by genetic variants of the PI3K-AKT-MTOR pathway share many similar features associated with dysregulated overgrowth. One notable example is the tuberous sclerosis complex (TSC), characterized by cortical tubers (which show histopathological features similar to FCD), seizures, cutaneous findings, and other systemic features .
Loss of inhibition of the PI3K-AKT-MTOR pathway secondary to TSC1 or TSC2 variants results in neuronal overgrowth , and the FDA-approved MTOR inhibitor everolimus has shown promising results in treating TSC-related refractory epilepsy , which occurs in ∼30% of patients even after surgical resection . Similarly, other pathway-specific drugs can be repurposed to treat PROS. For example, alpelisib, a PI3K inhibitor recently approved for the treatment of PIK3CA mutation–positive hormone receptor–positive advanced breast cancer, has been used to treat CLOVES with significant clinical improvement of body overgrowth . Whether this and other inhibitors can be used to treat epilepsy associated with PIK3CA -associated MCAP, HMEG or focal cortical dysplasia is still under preclinical investigation. Nevertheless, mouse models expressing hotspot PIK3CA variants and corresponding histopathological neuronal findings have shown dramatic antiepileptic response to other PI3K inhibitors (e.g., BKM120) . Although further clinical studies are needed, PI3K inhibitors or other pathway-specific drugs (such as mTOR and AKT inhibitors) might have a role in treating PI3K-AKT-MTOR pathway–related intractable or recurrent epilepsy after surgical resection. In conclusion, CSF-derived cfDNA-based molecular diagnostics provides a new method for the detection of mosaicism in individuals with developmental brain disorders. This novel method will not only facilitate an early and minimally invasive molecular diagnosis but might also have therapeutic implications in refractory epilepsy as repurposed PI3K-AKT-MTOR pathway–specific drugs are becoming more widely used. CSF was collected in a centrifuge tube and processed immediately after collection. CSF was centrifuged for 10 min at 400 g and 4°C and then the supernatant was transferred to 2 mL cryovial for further centrifuge (10 min at 16,000 g and 4°C). The cell pellet was discarded, and the supernatant was frozen. 
Quality-control (QC) data showed DNA fragment sizes ranging from 147 to 167 bp, consistent with cfDNA. Sequencing libraries were prepared from DNA samples and hybridized to a custom set of complementary RNA (cRNA) biotinylated oligonucleotides targeting the exons of 63 genes in a panel including PTEN, PIK3CA, AKT1, AKT3 , and PIK3R2 , among others, and select intronic regions for targeted DNA sequencing (Megaplex, Agilent SureSelect, Agilent Technologies, Inc). The panel is a targeted, massively parallel gene sequencing assay ( https://testguide.labmed.uw.edu/public/view/MEGPX ). The test uses next-generation "deep" sequencing to detect mutations including single-nucleotide variants (SNVs), indels, and copy-number changes including gene amplifications. DNA was extracted from CSF cfDNA, peripheral blood, and fresh tissue samples using the QIAsymphony Circulating DNA Kit (93756), the Gentra Puregene DNA Isolation Kit (158489), and the QIAsymphony DSP DNA Midi Kit (QIAGEN 937255), respectively. Sequencing libraries were constructed from DNA using KAPA Hyper Prep kits (Kapa Biosystems Inc.), and hybridization was performed with custom oligonucleotide probes (Agilent SureSelect, Agilent Technologies). DNA sequencing was performed on a massively parallel instrument (HiSeq2500 sequencing system, Illumina) with 2 × 101-bp, paired-end reads according to the manufacturer's instructions. Initial read mapping against the human reference genome (hg19/GRCh37) and alignment processing were performed using BWA version 0.6.1 ( http://sourceforge.net/projects/bio-bwa/files ) and SAMtools version 1.3.1 ( http://sourceforge.net/projects/samtools/files ), respectively. Sample-level, fully local indel realignment was then performed using GATK version 2.4.9 (Broad Institute). Duplicate reads were removed using PICARD version 1.72 ( http://broadinstitute.github.io/picard ). Quality score recalibration was then performed using GATK.
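Downstream of alignment, a mosaic call at a targeted site reduces to read counting: the variant allele fraction (VAF) is the number of variant-supporting reads divided by total depth, and a candidate is retained only if it clears minimum depth, variant-read, and VAF thresholds. A hedged sketch of that logic follows; the cutoffs are illustrative, not the assay's validated parameters:

```python
from dataclasses import dataclass


@dataclass
class SiteCounts:
    """Read counts at one targeted genomic position."""
    ref_reads: int
    alt_reads: int

    @property
    def depth(self) -> int:
        return self.ref_reads + self.alt_reads

    @property
    def vaf(self) -> float:
        return self.alt_reads / self.depth if self.depth else 0.0


def passes_mosaic_filter(site: SiteCounts,
                         min_depth: int = 1000,
                         min_alt_reads: int = 5,
                         min_vaf: float = 0.001) -> bool:
    """Keep a candidate only with adequate coverage, enough variant-supporting
    reads, and a VAF above the noise floor (all thresholds illustrative)."""
    return (site.depth >= min_depth
            and site.alt_reads >= min_alt_reads
            and site.vaf >= min_vaf)


# A sub-percent mosaic call supported by deep coverage passes...
print(passes_mosaic_filter(SiteCounts(ref_reads=4990, alt_reads=10)))  # True
# ...while a single stray read at the same depth does not.
print(passes_mosaic_filter(SiteCounts(ref_reads=4999, alt_reads=1)))   # False
```

Production callers such as VarScan expose the same kinds of knobs (minimum variant frequency, minimum variant-read count), as described in the variant-calling settings that follow.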
This realigned and recalibrated alignment was used for all subsequent analyses. SNV and indel calling were performed with the GATK UnifiedGenotyper using default parameters and with VarScan version 2.3.6 ( http://dkoboldt.github.io/varscan ). For indel calling through VarScan, the minimum variant frequency was set to 0.01 and the minimum number of variant reads was set to 4, whereas for SNV calling, the minimum variant frequency was set to 0.03 and the minimum number of variant reads was set to 5, with default parameters for all other settings. Variants identified by VarScan alone were manually reviewed using the Integrative Genomics Viewer version 2.3 (Broad Institute) to assess the quality of base calls, the mapping quality of the reads, and the overall read depth at the site. PINDEL version 0.2.5 was used to identify tandem duplications and indels >10 bp in length. Structural variants were identified using CREST version 1.0 and BreakDancer version 1.1.1. For CNV analysis, copy-number states for individual probes were initially called using CONTRA version 2.0.5 ( http://sourceforge.net/projects/contra-cnv/files ) with reference to a CNV control comprising reads from two independent rounds of library preparation and sequencing of the HapMap individual NA12878. CNV calls were made at the resolution of individual exons using custom Perl scripts. Data Deposition The PIK3CA variant identified in this patient (NM_006218.2: c.3139C > T, p.His1047Tyr) has been deposited in ClinVar ( https://www.ncbi.nlm.nih.gov/clinvar/ ) under accession number SCV002104174.1 and submitted to the Leiden Open Variation Database (LOVD; https://www.lovd.nl/ ) under submission number 0000406049. Ethics Statement This patient was prospectively enrolled in the Developmental Brain Disorders Research Study under an Institutional Review Board (IRB)-approved protocol at Seattle Children's Hospital (IRB#13291). Written informed consent was obtained from parents.
Acknowledgments We thank the family and referring providers for their contribution to this study. Author Contributions W.-L.C., C.L., and G.M.M. conceived and designed the study, acquired and analyzed the data, and drafted the manuscript and figures. E.P., J.O., I.G., C.P., and B.H.S. contributed to the acquisition and analysis of the data and to the manuscript write-up. Funding Research reported in this publication was supported by Jordan's Guardian Angels, the Sunderland Foundation, and the Brotman Baty Institute (BBI) (to G.M.M.). W.-L.C. was supported by the National Institutes of Health (NIH) National Institute of General Medical Sciences (NIGMS) Postdoctoral Fellowship in Medical Genetics 5T32GM007454. Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the NIH. Competing Interest Statement The authors have declared no competing interest.
The Role of Epithelial Cell Adhesion Molecule Cancer Stem Cell Marker in Evaluation of Hepatocellular Carcinoma
In the year 2020, 905,677 cases of primary liver cancer were reported worldwide, resulting in 830,180 deaths, making this the sixth-most prevalent cancer type and the second-leading cause of cancer mortality among males. Approximately 90% of all primary liver tumors are hepatocellular carcinomas (HCCs). Roughly 85% of individuals with cirrhosis will develop hepatocellular carcinoma. Currently, HCC ranks as the sixth-most prevalent cause of cancer globally. Most people respond poorly to traditional therapies like radiation and chemotherapy, and this may be because cancer stem cells (CSCs) are present in the patient population. A tiny subset of tumor cells exhibit stem cell characteristics such as self-renewal and widespread proliferation; these are known as CSCs, and they have been associated with enhanced DNA repair and inhibition of apoptosis. For liver cancer, several cancer stem cell markers have been found, including EpCAM, CD133, CD90, and CD13. Studies of EpCAM have indicated that it may play roles in cancer stemness, cell proliferation, metabolism, angiogenesis, metastasis, resistance to chemotherapy and radiation, and immunomodulation. Most human epithelial carcinomas, including those of the liver, breast, colon, prostate, and head and neck regions, overexpress EpCAM. Moreover, several human carcinomas are now treated with immunotherapy using EpCAM as a target. As a tumor grows, EpCAM interacts with numerous key signaling pathways, including p53, TGF-β/SMAD, EpEX/EGFR, PI3K/AKT/mTOR, and Wnt/β-catenin, to alter the biology of cancer cells. Yamashita et al. showed that, even with standard chemotherapy, EpCAM-positive HCC is characterized by poor prognosis and a high probability of tumor recurrence.
However, tumors may be eliminated without recurrence by targeting EpCAM with certain monoclonal antibodies, by gene silencing, and by blocking Wnt/β-catenin signaling. 2.1. Tissue Samples Paraffin blocks of 42 liver resection (LR) cases for hepatocellular carcinoma, dating from January 2017 to April 2022, were obtained from the surgical archives of the Histopathology Department of Al-Azhar University Hospital after approval was granted by the research ethics committee. Cases were selected if there was sufficient specimen material on the paraffin blocks and good clinicopathological data in hospital records relating to the age and gender of patients, histological grades and stages, the diversity and size of tumors, vascular invasion, associated cirrhosis or viral hepatitis, α-fetoprotein (AFP) levels, microvessel invasion, intrahepatic metastasis, and follow-up data. Pathological staging was determined according to the 8th edition of the Cancer Staging Manual published by the American Joint Committee on Cancer. In all cirrhotic liver cases, hepatitis virus infection (HCV and HBV) was additionally present. The time period between the first surgical intervention and the date of the last follow-up (or the patient's death from HCC) was termed the follow-up period. To reassess diagnoses, representative samples were stained with hematoxylin and eosin. Histological grades of hepatocellular carcinoma were determined using the Edmondson–Steiner grading system. Exclusion criteria included cases for which tissue blocks were unavailable in our institution as well as cases lacking related clinical data. 2.2. Immunohistochemistry Immunohistochemical (IHC) staining was performed using the standard streptavidin–biotin–peroxidase complex (ABC) method (DakoCytomation, CA, USA).
Tissue samples of 3–5 μm thickness were prepared using 10% formalin-fixed, paraffin-embedded liver specimens of representative tumor areas; these were deparaffinized in xylene and then rehydrated in graded alcohols. Sections were boiled in citrate buffer (pH 6.0) for 20 min, followed by rinsing in distilled water and washing in phosphate-buffered saline (PBS). Endogenous peroxidase activity was blocked by incubating the sections for 15 min with 0.3% hydrogen peroxide in absolute methanol. The slides were then incubated overnight with anti-EpCAM antibody (Mouse, B302–323/A3, Abcam, Cambridge, UK, 1:200). After washing with PBS, the slides were incubated with multilink secondary antibody, after which goat anti-mouse streptavidin–biotin–peroxidase reagent was applied for 30 min (Dako, Japan). Finally, sections were incubated with diaminobenzidine, counter-stained with hematoxylin, and then cleared and mounted. 2.3. Interpretation of the Staining The expression of EpCAM was examined based on the intensity of staining and the percentage of positive cells. The IHC results were assessed using the extent of cell staining, which ranged from 0% to 100%. When no positive cells were found, the degree of positivity was given a score of 0; other outcomes were semi-quantitatively evaluated as follows: negative, <5%; weak (1+), 5–30%; moderate (2+), 30–60%; or strong (3+), >60%. If more than 10% of the cells had a final staining score that was moderate or strong, the expression of EpCAM and other stemness-related markers was deemed positive. The slide examination was conducted on-site by expert histopathologists with similar levels of experience, or by remote examination telepathologically. 2.4. Positive and Negative Controls As a negative control, the primary antibody was omitted. The bile duct epithelium served as EpCAM's internal positive control. Positive and negative controls were implemented concurrently.
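The semi-quantitative scheme above is a simple mapping from the percentage of stained cells to a grade, plus a separate moderate-or-strong cutoff for calling a case positive. A sketch of that scoring logic (the function names and the handling of exact boundary values such as 30% are our own choices, not specified by the study):

```python
def staining_grade(percent_stained: float) -> str:
    """Map the percentage of stained tumor cells to the semi-quantitative grade."""
    if percent_stained < 5:
        return "negative"
    if percent_stained <= 30:
        return "weak (1+)"
    if percent_stained <= 60:
        return "moderate (2+)"
    return "strong (3+)"


def is_epcam_positive(percent_moderate_or_strong: float) -> bool:
    """A case is called EpCAM-positive when >10% of cells score moderate or strong."""
    return percent_moderate_or_strong > 10


print(staining_grade(3))      # negative
print(staining_grade(45))     # moderate (2+)
print(is_epcam_positive(12))  # True
```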
Statistical Analysis Data were collected and coded using a Microsoft Excel spreadsheet. The Statistical Package for Social Science (SPSS, IBM Inc., Armonk, NY, USA, Windows version 25) was used to conduct all statistical analyses. The Shapiro–Wilk test was used to assess data normality. Normally distributed continuous data were presented as mean and standard deviation (SD), while medians and interquartile ranges (IQRs) were used to present non-normally distributed data. Categorical variables were presented as frequencies and percentages. Chi-squared tests and Fisher's exact tests were employed to assess the association between categorical groups. An independent Student t-test or a Mann–Whitney U test was used to investigate the association between continuous patient characteristics and survival. p-values of ≤0.05 were deemed to be significant. The study included a total of 42 patients with a mean age of 50.1 years and a median tumor size of 5.75 cm. Most patients were female (62%), and the prevalence of tumor multiplicity was 19%. High-grade tumors were more frequent than low-grade tumors (59.5% vs. 40.5%). In terms of stages of disease, the greatest number of patients were at stage I (45.2%), followed by stage III (33.3%). Most patients (74%) had associated cirrhosis but no vascular invasion. Regarding AFP levels, 64% of the patients had levels greater than 100 ng/mL. Finally, EpCAM expression was almost evenly distributed, with 52.5% of patients expressing this marker. 3.1.
EpCAM Expression in Different Studied Cases Significant differences were observed between EpCAM-positive and EpCAM-negative groups across various clinical and pathological characteristics. Patients with EpCAM-positive tumors were more likely to have larger tumor sizes (>5 cm) than those with EpCAM-negative tumors, with 71% of EpCAM-positive patients having tumor sizes greater than 5 cm, compared with 29% of patients in the EpCAM-negative group (p = 0.006). Notably, all patients (100%) with multiple tumors were in the EpCAM-positive group (p = 0.004). Additionally, a higher percentage of patients in the EpCAM-positive group had high-grade tumors (72%), compared with patients in the EpCAM-negative group (28%) (p = 0.002). Differences were also observed with respect to the stages of patients' cancers, with a notably higher proportion of stage III patients in the EpCAM-positive group (86%), compared with the EpCAM-negative group (14%) (p = 0.003). Vascular invasion was more prevalent in patients with EpCAM-positive tumors, with 82% of these patients exhibiting vascular invasion, compared with just 18% of patients in the EpCAM-negative group (p = 0.023). Regarding associated cirrhosis, we observed a borderline-significant association with EpCAM positivity (p = 0.052), with 61% of cirrhotic patients showing positive expression, compared with 27% of non-cirrhotic patients. Furthermore, patients with EpCAM-positive tumors were significantly more likely to have an AFP level higher than 100 ng/mL (67%) than patients with EpCAM-negative tumors (33%) (p = 0.013). 3.2. Correlations between Survival and Different Clinicopathological Parameters A comparison between survivors and non-survivors with respect to various clinicopathological parameters is presented in . Among non-survivors, a significantly higher prevalence of higher-stage cancer was evident, with all such cases being stage III (p = 0.006).
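Associations like these come from chi-squared (or Fisher's exact) tests on 2×2 contingency tables of EpCAM status against a dichotomized feature. For a 2×2 table the Pearson statistic has a closed form; here is a standard-library-only sketch with hypothetical counts, not the study's actual table:

```python
def chi_square_2x2(a: int, b: int, c: int, d: int) -> float:
    """Pearson chi-squared statistic (no continuity correction) for the 2x2 table
    [[a, b], [c, d]], e.g. rows = EpCAM +/- and columns = feature present/absent.
    Assumes all row and column totals are nonzero."""
    n = a + b + c + d
    numerator = n * (a * d - b * c) ** 2
    denominator = (a + b) * (c + d) * (a + c) * (b + d)
    return numerator / denominator


# Hypothetical 2x2 table: EpCAM status vs. tumor size >5 cm in 42 patients.
stat = chi_square_2x2(15, 7, 6, 14)
# Compare against the critical value 3.84 (df = 1, alpha = 0.05): larger
# statistics correspond to p < 0.05.
print(f"chi-squared = {stat:.2f}")
```

With small expected cell counts, Fisher's exact test is preferred over the chi-squared approximation, which is why the study reports both.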
Below the level of statistical significance, the following results were also obtained: all non-survivors had high-grade tumors with tumor sizes greater than 5 cm; all non-survivors also had associated cirrhosis and a high AFP level of above 100 ng/mL; three non-survivors showed vascular invasion (60%, p = 0.134); and four non-survivors were EpCAM-positive (80%, p = 0.355). No significant differences between survivor and non-survivor groups were found with respect to patient age, sex, or tumor multiplicity. 3.3. Correlations between Recurrence and Different Clinicopathological Parameters A comparison between the group of patients who experienced a recurrence of the disease and the group in which the disease did not recur is presented in . The mean age was significantly higher in the recurrent group (62.2 years), compared with the non-recurrent group (40.7 years; p = 0.003). Recurrent cases showed significantly higher prevalences of high stages (all cases were stage III, p < 0.001), high grades (all cases were high grade, p = 0.006), large tumor sizes (all cases were >5 cm, p = 0.005), vascular invasion (all cases showed vascular invasion, p < 0.001), high AFP levels (all cases showed levels of AFP > 100 ng/mL, p = 0.016), associated cirrhosis (all cases showed associated cirrhosis, p = 0.079), and EpCAM expression (all cases showed positive EpCAM expression, p = 0.002). In addition, tumor multiplicity was present in all recurrent cases (100%) but absent in all non-recurrent cases (p < 0.001). HCC is among the leading causes of cancer-related mortality worldwide, with males twice as likely as females to be diagnosed with the disease. HCC is highly resistant to current chemotherapeutic treatments, and the survival rate for the disease is low. Overall, the increasing incidence of liver cancer may be seen as placing a significant burden on human societies. However, the discovery of novel biomarkers may lead to improvements in HCC survival rates. Such biomarkers may be used to predict outcomes, enabling clinical practitioners to select better treatment options and prevent needless side effects in HCC patients. It has been shown that HCC contains CSCs; these are a small but distinct minority of cells that consistently display stem cell characteristics such as self-renewal, cell proliferation, and differentiation. One surface marker of CSCs has been identified as EpCAM. Furthermore, poor HCC prognosis has been associated with EpCAM expression, indicating that EpCAM may be a useful biomarker for risk classification.
Therefore, its identification in individuals with HCC may be a significant prognostic factor. EpCAM-negative HCC is characterized by short telomere length and limited proliferation. However, the extent to which the intensity and spatial distribution of intratumoral EpCAM expression influences the spread and local aggressiveness of metastasis remains unknown. The transmembrane protein EpCAM is considered to have multiple functions; in cancer cells, it is involved in the control of stemness, cell adhesion, proliferation, migration, and epithelial-to-mesenchymal transition. To carry out these tasks, EpCAM is essential for both intra- and intercellular communication as a whole molecule and, after controlled intramembrane proteolysis, for producing extracellular and intracellular fragments that are functionally active. Overexpression of EpCAM has been detected in various human carcinomas, including cancer of the breast, pancreas, and liver. Such overexpression makes EpCAM a novel molecular target for oncological therapy. In addition, in epithelial ovarian cancer, overexpression of EpCAM has been associated with a higher risk of tumor malignancy, further suggesting that EpCAM expression might serve as a molecular therapeutic target for advanced-stage epithelial ovarian cancer and as a possible biomarker for monitoring the disease's progression. EpCAM has also emerged as a marker of the metastatic capacity of circulating tumor cells (CTCs) and of the epithelial state of primary and systemic tumor cells. As a result, EpCAM's potential as a target and prognostic marker for primary and systemic tumor cells has been confirmed. In the present study, a total of 42 cases were considered. The median tumor size was 5.75 cm and the mean patient age was 50.1 years. Sixty-two percent of the patients were female, and nineteen percent of patients had multiple tumors.
A higher percentage of patients had high-grade tumors than low-grade tumors (59.5% vs. 40.5%), and the greatest number of patients had stage I illness (45.2%), followed by stage III (33.3%). Most patients (74%) had concomitant cirrhosis but no vascular invasion. Sixty-four percent of the patients had AFP levels higher than 100 ng/mL. Out of all patients studied, 52.5% had EpCAM expression, which was nearly uniformly distributed. Our pathological data revealed that expression of the EpCAM marker was observed in 22 (52.4%) out of 42 cases of hepatocellular carcinoma; this result is in agreement with the findings of other studies by Yamashita et al., 2008, Kim et al., 2011, and Shan et al., 2010, who found that between 15.9% and 48.7% of all hepatocellular carcinomas expressed EpCAM. In the present study, we found that, between the EpCAM-positive and EpCAM-negative groups, there were notable variations with respect to several clinical and pathological traits. Individuals with EpCAM-positive tumors were more likely to have larger tumor sizes (>5 cm) than individuals with EpCAM-negative tumors. Among EpCAM-positive patients, 71% had tumor sizes greater than 5 cm, compared with 29% in the EpCAM-negative group (p = 0.006). This result is contrary to the findings of Lima et al., 2018, who reported that, among 35 small-size cases (<2 cm), EpCAM expression was detected in 54% of tumors, suggesting that this molecule plays an important role in early stages of tumorigenesis due to its stem cell properties. In the present study, none of the patients with multiple tumors were in the EpCAM-negative group (p = 0.004), while all patients (100%) with multiple tumors were in the EpCAM-positive group.
This result is in line with the findings of Krause et al., 2020, who reported that EpCAM expression (homogeneous distribution) was significantly associated with higher levels of serum AFP (p = 0.03), confirming the previous findings of Bae et al., 2012 and Yamashita et al., 2008. Regarding pathological stages and histological grades, we found that higher EpCAM expression was associated with high stages and high grades (p = 0.003 and p = 0.002, respectively). Previously, Xu et al., 2014, studied HCC patients with high EpCAM expression and found that patients with advanced TNM stages and high AFP levels were more likely to have aggressive clinical characteristics, including greater relapse rates, in line with the results of the present study. In addition, a meta-analysis conducted by Liu et al. in 2015 demonstrated that EpCAM expression was associated with poor differentiation of HCCs. In the present study, we also found that increased EpCAM expression was associated with increased AFP levels and vascular invasion (p = 0.013 and p = 0.023, respectively). Similarly, Abdelgawad, 2020, detected higher serum levels of AFP among EpCAM-positive cases, compared with EpCAM-negative cases (p = 0.022), with five out of thirteen (38%) EpCAM-positive cases having AFP levels > 400 ng/dL. Yamashita et al., 2013, found that EpCAM-positive CTCs were associated with poor prognoses and unfavorable criteria such as the presence of vascular invasion, high levels of AFP, and poor differentiation. The findings of Kelley and Venook, 2013, suggested a potential role for CTCs in the prognostic stratification of HCC patients and decision-making with regard to treatment, both of which may be seen as challenging because of the great prognostic heterogeneity of this disease.
Tsuchiya et al., 2019, reported that 10–20% of cancer cells in primary tumors expressed NCAM (NCAM2+) and 5–10% expressed EpCAM (EpCAM1+), indicating that cancer cells positive for both markers exhibited more extensive vascular invasion, compared with cancer cells negative for HPC markers. Clinicians ought to identify patients with resected HCC who are at a higher risk of recurrence following treatment. This is crucial for determining whether additional medications or further follow-up are required, as prognostic predictive value is essential. In the present study, we carried out a comparison of survivors and non-survivors among the 42 patients in our study population. We found that the prevalence of high stages was substantially higher in non-survivors (all cases were stage III, p = 0.006). In addition, all of the non-survivors had high-grade tumors measuring more than 5 cm, along with concomitant cirrhosis and high AFP levels above 100 ng/mL; however, these results were not statistically significant. Vascular invasion was present in three non-survivor cases (60%, p = 0.134), and four non-survivor cases (80%, p = 0.355) tested positive for EpCAM. There were no significant differences between the survivor and non-survivor groups in terms of age, sex, or tumor multiplicity. We also compared groups of patients amongst whom the disease either recurred or did not recur. In comparison with the non-recurrent group (40.7 years; p = 0.003), the mean age in the recurrent group was significantly higher (62.2 years). The recurrent cases exhibited a significantly higher prevalence of high stages (all cases were stage III, p < 0.001), high grades (all cases were high grade, p = 0.006), large sizes (all cases were >5 cm, p = 0.005), high AFP levels (all cases showed a level of AFP >100 ng/mL, p = 0.016), associated cirrhosis (all cases showed associated cirrhosis, p = 0.079), and EpCAM expression (all cases showed positive EpCAM expression, p = 0.002).
Tumor multiplicity was exhibited in all patients in whom disease recurred (100%) but was wholly absent amongst non-recurrent patients ( p < 0.001). The prognostic value of the IHC expression of EpCAM was previously confirmed by Noh et al., 2018, who linked it to a lower overall survival rate and an increased chance of recurrence in patients with HCC. In addition, high levels of blood AFP and positive EpCAM expression were linked to fast recurrence following surgical resection. These findings imply that a preoperative biopsy may be used to forecast a patient’s prognosis and that the study of specimens removed during surgery may be of value in this regard. Another study, by Zhou et al. in 2016, showed an association between the preoperative presence of EpCAM-expressing CTCs and T-regulatory cell levels with HCC tumor recurrence after resection. Similarly, Schulze et al., 2013 found that patients with EpCAM-positive CTCs had significantly reduced overall survival rates in comparison with patients without these cells ( p = 0.017), and that the presence of CTCs was correlated with high levels of serum AFP ( p = 0.050). Moreover, von Felden et al., 2017, reported a correlation between EpCAM-positive CTCs and high recurrence rates (HR = 2.3, p = 0.027), with shorter periods of recurrence-free survival among patients who underwent curative resection for HCC (5.0 ± 1.5 vs. 12.0 ± 2.6 months, p = 0.039). In the present study, we discovered that EpCAM expression was a predictor of low survival rates and poor recurrence prognoses in patients who underwent surgical resection for HCC, even after controlling for clinicopathological prognostic factors. We believe that, for patients with HCC who undergo hepatic resection and percutaneous biopsy, EpCAM immunohistochemical expression may be utilized to predict prognosis. We wholeheartedly support the conclusions of earlier studies in which EpCAM was shown to be a significant biomarker and prognostic factor for HCC. 
This molecule is thought to be a biomarker for CTC and CSC detection; as such, it offers potentially novel methods for diagnosis and prognosis. Vasanthakumar et al., 2017 stated that EpCAM can be used as a cancer stem cell marker and as a potential therapeutic target for EpCAM-positive tumors. Targeting EpCAM may eradicate tumors without relapse, whether this is achieved by gene silencing, inhibition of Wnt/β-catenin signaling, vaccination, nanomedicinal approaches, or the use of specific monoclonal antibodies. Gene silencing is a technique used to knock down a target gene by employing RNA interference (siRNA) to inhibit its function. EpCAM is a marker of many carcinomas and cancer stem cells and is involved in a variety of functions such as cell proliferation, cell migration, invasion, metastasis, chemoresistance, and tumor relapse. Silencing the EpCAM gene can therefore help conventional chemotherapy to work more effectively, without interference from cancer stem cell activity. This study has several limitations, including a low total number of cases and a lack of evaluation of peritumoral marker expression. We were also unable to assess the relation of expression with the underlying etiology. We recommend further studies on patients from multiple centers with the inclusion of ancillary genetic testing. The results of our analysis, taken together with the findings of previous studies, indicate that the overexpression of EpCAM may be connected to clinicopathological traits of HCC, including poor differentiation and elevated AFP levels. Knockdown of EpCAM has been shown to reduce proliferation and spheroid formation in several EpCAM-positive cell lines and to enhance chemo- and radiosensitivity. However, more clinical and experimental studies are needed to determine the likely molecular pathways of EpCAM in HCC.
High NTRK2 protein expression levels may be associated with poorer prognosis of breast cancer patients | 6fee11e9-cd3e-4ffe-a8eb-d473bbea6cfa | 11440626 | Anatomy[mh] | Epidemiological data have suggested that breast cancer (BRCA) is the most commonly diagnosed female cancer. With the rise in life expectancy and rapid development of diagnostic techniques, the BRCA incidence rate has been increasing annually. Recent studies employing high-throughput sequencing-based comparisons of tumor tissues and normal tissues have identified several genes that are preferentially expressed or frequently mutated in BRCA. These include BRCA1, BRCA2, ERBB2, ERα, and JAK2, which act as oncogenes in BRCA. Multiphase tumorigenesis in BRCA involves the development, progression, and invasion of tumors. Consequently, it is imperative to elucidate the underlying mechanisms and identify innovative reliable biomarkers to support effective therapeutic development for BRCA. The neurotrophic receptor tyrosine kinase 2 ( NTRK2 ) gene encodes the NTRK2 protein. Also known as tropomyosin receptor kinase B (TRKB), this protein serves as a specific receptor for brain-derived neurotrophic factor (BDNF). Multiple studies have shown that NTRK2 is involved in the pathogenesis and progression of carcinoma. Chromosomal translocation leads to NTRK2 fusions, resulting in activation of the NTRK2 signaling pathway, which has been implicated in promoting tumor progression and drug resistance development in carcinoma. Some reports have suggested that NTRK2 expression levels are downregulated in BRCA and associated with a worse prognosis. Conversely, other studies have indicated a positive association between NTRK2 expression levels and overall survival (OS) in BRCA patients. Because of these controversial conclusions, the expression patterns and biological function of NTRK2 in BRCA remain ambiguous. Therefore, an in-depth investigation is required. 
In this study, we validated the clinical and prognostic significance of NTRK2 in BRCA using a BRCA tissue microarray. Furthermore, we elucidated the role of NTRK2 in predicting the sensitivity of these tumors to chemotherapy and immunotherapy. Human BRCA tissue microarray BRCA tissues and adjacent normal breast tissues (catalog numbers: HBreD131Su08 and HBreD077Su01) were received from Shanghai Outdo Biotech Co., Ltd. (Shanghai, China), which collaborated with Taizhou Hospital to establish a specimen library. The cancer tissues were obtained from BRCA patients who received surgery from January 2005 to September 2012. The deadline for follow-up was set at August 2016. The anti-NTRK2 protein (TRKB) antibody was obtained from Affinity Biosciences (Jiangsu, China). This retrospective study complied with the Helsinki Declaration of 1975 as revised in 2013. We have de-identified all patient details. All patients provided written informed consent. The study was approved by the ethics committee of Taizhou Hospital of Zhejiang Province on 26 January 2010 and by the ethics committee of Wuhan No.1 Hospital (approval no. [2024]47). Immunohistochemistry (IHC) An anti-NTRK2 protein antibody (AF6461, Affinity Biosciences) diluted at 1:100 was used for IHC analysis. The IHC results were independently evaluated by two pathologists who were blinded to clinical information. The NTRK2 protein expression score was determined by multiplying the staining intensity by the percentage of positive cells. The intensity was quantified as “0” for no color particles, “1” for light brown particles, “2” for moderate brown particles, or “3” for dark brown particles. The scores were used to classify the samples as high (score > 180) or low (score ≤ 180) NTRK2 protein expression. 
RNA-sequencing data Data files, which included the expression matrix and clinical information of BRCA samples in TSV and JSON formats, were first procured from The Cancer Genome Atlas (TCGA) database ( https://portal.gdc.cancer.gov/ ) (downloaded in July 2022). This database included 1,057 BRCA tissues and 111 normal breast tissues. The data were then extracted and converted into TXT format. Estimation of the sensitivity to chemotherapy and immunotherapy The IC50 values of common chemotherapeutics were evaluated using the “pRRophetic” package in R software ( www.r-project.org ). The different responses to anti-programmed cell death protein 1 (PD1) and anti-cytotoxic T lymphocyte antigen 4 (CTLA4) treatments between the low- and high-NTRK2 expression groups were analyzed and visualized using The Cancer Immunome Atlas (TCIA) website ( https://tcia.at/ ) via the “limma” and “ggpubr” packages in R (Table S1). Statistical analysis IHC scores and survival statistical analyses were performed using IBM SPSS 25.0 (IBM Corp., Armonk, NY, USA) and GraphPad Prism 8.0 (La Jolla, CA, USA). The differences between the low- and high-NTRK2 expression groups were assessed by chi-square tests. The Kaplan–Meier method was used to analyze prognosis, with log-rank tests used to assess statistical significance. The relationships between the clinical variables and BRCA patient prognosis were assessed by univariate analysis and multivariate Cox regression analysis. P -values < 0.05 were considered statistically significant: * P < 0.05, ** P < 0.01, *** P < 0.001. TCGA database analyses and graphs were performed using R software (4.2.1). This retrospective study conforms to STROBE guidelines. 
BRCA tissues have higher NTRK2 expression levels Overall, 131 BRCA tissues and 56 adjacent normal breast tissues were included in the tissue microarray. High NTRK2 protein expression was detected in the BRCA tissues compared with the adjacent normal tissues in the human BRCA tissue microarray . Pearson chi-square analysis indicated that high NTRK2 protein expression was significantly more frequent in BRCA tissues, at 39% (51/131), compared with only 21% (12/56) in adjacent normal tissues (χ 2 = 5.380, P = 0.02) . NTRK2 protein (TRKB) expression is positively correlated with BRCA patient clinicopathological characteristics We further explored the correlations between the NTRK2 protein expression patterns and BRCA patient clinicopathological characteristics. 
As shown in , chi-square tests suggested significant positive correlations between NTRK2 protein expression and BRCA patient clinicopathological characteristics, including vascular invasion (χ 2 = 5.481, P = 0.019), lymph node metastasis (χ 2 = 13.011, P = 0.001), TNM stage (χ 2 = 11.42, P = 0.003), progesterone receptor (PR) status (χ 2 = 4.756, P = 0.029), tumor recurrence and metastasis events (χ 2 = 9.746, P = 0.002), and OS (χ 2 = 23.537, P < 0.001). NTRK2 protein expression is related to BRCA patient prognosis We found that NTRK2 protein expression was related to the tumor recurrence, metastasis, and survival status of BRCA patients. We further analyzed the relationship between NTRK2 protein expression and BRCA patient prognosis. Among the 131 BRCA cases, 27 cases were HR−/HER2− (triple negative), 44 cases were HR+/HER2−, and 60 cases were HER2+. The survival analysis revealed that BRCA patients with higher NTRK2 protein expression levels experienced significantly shorter disease-free survival (DFS) ( P = 0.0012) and OS ( P < 0.001) than those with lower expression levels . HR+/HER2− BRCA patients with higher NTRK2 protein expression levels showed significantly shorter DFS ( P = 0.011) and OS ( P = 0.005) than those with lower expression levels . HER2+ BRCA patients with higher NTRK2 protein expression levels showed significantly lower OS ( P < 0.001), but not DFS, than those with lower expression levels . Triple negative BRCA patients with higher NTRK2 protein expression levels did not display significant differences in DFS and OS compared with those with lower expression levels . NTRK2 expression is an independent risk factor for BRCA patient prognosis To determine if BRCA recurrence, metastasis, and cancer-related deaths were associated with NTRK2 protein expression, we explored the relationships between recurrence, metastasis, and clinicopathological characteristics of 131 BRCA cases through univariate and multivariate analyses. 
We found vascular invasion to be an independent predictor of poor DFS in BRCA patients (hazard ratio (HR) = 6.108, P < 0.001) (Table S2). In addition, both high NTRK2 protein expression (HR = 4.52, P = 0.002) and vascular invasion (HR = 3.33, P = 0.009) were independent predictors of poor OS in BRCA . NTRK2 expression is a predictive biomarker for the sensitivity to chemotherapy and immunotherapy Because chemotherapy, targeted therapy, and immunotherapy have been demonstrated to be effective in BRCA patients, we evaluated the IC50 values of common drugs recommended for BRCA therapy. The low NTRK2 expression group was more sensitive to AKT inhibitor VIII, methotrexate, and gefitinib ( P < 0.02), while the high NTRK2 expression group was more sensitive to bleomycin (50 µM), cytarabine, and (5Z)-7-Oxozeaenol ( P < 0.03) . Furthermore, the treatment scores of immune checkpoint proteins were examined. Our analysis demonstrated that the immunotherapy score was significantly elevated in the high NTRK2 expression group compared with the low NTRK2 expression group for all treatment categories examined. These included no anti-CTLA4 or anti-PD1 treatment ( P = 1.6 × 10 −6 ), combined anti-PD1 and anti-CTLA4 treatment ( P = 0.0018), anti-CTLA4 treatment alone ( P = 0.00042), and anti-PD1 treatment alone ( P = 2 × 10 −5 ). 
Of all female malignant tumor types, BRCA has the highest morbidity rate worldwide. Further research is needed to help develop new treatment methods for this disease. Novel mutations in tumor-related genes have been investigated in various cancers, including those in the genes encoding RET, FGFR1, FGFR2, FGFR3, NRG1, MET, ERBB2, PIK3CA, and AKT. Gene fusions caused by gene translocation contribute to NTRK overexpression. Innovative drugs targeting insulin-like growth factor receptor (IGFR), epidermal growth factor receptor (EGFR), and programmed cell death ligand-1 (PD-L1) have been gradually used to treat BRCA. Entrectinib (RDX-101, NMS-P626), a multikinase inhibitor, demonstrated efficacy in three clinical trials involving patients with NTRK2 gene fusions. 
Secretory breast carcinoma, one of the rarest types of BRCA, accounting for less than 1% of all cases, is reportedly related to NTRK2 gene fusion. However, the role of NTRK2 in invasive BRCA remains inadequately understood and warrants further investigation. NTRK2 is a member of the NTRK protein family. NTRK2 overexpression has recently been reported in rat and human kidney epithelial cells, where it acts as a potent anoikis suppressor through AKT activation. Additionally, NTRK2 expression correlates with a malignant phenotype and poorer prognosis in colorectal cancer and small cell lung cancer patients. Abnormal NTRK2 expression or activation has been demonstrated to participate in the progression and metastasis of many malignant tumors. Furthermore, NTRK2 remains an attractive therapeutic target for anti-cancer therapies. However, the specific NTRK2-related signaling pathways that induce and maintain the cancerous and metastatic nature of BRCA cells have not been thoroughly investigated. In this study, we demonstrated that NTRK2 protein is expressed at higher levels in BRCA tissues compared with adjacent normal tissues , and that NTRK2 is related to lymph node metastasis, TNM stage, tumor recurrence, distant metastasis, and OS in BRCA patients . Notably, high NTRK2 protein expression levels correlated with poorer prognosis, which holds considerable implications for clinical translation . Studies have shown that NTRK2 regulates the epithelial–mesenchymal transition through matrix metalloproteinase (MMP)2 and MMP9. NTRK2 is involved in many signaling pathways, including the MAPK, AKT, and JAK/STAT3 pathways. NTRK2 can also downregulate the expression of p-AKT and p-ERK. Two studies performed Kaplan–Meier survival analyses to investigate the connection between NTRK2 expression patterns and BRCA patient prognosis. Kim et al. showed that patients with higher NTRK2 expression levels exhibited worse survival outcomes than those with lower NTRK2 expression levels. 
However, data from Wang et al. displayed the opposite result. Our examination of NTRK2 protein levels using tissue microarray experiments indicated that NTRK2 overexpression was significantly related to poorer prognosis and was an independent risk factor for OS in BRCA . BDNF-mediated activation of EGFR has been observed in lung cancer, suggesting that BDNF/NTRK2/EGFR crosstalk is a more common mechanism promoting brain metastasis. BDNF can activate NTRK2 or EGFR in triple negative BRCA cells that positively express these receptors. Immunotherapy-based approaches for BRCA have developed considerably, with many immune checkpoint inhibitors (ICIs) being applied in clinical trials and demonstrating encouraging results. However, some patients still respond poorly to anti-immune treatment methods. Additionally, certain immunotherapy-related adverse events have been identified in clinical trials, which may limit their wide application. From the above results, we further discovered that high NTRK2 expression levels could help predict an increased likelihood of sensitivity to bleomycin, cytarabine, and (5Z)-7-Oxozeaenol. ICIs (anti-CTLA4 and anti-PD1 antibodies) may also play an important role in tumors with high NTRK2 expression . By analyzing NTRK2 protein expression patterns in 131 BRCA tissues, we found that NTRK2 expression was positively associated with vascular invasion, lymph node metastasis, TNM stage, and tumor recurrence and metastasis in BRCA. These correlations suggest that high NTRK2 expression is linked to a poorer prognosis. Our study implicated NTRK2 as a therapeutic target in BRCA. In our previous study, we analyzed the role of NTRK2 expression in HER2-positive BRCA using the same human BRCA tissue microarray (HBreD131Su08), which contained 60 HER2-positive BRCA cases. This work confirmed that NTRK2 is related to the brain metastasis of HER2-positive BRCA. 
NTRK2 inhibitors are now emerging as potential therapeutic alternatives for the prevention or treatment of BRCA brain metastasis. Our research still has some limitations, such as the lack of in vitro and in vivo experiments exploring the relationship between NTRK2 expression and immune cell infiltration in BRCA tumors. In subsequent studies, we plan to examine whether NTRK2 affects the tumor immune microenvironment and immune cell infiltration in this disease. Many signatures have been used to provide biological explanations of BRCA or drug-related mechanisms. Despite their great potential, few signatures have entered clinical practice, as none have demonstrated a sensible biological interpretation or meaning with respect to disease etiology. Our research data in this study were from a single center, which may limit the generalizability of our results. Although we revealed that high NTRK2 protein expression may be an independent risk factor for BRCA patient OS, this has not been verified within a strict testing framework, and no clinical trial has been performed. Drugs should also be screened for sensitivity according to NTRK2 expression prior to follow-up clinical trials to further confirm its relevance in guiding clinical decisions for treating BRCA. In this study, we demonstrated that NTRK2 expression is closely associated with BRCA patient clinicopathological characteristics and is an independent risk factor for OS in this disease. In addition, NTRK2 can potentially serve as a predictive biomarker for immunotherapy efficacy, as its expression levels correlated with the status of immune checkpoint proteins. Overall, these findings suggest that NTRK2 could serve as a therapeutic target for clinical intervention in BRCA. 
Supplemental material: sj-pdf-1-imr-10.1177_03000605241281322, sj-pdf-2-imr-10.1177_03000605241281322, sj-pdf-3-imr-10.1177_03000605241281322, and sj-pdf-4-imr-10.1177_03000605241281322 for High NTRK2 protein expression levels may be associated with poorer prognosis of breast cancer patients by Rui Zhang, Jianguo Zhao and Lu Zhao in Journal of International Medical Research
Does stereoscopic immersive virtual reality have a significant impact on anatomy education? A literature review | 63b4ce88-6209-4ee5-b2bd-dfca2839b660 | 11739219 | Anatomy[mh] | Teaching anatomy with virtual reality (VR) has gained increased scientific interest in the last two decades and caused controversy among researchers . At first, some authors perceived VR as a fully immersive digital technology, enabling the presentation of digital objects via head-mounted devices, which obscure the real world from the user . Other authors perceived VR as a technology that can be immersive or not, enabling users to view three-dimensional (3D) digital objects on two-dimensional (2D) screens . Another point of controversy concerns the effectiveness of VR in anatomy education. The meta-analysis by Moro et al. showed that immersive VR (IVR) was not more effective than 2D images in anatomy teaching. However, a more recent meta-analysis by García-Robles et al. demonstrated the effectiveness of IVR in anatomy education, indicating that it is a more effective tool than traditional 2D methods. A factor that could play an essential role in the effectiveness of VR in anatomy pedagogy is the possible presence of stereoscopy, which means the perception of two different 3D images of a digital object with each of the two eyes . The two images are fused to provide a single 3D image. The Bogomolova et al. meta-analysis showed that stereoscopy (or stereopsis) plays a critical role in anatomy learning via 3D digital visualization, primarily when the user interacts with the VR environment. Also, it has been found that interaction is essential when VR is implemented in anatomy teaching . Thus, it could be hypothesized that stereoscopic delivery of VR (especially if it involves interaction) leads to better effectiveness in anatomy education compared to the absence of stereopsis. The meta-analysis by García-Robles et al. 
did not distinguish the outcomes according to the presence of VR interaction or stereoscopic delivery. Although immersive forms of VR can be delivered in either stereoscopic or non-stereoscopic (monoscopic) form, several authors who investigated the role of VR in anatomy education did not clarify whether the users experienced a stereoscopic form of VR. Thus, the current review examined whether stereopsis plays a significant role when IVR is used in anatomy education. Three independent reviewers conducted a literature search on October 27, 2024, in the databases PubMed, Scopus, ERIC, and Cochrane Library with the terms: (“stereoscopy” OR “stereoscopic” OR “stereopsis”) AND “anatomy” AND “virtual reality” AND (“education” OR “teaching” OR “learning”). The inclusion criteria were articles exploring the outcomes of the use of stereoscopic immersive VR (SIVR) in anatomy education ( effectiveness , perceptions about effectiveness , and side effects ), published in peer-reviewed journals, in the English language, and in the last decade (since January 1, 2015) (to ensure the review was up to date). Also, the reviewers scanned the reference list of each included article. Conference papers, comments to the editor, and reviews were excluded. The reviewers initially checked the title of each retrieved study. If the title was not enough to indicate whether the article was eligible for inclusion, the reviewers checked the abstract. If they still could not decide, they scanned the entire text. In the event of a disagreement, the senior author would make the final decision. Reviewers extracted the following data from each included article: authors, year of publication, number of participants, whether there was any interaction with the VR environment, the outcomes of using SIVR in anatomy education, and the corresponding level in the Kirkpatrick hierarchy, which evaluates the levels of educational outcomes (Table ) . 
Also, the reviewers searched for data concerning the side effects of SIVR because the literature suggests that IVR is occasionally accompanied by symptoms such as dizziness. In total, 105 articles were retrieved by the initial literature search. After the exclusion of duplicates and irrelevant studies, 20 articles remained. Of these, we excluded one comment, five reviews, and six articles that did not provide educational outcomes after using SIVR in anatomy education. Thus, eight articles were included (Tables and ; Fig. ). Six evaluated examination performance and thus reached level 2b in the Kirkpatrick hierarchy. Two articles only assessed participants' perceptions and thus corresponded to level 1 in the Kirkpatrick hierarchy. In five studies, SIVR involved user interaction with the VR environment. Four studies compared SIVR with conventional 2D images, while four did not. Stereoscopic immersive VR (SIVR) versus conventional 2D images The study by de Faria et al. involved 84 medical students who were taught neuroanatomy and divided into three groups (28 students each): the first received conventional 2D teaching, the second learned via SIVR, and the third via interactive non-stereoscopic (desktop-based) digital images. The second and third groups did not significantly differ in examination results, while the first group performed significantly worse than the other two. Four students in the stereoscopic group experienced eyestrain. Wainman et al. investigated whether stereopsis played a role in anatomy education via a VR environment without interaction. Twenty medical students were taught pelvic bone anatomy via VR using both eyes (thus able to have stereopsis), while another group of 20 students was trained with one eye blocked (hence impeding stereopsis). The second group's examination performance was significantly lower than the first group's.
Also, the first group performed significantly worse than students taught via a physical model, while its performance did not significantly differ from that of the groups taught via mixed reality (MR) and key 2D views, respectively. The paper by Copson et al. included 47 students divided into three groups who were taught temporal bone anatomy via a 2D PowerPoint presentation, monoscopic (desktop-based) IVR, and SIVR. Six weeks after the educational intervention, the three groups did not significantly differ in anatomy knowledge acquisition. Students preferred the stereoscopic and monoscopic delivery and perceived them as effective anatomy education tools. The paper by Kockro et al. included 169 medical students separated into two groups: the first was taught neuroanatomy (anatomy of the third ventricle) via non-interactive SVR, and the second via a 2D PowerPoint presentation. Immediately after teaching, the two groups did not significantly differ in their examination performance. However, SVR was rated significantly superior to 2D teaching regarding spatial understanding and effectiveness. Studies about SIVR without comparison with conventional 2D images Patel et al. compared two groups of participants: the first (24 individuals) was taught the anatomy of congenital heart disease via SVR, while the second (27 individuals) was taught via monoscopic (desktop-based) digital models. Both groups interacted with the models. After the educational intervention, knowledge acquisition did not significantly differ between the two groups. However, SVR was accompanied by considerably better perceptions regarding the impression of understanding. Of note, 17% of the participants in the SVR group experienced side effects (such as nausea and dizziness), while none of the second group reported such symptoms. Luursema et al.
included in their research 63 students divided into three groups: the first experienced SIVR for learning the anatomy of the C1 and C2 vertebrae, the second experienced interactive non-SVR for learning the same subject, while the third (control group) experienced an unrelated virtual environment. Afterward, the three groups were asked to localize a cross-section of the upper cervical anatomy on a frontal view of the same anatomy. The performance of the three groups did not significantly differ. Birbara et al. evaluated the perceptions of three groups of participants about the use of skull anatomy education tools. The first group included 44 students who received teaching with SVR, the second comprised 19 participants who received desktop-based teaching, and the third included five anatomy tutors who experienced both methods (which were interactive). Regarding the usefulness of understanding, the tutors' perceptions of the two methods did not significantly differ. However, students considered SVR significantly more useful in the same domain. Compared to desktop-based models, more participants perceived SVR as the cause of physical discomfort and disorientation, but the authors did not evaluate statistical significance. Castro et al. researched 257 students taught anatomy via SIVR. Afterward, they completed a questionnaire using a five-point Likert scale. Regarding perceived usefulness for anatomy learning, SVR was assessed highly, with an average of about 4.5/5. The study participants in our review positively evaluated SIVR. However, SIVR has not shown better educational effectiveness than interactive non-SVR.
Of the studies that involved non-stereoscopic images, four comprised desktop-based models, and two involved immersive environments. It should be noted that the term “non-SVR” was not homogeneously perceived across the reviewed studies. In the papers by Wainman et al. and Luursema et al., the term meant “IVR without the ability of stereoscopic vision.” In the studies by de Faria et al., Copson et al., Patel et al., and Birbara et al., “non-SVR” referred to “desktop-based digital models.” Although those desktop-based models were projected on a 2D screen, they allowed 3D perception; thus, they cannot be considered conventional 2D images (such as a PowerPoint presentation). None of the studies that compared the effectiveness of SVR and non-SVR under conditions of interaction with the VR environment found significant differences. This finding contrasts with the meta-analysis by Bogomolova et al., which showed that stereoscopy plays a critical role in the effectiveness of anatomy education when the 3D digital stereoscopic environment is interactive. However, that meta-analysis investigated the role of stereopsis in anatomy teaching across 3D visualization technologies without focusing on VR. More recent research by Bogomolova et al., which compared stereoscopic augmented reality with non-stereoscopic teaching methods, also demonstrated that the former is not a more effective anatomy teaching tool. The meta-analysis by Bogomolova et al. found that when stereopsis was combined with interaction, it was significantly more effective than monoscopic 3D digital environments; in contrast, when interaction was absent, stereoscopic 3D visualization was not significantly more effective than monoscopic 3D visualization. Of note, in all studies of our review that involved interaction, SVR was not significantly superior to monoscopic digital images.
In contrast, in the only study of the review that showed the superiority of SVR over monoscopic VR (MVR), there was no interaction. In the same study, stereopsis did not lead to better teaching effectiveness than key 2D views of anatomical structures, while it was significantly inferior to physical models. No explanation was offered for this difference in the educational outcomes of SVR. The fact that both forms of VR were found inferior to physical models and equally effective as key 2D views can probably be explained by the absence of interaction between the users and the VR environment, because interaction plays an important role when VR is used for anatomy teaching. Only two papers explored the use of non-interactive SVR; thus, there are insufficient data to evaluate this type of VR delivery. In our review, three studies showed that SVR was not more effective than conventional 2D images. In one of those studies, the users interacted with the VR environment, while in the other two they did not. In contrast, de Faria et al. demonstrated that SIVR led to significantly better outcomes than 2D images. These data do not clarify whether interactive and non-interactive SIVR are more effective anatomy teaching tools than 2D images. Also, in the meta-analysis by Bogomolova et al., it was unclear whether stereopsis led to superior educational outcomes of 3D visualization compared to 2D images. Bogomolova et al. showed that non-interactive stereoscopic images were less effective than 2D images. Furthermore, in all studies of our review that evaluated participants' perceptions, SVR was either considered more effective than traditional 2D methods or simply rated as effective. However, it should be noted that possible exposure to VR before the educational intervention might have influenced the participants' perceptions of this technology; the studies of our review did not evaluate this possible effect.
The considerable acceptability of SVR indicates that this method has notable potential in anatomy education. Thus, further research could explore how to enhance the academic performance of students taught in SVR environments. However, in all studies of the review that assessed the side effects of SVR, these effects occurred considerably more frequently than with other educational methods. The side effects included nausea, dizziness, eyestrain, and discomfort. This raises concerns about whether SVR should be more widely applied. Currently, there are no data on what duration of VR exposure is safe enough to avoid side effects. The findings of our review have implications for several fields of health sciences where SVR has been applied, although the implementation of this technology in health sciences has shown conflicting outcomes. Al Ali et al. compared the impact of stereoscopic versus non-stereoscopic vision on dental students' performance in a VR simulator. The former led to better depth perception and had a significant impact on tooth-cutting accuracy within the target area, although the stereoscopic view did not considerably influence the task completion time. In another study, neurosurgical residents were trained in three procedures via SVR and afterward completed a questionnaire to evaluate the use of this technology. Over nine out of ten participants stated that the educational intervention was helpful for their training, while the sickness due to the use of SVR was negligible. Also, Vrillon et al. investigated the use of this technology for medical students' and residents' lumbar puncture training. They found that the perceived benefit was high, while the discomfort was minimal. Despite the relatively positive perceptions of SVR for health sciences training purposes, there is generally a lack of data regarding its educational effectiveness in the clinical setting.
There is evidence that VR can improve residents' skills, be successfully applied in the operating room, and enhance athletic training performance and injury rehabilitation; however, the literature has not clarified whether stereoscopic delivery is crucial to the value of VR in any health sciences domain. A wide variety of medical procedures with controversial outcomes may benefit from VR implementation; the addition of a stereoscopic component may therefore stimulate further research to shed light on the role of this component in the advantages of VR. Our review does have some limitations. The included studies are relatively few, and the data are quite heterogeneous. Nevertheless, our literature search strategy has probably allowed us to include the maximum possible number of papers. We are optimistic that future research will produce more consistent and comprehensive data, facilitating a thorough meta-analysis. SIVR in anatomy education has generally garnered positive feedback from participants. However, engaging in an SVR environment did not demonstrate significant advantages in educational effectiveness over non-SVR, particularly when the users interacted with the virtual environment. Furthermore, the application of SIVR has been linked to a considerably higher incidence of side effects than alternative methods. Future research should clarify the extent to which this technology should be incorporated into anatomy education, aiming to minimize side effects while maximizing its educational benefits.
Effectiveness of oregano essential oil vapor on shelf life extension of kai lan
INTRODUCTION Essential oils (EOs), more fully described as plant-based essential oils, have been widely proposed as food preservatives due to their antimicrobial and antioxidative properties (Pandey et al., ). EOs comprise intricate blends of strong-smelling volatile compounds synthesized by plants. These compounds can be found in various plant parts, including flowers, leaves, roots, fruits, seeds, wood, or bark (Pandey et al., ). For instance, aromatic oregano EO is extracted from the whole oregano herb, while thyme EO is typically obtained from the leaves and flowers of Thymus vulgaris (Rajkovic et al., ). EOs are characterized by bioactive compounds, including terpenes, polyphenols, and flavonoids, which are responsible for their antioxidant and antimicrobial activity. The application of EOs against food pathogens like Staphylococcus aureus and Salmonella spp. has been widely researched (Bajpai et al., ; Y. Zhang et al., ). However, less emphasis has been placed on microbial spoilage and quality deterioration of food under EO treatment. The worldwide consumption of leafy greens generates around 135 billion USD in revenue, with an annual growth rate of about 7.3% (Batzios & Tsiouni, ). Microbial growth is a main cause of the spoilage of leafy greens, which are highly affected by microorganisms during the harvest and post-harvest periods. Due to direct contact with soil during growth, leafy greens generally carry high numbers of microorganisms, including both bacteria and fungi (Tournas, ). For instance, Pseudomonas spp., Erwinia spp., and Acinetobacter spp. have been identified as spoilage bacteria on leafy greens, causing soft rots, slime formation, and discoloration (Alegbeleye et al., ).
Mold spoilage of leafy greens is mainly caused by species of Penicillium, Alternaria, Botrytis, and Aspergillus. Mold growth likewise leads to rotting and discoloration, although it is more visible. Specifically, kai lan sampled in Vietnam has been found to carry a total aerobic plate count of 9.28 log CFU/g and a total coliform count of 5.17 log CFU/g (Minh, ). Kale (Brassica oleracea L. var. acephala), another leafy green that is highly similar to kai lan, is prone to browning and easily infected by aerobic bacteria, yeasts, and molds due to minimal processing (Wang et al., ). It is therefore important to apply preservation techniques that inhibit microbial growth during the storage of leafy greens. EOs have been widely proposed as food preservatives thanks to their antimicrobial activity, but their strong influence on food sensory qualities has limited their application, especially when EOs are directly formulated into foods or applied to foods as liquids. Moreover, for leafy green vegetables, a water-based preservative agent might, on the contrary, introduce extra moisture and encourage microbial deterioration. Several in vitro studies have evaluated the antimicrobial activity of EOs in both direct and vapor applications. For instance, Citrus sinensis EO has shown an MIC value of 800 mg/L (air) in vapor application and 1600 mg/L in liquid-phase application against Aspergillus flavus (Velázquez-Nuñez et al., ). Eucalyptus globulus EO has shown stronger antimicrobial activity in vapor application than in direct application against both fungal and bacterial strains, with larger inhibition zone diameters at the same EO concentration (Tyagi & Malik, ). EOs have thus shown strong potency in vapor application for antimicrobial activity against foods. Therefore, this study aimed to investigate the preservative effects of EO vapor on leafy green vegetables.
First, the antimicrobial activities of different types of EO vapor were evaluated against a selection of spoilage-causing microorganisms from vegetables and fruits. Oregano EO vapor was then applied to several leafy greens to evaluate appearance changes during storage at room temperature (25°C) and under refrigeration (7°C). Oregano EO vapor was further applied to kai lan to evaluate its shelf life extension effects from multiple perspectives. MATERIALS AND METHODS 2.1 Chemical and biological materials EOs extracted from oregano (Origanum vulgare), clove (Eugenia caryophyllus), and basil (Ocimum basilicum) were purchased from Now Foods in Singapore. Kai lan was purchased from a local urban farm (Vegeponics). Kale, butter lettuce, and iceberg lettuce were purchased from a local supermarket (Fairprice). Folin–Ciocalteu reagent, sodium carbonate, gallic acid, quercetin, and aluminum chloride were bought from Sigma-Aldrich Chemical Company (Sigma-Aldrich). Nutrient agar, nutrient broth, and potato dextrose agar were purchased from Oxoid. Pure cultures of five bacteria from the American Type Culture Collection (ATCC), including Pantoea agglomerans (ATCC 27155), Pseudomonas cichorii (ATCC 13455), Pectobacterium carotovorum (ATCC 15713), Pantoea ananatis (ATCC TSD-232), and Pseudomonas marginalis (ATCC 51281), and two fungi (Alternaria brassicicola [ATCC 96836] and Botrytis cinerea [ATCC 11542]) were purchased from Everlife (Chemoscience Pte Ltd.). The composition of the selected EO was determined by gas chromatography–mass spectrometry (GC–MS) analysis (Figure ) following a protocol reported by Hai et al., on a GC–MS system (Agilent Scientific Instruments) with an HP-5MS 5% phenylmethyl siloxane capillary column (30 m × 0.25 mm × 0.25 µm). The oven temperature was initially set at 100°C and increased at 5°C/min to 220°C, then at 4°C/min to 250°C, and finally at 25°C/min to 300°C, with a 5-min hold at each temperature stage.
The injection inlet and detector temperatures were set at 250°C. The flow rate of helium (99%) was set at 1 mL/min, and the split ratio was 1:20. For the MS settings, the electron ionization mode was set at 70 eV, and the MS scan ranged from m/z 35 to 500. 2.2 Vapor-phase antibacterial assay The frozen bacterial cultures (−80°C) were activated by transferring them into 10 mL of nutrient broth and incubating for 24 h at the respective temperatures suggested by ATCC. An aliquot of each bacterial solution was inoculated onto a nutrient agar plate and incubated for 48 h to obtain individual colonies, which were transferred into another 10 mL of nutrient broth for subculturing. All bacterial strains were subcultured three times consecutively to prepare the final bacterial inoculum for further experiments. Antibacterial tests of EO in the vapor phase were established based on Mukurumbira et al. with slight modifications. For all tested bacterial strains, the optical density at 600 nm (OD600) of each bacterial suspension was measured with the BioDrop µLite Spectrophotometer to ensure a starting bacterial concentration of around 2–3 × 10^8 CFU/mL. The suspension was serially diluted 10^5-fold in 0.1% sterile peptone water, and 100 µL of the diluted suspension was inoculated onto each nutrient agar plate. A glass cover slide was attached to the inside of the Petri dish lid, and 2.5 µL of each EO was dropped and spread across the cover slide surface. Each agar plate set was then sealed with parafilm and incubated at the respective temperature for 48 h following the ATCC recommendations. Afterward, bacterial plate counts were enumerated, and the antimicrobial potency of each EO vapor was expressed as the bacterial reduction ratio relative to the negative control (no EO applied). All samples were tested in triplicate. The experimental setup is illustrated in Figure .
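For illustration, the plate-count arithmetic behind this assay can be sketched in Python; the function names are ours, while the 10^5-fold dilution and 100-µL plating volume follow the protocol above:

```python
import math

def log_cfu_per_ml(colonies, dilution=1e5, plated_ml=0.1):
    """Back-calculate log10 CFU/mL of the undiluted suspension from one plate count."""
    return math.log10(colonies * dilution / plated_ml)

def reduction_ratio(control_count, treated_count):
    """Fraction of viable cells eliminated relative to the no-EO negative control."""
    return 1 - treated_count / control_count
```

For example, 100 colonies from a plate receiving 100 µL of a 10^5-fold dilution correspond to 8.0 log CFU/mL of the original suspension.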
2.3 Vapor-phase antifungal assays For molds, the pure cultures were activated by plating on potato dextrose agar and incubating at 25°C for 5–7 days until the whole agar plate was visibly covered by mold. The molds were subcultured for three consecutive cycles before the following experiments. The fungal cells were harvested by scraping the agar surface with 10 mL of sterile 0.1% Triton X-100 solution to obtain fungal suspensions for further work. The mold suspension was directly inoculated on dichloran rose bengal chloramphenicol agar. A glass cover slide was stuck onto the lid of the Petri dish, and a paper disk (diameter: 6 mm) was fixed at the center of the lid. The EO had previously been serially diluted two-fold with dimethyl sulfoxide (DMSO), and 10 µL of each EO dilution was dropped onto the paper disk. The lowest EO concentration that induced a visible inhibition zone was recorded, and the size of the inhibition zones was measured. Pure DMSO was used as the negative control, and all tests were conducted in triplicate. The experimental setup is illustrated in Figure . 2.4 Application of EO vapor on leafy green vegetables An inert plastic piece (4 × 8 cm) was adhered to the inner side of a commercial packaging container lid (volume: 1.8 L), on top of which a filter paper (diameter: 9 cm) was affixed to assist EO vaporization. Fresh leaves (10 ± 0.5 g) of kai lan, kale, butter lettuce, or iceberg lettuce were placed at the bottom of each container, with their surfaces fully exposed to the EO vapor during storage. The EO was then deposited onto the filter paper, and the plastic containers were closed immediately. The sample containers were kept in darkness at 25°C or 7°C, and the vegetables were sampled and analyzed at different time points.
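In a sealed container like the 1.8-L boxes above, the nominal vapor-phase dose depends only on the EO volume deposited and the container volume. A minimal sketch follows; the function name and the 0.94 g/mL density are illustrative assumptions (the study reports deposited EO volume, not an air concentration):

```python
def headspace_concentration(eo_volume_ul, container_volume_l=1.8, density_g_ml=0.94):
    """Nominal EO dose in mg per litre of container air for a fully vaporized deposit.
    density_g_ml is an assumed typical EO density, not a value from the study."""
    # µL × (g/mL) gives mg of oil; dividing by container volume gives mg/L of air
    return eo_volume_ul * density_g_ml / container_volume_l
```

This assumes complete vaporization; in practice, partial evaporation would make the true airborne concentration lower.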
2.5 Color measurement The CIE (International Commission on Illumination) L*a*b* coordinates were measured, and the indices color index (CI), total color difference (TCD), and yellowing index (YI) (Francis & Clydesdale, ; Goni et al., ) were calculated to characterize the evolution of vegetable surface color. Specifically, a* values range from red (positive, +a*) to green (negative, −a*), b* values vary from yellow (positive, +b*) to blue (negative, −b*), and L* (lightness) values run from black (L* = 0) to white (L* = 100). Prior to the measurements, the instrument was calibrated using a standard white reference plate (L* = 41.103, a* = −4.743, and b* = 4.800). Each leaf was measured at four similar positions, and triplicates of each type were examined (total n = 12) using a Minolta CR-100 Colorimeter Reflectance Spectrophotometer with a D65 illumination source (Minolta Camera Co.). Each spot was measured three times, and the automatically averaged values were reported. The color indices were calculated as follows: (1) CI = 1000 × a*/(L* × b*); (2) TCD = √[(L* − L0*)² + (a* − a0*)² + (b* − b0*)²], where L*, a*, and b* are the values of the differently treated kai lan, and L0*, a0*, and b0* are the values of fresh kai lan without EO fumigation (the control) on Day 0 (Manolopoulou & Varzakas, ); and (3) YI = 142.86 × b*/L*, calculated from the b* (yellow–blue axis) and L* (lightness) values. 2.6 Total chlorophyll content measurement Chlorophylls are the principal plant pigments that predominantly determine the visual green color of kai lan leaves, and their contents were determined based on a protocol reported elsewhere (Huang et al., ). Kai lan leaves after oregano EO treatment and storage were lyophilized and ground into powder.
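The three color indices in Section 2.5 reduce to simple arithmetic on the L*a*b* readings; a minimal Python sketch (function and variable names are ours, and the example values in the test are illustrative, not study data):

```python
import math

def color_indices(L, a, b, L0, a0, b0):
    """Colour index, total colour difference, and yellowing index from CIE L*a*b*.
    (L0, a0, b0) are the Day-0 untreated control readings."""
    ci = 1000 * a / (L * b)                                          # Eq. (1)
    tcd = math.sqrt((L - L0) ** 2 + (a - a0) ** 2 + (b - b0) ** 2)   # Eq. (2)
    yi = 142.86 * b / L                                              # Eq. (3)
    return ci, tcd, yi
```

For a green leaf (negative a*), CI is negative, and TCD is zero when a sample matches the Day-0 control exactly.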
Around 10 ± 0.1 mg of freeze-dried kai lan powder was dissolved in 10 mL of aqueous acetone solution (acetone:water = 80:20, v/v) and ultrasonicated for 15 min (Elmasonic S 60H). The resulting solution was stored at −20°C for 24 h. The mixtures were then centrifuged at 3500 × g for 10 min at 4°C, and the absorbances of the supernatant were measured at 663.6 nm (A663.6), 646.6 nm (A646.6), and 440.5 nm (A440.5) using the BioDrop µLite Spectrophotometer (Biodrop). The chlorophyll (Chl) concentrations were quantified in milligrams per gram of dry weight (DW) using the following formulas: (4) Chl a content = (12.25 × A663.6 − 2.55 × A646.6)/DW; (5) Chl b content = (20.31 × A646.6 − 4.91 × A663.6)/DW; and (6) total Chl content = Chl a content + Chl b content. 2.7 Total microbial aerobic count measurement For the total aerobic count (TAC) tests, 10 g of treated vegetables was transferred to a sterile stomacher bag (Deltalab, 180 × 300 mm) and thoroughly mixed with 90 mL of 0.1% peptone water in a stomacher (Masticator Stomacher, IUL Instruments). The mixture was then serially diluted 10-fold, and the diluted extract was inoculated on plate count agar (for TAC) and incubated at 30°C for 2 days. TAC results were expressed in log CFU/g of vegetables, and samples were analyzed in triplicate. The rest of the kai lan leaf and peptone water mixture was transferred to a 50-mL Falcon tube for the DNA extraction and sequencing described in the next section. 2.8 DNA extraction and 16S/18S sequencing DNA extraction was conducted based on Dakwa et al. with slight modifications. The mixture in the 50-mL Falcon tubes was first centrifuged at 1500 × g for 10 min, and the supernatant was collected and further centrifuged (Greiner Bio-One) for 35 min at 3900 × g to collect the bacterial cell pellets. The supernatant was discarded, and the pellet was washed twice with 5 mL of 1% PBS solution followed by centrifugation at 8000 × g for 5 min.
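The chlorophyll formulas (4)–(6) of Section 2.6 can be implemented directly; a minimal Python sketch (function name and the absorbance values in the test are ours; DW is the sample dry weight in the units used by the authors):

```python
def chlorophyll_content(a663_6, a646_6, dry_weight):
    """Chl a, Chl b, and total chlorophyll per Eqs. (4)-(6)."""
    chl_a = (12.25 * a663_6 - 2.55 * a646_6) / dry_weight  # Eq. (4)
    chl_b = (20.31 * a646_6 - 4.91 * a663_6) / dry_weight  # Eq. (5)
    return chl_a, chl_b, chl_a + chl_b                     # Eq. (6) is the sum
```

A drop in the computed total chlorophyll over storage would track the yellowing observed in the color measurements.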
The supernatant was then removed, and the cell pellet was resuspended in 1 mL of PBS. This bacterial suspension was transferred to a 1.5-mL microcentrifuge tube and stored at −80°C until DNA extraction (approximately 2 weeks later). DNA was extracted using the DNeasy PowerFood Microbial Kit (Qiagen Singapore Pte. Ltd.) according to the manufacturer's protocol. Bacterial cells in PBS were thawed at room temperature (25°C) and transferred to the collection tube provided by the kit. Cells were centrifuged at 13,000 × g for 1 min, and the supernatant was removed with a pipette. Microbial DNA was then extracted following the kit protocol. The concentration and purity of the extracted DNA were assessed with the BioDrop µLite Spectrophotometer. The extracted DNA was kept in a 1.5-mL microcentrifuge tube and stored at −80°C before being sent for sequencing analysis. The extracted DNA was sent to NovogeneAIT Genomics Singapore Pte Ltd. for 16S V3–V4 and 18S V4 amplicon sequencing on the Illumina NovaSeq 6000. Quality control, polymerase chain reaction (PCR), library preparation, and the bioinformatics analysis pipeline were conducted accordingly. Data were analyzed with the open-source Quantitative Insights into Microbial Ecology 2 (QIIME 2) pipeline for denoising to obtain amplicon sequence variants, with species annotation against the Silva database (Bokulich et al., ; Li et al., ). The top 10 taxa of each sample or group at each taxonomic rank (phylum, class, order, family, genus, and species) were used to plot relative-abundance distribution histograms in Perl through the Scalable Vector Graphics (SVG) function, visually displaying abundance differences and taxa clustering. Results at the genus rank for 16S amplicon sequencing and the family rank for 18S amplicon sequencing are presented in the Discussion.
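The top-10 relative-abundance summaries described above reduce to normalizing taxon counts and ranking them; a minimal Python sketch (function names and the taxon counts in the test are illustrative, not study data):

```python
def relative_abundance(counts):
    """Normalise raw taxon counts to fractions of the sample total."""
    total = sum(counts.values())
    return {taxon: n / total for taxon, n in counts.items()}

def top_taxa(counts, n=10):
    """The n most abundant taxa, as selected for the stacked-bar histograms."""
    return sorted(counts, key=counts.get, reverse=True)[:n]
```

In the actual pipeline, this operates on the ASV feature table produced by QIIME 2 rather than on a hand-built dictionary.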
Non-metric multidimensional scaling (NMDS) was conducted to comparatively analyze differences in microbial community composition among the tested groups using R with the ade4 package (Rivas et al., ). To determine species with significant variations between groups, t-tests were performed at the genus rank for 16S amplicon sequencing and the family rank for 18S amplicon sequencing, respectively. 2.9 Statistical analysis All tests were performed three times independently and were analyzed by one-way analysis of variance and paired-samples t-test in IBM SPSS Statistics 25.0. Differences were considered significant at p < 0.05. Chemical and biological materials EOs extracted from oregano ( Origanum vulgare ), clove ( Eugenia caryophyllus ), and basil ( Ocimum basilicum ) were purchased from Now Foods in Singapore. Kai lan was purchased from a local urban farm (Vegeponics). Kale, butter lettuce, and iceberg lettuce were purchased from a local supermarket (Fairprice). Folin–Ciocalteu reagent, sodium carbonate, gallic acid, quercetin, and aluminum chloride were bought from Sigma-Aldrich Chemical Company (Sigma-Aldrich). Nutrient agar, nutrient broth, and potato dextrose agar were purchased from Oxoid. Pure cultures of five bacteria from the American Type Culture Collection (ATCC), including Pantoea agglomerans (ATCC 27,155), Pseudomonas cichorii (ATCC 13,455), Pectobacterium carotovorum (ATCC 15,713), Pantoea ananatis (ATCC TSD-232), and Pseudomonas marginalis (ATCC 51,281), and two fungi ( Alternaria brassicicola [ATCC 96,836] and Botrytis cinerea [ATCC 11,542]) were purchased from Everlife (Chemoscience Pte Ltd.). The composition of the selected EO was determined by gas chromatography–mass spectrometry (GC–MS) analysis (Figure ) following a protocol reported by Hai et al. The composition of oregano EO was determined by GC–MS (Agilent Scientific Instruments) with an HP-5MS 5% phenylmethyl siloxane capillary column (30 m × 0.25 mm × 0.25 µm).
The initial oven temperature was set at 100°C and increased at 5°C/min to 220°C, followed by 4°C/min to 250°C and 25°C/min to 300°C. At each temperature stage, the holding time was 5 min. The temperatures of the injection inlet and detector were set at 250°C. The flow rate of 99% helium was set at 1 mL/min, and the split ratio was 1:20. For the MS settings, the electron ionization energy was 70 eV, and the MS scan ranged from 35 m/z to 500 m/z. Vapor-phase antibacterial assay The frozen bacterial cultures (−80°C) were activated by transferring them into 10 mL of nutrient broth and incubating for 24 h at the respective temperatures recommended by ATCC. An aliquot of each bacterial solution was inoculated onto a nutrient agar plate and incubated for 48 h to obtain individual colonies, which were transferred into another 10 mL of nutrient broth for subculturing. All bacterial strains were subcultured three consecutive times to prepare the final bacterial inoculum for further experiments. Antibacterial tests of EOs in the vapor phase were established based on Mukurumbira et al. with slight modifications. For all the tested bacterial strains, the optical density at 600 nm (OD600) of each bacterial solution was measured with the BioDrop µLite Spectrophotometer to ensure a starting bacterial concentration of around 2–3 × 10⁸ CFU/mL. The solution was serially diluted 10⁵-fold with 0.1% sterile peptone water, and 100 µL of the diluted solution was inoculated onto each nutrient agar plate. A glass cover slide was stuck onto the inside of each Petri dish lid, and 2.5 µL of each EO was dropped and spread across the cover slide surface. Each agar plate set was then sealed with parafilm and incubated at the respective ATCC-recommended temperature for 48 h.
Afterward, bacterial plate counts were enumerated, and the antimicrobial potency of each EO vapor was indicated by the bacterial reduction ratio relative to the negative control (no EO applied). All samples were tested in triplicate. The experimental setup is illustrated in Figure . Vapor-phase antifungal assays For molds, the pure cultures were activated by plating on potato dextrose agar and incubating at 25°C for 5–7 days until the whole agar plate was visibly covered by the molds. The molds were subcultured for three consecutive cycles before the following experiments. The fungal cells were harvested by scraping off the agar surface with 10 mL of sterile 0.1% Triton X-100 solution to obtain the fungal suspensions for further work. The mold suspension was directly inoculated on Dichloran-Rose Bengal Chloramphenicol agar. A glass cover slide was stuck on the lid of each Petri dish, and a paper disk (diameter: 6 mm) was fixed at the lid center. Each EO was two-fold serially diluted with dimethyl sulfoxide (DMSO) beforehand, and 10 µL of each EO solution was dropped onto the paper disk. The lowest EO concentration that induced a visible inhibition zone was recorded, and the size of the inhibition zones was measured. Pure DMSO solution was used as the negative control, and all tests were conducted in triplicate. The experimental setup is illustrated in Figure . Application of EO vapor on leafy green vegetables An inert plastic piece (4 × 8 cm) was adhered to the inner side of a commercial packaging container lid (volume: 1.8 L), on top of which a filter paper (diameter: 9 cm) was affixed to assist EO vaporization. Fresh leaves (10 ± 0.5 g) of kai lan, kale, butter lettuce, or iceberg lettuce were placed at the bottom of each container, with their surfaces fully exposed to the EO vapor during storage. The EO was then deposited onto the filter paper, and the plastic containers were closed immediately.
The sample containers were kept in the dark at 25°C and 7°C, respectively, and the vegetables were sampled and analyzed at different time points. Color measurement The International Commission on Illumination (CIE) L * a * b * coordinates were measured, and the indices, including color index (CI), total color difference (TCD), and yellowing index (YI) (Francis & Clydesdale, ; Goñi et al., ), were calculated to characterize the evolution of vegetable surface color. Specifically, the a * values range from red (positive, + a *) to green (negative, − a *), the b * values vary from yellow (positive, + b *) to blue (negative, − b *), and the L * values (lightness) run from black ( L = 0) to white ( L = 100). Prior to the measurements, the instrument was calibrated using a standard white reference plate ( L * = 41.103, a * = −4.743, and b * = 4.800). Each leaf was measured at four similar positions, and triplicates of each type were examined (total n = 12) using a Minolta CR-100 Colorimeter Reflectance Spectrophotometer with a D65 illumination source (Minolta Camera Co.). Each spot was measured three times, and the automatic average values were reported. The color indices were calculated as follows (1–3): (1) CI = (1000 × a *)/( L * × b *) (2) TCD = √[( L * − L 0 *)² + ( a * − a 0 *)² + ( b * − b 0 *)²], where L *, a *, and b * were the values of differently treated kai lan, and L 0 *, a 0 *, and b 0 * were the values of fresh kai lan without EO fumigation (the control) on Day 0 (Manolopoulou & Varzakas, ). (3) YI = 142.86 × b */ L *, where YI was calculated from the b * (yellow–blue axis) and L * (lightness) values.
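Equations (1)–(3) can be sketched in code as follows; the L*a*b* readings used here are illustrative placeholders, not measured values from the study:

```python
import math

def color_indices(L, a, b, L0, a0, b0):
    """Color index (CI), total color difference (TCD), and yellowing index (YI)
    from CIE L*a*b* readings; (L0, a0, b0) are the Day 0 control values."""
    ci = 1000.0 * a / (L * b)                                        # Eq. (1)
    tcd = math.sqrt((L - L0) ** 2 + (a - a0) ** 2 + (b - b0) ** 2)   # Eq. (2)
    yi = 142.86 * b / L                                              # Eq. (3)
    return ci, tcd, yi

# Illustrative: a leaf drifting from green (negative a*) toward yellow (larger b*)
ci, tcd, yi = color_indices(L=55.0, a=-5.0, b=30.0, L0=45.0, a0=-15.0, b0=25.0)
# ci ≈ -3.03 (still yellowish green on the CI scale), tcd = 15.0, yi ≈ 77.9
```

TCD here is the standard Euclidean ΔE*ab distance between the current and Day 0 colors, which is why a control leaf that yellows strongly yields a much larger TCD than a treated one.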
RESULTS AND DISCUSSION 3.1 Antimicrobial effect of EO vapor on spoilage-causing microorganisms All the tested EO vapors showed noticeable antibacterial activities against the tested bacteria, as indicated by their plate count reductions (Table ). Among the three tested EOs, oregano EO showed stronger antibacterial effects than clove EO and basil EO against all five tested bacteria. The inhibition of EO vapors on the strains tested in this work has rarely been explored so far. Nonetheless, the efficacy of oregano EO against Pseudomonas syringae , a vegetable spoilage bacterium, was reported previously (Carezzano et al., ): in an in vitro study, oregano EO inhibited biofilm formation and phytotoxin (coronatine, syringomycin, and tabtoxin) production. Antibacterial mechanisms of EOs have been widely investigated and normally involve several aspects. The major components of EOs are terpenes, alcohols, esters, and phenolic compounds, which can interact with cell membrane components and disrupt membrane integrity (Pandey et al., ). Oregano EO can increase the cell membrane electrical conductivity and reduce intracellular protein concentrations of S. aureus , reflecting cell membrane disruption and leakage of cellular components (Cui et al., ).
EOs can also interact with DNA components and affect gene expression in bacterial cells. EO vapors share similar antibacterial mechanisms but exert greater antibacterial efficacy than EOs in the liquid form (Reyes-Jurado et al., ). This may be because lipophilic EOs form micelles in an aqueous phase, which restrains the interactions between active EOs and bacteria (Nadjib et al., ). Additionally, EOs in the vapor phase interact less with food matrices, which alleviates unwanted changes in the sensory and odor attributes of foods during EO preservation. Two notorious fungal pathogens, A. brassicicola and B. cinerea , can plague a wide range of fruits and vegetables (Leifert et al., ; Soylu et al., ). In this work, the three EOs were serially diluted, and their antifungal effects in the vapor phase were evaluated based on the lowest effective volume within each Petri dish (Table ) and the dimensions of the relative inhibition zones (ZOI) (Figure ). EO was deposited at the center of each Petri dish lid, and a circular inhibition zone appeared when the EO vapor effectively inhibited fungal growth. According to Table , the oregano and clove EO vapors exhibited similar antifungal activities, as indicated by their comparable lowest effective volumes (1.25–2.5 µL) producing similar ZOIs. In general, A. brassicicola showed stronger resistance than B. cinerea against the oregano and basil EO vapors. Antifungal mechanisms of EOs are similar to those against bacteria, although the different fungal cell structures may confer higher resistance against EOs (Sekyere & Asante, ). General antifungal mechanisms of EOs involve cell membrane disruption (Gogoi et al., ), mitochondrial dysfunction (Chen et al., ), and reactive oxygen species production (Nazzaro et al., ). According to the literature, low levels of oregano EO both in the vapor phase (0.2 µg/mL air) and in the liquid contact phase (12.8 µg/mL liquid) can effectively inhibit B. cinerea (Soylu et al., ). The antifungal activities of EOs have been attributed to their possible accumulation in the lipophilic components of fungal cell membranes (Nazzaro et al., ). This accumulation facilitates the subsequent translocation of other EO components into the intracellular milieu. The variations in antifungal efficacy between different EOs result from their diverse physicochemical properties, especially their water solubility and lipophilicity (X. M. Xie et al., ). Interestingly, clove EO, when tested in the liquid phase against A. brassicicola , resulted in abnormal growth of mycelia and swollen hyphae (Peddi et al., ; Suwitchayanon & Kunasakdakul, ). By contrast, the antifungal activities of clove EO vapors were noticeable in this work, highlighting the advantages of EO vapors as antimicrobial agents. To compare the overall antimicrobial activities of the different EOs and select the EO for the shelf life extension tests on kai lan, the antimicrobial activities of the EO vapors were scored and ranked (Table ). For each strain, the EO with the strongest antimicrobial activity among the tested EOs was scored 1, followed by 2 and 3. The total score for each EO was calculated as the sum of its scores for the five bacterial strains and three fungal strains. According to Table , oregano EO had the lowest total score of 10, followed by 13 for clove EO and 17 for basil EO. Therefore, oregano EO, with the strongest antimicrobial activity, was selected for further screening and experiments. The chemical composition of the oregano EO used in this work was analyzed by GC–MS. As shown in Figure , the main component of oregano EO was carvacrol, accounting for 85.78% of the total content, followed by 7.19% cymene and 2.56% linalool. This composition was comparable to other findings, in which the major components of oregano EO were carvacrol (63.97%), p-cymene (12.63%), and linalool (3.67%) (Özkan et al., ).
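The rank-scoring scheme used to select oregano EO can be sketched as below; the per-strain efficacy values are hypothetical placeholders rather than the measured reductions from the table:

```python
def score_eos(efficacy):
    """efficacy maps strain -> {EO name: efficacy value}. For each strain the
    most effective EO scores 1, the next 2, and so on; totals are summed over
    all strains, so the LOWEST total marks the best overall EO."""
    totals = {}
    for strain, by_eo in efficacy.items():
        ranked = sorted(by_eo, key=by_eo.get, reverse=True)
        for rank, eo in enumerate(ranked, start=1):
            totals[eo] = totals.get(eo, 0) + rank
    return totals

# Hypothetical reduction ratios for two of the tested strains
data = {
    "P. cichorii": {"oregano": 0.90, "clove": 0.70, "basil": 0.50},
    "B. cinerea": {"oregano": 0.90, "clove": 0.85, "basil": 0.40},
}
totals = score_eos(data)            # {'oregano': 2, 'clove': 4, 'basil': 6}
best = min(totals, key=totals.get)  # 'oregano'
```

Summing per-strain ranks rather than raw reduction values makes the comparison robust to the very different scales of the bacterial and fungal assays.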
3.2 Application of oregano EO vapor on leafy vegetables Four leafy greens commonly consumed in Singapore (kai lan, kale, butter lettuce, and iceberg lettuce) were chosen to examine the applicability of the oregano EO vapor for their preservation. To identify the applicable EO dose and suitable vegetables, 10 or 75 µL of oregano EO was added into each container (1.8 L), and the vegetables were stored at 7°C for 5 days. As shown in Figure , the green color of kai lan leaves was preserved by oregano EO vapor at a low dose of 10 µL EO in 1.8 L, which effectively delayed yellowing in contrast to the control (no EO). However, a high dose of oregano EO vapor (75 µL EO in 1.8 L) caused noticeable darkening of the leaves, possibly attributable to the accumulation of phenolic compounds (Kraśniewska et al., ; Saltveit et al., ). Figure shows that the kale leaves exhibited a color-change pattern similar to that of kai lan (Chinese kale) leaves, as both are from the Brassicaceae family. However, the green color preservative effect on kale was not as apparent as that on kai lan. In contrast, oregano EO vapor did not provide any noteworthy benefit in preserving the appearance of butter lettuce and iceberg lettuce (Figure ). Oregano EO vapor showed phytotoxic effects toward these two kinds of lettuce, even at the low dose (10 µL EO in 1.8 L). Such phytotoxic effects of oregano EO have been observed against the crop seeds of radish and wild mustard, as well as the weed Italian ryegrass, especially under high-concentration treatment (1 and 0.5 g/mL) (Amato et al., ). Potential reasons for the phytotoxic effects of oregano EO are the inhibition of α-amylase activity and interaction with the plant cell membrane at high dosage (Amato et al., ). Additionally, the seed germination of cucumber and tomato was inhibited by oregano EO at a concentration of 0.5 µL/mL.
Thus, considering both the green-color-preserving and phytotoxic effects of oregano EO, kai lan was chosen as the target matrix for further studies. As for the dosage, 10 µL EO in 1.8 L was selected as it produced notable color preservative effects, whereas 75 µL EO in 1.8 L was not tested further as it resulted in grayish kai lan leaves. Instead, a medium dosage of 40 µL EO in 1.8 L was tested. 3.3 Surface color determination of kai lan leaves Deterioration in the visual quality of leafy greens typically manifests as unfavorable color changes such as yellowing. Both surface color and total chlorophyll content have been widely determined to characterize the color changes of leafy vegetables (Barrett et al., ). To characterize the color and its changes during storage, CI, TCD, and YI were determined. CI mainly reflects the absolute color value of an object, indicating the real color of the sample (López Camelo & Gómez, ). TCD is a numerical description of the difference between two colors, while YI mainly describes the yellowness of the samples, providing more direct information about the yellowness of kai lan leaves during storage (Jung & Sato, ; Wu et al., ). The CI of kai lan under oregano EO vapor treatment is shown in Figure . According to the standard evaluation, CI values of −40 to −20 represent blue violet to dark green, −20 to −2 indicate dark green to yellowish green, +2 to +20 show pale yellow to deep orange, and +20 to +40 stand for deep orange to deep red (Goñi et al., ). After the 7-day storage at 25°C, the CI value of kai lan in the control group (no EO) markedly increased from −26.50 ± 6.89 to 2.36 ± 0.65 ( p < 0.05), indicating that its color changed from dark green to pale yellow. By contrast, in the presence of oregano EO vapors, the color change of kai lan was significantly retarded, as indicated by the slower increases in the CI values.
This was consistent with the TCD (Figure ) and YI (Figure ), both of which significantly decreased under the EO vapor treatment. In particular, the oregano EO vapor showed the lowest TCD and YI values, being the most effective at preserving the initial green color of kai lan against subsequent color changes and yellowing. These results were consistent with visual observations, confirming that oregano EO vapor is promising for preserving the green color of kai lan. It has been shown that refrigerated storage can effectively retard the yellowing of several leafy greens such as lettuce, broccoli (Manolopoulou & Varzakas, ), baby spinach (Y. Kou et al., ; Y. Xie et al., ), and rocket leaves (Kim & Ishii, ). Therefore, kai lan storage with oregano EO vapor was also tested at 7°C for 14 days. Similar color protection effects of oregano EO vapor were observed (Figure ). The TCD was in the range of 5–15 under oregano EO vapor treatment, compared with about 36 for the control group (relative to the Day 0 value; Figure [b]). Based on Figure , the yellowing of kai lan leaves was remarkably inhibited by the oregano EO vapor treatment. 3.4 Total chlorophyll content of kai lan The overall discoloration of leafy greens is primarily associated with the loss of chlorophylls, which simultaneously magnifies the appearance of leaf yellowing during storage. The decreasing chlorophyll content was consistent with the fading of green color (Figure ). For the control group (no EO), the total chlorophylls in kai lan decreased from 15.39 ± 1.38 mg/g DW to 4.91 ± 1.60 mg/g DW (a 68.10% reduction; p < 0.05) over 5 days at 25°C and reached a plateau until Day 7. With a dose of 10 µL oregano EO in 1.8 L, the total chlorophylls in kai lan were markedly maintained, with no significant reduction after 14 days of storage at 7°C ( p > 0.05).
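The 68.10% loss quoted for the control follows directly from the reported total chlorophyll values; a quick check:

```python
# Total chlorophyll (mg/g DW) of control kai lan at 25°C, values from the text
chl_day0, chl_day5 = 15.39, 4.91

# Percent reduction over the first 5 days of storage
reduction = 100.0 * (chl_day0 - chl_day5) / chl_day0
# reduction ≈ 68.1%, matching the reported 68.10%
```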
After 7 days of storage at 25°C, the dose of 40 µL EO in 1.8 L provided better protection against chlorophyll decreases in kai lan than 10 µL EO in 1.8 L ( p < 0.05). The maintenance of chlorophyll content in the kai lan leaves of the treatment groups can be attributed to the antioxidative effects of oregano EO and the inhibition of oxygen permeation by the oregano EO vapor surrounding the kai lan leaves (Abedi et al., ). 3.5 The microbial analysis of kai lan under oregano EO vapor treatment Apart from the favorable color preservation, it was essential to investigate the effects of oregano EO vapors on the microbial quality of kai lan with regard to ultimate shelf life extension in real-life scenarios. Although oregano EO vapor showed significant inhibition of all five tested spoilage-causing bacterial strains and the two fungal pathogens, as shown in Section 3.1, the antimicrobial effect was not fully demonstrated by measuring the TAC on kai lan with its complex natural microbiome. As shown in Figure , significantly lower microbial loads in the oregano EO vapor-treated groups than in the control group were noticed only on Day 3 at 25°C with the dose of 40 µL oregano EO in 1.8 L (Figure ; p < 0.05), on Days 5 and 10 at 7°C with the dose of 10 µL oregano EO in 1.8 L, and on Days 5, 10, and 14 at 7°C with the dose of 40 µL oregano EO in 1.8 L (Figure ; p < 0.05). We hypothesized that this was largely due to the varying susceptibilities of the different microbes on kai lan, and it thus became necessary to investigate the changes in microbial composition before and after oregano EO vapor treatment. The NMDS analysis of the 16S amplicon sequencing results from the different groups (EO vapor doses and storage temperatures) is shown in Figure . A stress value below 0.2 indicates that the scaling result is reliable.
Interestingly, the samples treated with oregano EO vapor at the dose of 40 µL oregano EO in 1.8 L at both 25°C and 7°C showed patterns more similar to the Day 0 samples than the other groups. The samples treated with the lower dose of EO vapor (10 µL oregano EO in 1.8 L) showed patterns more similar to the control groups without EO vapor treatment than the samples treated with the higher dose (40 µL oregano EO in 1.8 L) at 25°C and 7°C, respectively. However, the relative abundance at the bacterial genus rank, as shown in Figure , still revealed distinctive microbial compositions in samples treated with different EO vapor doses and storage temperatures. In comparison with the major genera identified on Day 0, Terribacillus became more competitive after storage at 25°C for 7 days with the dose of 10 µL oregano EO in 1.8 L. Pantoea , Pseudomonas , and Salinicola dominated the bacterial community in kai lan stored at 25°C with the dose of 40 µL oregano EO in 1.8 L. After the kai lan was stored at 7°C for 14 days, Ralstonia became more competitive with the dose of 10 µL oregano EO in 1.8 L, whereas with the dose of 40 µL oregano EO in 1.8 L, Brevibacterium remained the dominant genus among the bacteria on kai lan after storage. Taken together, these results indeed indicated a dose dependency in the antibacterial effects of oregano EO vapor when used to treat kai lan with its natural microbiome. The NMDS analysis of the 18S amplicon sequencing results from the different groups (EO vapor doses and storage temperatures) is shown in Figure . Again, a stress value below 0.2 indicates that the scaling result is reliable. The samples treated with oregano EO vapor at the dose of 40 µL oregano EO in 1.8 L at both 25°C and 7°C showed patterns more similar to the Day 0 samples than the other groups.
As shown in Figure , no fungi were identified in the Day 0 samples or the samples treated with oregano EO vapor at the dose of 40 µL oregano EO in 1.8 L at either 25°C or 7°C, whereas various fungal families were identified in the control groups without EO treatment stored at 25°C for 7 days and 7°C for 14 days, as well as in the samples treated with oregano EO vapor at the dose of 10 µL oregano EO in 1.8 L. After storage at 25°C for 7 days, Aspergillaceae, Pleosporaceae , and Gjaerumiaceae were found to be abundant on kai lan without EO treatment and with oregano EO vapor at the dose of 10 µL oregano EO in 1.8 L. When the kai lan was stored at 7°C for 14 days, Entylomatales and Cordycipitaceae were more enriched in the samples without EO treatment and with oregano EO vapor at the dose of 10 µL oregano EO in 1.8 L. Similar to the bacterial sequencing analysis, these results indicated a dose dependency in the antifungal effects of oregano EO vapor when used to treat kai lan with its natural microbiome. Limited research has discussed the application of oregano EO vapor on leafy greens, although the antimicrobial and phytotoxic effects of oregano EO have been studied. Oregano EO vapor has shown stronger antimicrobial activity against Gram-negative bacteria than cinnamon and thyme EO vapors, as reflected by lower MIC values against the same bacterial strains (López et al., ). In practical application, oregano EO combined with rosemary EO has been applied to iceberg lettuce and chard in liquid form and resulted in significant log reductions of food pathogens related to leafy greens, including Listeria monocytogenes , Escherichia coli , and Salmonella enteritidis (de Medeiros Barbosa et al., ).
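The "log reductions" cited here are conventionally computed from viable counts as log10 of the control-to-treated ratio; a minimal helper (the counts shown are hypothetical, not data from the cited studies):

```python
import math

def log_reduction(n_control, n_treated):
    """Log10 reduction in viable counts (CFU/g) relative to the untreated control."""
    return math.log10(n_control / n_treated)

# Hypothetical plate counts: 1e6 CFU/g (control) vs. 1e4 CFU/g (EO-treated)
r = log_reduction(1e6, 1e4)  # 2.0-log reduction
```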
The seed germination of cucumber and tomato under EO treatment has been evaluated, and oregano EO showed the strongest inhibitory effect on tomato seed germination among the tested EOs, with 0.125 µL/mL treatment leading to a 50% reduction in seed germination (Ibáñez & Blázquez, ). This indicates that oregano EO has strong phytotoxic effects against tomatoes. Accordingly, oregano EO vapor can be applied as an antimicrobial for kai lan, while the dosage needs to be properly controlled to avoid its negative phytotoxic effects.
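For comparing the dosages used throughout, the EO volumes translate into headspace concentrations given the 1.8-L container stated in the methods; a quick conversion (the helper name is ours, for illustration only):

```python
def headspace_dose(eo_ul, container_l=1.8):
    """EO dose expressed as µL of oil per litre of container headspace."""
    return eo_ul / container_l

low = headspace_dose(10)   # ≈ 5.6 µL/L
mid = headspace_dose(40)   # ≈ 22.2 µL/L
high = headspace_dose(75)  # ≈ 41.7 µL/L
```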
EO vapor has similar antibacterial mechanisms, where they exert greater antibacterial efficacies than EOs in the liquid form (Reyes‐Jurado et al., ). This may be because the lipophilic EO forms micelles in an aqueous phase, which restrains the interactions between active EOs and bacteria (Nadjib et al., ). Additionally, EO in the vapor phase can mitigate their interaction with food matrices, which alleviates the unwanted changes in sensory and odor attributes of foods during EO preservation. Two notorious fungal pathogens, A. bassicicola and B. cinerea , can plague a wide range of fruits and vegetables (Leifert et al., ; Soylu et al., ). In this work, the three EOs were serially diluted and their antifungal effects in the vapor phase were evaluated based on the lowest effective volume within each Petri dish (Table ), and the dimensions of relative inhibition zone (ZOI) (Figure ). EO was deposited at the center of each Petri dish lid and a circle of inhibition zone appears when the EO vapors can effectively inhibit the fungal growth. According to Table , the oregano and clove EO vapors exhibited similar antifungal activities as indicated by their proximate lowest effective volume (1.25–2.5 µL) that can cause similar ZOI. In general, A. bassicicola showed stronger resistance compared to B. cinerea against the oregano and basil EO vapors. Antifungal mechanisms of EOs are similar to those against bacteria, while the different fungal cell structures may render them higher resistance against EO (Sekyere & Asante, ). General antifungal mechanisms of EOs involve cell membrane disruption (Gogoi et al., ), mitochondria dysfunction (Chen et al., ), and reactive oxygen species production (Nazzaro et al., ). According to the literature, low levels of oregano EO both in the vapor (0.2 µg/mL air) and in liquid contact phase (12.8 µg/mL liquid) can effectively inhibit B. cinerea (Soylu et al., ). 
The antifungal activities of EOs have been attributed to their possible accumulation in the lipophilic components of fungal cell membranes (Nazzaro et al., ). This accumulation facilitates the subsequent translocation of other EO components into the intracellular milieu. The variations in antifungal efficacy between different EOs result from their diverse physicochemical properties, especially their water solubility and lipophilicity (X. M. Xie et al., ). Interestingly, clove EO tested in the liquid phase against A. brassicicola resulted in abnormal growth of mycelia and swollen hyphae (Peddi et al., ; Suwitchayanon & Kunasakdakul, ). By contrast, the antifungal activities of clove EO vapors were noticeable in this work, highlighting the advantages of EO vapors as antimicrobial agents. To compare the overall antimicrobial activities of the different EOs and select the EO to be applied in shelf life extension tests on kai lan, the antimicrobial activities of the EO vapors were scored and ranked (Table ). For each strain, the EO with the strongest antimicrobial activity among the tested EOs was scored 1, followed by 2 and 3. The total score for each EO was calculated as the sum of its scores across the five bacterial strains and three fungal strains. According to Table , oregano EO had the lowest total score of 10, followed by 13 for clove EO and 17 for basil EO. Therefore, oregano EO, with the strongest antimicrobial activity, was selected for further screening and experiments. The chemical composition of the oregano EO used in this work was analyzed by GC–MS. As shown in Figure , the main component of oregano EO was carvacrol, accounting for 85.78% of the total content, followed by cymene (7.19%) and linalool (2.56%). This composition is comparable to other findings showing that the major components of oregano EO were carvacrol (63.97%), p-cymene (12.63%), and linalool (3.67%) (Özkan et al., ).
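The scoring scheme used above to rank the EO vapors (score 1 for the strongest EO per strain, totals summed over all strains, lowest total wins) can be sketched as follows; the per-strain ranks here are made-up illustrations, not the values from the paper's table:

```python
# Hypothetical per-strain ranks (1 = strongest of the three EOs, 3 = weakest);
# the real ranks come from the plate-count and ZOI data in the paper's table.
scores = {
    "oregano": [1, 1, 1, 2, 1, 1, 2, 1],
    "clove":   [2, 2, 2, 1, 2, 2, 1, 2],
    "basil":   [3, 3, 3, 3, 3, 3, 3, 3],
}

totals = {eo: sum(ranks) for eo, ranks in scores.items()}
best = min(totals, key=totals.get)
print(totals)  # {'oregano': 10, 'clove': 14, 'basil': 24}
print(best)    # oregano
```

With these example ranks, oregano comes out lowest, mirroring the selection made in the study.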
Application of oregano EO vapor on leafy vegetables

Four leafy greens commonly consumed in Singapore (kai lan, kale, butter lettuce, and iceberg lettuce) were chosen to examine the applicability of the oregano EO vapor for their preservation. To identify the applicable EO dose and suitable vegetables, 10 or 75 µL of oregano EO was added into each container (1.8 L), and the vegetables were stored at 7°C for 5 days. As shown in Figure , the green color of kai lan leaves was preserved using oregano EO vapor at a low dose of 10 µL EO in 1.8 L, which effectively delayed yellowing in contrast to the control (no EO). However, a high dose of oregano EO vapor (75 µL EO in 1.8 L) caused noticeable darkening of the leaves, possibly attributable to the accumulation of phenolic compounds (Kraśniewska et al., ; Saltveit et al., ). Figure shows that the kale leaves exhibited a color-change pattern similar to that of kai lan (Chinese kale) leaves, as both are from the Brassicaceae family. However, the green color preservative effect on kale was not as apparent as that on kai lan. In contrast, oregano EO vapor did not provide any noteworthy benefits in preserving the appearance of butter lettuce and iceberg lettuce (Figure ). Oregano EO vapor showed phytotoxic effects toward these two kinds of lettuce even at the low-dose (10 µL EO in 1.8 L) treatment. Such phytotoxic effects of oregano EO have been observed against the crop seeds of radish and wild mustard, and also against the weed Italian ryegrass, especially under high-concentration treatments (1 and 0.5 g/mL) (Amato et al., ). Potential reasons for the phytotoxic effects of oregano EO are the inhibition of α-amylase activity and the interaction with the plant cell membrane at high dosage (Amato et al., ). Additionally, the seed germination of cucumber and tomato was inhibited by oregano EO at a concentration of 0.5 µL/mL.
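For comparison with the liquid-phase phytotoxicity thresholds quoted above, the vapor doses used in this study can be expressed per unit of headspace air. This is a simple unit conversion, assuming the full 1.8 L container volume as the headspace:

```python
def headspace_conc(eo_volume_ul: float, container_l: float = 1.8) -> float:
    """EO dose per mL of container air, in µL/mL (assumes the whole
    container volume is headspace)."""
    return eo_volume_ul / (container_l * 1000.0)

for dose in (10, 40, 75):
    print(f"{dose} µL in 1.8 L = {headspace_conc(dose):.4f} µL/mL air")
```

Even the highest dose tested (75 µL in 1.8 L, about 0.042 µL/mL air) is well below the 0.5 µL/mL liquid concentration reported to inhibit seed germination, which is consistent with dose control being the key to avoiding phytotoxicity.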
Thus, considering both the green color preservative and the phytotoxic effects of oregano EO, kai lan was chosen as the target matrix for further studies. As for the dosage, 10 µL EO in 1.8 L was selected as it produced notable color preservative effects, and 75 µL EO in 1.8 L was not tested further as it resulted in grayish kai lan leaves. Instead, a medium dosage of 40 µL EO in 1.8 L was tested.

Surface color determination of kai lan leaves

Deterioration in the visual quality of leafy greens typically manifests as unfavorable color changes such as yellowing. Both surface color and total chlorophyll content have been widely determined to characterize the color changes of leafy vegetables (Barrett et al., ). To characterize the color and its changes during storage, the color index (CI), total color difference (TCD), and yellowness index (YI) were determined. CI mainly focuses on the absolute color value of an object, indicating the real color of the sample (López Camelo & Gómez, ). TCD is a numerical description of the difference between two colors, while YI mainly describes the yellowness of the samples, providing more direct information about the yellowness of kai lan leaves during storage (Jung & Sato, ; Wu et al., ). The CI of kai lan under oregano EO vapor treatment is shown in Figure . According to the standard evaluation, CI (−40 to −20) represents blue violet to dark green, CI (−20 to −2) indicates dark green to yellowish green, CI (+2 to +20) shows pale yellow to deep orange, and CI (+20 to +40) stands for deep orange to deep red (Goñi et al., ). After the 7-day storage at 25°C, the CI value of kai lan in the control group (no EO) markedly increased from −26.50 ± 6.89 to 2.36 ± 0.65 (p < 0.05), indicating that its color changed from dark green to pale yellow. By contrast, in the presence of oregano EO vapors, the color change of kai lan was significantly retarded, as indicated by the slower increases in CI values.
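The three color metrics are computed from CIELAB (L*, a*, b*) readings. The paper does not print its exact equations, so the formulas below are the ones commonly used in the cited colorimetry literature (a citrus-type CI = 1000·a/(L·b), a ΔE-type TCD, and a simplified YI) and should be treated as assumptions:

```python
import math

def color_index(L: float, a: float, b: float) -> float:
    """Citrus-type color index, CI = 1000*a/(L*b); negative values
    indicate green, positive values yellow to red."""
    return 1000.0 * a / (L * b)

def total_color_difference(lab_ref, lab_sample) -> float:
    """Euclidean distance in L*a*b* space (Delta E) between a sample
    and a reference reading (e.g. Day 0)."""
    return math.sqrt(sum((r - s) ** 2 for r, s in zip(lab_ref, lab_sample)))

def yellowness_index(L: float, b: float) -> float:
    """Simplified yellowness index, YI = 142.86 * b / L."""
    return 142.86 * b / L

# Hypothetical dark-green leaf reading (L=45, a=-18, b=30):
print(color_index(45.0, -18.0, 30.0))  # negative -> green
```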
This was consistent with the TCD (Figure ) and YI (Figure ), both of which significantly decreased under the EO vapor treatment. In particular, the oregano EO vapor treatment showed the lowest TCD and YI values, being the most effective at preserving the initial green color of kai lan against subsequent color changes and yellowing. These results were consistent with the visual observations, confirming that oregano EO vapor is promising for preserving the green color of kai lan. It has been shown that refrigerated storage can effectively retard the yellowing of several leafy greens such as lettuce, broccoli (Manolopoulou & Varzakas, ), baby spinach (Y. Kou et al., ; Y. Xie et al., ), and rocket leaves (Kim & Ishii, ). Therefore, kai lan storage with oregano EO vapor was also tested at 7°C for 14 days. Similar color protection effects of oregano EO vapor were observed (Figure ). The TCD was in the range of 5–15 under oregano EO vapor treatment, whereas this value was about 36 for the control group (compared to the Day 0 value; Figure [b]). Based on Figure , yellowing of the kai lan leaves was remarkably inhibited by the oregano EO vapor treatment.

Total chlorophyll content of kai lan

The overall discoloration of leafy greens is primarily associated with the loss of chlorophylls, which simultaneously magnifies the appearance of leaf yellowing during storage. The decreasing chlorophyll content was consistent with the fading of the green color (Figure ). For the control group (no EO), the total chlorophylls in kai lan decreased from 15.39 ± 1.38 mg/g DW to 4.91 ± 1.60 mg/g DW (reduced by 68.10%; p < 0.05) over 5 days at 25°C and remained at a plateau until Day 7. With a dose of 10 µL oregano EO in 1.8 L, the total chlorophylls in kai lan were largely maintained, with no significant reduction after 14 days of storage at 7°C (p > 0.05).
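Total chlorophyll is typically estimated spectrophotometrically from acetone-extract absorbances; the Lichtenthaler equations for 80% acetone below are a common choice, but the paper's exact extraction protocol is not reproduced here, so this is an assumption. The 68.10% loss quoted for the control group can also be checked directly:

```python
def total_chlorophyll_ug_per_ml(a663: float, a647: float) -> float:
    """Chl a + Chl b in µg/mL of extract, Lichtenthaler equations
    for 80% acetone (assumed protocol, not the paper's own)."""
    chl_a = 12.25 * a663 - 2.79 * a647
    chl_b = 21.50 * a647 - 5.10 * a663
    return chl_a + chl_b

def percent_loss(initial: float, final: float) -> float:
    """Percent reduction relative to the initial value."""
    return (initial - final) / initial * 100.0

# Control group at 25 °C: 15.39 -> 4.91 mg/g DW over 5 days.
print(round(percent_loss(15.39, 4.91), 2))  # 68.1, matching the quoted 68.10%
```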
After the 7 days of storage at 25°C, the dose of 40 µL EO in 1.8 L provided better protection against the chlorophyll decrease in kai lan than the dose of 10 µL EO in 1.8 L (p < 0.05). The maintenance of chlorophyll content in the kai lan leaves of the treatment groups can be attributed to the antioxidative effects of oregano EO and to the inhibition of oxygen permeation by the oregano EO vapor surrounding the kai lan leaves (Abedi et al., ).

Microbial analysis of kai lan under oregano EO vapor treatment

Apart from the favorable color preservation, it was essential to investigate the effects of oregano EO vapors on the microbial quality of kai lan with regard to ultimate shelf life extension in real-life scenarios. Although oregano EO vapor showed significant inhibition of all five tested spoilage-causing bacterial strains and the two fungal pathogens, as shown in Section 3.1, the antimicrobial effect was not fully demonstrated by measuring the total aerobic count (TAC) on kai lan with its complex natural microbiome. As shown in Figure , significantly lower microbial loads in the oregano EO vapor-treated groups than in the control group were only noticed on Day 3 at 25°C with the dose of 40 µL oregano EO in 1.8 L (Figure ; p < 0.05), on Days 5 and 10 at 7°C with the dose of 10 µL oregano EO in 1.8 L, and on Days 5, 10, and 14 at 7°C with the dose of 40 µL oregano EO in 1.8 L (Figure ; p < 0.05). We hypothesized that this was largely due to the varying susceptibilities of the different microbes on kai lan, and it thus became necessary to investigate the microbial composition changes before and after oregano EO vapor treatment. The NMDS analysis of the 16S amplicon sequencing results from the different groups (doses of EO vapor and storage temperatures) is shown in Figure . A stress value smaller than 0.2 indicates that the scaling result is reliable.
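NMDS ordinations such as the one above are typically built on a Bray–Curtis dissimilarity matrix computed from the genus-level count table. A minimal sketch of that distance step, with made-up counts (the study's sequencing data are not reproduced here):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

# Rows = samples (e.g. Day 0, control, 10 µL dose, 40 µL dose),
# columns = genera; the counts below are purely illustrative.
counts = np.array([
    [120, 30, 5, 0],
    [10, 80, 60, 40],
    [90, 40, 20, 5],
    [110, 35, 10, 2],
], dtype=float)

# Bray-Curtis: sum(|u - v|) / sum(u + v); 0 = identical communities,
# 1 = completely disjoint communities.
bc = squareform(pdist(counts, metric="braycurtis"))
print(np.round(bc, 3))
```

The resulting symmetric matrix is what the NMDS algorithm then embeds in two dimensions while reporting a stress value.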
Interestingly, the samples treated with oregano EO vapor at the dose of 40 µL oregano EO in 1.8 L, at both 25°C and 7°C, showed patterns more similar to the Day 0 samples than the other groups did. The samples treated with the lower dose of EO vapor (10 µL oregano EO in 1.8 L) showed patterns more similar to the control groups without EO vapor treatment than did the samples treated with the higher dose (40 µL oregano EO in 1.8 L), at 25°C and 7°C, respectively. Nevertheless, the relative abundances at the bacterial genus level shown in Figure still revealed distinctive microbial compositions in samples treated with different doses of EO vapor and stored at different temperatures. In comparison with the major genera identified on Day 0, Terribacillus became more competitive after storage at 25°C for 7 days with the dose of 10 µL oregano EO in 1.8 L. Pantoea, Pseudomonas, and Salinicola dominated the bacterial community of kai lan stored at 25°C with the dose of 40 µL oregano EO in 1.8 L. After the kai lan was stored at 7°C for 14 days, Ralstonia became more competitive with the dose of 10 µL oregano EO in 1.8 L, whereas with the dose of 40 µL oregano EO in 1.8 L, Brevibacterium remained the dominant genus on kai lan after storage. Taken together, these results indeed indicated a dose dependency in the antibacterial effects of oregano EO vapor when used to treat kai lan with its natural microbiome. The NMDS analysis of the 18S amplicon sequencing results from the different groups (doses of EO vapor and storage temperatures) is shown in Figure . A stress value smaller than 0.2 indicates that the scaling result is reliable. The samples treated with oregano EO vapor at the dose of 40 µL oregano EO in 1.8 L, at both 25°C and 7°C, showed patterns more similar to the Day 0 samples than the other groups did.
As shown in Figure , no fungi were identified from the Day 0 samples or from the samples treated with oregano EO vapor at the dose of 40 µL oregano EO in 1.8 L at either 25°C or 7°C, whereas various fungal families were identified from the control groups without EO treatment stored at 25°C for 7 days and at 7°C for 14 days, as well as from the samples treated with oregano EO vapor at the dose of 10 µL oregano EO in 1.8 L. After storage at 25°C for 7 days, Aspergillaceae, Pleosporaceae, and Gjaerumiaceae were abundant on kai lan without EO treatment and with oregano EO vapor at the dose of 10 µL oregano EO in 1.8 L. When the kai lan was stored at 7°C for 14 days, Entylomatales and Cordycipitaceae were more enriched in the samples without EO treatment and with oregano EO vapor at the dose of 10 µL oregano EO in 1.8 L. Similar to the bacterial sequencing analysis, these results indicated a dose dependency in the antifungal effects of oregano EO vapor when used to treat kai lan with its natural microbiome. Limited research has discussed the application of oregano EO vapor on leafy greens, although the antimicrobial and phytotoxic effects of oregano EO have been studied. Oregano EO vapor has presented stronger antimicrobial activity against Gram-negative bacteria than cinnamon EO vapor and thyme EO vapor, as reflected by lower minimum inhibitory concentration (MIC) values against the same bacterial strains (López et al., ). In actual application, oregano EO combined with rosemary EO has been applied to iceberg lettuce and chard in liquid form and resulted in significant log reductions of foodborne pathogens related to leafy greens, including Listeria monocytogenes, Escherichia coli, and Salmonella enteritidis (de Medeiros Barbosa et al., ).
CONCLUSION

In order to evaluate the potential of EO vapor in extending the shelf life of leafy green vegetables, the antimicrobial effects of oregano, clove, and basil EO vapors were first tested against five bacterial (P. agglomerans, P. cichorii, P. carotovorum, P. ananatis, P. marginalis) and two fungal (A. brassicicola and B. cinerea) strains previously recognized as spoilage-causing microorganisms of fresh produce. Oregano EO vapor was selected to proceed with the application on the vegetables, as it showed the strongest overall antimicrobial effect on the pure cultures. Among the four leafy green vegetables tested (kai lan, kale, butter lettuce, and iceberg lettuce), kai lan suffered the least from the phytotoxic effect while benefiting the most from the oregano EO vapor treatment, which slowed down its loss of chlorophylls and thus maintained its favorable green color. These color protective effects were further demonstrated at different storage temperatures (25°C and 7°C) and with different doses of EO vapor generated from 10 and 40 µL of oregano EO in 1.8-L containers. The antimicrobial effect was studied with both the plate counting method, which enumerates the cultivable microorganisms on kai lan, and a high-throughput sequencing method. As a result, the culture-independent method was able to better reflect the dose-dependent antimicrobial effect of oregano EO vapor on kai lan.
Important lessons were learned in this study, particularly from applying the EO vapor to genuine food samples. Although the antimicrobial property of EOs is widely used to justify their use as food preservatives, one must not overlook the possible side effects of antimicrobial agents on food quality. In this study, strong phytotoxic effects were observed on butter lettuce and iceberg lettuce, suggesting the unsuitability of using EO vapor on these types of vegetables. On the other hand, a large number of studies in the literature spike pure cultures of spoilage-causing microorganisms onto sterile food models to evaluate the antimicrobial effect of EOs (Hai et al., ; Lou et al., ; C. Zhang et al., ). Although more robust and repeatable, these studies exclude the implicit factors contributed by the natural microflora on foods. In this study, our results demonstrated the essential gaps between the effects on pure cultures and on the natural microflora of foods, justifying the necessity of maintaining the natural flora on foods in future experimental setups of such studies. For further experiments and industrial scale-up, the proper dosage of oregano EO vapor for post-harvest preservation of kai lan is in the range of 10–40 µL per 1.8 L of air applied to 10 g of kai lan leaves. It is also critical to conduct sensory evaluation to assess the consumer acceptance of kai lan after oregano EO vapor treatment. Additionally, other packaging techniques, such as nano-packaging, could be combined with oregano EO vapor for effective release of the EO vapor and better preservative effects on kai lan. Weichen Shu: Conceptualization; methodology; data curation; investigation; writing—original draft. Zhuoliang Deng: Data curation; investigation. Lingdai Liu: Conceptualization; methodology; investigation; validation; writing—original draft. Jiaxuan Zhang: Investigation; methodology.
Dan Li : Conceptualization; validation; project administration; funding acquisition; writing—review and editing; resources; supervision. The authors declare no conflicts of interest. Supporting Information |
Neuropathology of incidental Lewy body & prodromal Parkinson’s disease

The traditional concept of pathology progression in PD

Parkinson’s disease (PD) is the most common neurodegenerative movement disorder and is characterised by the progressive development of bradykinesia, muscular rigidity, rest tremor, and postural instability. The cardinal motor features result from the progressive loss of dopaminergic (DA) neurons in the substantia nigra pars compacta (SNc). A neuropathological hallmark of PD is the presence of neuronal protein aggregates termed Lewy bodies (LBs) (Fig. a). LBs are composed of vesicular membrane structures and dysmorphic organelles in conjunction with protein aggregates containing alpha-synuclein (αSyn) as the main component. Gene multiplications and missense mutations in SNCA, the gene coding for αSyn, are causative for familial forms of PD, which account for 10–15% of cases. In addition, genome-wide association studies linked common variants at the SNCA locus to sporadic PD, further supporting an important pathogenic role of αSyn in PD. Postmortem studies suggested that the gradual appearance of LBs correlates with disease progression in PD. Based on the gradual appearance of LBs, Braak et al. developed a neuropathological staging scheme for PD. The authors proposed that PD primarily starts in the olfactory bulb and the autonomic enteric nervous system (ENS), with a caudo-rostral (retrograde) spread of Lewy body pathology (LBP) over time, ultimately reaching the SNc, where it is suspected to initiate the demise of DA neurons (Fig. ). LBs are therefore considered a marker for disease progression, while neuronal loss represents a well-established neuropathological correlate of clinical PD (cPD) symptoms.
Braak’s staging is divided into six stages that reflect the progression of LBP from the dorsal motor nucleus of the vagus nerve (DMV) (Braak stage 1) to the locus coeruleus (LC) (Braak stage 2), the SNc and amygdala (Braak stage 3), and ultimately cortical areas (Braak stages 4–6). The Braak stages correspond to the type and degree of clinical symptoms associated with disease progression. Early stages are characterised by non-motor symptoms, while typical PD motor signs are thought to appear once the SNc is affected at Braak stages ≥ 3, and cognitive symptoms arise only as LBP reaches the cortex in Braak stages 5 and 6 (Fig. ). Braak’s model proposes that LBP gradually appears in defined anatomical structures during disease progression. In line with the idea of a prion-like LBP ‘spreading’ mechanism, fetal DA neurons transplanted into the PD SNc exhibited proteinaceous inclusions that resembled LBs. This result was interpreted as a ‘spread’ of LBP from the host to the graft. In mice, synthetic pre-formed αSyn fibrils propagate from the site of stereotaxic injection to synaptically connected, neighboring structures, thereby creating a Lewy-like pathology. Similarly, proteins extracted from human brains with LBP and injected into the striatum of monkeys can also propagate to neighboring structures, illustrating LBP’s propensity to spread from its origin. In summary, previous results support a model where LBP gradually builds up in clearly defined brain regions, leading to neuronal death. This process occurs according to a well-defined pattern, resulting in some brain areas remaining unaffected until the final stages of PD, whereas others are devastated by degeneration early on.

Controversies and limitations of the Braak staging scheme

The co-incidence of DA neuron loss and LBs initially primed the conclusion that these intraneuronal inclusions, in combination with cell death, were responsible for the disease.
However, a considerable body of research raises concerns regarding the overall significance of LBP. For instance, Gibb et al. reported an age-dependent increase in the prevalence of LBs from 3.8% to 12.8% between the sixth and ninth decades of age, an amount that exceeds the prevalence of PD by about three- to six-fold. In line with this, previous reports found that a significant proportion of neuropathologically confirmed LBD cases never exhibited clinical symptoms. In addition, more recent research demonstrated that cell death and LBP do not entirely correlate in the affected brain regions. For instance, even in the absence of LBP, there is considerable neuronal death in the supraoptic nucleus in PD. By contrast, there is no discernible neuron loss in the neighboring, LB-rich tuberomammillary nucleus of the hypothalamus. Furthermore, in patients who do not exhibit dementia during disease progression, the only cortical region that shows substantial neuronal loss is the pre-supplementary motor cortex, where small intra-telencephalic pyramidal neurons degenerate in the absence of LBP. These results cast doubt on the concept that cell death is a consequence of LBP in PD. Conflicting with other reports, which claim that neurons are primarily lost in brain regions with LBP, a study by Iacono et al. found no significant correlation between neuronal loss and LBs in the PD brain. Another line of evidence for an LBP-independent pathological process comes from PD cases carrying genetic mutations in which the LBP distribution is distinct from that of idiopathic PD. For instance, only a subset of PD patients with a G2019S mutation in the LRRK2 gene exhibits LBP, and most patients with other LRRK2 mutations do not show LBP at all despite substantial SNc DA neuronal degeneration. Likewise, PD cases with PARK2 mutations have only sparsely distributed LBP, with a pattern distinct from that found in idiopathic PD cases, although these cases may be phenotypic variations.
Finally, conflicting evidence comes from neuropathological and histological studies. Because the number of LBs in patients with mild to moderate SNc neuron loss was higher than in patients with severe neuronal depletion, LB-containing neurons were initially assumed to be the dying neurons. However, Tompkins and Hill demonstrated that the presence of LBs does not predict a higher degree of cell death compared to the general population of SNc neurons and that most neurons that undergo cell death do not even contain LBs. Moreover, whether SNc neurons contain LBs or not, they are similarly affected by morphological dendritic abnormalities and biochemical changes, indicating that DA neurons in general are involved in a yet-to-be-defined disease process. These results imply that region-specific environmental changes may prime these DA neurons to preferentially degenerate in PD. Consequently, attempts to correlate the density of either cortical or brain stem LBs with the progression and severity of clinical PD symptoms were unsuccessful. Along these lines, in a certain percentage of PD patients who developed dementia, no LBs could be detected in cortical areas or other areas outside the brain stem, and these cases may suffer from concomitant amyloid beta (Aβ) pathology. Conversely, the simultaneous presence of Lewy body pathology and Alzheimer’s disease (AD)-related changes, such as hyperphosphorylated tau protein or Aβ, can also be observed (Fig. b), and LBP, which is typically found in the amygdala, is frequently detected in AD cases. Collectively, these findings indicate that the pathophysiology of neurodegeneration and cell death can hardly be explained by LBs or LBP-related cell death alone. An alternative view is that LB formation is a process for the detoxification of pathological αSyn aggregates located at a harmful site in the neuron, such as the presynapse.
Studies investigating the ultrastructure of LBs indicated that they are formed in an aggresome-related process and support the notion that LBs are a means of containment of protein aggregates and degraded organelles. Using correlative light and electron microscopy and tomography on postmortem human brain tissue from PD brains, the study by Shahmoradian et al. found a crowded environment of membranes in LBs, including vesicular structures and dysmorphic organelles. Crowding of organellar components was confirmed by stimulated emission depletion (STED)-based super-resolution microscopy, and a high lipid content within LBs was corroborated by confocal imaging, Fourier-transform coherent anti-Stokes Raman scattering infrared imaging, and lipidomics. The latter report suggests that lipid membrane fragments and distorted organelles, together with a non-fibrillar form of αSyn, are the main structural components of LBs and that LBs do not contain fibrillar αSyn aggregates. Although a matter of current debate, these results thus challenge the pivotal role of αSyn and point towards cellular and molecular changes that may occur independently of the formation of LBP. This view is supported by studies implicating mutations in PINK1 and in lysosomal genes such as GBA in PD. Conversely, out of 2000 PD cases, only 0.05% had mutations in the SNCA gene, creating uncertainty about how αSyn accumulates in the other 99.95% of cases and whether the protein has a causative role in PD. In summary, these results cast doubt on the significance of LBP as the sole disease-causing factor in PD, and alternative models are required to explain such apparent discrepancies.

Investigating pathological precursors to determine the significance of LBP in PD

A small number of conceptual approaches have attempted to dissect LBP-dependent from LBP-independent events in the PD brain.
Besides comparing LBP-affected and unaffected neurons, one strategy has been the examination of pathological precursors of LBP. In addition to cPD, there are cases that exhibit brain stem-restricted LBP in the absence of the characteristic clinical PD symptoms; these cases are referred to as incidental Lewy body disease (iLBD). iLBD occurs in 10–15% of people over 60 years of age and is assumed to represent a pathological precursor of PD. Whether these cases constitute a neuropathological PD precursor has been a matter of controversy for some years, as there is no proof that they would have progressed to PD had they survived longer, rather than simply recapitulating features of normal brain ageing. However, the study of consecutive cases in large case series and the recognition of several intermediate degrees of involvement of the brain stem, limbic structures and, eventually, the cerebral cortex support the argument for iLBD as a pathological precursor stage of PD. Since SNc neurons are, by definition, still spared from LBs in iLBD, investigating SNc neurons in these cases may provide insight into the cellular and molecular changes occurring at this critical site in the absence of LBP, thus allowing LBP-dependent changes to be distinguished from LBP-independent ones. Moreover, investigating SNc neurons in iLBD may support the understanding of early pathological events occurring prior to the appearance of LBP. Because therapeutic approaches that delay or slow down disease progression in PD are likely to be more effective prior to neuronal cell death, the examination of SNc neuronal changes in iLBD cases may aid the identification of novel therapeutic targets and, ultimately, the development of early-acting, potentially disease-modifying interventions. For these two reasons, we believe that investigating SNc neuronal changes in iLBD warrants further research.
Here, we will review the evidence available from earlier studies that examined the molecular and cellular changes in the iLBD SNc to create a concise map of early neuropathological events during PD disease progression and to nurture prospective research in this direction.
Braak’s staging is divided into six different stages that reflect the progression of LBP from the dorsal motor nucleus of the vagus nerve (DMV) (Braak stage 1) to the locus coeruleus (LC) (Braak stage 2), SNc and amygdala (Braak stage 3) and ultimately reaching cortical areas (Braak stage 4–6) . The Braak stages correspond to the type and degree of clinical symptoms associated with disease progression. Early stages are characterised by non-motor symptoms, while typical PD motor signs are thought to appear once the SNc is affected at Braak stages ≥ 3, and cognitive symptoms arise only as LBP reaches the cortex in Braak stages 5 and 6 (Fig. ). Braak’s model proposes that LBP gradually appears in defined anatomical structures during disease progression . In line with the idea of a prion-like LBP ‘spreading’ mechanism, fetal DA neurons transplanted into the PD SNc exhibited proteinaceous inclusions that resembled LBs . This result was interpreted as a ‘spread’ of LBP from the host to the graft. In mice, synthetic pre-formed αSyn fibrils propagate from the site of stereotaxic injection to synaptically connected, neighboring structures, thereby creating a Lewy-like pathology . Similarly, proteins extracted from human brains with LBP and injected into the striatum of monkeys can also propagate to neighboring structures, illustrating LBP's propensity to spread from its origin . In summary, previous results support a model where LBP gradually builds up in clearly defined brain regions, leading to neuronal death. This process occurs according to a well-defined pattern, resulting in some brain areas remaining unaffected until the final stages of PD, whereas others are devastated by degeneration early on. The co-incidence of DA neuron loss and LBs initially primed the conclusion that these intraneuronal inclusions—in combination with cell death—were responsible for the disease. 
However, a considerable body of research raises concerns regarding the overall significance of LBP. For instance, Gibb et al. reported an age-dependent increase in the prevalence of LBs from 3.8% to 12.8% between the sixth and ninth decades of life. This prevalence exceeds that of PD by about three- to six-fold. In line with this, previous reports found that a significant proportion of neuropathologically confirmed LBD cases never exhibited clinical symptoms. In addition, more recent research demonstrated that cell death and LBP do not entirely correlate in the affected brain regions. For instance, even in the absence of LBP, there is considerable neuronal death in the supraoptic nucleus in PD. By contrast, there is no discernible neuron loss in the neighboring, LB-rich tuberomammillary nucleus of the hypothalamus. Furthermore, in patients who do not exhibit dementia during disease progression, the only cortical region that shows substantial neuronal loss is the pre-supplementary motor cortex, where small intra-telencephalic pyramidal neurons degenerate in the absence of LBP. These results cast doubt on the concept that cell death is a consequence of LBP in PD. Conflicting with other reports, which claim that neurons are primarily lost in brain regions with LBP, a study by Iacono et al. found no significant correlation between neuronal loss and LBs in the PD brain. Another line of evidence for an LBP-independent pathological process comes from PD cases carrying genetic mutations in which the LBP distribution is distinct from that of idiopathic PD. For instance, only a subset of PD patients with a G2019S mutation in the LRRK2 gene exhibit LBP, and most patients with other LRRK2 mutations show no LBP at all despite substantial SNc DA neuronal degeneration. Likewise, PD cases with PARK2 mutations have only sparsely distributed LBP with a pattern distinct from that found in idiopathic PD cases, although these cases may represent phenotypic variants.
Finally, conflicting evidence comes from neuropathological and histological studies. Because the number of LBs in patients with mild to moderate SNc neuron loss was higher than in patients with severe neuronal depletion, LB-containing neurons were initially assumed to be the dying neurons. However, Tompkins and Hill demonstrated that the presence of LBs does not predict a higher degree of cell death compared to the general population of SNc neurons, and that most neurons that undergo cell death do not even contain LBs. Moreover, whether SNc neurons contain LBs or not, they are similarly affected by morphological dendritic abnormalities and biochemical changes, indicating that DA neurons in general are involved in a yet-to-be-defined disease process. These results imply that region-specific environmental changes may prime these DA neurons to preferentially degenerate in PD. Consequently, attempts to correlate the density of either cortical or brain stem LBs with the progression and severity of clinical PD symptoms were unsuccessful. Along these lines, in a certain percentage of PD patients who developed dementia, no LBs could be detected in cortical areas or other areas outside the brain stem; these cases may suffer from concomitant amyloid beta (Aβ) pathology. Conversely, the simultaneous presence of Lewy body pathology and Alzheimer's disease (AD)-related changes, such as hyperphosphorylated tau protein or Aβ, can also be observed (Fig. b), and LBP, typically in the amygdala, is frequently detected in AD cases. Collectively, these findings indicate that the pathophysiology of neurodegeneration and cell death can hardly be explained by LBs or LBP-related cell death alone. An alternative view is that LB formation is a process for the detoxification of pathological αSyn aggregates located at a harmful site in the neuron, such as the presynapse.
Studies investigating the ultrastructure of LBs indicated that they are formed in an aggresome-related process and support the notion that LBs are a means of containing protein aggregates and degraded organelles. Using correlative light and electron microscopy and tomography on postmortem human brain tissue from PD brains, Shahmoradian et al. found a crowded environment of membranes in LBs, including vesicular structures and dysmorphic organelles. Crowding of organellar components was confirmed by stimulated emission depletion (STED)-based super-resolution microscopy, and a high lipid content within LBs was corroborated by confocal imaging, coherent anti-Stokes Raman scattering, Fourier-transform infrared imaging and lipidomics. The latter report suggests that lipid membrane fragments and distorted organelles, together with a non-fibrillar form of αSyn, are the main structural components of LBs and that they do not contain fibrillar αSyn aggregates. Although a matter of current debate, these results thus challenge the pivotal role of αSyn and point towards cellular and molecular changes that may occur independently of the formation of LBP. This view is supported by studies implicating mutations in PINK1 and lysosomal genes, such as GBA, in PD. Conversely, out of 2000 PD cases, only 0.05% had mutations in the SNCA gene, creating uncertainty about how αSyn accumulates in the other 99.95% of cases and whether the protein has a causative role in PD. In summary, these results cast doubt on the significance of LBP as the sole disease-causing factor in PD, and alternative models are required to explain such apparent discrepancies. A small number of conceptual approaches have attempted to dissect LBP-dependent from LBP-independent events in the PD brain. Besides comparing LBP-affected and unaffected neurons, one strategy has been the examination of pathological precursors of LBP.
In addition to cPD, there are a few cases that exhibit brain stem-restricted LBP in the absence of the characteristic clinical PD symptoms; these cases are referred to as incidental Lewy body disease (iLBD). iLBD occurs in 10–15% of people over 60 years of age and is assumed to represent a pathological precursor of PD. Whether these cases constitute a neuropathological PD precursor has been a matter of controversy for some years, as there is no proof that these cases would have progressed to PD had they survived longer, rather than simply recapitulating features of normal brain ageing. However, the study of consecutive cases in large case series and the recognition of several intermediate degrees of involvement of the brain stem, limbic structures and, eventually, the cerebral cortex support the argument for iLBD as a pathological precursor stage of PD. Since SNc neurons are, by definition, still spared from LBs in iLBD, investigating SNc neurons in these cases may provide insight into the cellular and molecular changes occurring at this critical site in the absence of LBP, thus making it possible to distinguish LBP-dependent from LBP-independent changes. Moreover, investigating SNc neurons in iLBD may support the understanding of early pathological events occurring prior to the appearance of LBP. Because therapeutic approaches that delay or slow down disease progression in PD are likely to be more effective prior to neuronal cell death, the examination of SNc neuronal changes in iLBD cases may aid the identification of novel therapeutic targets and, ultimately, the development of early-acting, potentially disease-modifying interventions. For these two reasons, we believe that investigating SNc neuronal changes in iLBD warrants further research.
Here, we will review the evidence available from earlier studies that examined the molecular and cellular changes in the iLBD SNc to create a concise map of early neuropathological events during PD progression and to nurture prospective research in this direction.

Structural changes in iLBD

Neuropathological studies suggest that by the time a patient is diagnosed with PD based on clinical motor symptoms, a significant proportion of DA neurons is already lost (Fig. ), and within four years of diagnosis, DA terminals in the dorsal putamen have almost entirely disappeared. Therefore, estimates propose that cell loss in the brain commences at least 5 to 10 years prior to the clinical diagnosis. Indeed, several previous studies investigating neuropathological changes in iLBD demonstrated structural changes in SNc DA neurons in the absence of LBs (Table , Fig. ). In accord with an early neuronal malfunction, previous studies demonstrated a substantial (10–20%) loss of SNc DA neurons and impaired nigrostriatal integrity at Braak stages 1 and 2. Likewise, Dijkstra et al. found a 20% decrease in SNc neuronal cell density in iLBD compared with controls. More recently, Iacono et al. demonstrated a marked nigral neuronal loss in PD and iLBD compared to control cases. Milber et al. showed that neuronal dysfunction and cell loss may precede LBP in the SNc: prior to the appearance of LBs, these processes were observed in the SNc in iLBD at levels comparable to those of higher Braak stages. In accord with a functionally relevant disease process occurring prior to LBP, PD motor symptoms have even been reported at Braak stage 2. All these results are further supported by case reports. These findings illustrate the need for further investigation of these early stages to account for the neuronal loss before the onset of LBP in this area.
General neurochemical changes in iLBD

In addition to these structural changes (i.e., cell loss), investigating iLBD brains revealed certain neurochemical alterations. For instance, Dickson et al. found that tyrosine hydroxylase (TH) immunoreactivity in the striatum was decreased in iLBD compared to normal controls, but not to the same extent as in PD. TH is an enzyme critical for DA production, and its decrease in iLBD indicates a nigrostriatal system that is already impaired at this early stage. Using quantitative ELISA, Beach et al. demonstrated a 49.8% reduction in striatal TH in iLBD cases compared to control cases. Together with the morphological studies described above, these reports suggest an early neurochemical alteration of SNc DA neurons prior to the appearance of LBs. Other research groups have provided additional findings on the early pathological changes in PD, including neurochemical and metabolic changes. For instance, early oxidative damage was found in the SNc in iLBD, where nitrated αSyn is already present in small granules in DA neurons before the appearance of LBs. The authors thus concluded that oxidative damage is an early event in PD and may precede the formation of LBs. In the context of the renin–angiotensin system (RAS), it is intriguing to note that although this hormonal system is traditionally associated with regulating blood pressure, there is significant interplay with the DA system. Studies have demonstrated that angiotensin blockers can exert a neuroprotective effect on midbrain DA neurons both in vivo and in vitro by reducing oxidative stress, thereby indicating their potential as a therapeutic option. For instance, a retrospective study focusing on patients receiving angiotensin blockers as treatment for hypertension showed a reduced risk of developing PD.
Similarly, an analysis of data from ischemic heart disease patients revealed that those prescribed angiotensin II inhibitors—which have the capacity to cross the blood–brain barrier—had a lower risk of developing PD. These findings underscore the potential of these compounds to counteract the early oxidative damage that primes DA neurons for degeneration, thereby presenting a promising strategy for reducing PD risk. Further biochemical studies have shown increased levels of neuroketals in the SNc in post mortem tissue from Braak stages 1 and 2, supporting the notion that oxidative damage to specific lipids in the SNc occurs at very early stages of PD and prior to the appearance of LBP. In line with this, recent observations have shown the concentration of L-ferritin in the SNc to be lower in iLBD (and PD) compared with controls, whereas H-ferritin in PD was found to be higher than in iLBD and controls. This illustrates the subtle abnormalities in iron metabolism in the SNc at the early stages of PD. Summarising these results, neurochemical changes occurring prior to LBP may contribute to the increased propensity of SNc DA neurons to degenerate.

Changes in autophagy

In line with these neurochemical changes, a report demonstrated p62 immunoreactivity in association with abnormal αSyn inclusions at the early stages of LBP, suggesting early alterations to autophagic pathways in these cases. Tang et al. recently investigated autophagy-associated SNARE molecules in post mortem brain tissue from LBD cases and found a stage-dependent decline of SNAP29 – a member of the SNARE complex mediating autophagolysosome fusion – as early as Braak stage 1 (Table ). Additional experiments in cultured dopaminergic neurons demonstrated that αSyn overexpression reduces autophagy turnover by compromising the fusion of autophagosomes with lysosomes, leading to a decrease in the formation of autophagolysosomes.
Mechanistically, αSyn interacted with and decreased the abundance of SNAP29 in vitro. Furthermore, SNAP29 knockdown mimicked the effect of αSyn on autophagy, whereas SNAP29 co-expression reversed the αSyn-induced changes in autophagy turnover and ameliorated DA neuronal cell death. These results demonstrated a previously unknown capacity of αSyn to affect intracellular autophagy-associated SNARE proteins and, consequently, reduce autophagolysosome fusion. Most notably, this effect may be evident before the presence of LBs in the SNc. Whereas SNAP29 loss has been identified in SNc neurons in iLBD, the cell culture work is derived from αSyn over-expression, making it difficult to compare the two results. One possible explanation is that oligomeric αSyn, not yet aggregated into LBs, may cause such cellular changes during early pathology, although the specific αSyn species remain to be identified. Oligomers, which are small aggregates of misfolded proteins, are believed by some to be key contributors to the neurodegenerative processes that occur in PD. These oligomers are thought to be more toxic than other forms of αSyn, such as monomers or fibrils, and have been shown to impair the function of neurons in cell culture and animal models of PD. Furthermore, recent research has indicated that αSyn oligomers can spread from cell to cell in a prion-like manner, propagating the disease throughout the brain. This has led to the hypothesis that targeting αSyn oligomers could be a promising therapeutic strategy for PD. Whereas LBP is visible with histological methods, αSyn oligomers remain undetectable with routine approaches but may be important contributors to early pathological changes. Detecting αSyn oligomers requires special techniques, and their distribution and association with clinical features are important research objectives.
Recent advances in detecting αSyn oligomers, such as the proximity ligation assay (PLA) or oligomer-specific antibodies, may support the investigation of such early pathological changes in PD.

Immunological changes in iLBD

Following clinical reports, a recent immunohistochemical study assessing the abundance of the inflammation-associated Toll-like receptor 2 (TLR-2) showed increased numbers of TLR-2-positive microglia in the iLBD SNc compared to PD, suggesting that inflammatory changes occur at early stages and prior to the development of PD symptoms. By contrast, there was a progressive increase from control to PD in the numbers of CD68-positive microglia/macrophages, a marker associated with phagocytosis, although an increase in the number of microglia was not identified. Walker et al. examined the differential expression of inflammatory and trophic molecules in the SNc and striatum of control, iLBD and PD cases and found distinct patterns of inflammation and growth factor changes, which was also reinforced by animal studies. Another piece of evidence suggesting early immunological changes came from the work of Galiano-Landeira et al. The authors found that CD8-positive T-lymphocytes were increased in the SNc of PD cases compared to the control group, whereas CD4-positive T cells remained unchanged. Most notably, a robust infiltration of CD8-positive T cells was observed prior to the appearance of LBP (Braak stage 1) and in the absence of DA cell death. CD8-positive T cells were found to be equipped with cytolytic enzymes (granzymes A, B and K) and proinflammatory cytokines (interferon gamma), with phenotypic differences between early and late stages. A high proportion of nigral CD8 T cells were identified as tissue-resident memory T cells. These results identified a substantial nigral cytotoxic CD8 T-cell infiltration as an early pathogenic event preceding LBP and DA cell death in PD.
This further highlights microenvironmental changes that may impact later nigral cell survival. In a study by Hurley et al., iLBD cases had an increased number of IBA1-positive microglia. In the anterior cingulate cortex (ACC), PAR2-positive microglia were increased in iLBD, while in the primary motor cortex, tyrosin-1 was increased in microglia. However, TH-positive neurons in the SNc showed only a decreasing trend. Doorn et al. investigated microglial activity by quantifying minichromosome maintenance protein 2 (MCM2), a cell proliferation marker. The authors found MCM2-positive cells to be increased in the hippocampus (HC) of iLBD cases but not in established PD patients. This study thus suggests an early microglial response in the HC, indicating that neuroinflammatory processes play an essential role in developing PD pathology. Finally, in another study, tissue from different Braak stages was examined for the presence of integrin αvβ3, a marker of angiogenesis, along with vessel number and activated microglia. All PD cases had greater levels of αvβ3 in the SNc compared to controls. PD subjects also had increases in microglial number and activation in the SNc, suggesting a link between inflammation and clinical disease, whereas microglial activation in iLBD subjects was limited to the LC, an area involved in early-stage PD. In summary, immune-associated changes appear to occur early during disease progression, and, consequently, anti-inflammatory strategies may be potentially disease-modifying for PD. Indeed, several anti-inflammatory drugs have been tested for their therapeutic potential in PD. For instance, statins have been proposed to exert neuroprotective effects in PD models through an anti-inflammatory response, improving motor function and attenuating the increase in inflammatory cytokines.
Simvastatin, for example, effectively crosses the blood–brain barrier and was studied in a phase 2 randomized, placebo-controlled futility trial. Although the recently announced results indicated futility for slowing the progression of PD, an anti-inflammatory approach may require early treatment, before LBP-related cell death, to yield successful therapeutic effects. Other clinical trials investigating anti-inflammatory agents are still ongoing.

Early synaptic pathology in LBD

Mounting evidence indicates that SNc DA neuron degeneration is likely to start from synaptic pathology and that the loss of synaptic connectivity may precede nerve cell loss. As early as 1989, by analysing vesicular monoamine transporter 2 (VMAT2) binding during ageing in PD and healthy subjects, Scherman et al. provided the first evidence indicating that PD symptoms appear when striatal denervation exceeds a critical threshold of about 50%. This illustrated the relevance of synaptic terminal degeneration in the onset of the disease and its clinical phenotype. Schulz-Schaeffer et al. reported that αSyn pathology mainly involves synaptic compartments and proposed that the first neuronal compartment affected by its deposition might be the synaptic terminal. In accord with an early synaptic pathology in PD, most αSyn aggregates accumulated at presynaptic terminals in paraffin-embedded tissue blots from LBP cases. Thus, at the onset of clinical motor symptoms, the loss of DA synaptic terminals exceeds the loss of DA cell bodies, pointing towards an early alteration of synaptic projections that precedes neuronal death. Moreover, neuroanatomical studies of post mortem brain samples from familial PD cases support the idea that synaptic decay precedes neuronal death. These observations support a ‘dying back’ hypothesis in which synaptic demise, including presynaptic dysfunction, occurs prior to neuronal death.
This view is supported by a series of preclinical studies indicating that αSyn aggregation at synaptic sites impairs neuronal function and axonal transport by affecting synaptic vesicle release. Numerous studies have found alterations in pre- and postsynaptic structural integrity in PD and dementia with Lewy bodies (DLB). Furthermore, apart from αSyn, several other PD-associated proteins, such as leucine-rich repeat kinase 2 (LRRK2), parkin, DJ-1, PINK1, RAB39B and synaptojanin, have been found to be involved in the control of DA synaptic function. In accord with an early synaptic dysfunction in PD, various in vivo imaging studies demonstrated presynaptic neurotransmitter deficiencies in PD. These findings indicate that the degenerative process in PD is – at least in part – located at the presynapse, ultimately resulting in a neurotransmitter deficiency syndrome. This degeneration of synapses appears to emerge before motor symptom onset; however, the exact timeline of this progression and its clinical correlates are yet to be fully elucidated. Another critical aspect of these studies is that none of the results were derived directly from iLBD cases, and, although it is conceivable that, for instance, oligomeric non-aggregated αSyn species affect synaptic function prior to the appearance of typical LBs, the specific significance of such αSyn species remains uncertain.

Changes in gene expression & cell types

A relevant study on early transcriptomic changes in PD was conducted by Wilma van den Berg's group using RNA microarrays. The authors aimed to elucidate molecular mechanisms underlying neuronal dysfunction and LBP in the pre-motor phase of PD and investigated the transcriptome of the SNc of well-characterised iLBD, PD and age-matched control cases.
Prior to the appearance of LBP in the SNc, at Braak stages 1–2, they observed deregulation of pathways linked to axonal degeneration, immune response and endocytosis, including axonal guidance signalling, mTOR signalling, eIF2 signalling and clathrin-mediated endocytosis in the SNc. The results indicate that molecular mechanisms related to axonal dysfunction, endocytosis and immune response are already affected before LBP reaches the SNc, while mTOR and eIF2 signalling are also impaired during later stages. Interesting work implicating additional cell types in iLBD came from a study that integrated genome-wide association study results with single-cell transcriptomic data from the entire mouse nervous system to systematically identify cell types underlying complex brain traits. When applying expression-weighted cell-type enrichment (EWCE) to data from previous studies, the authors found that downregulated genes in PD were specifically enriched in DA neurons (consistent with the loss of this particular cell type in the disease). In contrast, upregulated genes were significantly enriched in cells of the oligodendrocyte lineage. When analysing gene expression data from post mortem human brains, downregulated genes were not enriched in DA neurons at Braak stages 1–2. Conversely, upregulated genes were already strongly enriched in oligodendrocytes at this stage, indicating that their involvement precedes the emergence of pathological changes in the SNc. In summary, this study supports an early alteration of oligodendrocytes preceding LBP in PD, although the data were partly based on mouse material. This finding was corroborated by a recent single-cell study in which significant associations were found between reported PD risk genes and genes highly expressed in oligodendrocytes. Furthermore, the risk for PD age of onset was associated with genes highly expressed in oligodendrocyte precursor cells.
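The core EWCE computation (testing whether a gene set's mean cell-type specificity exceeds that of bootstrapped random gene sets of the same size) can be sketched in a few lines. The expression matrix, gene sets and cell types below are synthetic illustrations for the method only, not data from the cited studies:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy expression matrix: genes x cell types (synthetic values, not real data).
n_genes, n_types = 1000, 5  # e.g. DA neurons, oligodendrocytes, ...
expr = rng.gamma(shape=2.0, scale=1.0, size=(n_genes, n_types))

# Specificity: each gene's expression in one cell type divided by its total
# across all cell types, so every row sums to 1 (the core EWCE quantity).
specificity = expr / expr.sum(axis=1, keepdims=True)

def ewce_p(target_idx, cell_type, n_boot=5000):
    """Bootstrap p-value: is the mean specificity of the target gene set in
    `cell_type` higher than that of random gene sets of the same size?"""
    observed = specificity[target_idx, cell_type].mean()
    boots = np.array([
        specificity[rng.choice(n_genes, size=len(target_idx), replace=False),
                    cell_type].mean()
        for _ in range(n_boot)
    ])
    return (boots >= observed).mean()

# A gene set deliberately biased towards cell type 0 (its 50 most specific
# genes) versus a random set of the same size.
biased = np.argsort(specificity[:, 0])[-50:]
random_set = rng.choice(n_genes, size=50, replace=False)

print(ewce_p(biased, 0))      # near 0: strong enrichment
print(ewce_p(random_set, 0))  # typically non-significant
```

In the published analyses, the target sets would be the up- or downregulated genes from the PD and iLBD expression studies, and the specificity matrix would come from the single-cell atlas of the mouse nervous system.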
These studies thus support an early alteration of oligodendrocytes and their precursors, preceding LBP in PD. A study by Santpere et al. investigated global transcriptional changes in the frontal cortex (Area 8) in iLBD, PD and DLB. The authors identified different co-expressed gene sets associated with disease stages and functionally annotated iLBD-associated modules using gene ontology categories enriched in gene modules and differentially expressed genes, including modules or gene clusters correlated with iLBD. These clusters revealed upregulated dynein genes and taste receptors and downregulated genes related to innate inflammation, demonstrating transcriptomic alterations in cortical brain areas in iLBD. In 2012, a study by Lin et al. investigated the extent of mtDNA mutations in early-stage PD and iLBD cases and found that mtDNA mutation levels in SNc neurons are significantly elevated in these cases. However, this study defined iLBD by the absence of clinical parkinsonism or dementia but with Lewy bodies present in the SN, which corresponds to Braak stage 3. These findings illustrate the widespread transcriptomic changes preceding LBP, affecting various cell types and deregulating crucial molecular pathways.

Proteomic changes

Changes in the expression of various additional proteins have also been demonstrated, for instance, by Wilhelmus et al., who reported aberrant ApoE and low-density lipoprotein receptor-related protein 1 expression in SNc DA neurons in PD and iLBD cases. The authors concluded that alterations in lipoprotein homeostasis/signalling in DA neurons of the SNc constitute an early disease event during PD pathogenesis. Likewise, changes in neuropeptide and glutathione levels were found in iLBD.
Wilkinson identified changes in protein glycosylation in iLBD: a total of 70 O-glycans were identified, with iLBD exhibiting significantly decreased levels of mannose-core and glucuronylated structures in the striatum and PD presenting an increase in sialylation and a decrease in sulfation. Early oxidative damage in the frontal cortex of iLBD cases has been suggested by a study that investigated lipoxidation of the glycolysis-associated enzymes aldolase A, enolase 1 and glyceraldehyde-3-phosphate dehydrogenase (GAPDH); likewise, early work from Jenner et al. suggested that a loss of glutathione (GSH) is associated with iLBD. These proteomic modifications further exemplify the various changes in the SNc prior to the emergence of LBP.

Changes in neuronal function and excitability in iLBD

Changes in neuronal function and excitability may occur long before structural events can be appreciated, and recent research has begun to elucidate the molecular factors governing such early neuronal malfunction. For instance, Tan et al. investigated the effect of αSyn on regulatory molecules in DA SNc neurons and found a loss of the Fragile X Mental Retardation Protein (FMRP) in most neuromelanin-positive neurons of the SNc in human post mortem brain tissue from PD and iLBD cases. Because FMRP regulates the expression and function of numerous neuronal genes, these results further suggest that in PD, DA neuron dysfunction is likely to be present long before morphological and histopathological changes, and that the loss of FMRP in the SNc may be a key molecular event in these stages (Fig. ). Loss of FMRP may have beneficial or detrimental effects on neuronal function in the SNc. Tan et al. demonstrated that the absence of FMRP ameliorates αSyn-induced DA dysfunction, suggesting that the early loss of FMRP may in fact have protective effects in PD.
However, as with the aforementioned studies on autophagy, results from investigating αSyn over-expression are difficult to compare with human LBP and its sequential appearance, as the specific αSyn species present at different time points are not yet known. The specific significance of FMRP for PD disease progression thus remains to be defined.

Peripheral changes in iLBD

In addition to these reported CNS changes, iLBD cases may exhibit both peripheral and autonomic pathological changes. For instance, a study by Beach et al. examined the presence of LBP in the gut of iLBD, PD and control cases. The authors found that in the vagus nerve, none of the healthy control subjects showed aggregates of phosphorylated αSyn (p-αSyn), while 46% of iLBD and 89% of PD cases were p-αSyn-positive. In the stomach, none of the control subjects had p-αSyn, while 17% of iLBD and 81% of PD subjects did. Following these findings, iLBD cases were retrospectively found to exhibit a lower frequency of bowel movements. In a retrospective autopsy-based study of the human submandibular gland, PD and iLBD cases had LBP in the submandibular glands, the cervical superior ganglia, the cervical sympathetic trunk and the vagal nerves. Some previous work even suggested the presence of LBP in the spinal cord of iLBD cases, and another study, although limited by a small sample size, found a decrease of TH immunoreactivity within epi- and myocardial sympathetic nerve fibres in PD and iLBD. These studies appear to confirm the cumulative results from studying prodromal PD (pPD), where αSyn is present in the peripheral and autonomic nervous system.
Therefore, estimates propose that at least cell loss in the brain commences 5 to 10 years prior to the clinical diagnosis . Indeed, several previous studies investigating neuropathological changes in iLBD demonstrated structural changes in SNc DA neurons in the absence of LBs (Table , Fig. ). In accord with an early neuronal malfunction, previous studies demonstrated a substantial (10–20%) loss of SNc DA neurons and impaired nigrostriatal integrity at Braak stages 1 and 2 . Likewise, Dijkstra et al. found a 20% decrease in SNc neuronal cell density in iLBD compared with controls . More recently, Iakono et al. demonstrated a marked nigral neuronal loss in PD and iLBD compared to control cases . Milber et al. have shown that neuronal dysfunction and cell loss may precede LBP in the SNc because prior to the appearance of LBs, these processes were observed in the SNc in iLBD at comparable levels to those of higher Braak stages . In accord with a functionally relevant disease process occurring prior to LBP, PD motor symptoms have even been reported at stage 2 of Braak . All these results are also further supported by case reports . These findings illustrate the need for further investigation at these early stages to account for the neuronal loss before the onset of LBP in this area. In addition to these structural changes (i.e., cell loss), investigating iLBD brains revealed certain neurochemical alterations. For instance, Dickson et al. found that tyrosine hydroxylase (TH) immunoreactivity in the striatum was decreased in iLBD compared to normal controls, but not to the same extent as in PD . TH is an enzyme critical for DA production, and its decrease in iLBD indicates a nigrostriatal system that is already impaired at this early stage. Using quantitative ELISA, Beach et al. demonstrated that striatal TH showed a 49.8% reduction in iLBD cases compared to control cases . 
Together with the morphological studies described above, these reports suggest an early neurochemical alteration of SNc DA neurons prior to the appearance of LBs. Other research groups have provided additional findings on the early pathological changes in PD, including neurochemical or metabolic changes. For instance, early oxidative damage was found in the SNc in iLBD, where nitrated αSyn is already present in small granules in DA neurons before the appearance of LBs. The authors thus concluded that oxidative damage is an early event in PD and may precede the formation of LBs. In the context of the renin–angiotensin system (RAS), it is intriguing to note that although this hormonal system is traditionally associated with regulating blood pressure, there is significant interplay with the DA system. Studies have demonstrated that angiotensin blockers can exert a neuroprotective effect on midbrain DA neurons both in vivo and in vitro by reducing oxidative stress, thereby indicating their potential as a therapeutic option. For instance, a retrospective study focusing on patients receiving angiotensin blockers as treatment for hypertension showed a reduced risk of developing PD. Similarly, an analysis of data from ischemic heart disease patients revealed that those prescribed angiotensin II inhibitors (which have the capacity to cross the blood–brain barrier) had a lower risk of developing PD. These findings underscore the potential of these compounds to counteract the early oxidative damage that primes DA neurons for degeneration, thereby presenting a promising strategy for reducing PD risk. Further biochemical studies have shown increased levels of neuroketals in the SNc in post mortem tissue from Braak stages 1 and 2, supporting the notion that oxidative damage to specific lipids in the SNc occurs at very early stages of PD and prior to the appearance of LBP.
In line with this, recent observations have shown the concentration of L-ferritin in the SNc to be lower in iLBD (and PD) compared with controls, whereas H-ferritin in PD was found to be higher than in iLBD and controls. This illustrates the subtle abnormalities in iron metabolism in the SNc at the early stages of PD. Summarising these results, neurochemical changes occurring prior to LBP may contribute to the increased propensity of SNc DA neurons to degenerate. In line with these neurochemical changes, a report demonstrated p62 immunoreactivity in association with abnormal αSyn inclusions at the early stages of LBP, thus suggesting early alterations to autophagic pathways in these cases. Tang et al. recently investigated autophagy-associated SNARE molecules in post mortem brain tissue from LBD cases and found a stage-dependent decline of the v-SNARE SNAP29 – a member of the SNARE complex mediating autophagolysosome fusion – as early as Braak stage 1 (Table ). Additional experiments in cultured dopaminergic neurons demonstrated that αSyn overexpression reduces autophagy turnover by compromising the fusion of autophagosomes with lysosomes, thus leading to a decrease in the formation of autophagolysosomes. Mechanistically, αSyn interacted with and decreased the abundance of SNAP29 in vitro. Furthermore, SNAP29 knockdown mimicked the effect of αSyn on autophagy, whereas SNAP29 co-expression reversed the αSyn-induced changes in autophagy turnover and ameliorated DA neuronal cell death. These results thus demonstrated a previously unknown capacity of αSyn to affect intracellular autophagy-associated SNARE proteins and, consequently, reduce autophagolysosome fusion. Most notably, this effect may be evident before the presence of LBs in the SNc. Whereas SNAP29 loss has been identified in SNc neurons in iLBD, the cell culture work is derived from αSyn over-expression, thus making it difficult to compare the two results.
One possible explanation is that oligomeric αSyn, not yet aggregated into LBs, may cause such cellular changes during early pathology, although the specific αSyn species remain to be identified. Oligomers, which are small aggregates of misfolded proteins, are believed by some to be a key contributor to the neurodegenerative processes that occur in PD. These oligomers are thought to be more toxic than other forms of αSyn, such as monomers or fibrils, and have been shown to impair the function of neurons in cell culture and animal models of PD. Furthermore, recent research has indicated that αSyn oligomers can spread from cell to cell in a prion-like manner, propagating the disease throughout the brain. This has led to the hypothesis that targeting αSyn oligomers could be a promising therapeutic strategy for PD. Whereas LBP is visible with histologic methods, αSyn oligomers remain undetectable with routine approaches but may be an important contributor to early pathological changes. Detecting αSyn oligomers requires specialized techniques, and their distribution and association with clinical features are important research objectives. Recent advances in detecting αSyn oligomers, such as the use of proximity ligation assays (PLA) or oligomer-specific antibodies, may support investigating such early pathological changes in PD. Following clinical reports, a recent immunohistochemical study assessing the abundance of the inflammation-associated Toll-like receptor 2 (TLR-2) showed increased numbers of TLR-2-positive microglia in the iLBD SNc compared to PD, suggesting that inflammatory changes occur at early stages and prior to the development of PD symptoms. By contrast, there was a progressive increase from control to PD in the numbers of CD68-positive microglia/macrophages, a marker associated with phagocytosis, although an increase in the number of microglia was not identified. Walker et al.
examined the differential expression of inflammatory and trophic molecules in the SNc and striatum of control, iLBD and PD cases and found distinct patterns of inflammation and growth factor changes, findings that were also reinforced by animal studies. Another piece of evidence suggesting early immunological changes came from the work of Galiano-Landeira et al. The authors found that CD8-positive T-lymphocytes were increased in the SNc of PD cases compared to the control group, whereas CD4-positive T cells remained unchanged. Most notably, a robust infiltration of CD8-positive T-cells has been observed prior to the appearance of LBP (Braak stage 1) and in the absence of DA cell death. CD8-positive T-cells were found to be equipped with cytolytic enzymes (granzymes A, B and K) and proinflammatory cytokines (interferon gamma), with phenotypic differences between early and late stages. A high proportion of nigral CD8 T cells were identified as tissue-resident memory T cells. These results identified a substantial nigral cytotoxic CD8-T-cell infiltration as an early pathogenic event preceding LBP and DA cell death in PD. This further highlights microenvironmental changes which may impact later nigral cell survival. In another study by Hurley et al., iLBD cases had an increased number of IBA1-positive microglia. In the anterior cingulate cortex (ACC), PAR2-positive microglia were increased in iLBD, while in the primary motor cortex, tyrosin-1 was increased in microglia. However, TH-positive neurons in the SNc only showed a decreasing trend. Doorn et al. investigated microglia activity by quantifying the minichromosome maintenance protein 2 (MCM2), a cell proliferation marker. The authors found MCM2-positive cells to be increased in the hippocampus (HC) of iLBD cases but not in established PD patients. This study thus suggests an early microglial response in the HC, indicating that neuroinflammatory processes play an essential role in developing PD pathology.
Finally, in another study, tissue from different Braak stages was examined for the presence of integrin αvβ3, a marker of angiogenesis, along with vessel number and activated microglia. In this study, all PD cases had greater levels of αvβ3 in the SNc compared to controls. PD subjects also had increases in microglia number and activation in the SNc, suggesting a link between inflammation and clinical disease, whereas microglia activation in iLBD subjects was limited to the locus coeruleus (LC), an area involved in early-stage PD. In summary, immune-associated changes appear to occur early during disease progression, and, consequently, anti-inflammatory strategies may be potentially disease-modifying for PD. Indeed, several anti-inflammatory drugs have been tested for their therapeutic potential in PD. For instance, statins have been proposed to exert neuroprotective effects in PD models through an anti-inflammatory response, improving motor function and attenuating the increase in inflammatory cytokines. Simvastatin, for example, effectively crosses the blood–brain barrier and is currently being studied in a phase 2 randomized, placebo-controlled futility trial. Although recently announced results indicated futility for slowing the progression of PD, an anti-inflammatory approach may require early treatment, before LBP-related cell death, to yield successful therapeutic effects. Other clinical trials investigating anti-inflammatory agents are also still ongoing. Mounting evidence indicates that SNc DA neuron degeneration is likely to start from synaptic pathology and that the loss of synaptic connectivity may precede nerve cell loss. As early as 1989, by analysing vesicular monoamine transporter 2 (VMAT2) binding during ageing in PD and healthy subjects, Scherman et al. provided the first evidence indicating that PD symptoms appear when striatal denervation exceeds a critical threshold of about 50%.
This illustrated the relevance of synaptic terminal degeneration for the onset of disease and its clinical phenotype. Schulz-Schaeffer et al. reported that αSyn pathology mainly involves synaptic compartments and proposed that the first neuronal compartment affected by its deposition might be the synaptic terminal. In accord with an early synaptic pathology in PD, most αSyn aggregates accumulated at presynaptic terminals in paraffin-embedded tissue blots from LBP cases. Thus, at the onset of clinical motor symptoms, the loss of DA synaptic terminals exceeds the loss of DA cell bodies, pointing towards an early alteration of synaptic projections that precedes neuronal death. Moreover, neuroanatomical studies of post mortem brain samples from familial PD cases support the idea that synaptic decay precedes neuronal death. These observations support a 'dying back' hypothesis where synaptic demise, including presynaptic dysfunction, occurs prior to neuronal death. This view is supported by a series of preclinical studies indicating that αSyn aggregation at synaptic sites impairs neuronal function and axonal transport by affecting synaptic vesicle release. Numerous studies found pre- and postsynaptic structural integrity alterations in PD and Dementia with Lewy bodies (DLB). Furthermore, apart from αSyn, several other PD-associated proteins such as leucine-rich repeat kinase 2 (LRRK2), parkin, DJ-1, PINK1, RAB39B and synaptojanin have been found to be involved in the control of DA synaptic function. In accord with an early synaptic dysfunction in PD, various in vivo imaging studies demonstrated presynaptic neurotransmitter deficiencies in PD. These findings seem to indicate that the degenerative process in PD is – at least in part – located at the presynapse, ultimately resulting in a neurotransmitter deficiency syndrome.
This degeneration of synapses appears to emerge before motor symptom onset; however, the exact timeline of this progression and its clinical correlates are yet to be fully elucidated. Another critical aspect of these studies is that none of these results were derived directly from iLBD cases, and, although it is conceivable that, for instance, oligomeric non-aggregated αSyn species affect synaptic function prior to the appearance of typical LBs, the specific significance of such αSyn species remains uncertain. A relevant study on early transcriptomic changes in PD was conducted by Wilma van de Berg's group using RNA microarrays. The authors aimed to elucidate molecular mechanisms underlying neuronal dysfunction and LBP in the pre-motor phase of PD and investigated the transcriptome of the SNc of well-characterised iLBD, PD and age-matched control cases. Before SNc LBP, at Braak stages 1–2, they observed deregulation of pathways linked to axonal degeneration, immune response, and endocytosis, including axonal guidance signalling, mTOR signalling, eIF2 signalling and clathrin-mediated endocytosis in the SNc. The results indicate that molecular mechanisms related to axonal dysfunction, endocytosis and immune response are already affected before LBP reaches the SNc, while mTOR and eIF2 signalling are also impaired during later stages. Interesting work implicating additional cell types in iLBD came from a study that integrated genome-wide association study results with single-cell transcriptomic data from the entire mouse nervous system to systematically identify cell types underlying complex brain traits. When applying expression-weighted cell-type enrichment (EWCE) to data from previous studies, the authors found that downregulated genes in PD were specifically enriched in DA neurons (consistent with the loss of this particular cell type in the disease). In contrast, upregulated genes were significantly enriched in cells from the oligodendrocyte lineage.
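For illustration, the core of the EWCE approach mentioned above can be sketched as a simple bootstrap test: the mean expression specificity of a disease-associated gene set in a given cell type is compared with the means of randomly drawn gene sets of equal size. The gene names and specificity values below are invented, and the published EWCE method includes additional controls (e.g., transcript-length matching) that are omitted in this sketch.

```python
import random

def ewce_bootstrap(specificity, target_genes, cell_type, n_boot=10_000, seed=0):
    """Bootstrap test: is the mean expression specificity of `target_genes`
    in `cell_type` higher than expected for random gene sets of equal size?

    `specificity` maps gene -> {cell_type: proportion of that gene's
    total expression found in the cell type}.
    """
    rng = random.Random(seed)
    all_genes = list(specificity)
    k = len(target_genes)
    observed = sum(specificity[g][cell_type] for g in target_genes) / k
    null_means = []
    for _ in range(n_boot):
        sample = rng.sample(all_genes, k)  # random gene set of the same size
        null_means.append(sum(specificity[g][cell_type] for g in sample) / k)
    # empirical one-sided p-value with add-one correction
    p = (1 + sum(m >= observed for m in null_means)) / (n_boot + 1)
    return observed, p
```

A gene set that is genuinely specific to one cell type yields a small empirical p-value, whereas an unremarkable set does not; this mirrors, in miniature, the logic used to link PD-downregulated genes to DA neurons and upregulated genes to oligodendrocytes.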
When analysing gene expression data from post mortem human brains, downregulated genes were not enriched in DA neurons at Braak stages 1–2. Conversely, upregulated genes were already strongly enriched in oligodendrocytes at this stage, thus indicating that their involvement precedes the emergence of pathological changes in the SNc. In summary, this study supports an early alteration of oligodendrocytes preceding LBP in PD, although the data were in part based on investigating mice. This finding was corroborated by a recent single-cell study in which significant associations were found between reported PD risk genes and genes highly expressed in oligodendrocytes. Furthermore, the risk for PD age of onset was associated with genes highly expressed in oligodendrocyte precursor cells. These studies thus support an early alteration of oligodendrocytes and their precursors, preceding LBP in PD. A study by Santpere et al. investigated global transcriptional changes in the frontal cortex (Area 8) in iLBD, PD and DLB. The authors identified different co-expressed gene sets associated with disease stages. They functionally annotated iLBD-associated modules using gene ontology categories enriched in gene modules and differentially expressed genes, including modules or gene clusters correlated with iLBD. These clusters revealed upregulated dynein genes and taste receptors and downregulated genes related to innate inflammation, thus demonstrating transcriptomic alterations in cortical brain areas in iLBD. In 2012, a study by Lin et al. investigated the extent of mtDNA mutations in early-stage PD and iLBD cases and found that mtDNA mutation levels in SNc neurons are significantly elevated in these cases. However, this study defined iLBD by the absence of clinical parkinsonism or dementia but with Lewy bodies present in the SN, which corresponds to Braak stage 3.
These findings illustrate the widespread transcriptomic changes preceding LBP, affecting various cell types and deregulating crucial molecular pathways. Changes in the expression of various additional proteins have also been demonstrated, for instance, by Wilhelmus et al., who reported aberrant ApoE and low-density lipoprotein receptor-related protein 1 expression in SNc DA neurons in PD and iLBD cases. The authors concluded that alterations in lipoprotein homeostasis/signalling in DA neurons of the SNc constitute an early disease event during PD pathogenesis. Likewise, changes in neuropeptide and glutathione levels were found in iLBD. Wilkinson identified changes in protein glycosylation in iLBD: a total of 70 O-glycans were identified, with iLBD exhibiting significantly decreased levels of mannose-core and glucuronylated structures in the striatum and PD presenting an increase in sialylation and a decrease in sulfation. Early oxidative damage in the frontal cortex of iLBD cases has been suggested by a study that investigated lipoxidation of the glycolysis-associated enzymes aldolase A, enolase 1, and glyceraldehyde-3-phosphate dehydrogenase (GAPDH), and likewise early work from Jenner et al. suggested a loss of glutathione (GSH) to be associated with iLBD. These proteomic modifications further exemplify the various changes in the SNc prior to the emergence of LBP. Changes in neuronal function and excitability may occur long before structural events can be appreciated, and recent research has begun to elucidate the molecular factors governing such early neuronal malfunction. For instance, Tan et al. investigated the effect of αSyn on regulatory molecules in DA SNc neurons and found a loss of the Fragile X Mental Retardation Protein (FMRP) in most neuromelanin-positive neurons of the SNc in human post mortem brain tissue from PD and iLBD cases.
Because FMRP regulates the expression and function of numerous neuronal genes, these results further suggest that in PD, DA neuron dysfunction is likely to be present long before morphological and histopathological changes, and that the loss of FMRP in the SNc may be a key molecular event in these stages (Fig. ). Loss of FMRP may have beneficial or detrimental effects on neuronal function in the SNc. Tan et al. demonstrated that the absence of FMRP ameliorates αSyn-induced DA dysfunction, and suggested that the early loss of FMRP may in fact have protective effects in PD. However, as with the aforementioned studies on autophagy, results from investigating αSyn over-expression are difficult to compare with human LBP and its sequential appearance, as the specific αSyn species that are present at different time points are not yet known. The specific significance of FMRP for PD disease progression thus remains to be defined.
Peripheral changes in iLBD
In addition to these reported CNS changes, iLBD cases may exhibit both peripheral and autonomic pathological changes. For instance, a study by Beach et al. examined the presence of LBP in the gut of iLBD, PD and control cases. The authors found that in the vagus nerve, none of the healthy control subjects showed aggregates of phosphorylated αSyn (p-αSyn), while 46% of iLBD and 89% of PD cases were p-αSyn-positive. In the stomach, none of the control subjects had p-αSyn, while 17% of iLBD and 81% of PD subjects did. Following these findings, iLBD cases were retrospectively found to exhibit a lower frequency of bowel movements. In a retrospective autopsy-based study of the human submandibular gland, PD and iLBD cases had LBP in the submandibular glands, the cervical superior ganglia, the cervical sympathetic trunk and vagal nerves.
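Group differences such as those reported by Beach et al. (e.g., 0% of controls vs. 46% of iLBD cases with vagal p-αSyn) lend themselves to a one-sided Fisher's exact test once the underlying counts are known. The counts used in the example below are hypothetical, chosen only to approximate the reported percentages; the test itself is a standard hypergeometric tail sum.

```python
from math import comb

def fisher_exact_one_sided(a, b, c, d):
    """One-sided Fisher's exact test for the 2x2 table
        [[a, b],    group 1: a positive, b negative
         [c, d]]    group 2: c positive, d negative
    Returns P(observing >= a positives in group 1), with all margins fixed.
    """
    row1, col1, n = a + b, a + c, a + b + c + d
    p = 0.0
    for k in range(a, min(row1, col1) + 1):
        # hypergeometric probability of exactly k positives in group 1
        p += comb(col1, k) * comb(n - col1, row1 - k) / comb(n, row1)
    return p
```

With, for example, 12 of 26 iLBD cases positive and 0 of 20 controls positive (hypothetical counts), the test yields p < 0.001, consistent with a genuine group difference rather than sampling noise.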
Some previous work even suggested the presence of LBP in the spinal cord of iLBD cases, and another study, although limited by a small sample size, found a decrease in TH immunoreactivity within epi- and myocardial sympathetic nerve fibres in PD and iLBD. These studies appear to confirm the cumulative results from studying prodromal PD (pPD), where αSyn is present in the peripheral and autonomic nervous system. In addition to investigating iLBD, some previous studies investigated cases that exhibit so-called prodromal symptoms: prior to the appearance of the classic motor symptoms of clinical PD (cPD), most PD patients experience several typical non-motor signs that are collectively referred to as pPD (Fig. ). These signs include REM sleep behaviour disorder (RBD), olfactory loss, constipation, autonomic dysfunction, psychiatric symptoms, and pathological imaging markers of the presynaptic dopaminergic and autonomic nervous system. These prodromal signs and symptoms often precede cPD by 10–20 years. As such, investigating pPD would contribute to understanding early pathological events in PD, and indeed, studies that examined pPD have contributed some indirect evidence for early pathological changes, although these results were primarily derived from imaging. For instance, MRI data from isolated RBD (iRBD) cases showed structural alterations in the SNc and grey matter changes in the motor cortico-subcortical loop that correlated with motor abnormalities. iRBD is considered to be an early clinical sign during disease progression, with a > 80% risk of conversion to cPD within 15 years. Patients typically present with vivid, often frightening dreams that lead to vocalisation and sudden body movements (Fig. ). In addition to these characteristic sleep disturbances, some iRBD cases may exhibit mild motor deficits (Table ).
Such clinical data are consistent with an early involvement of extrapyramidal motor areas during disease progression, although the specific molecular correlate remains uncertain. Furthermore, iRBD cases exhibit reduced striatal dopamine transporter (DaT) binding on [123I]ioflupane scintigraphy and an altered [18F]AV133 VMAT2 positron emission tomography (PET) signal, further indicating impaired integrity of the nigrostriatal pathway in these cases. Reduced DaT binding also seems to be correlated with changes in brain glucose metabolism as assessed by [18F]fluorodeoxyglucose ([18F]FDG) PET. Likewise, iRBD cases exhibit impaired nigrostriatal connectivity as assessed by fMRI and ultrasound (rev. in ). In accord with the aforementioned pathological studies in iLBD, a study examining inflammatory changes in the SNc by [11C]PK11195 18 kDa translocator protein (TSPO) PET found increased microglial activation in iRBD, suggesting early immunological changes in the midbrain. Furthermore, imidazoline-2 imaging with [11C]BU99008 PET indicated activated astrocytes in early PD but a decreased tracer signal at late stages compared to healthy controls. These imaging results collectively confirm an early and possibly inflammatory pathology in the PD midbrain. Overall, these observations thus indicate that in iRBD, the disease process extends beyond the sleep-related structures in the brainstem to other structures, including the nigrostriatal system. As a limitation of considering iRBD cases as pre-LBD cases, it is noteworthy that iRBD cases may exhibit LBP at different Braak stages, including those > 3, as substantiated by clinical findings, and that in some cases, iRBD may develop into multiple system atrophy (MSA) or dementia with Lewy bodies (DLB) instead of cPD. In addition to these imaging results, several laboratory results have been derived from iRBD cases.
For instance, serum neurofilament light chain (sNfL), a neuronal cytoskeletal protein released upon neuronal damage, might mark the conversion of iRBD to cPD. Techniques such as proteomic analysis of serum samples have identified numerous proteins at significantly altered expression levels, providing further insight into the protein signature profile and molecular pathways involved in the pathogenesis of iRBD. In addition, alterations in circulating microRNAs (miRNAs) have been shown in iRBD. For instance, one study found miR-19b to be significantly down-regulated in iRBD cases that later converted to cPD but not in those who remained disease-free for several years, possibly indicating a role of miR-19b during early disease progression. Still, the diagnostic value of serum miRNA detection remains controversial, as miRNAs show strong pleiotropy. For example, miR-19b has also been implicated in lung cancer progression and schizophrenia. Using peripheral blood mononuclear cells, one study revealed decreased antioxidant superoxide dismutase and increased glycolysis in iRBD cases. The diagnostic value of other biospecimens, such as those from saliva, tears, or the microbiome, is yet to be explored in patients with iRBD, and longitudinal studies are required to establish whether such biosamples will support the understanding of disease onset and progression in LBP. Finally, novel methods have been developed to investigate αSyn in iRBD cases using real-time quaking-induced conversion (RT-QuIC) assays. These assays can detect αSyn seeding activity in different LB-associated conditions with high sensitivity and specificity. For instance, in a recent study that examined patients with iRBD, RT-QuIC detected misfolded αSyn in the CSF with both a sensitivity and specificity of 90%, and αSyn positivity was associated with an increased risk of subsequent conversion to cPD or DLB.
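Performance figures such as the ~90% sensitivity and specificity reported for CSF RT-QuIC follow from simple confusion-matrix arithmetic; a minimal helper is sketched below, with counts invented purely to reproduce those percentages.

```python
def diagnostic_metrics(tp, fn, fp, tn):
    """Sensitivity, specificity and predictive values from a 2x2 confusion
    matrix (tp: diseased & test-positive, fn: diseased & test-negative,
    fp: healthy & test-positive, tn: healthy & test-negative)."""
    return {
        "sensitivity": tp / (tp + fn),   # P(test positive | disease)
        "specificity": tn / (tn + fp),   # P(test negative | no disease)
        "ppv": tp / (tp + fp),           # P(disease | test positive)
        "npv": tn / (tn + fn),           # P(no disease | test negative)
    }
```

For instance, 18 of 20 converters testing positive and 18 of 20 controls testing negative (hypothetical counts) reproduces the reported 90%/90% figures; note that, unlike sensitivity and specificity, the predictive values depend on the prevalence in the cohort studied.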
Along these lines, another report aimed to detect αSyn aggregates in the olfactory mucosa of a large cohort of subjects with iRBD by RT-QuIC. The authors found the olfactory mucosa to be αSyn-positive in 44.4% of iRBD cases and 46.3% of cPD cases, but only in 10.2% of the control subjects. While the sensitivity for iRBD and cPD vs. controls was comparatively low (45.2%), the specificity was found to be sufficiently high (89.8%). Compared to immunofluorescence techniques (IF), RT-QuIC was found to exhibit high diagnostic accuracy. In addition to iRBD, hyposmia is common in cPD (90%) and iRBD (67%) and sometimes precedes motor symptoms by > 20 years (Figs. and ). The Prospective Validation of Risk Factors for the Development of Parkinson Syndromes (PRIPS) study found that cases with hyposmia had a fourfold risk of converting to cPD compared to normosmic cases. An impaired sense of smell can thus be regarded as an early clinical event during disease progression. However, hyposmia alone is likely a suboptimal predictor for developing cPD, since smell loss is relatively common in older adults and only a minority will develop PD. Concerning pathological changes in midbrain motor circuits, Sommer et al. identified 30 patients with idiopathic olfactory loss and found that 11 had increased echogenicity of the SNc on transcranial sonography and 5 had impaired DaT binding. This further supports early structural changes during the disease course. Moreover, studies have shown a correlation between olfactory performance and DaT binding in early PD. In another study, 11% of random hyposmic subjects had a DaT deficit at baseline compared to 1% of normosmic subjects. Consistent with clinical studies on olfactory function, a study by Silveira-Moriyama et al. examining post-mortem tissue from iLBD, PD and control patients found LBP in all samples from the olfactory bulb and the primary olfactory cortex in iLBD and PD cases.
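Risk figures such as the fourfold conversion risk in PRIPS, or the 11% vs. 1% baseline DaT deficit, are ratios of two proportions. A minimal sketch, with counts invented only to match the reported percentages:

```python
def relative_risk(exposed_events, exposed_n, unexposed_events, unexposed_n):
    """Risk ratio: event incidence in the exposed group (e.g., hyposmic
    subjects) divided by incidence in the unexposed group (e.g., normosmic
    subjects)."""
    return (exposed_events / exposed_n) / (unexposed_events / unexposed_n)
```

With 11 of 100 hyposmic and 1 of 100 normosmic subjects showing a DaT deficit (hypothetical cohort sizes), the risk ratio is 11; confidence intervals around such ratios, which determine whether a finding like the PRIPS fourfold risk is statistically meaningful, require the actual group sizes.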
Another study found that in both iLBD and PD tissue, the olfactory bulb was the region most frequently affected by LBP. However, the immediate correlation between hyposmia and LBP in the olfactory bulb has yet to be substantiated, since records regarding hyposmia in the patients studied were unavailable. Collectively, results from investigating pPD further confirm an early structural and functional defect in motor-associated extrapyramidal circuits during PD disease progression that appears to be present prior to the evident appearance of motor signs and symptoms. On the downside, these results provide little conceptual insight into the mechanism of early midbrain neuron dysfunction in PD and no direct correlation with LBP. In the previous sections, we reviewed studies that collectively examined early pathological changes that precede the onset of LBP in the SNc. Investigating these changes has the potential to expose the significance of LBP, reveal early diagnostic and therapeutic targets, and ultimately support the development of novel disease-modifying therapies for PD. However, all these approaches have conceptual shortcomings. First, although investigating pPD cases by clinical and pathological methods supported the understanding of disease progression on a systemic level and generated valuable predictive data, it provided insufficient insight into the specific molecular and cellular changes occurring prior to LBP and cell death. This limitation applies particularly to areas in the brain stem and midbrain that are difficult to access in detail by routine diagnostics or tissue biopsies, including SNc DA neurons. Similar limitations apply to the neuropathological investigation of genetic PD cases, where genetic alterations (LRRK2, GBA, SNCA) predict the development of cPD prior to motor symptoms. Second, it is noteworthy that pPD cases may or may not exhibit LBP in SNc neurons, thus confounding the distinction between LBP-dependent and -independent changes.
Although some iRBD cases may exhibit mild motor deficits, indicating SNc dysfunction, it is unclear if this is a consequence of LBP-associated cell degeneration or LBP-independent neuronal malfunction. Thus, investigating pPD does not truly help to clarify LBP's causative role in SNc DA neuron degeneration. Therefore, more research should focus on elucidating the relationship between these individual aspects of early disease events in PD and how they might correlate with one another. A shortcoming of investigating iLBD relates to the uncertain progression pattern of LBP. Previous work suggested that only about 50% of all PD patients have a distribution of LBP in the brain that is entirely consistent with the Braak staging model, a prerequisite for the assumption that iLBD is a precursor of SNc pathology in PD, and about half of PD cases do not seem to show a caudo-rostral spread of LBP throughout the brain. Furthermore, experimental evidence suggested that the spreading of αSyn via autonomic nerve fibres may occur in a caudo-rostral but also a rostro-caudal direction. In order to explain these distinct spreading patterns in PD, alternative 'body-first' and 'brain-first' models have been developed. In the body-first model, brain-stem LBP would be the most common precursor of cPD, whereas in the brain-first model a second route would commence in limbic areas, including the amygdala, and progress to the SNc in a rostro-caudal spread. Although these theoretical models may partially explain the experimental inconsistencies, conclusions drawn from iLBD cases may be impeded by the uncertain correlation between clinical and neuropathological progression. A final concern regarding the Braak staging has been raised by earlier work from Schulz-Schaeffer et al. These authors suggested that, rather than in the form of somatic LBs, > 90% of αSyn aggregates in PD are located at the presynapses in the form of very small deposits, while postsynaptic dendritic spines were found to be retracted.
Based on these results, the authors hypothesized that, instead of LB-associated cell death, αSyn aggregate-related synaptic dysfunction may cause neurodegeneration. Although this concept has not been examined in iLBD, it suggests that the traditional neuropathological staging (assessing somatic LBs) may not capture the true onset or progression of LBP, thus limiting its validity. Here, we summarized cellular and molecular changes occurring in the SNc of iLBD (and pPD) cases. The body of previous work collectively demonstrates numerous pathological changes that appear to precede LBP in PD. These results challenge the current understanding of PD disease progression and the impact of LBP and, in a broader sense, the development of therapeutic strategies that focus on targeting αSyn. Therefore, our review may provide a starting point for future studies, which will have to further examine and connect these initial molecular changes occurring in early PD. Our work will support the investigation of novel molecular targets that could halt disease progression before the known neuropathological signs begin to show.
Cutting-edge technology and automation in the pathology laboratory | 2f2fbced-1dcc-4d27-a930-c21c321fe5e8 | 11062949 | Pathology[mh] | Efforts to standardize surgical pathology laboratory processes and reduce manual work have increased over the past decades, aiming to enhance diagnostic accuracy and patient care outcomes. The handling of anatomic pathology samples is critical, as loss or incorrect storage can have serious diagnostic, legal, and ethical implications. Recommended conditions for storage include controlled temperature and humidity for paraffin-embedded blocks and secure, traceable systems for glass slides . The Italian Ministry of Health’s Superior Health Council has highlighted these issues in their guidelines . From collection to storage, it is crucial to maintain a secure and controlled chain of custody for biological samples, ensuring quality, traceability, and proper conservation. Improving compliance and process efficiency requires solutions that automate and simplify labelling, archiving, and search processes. Automation, defined as the use of devices to replace or supplement human effort in a process , is key. Standardization in tissue processing, analysis, and reporting is a major focus in surgical pathology, ensuring precision and repeatability of diagnostic findings, as well as clarity in diagnostic reporting. New technologies, including digital pathology systems and artificial intelligence techniques, are being developed and applied to enhance diagnostic accuracy, though adoption has been gradual due to concerns about data privacy, cost, and compatibility . Also, molecular pathology provides results that need to be precise and require standardized analytical procedures before implementation . In this setting, quality assurance and control systems play a crucial role, serving as adjuvants to ensure the accuracy and reliability of results . 
Proper tracking, storage, and conservation of specimens are critical, impacting diagnostic accuracy, patient care, and research. The evolution of surgical pathology and patient care relies heavily on the adoption of advanced technologies and standardized practices. This paper describes the state of the art in pathology laboratory automation, aiming to inspire innovative tools and processes that support operators, organizations, and, most importantly, patients. The pre-analytical phase of tissue processing comprises all the steps from receiving tissue specimens to the submission of histopathology slides for interpretation . The application fields of automation in the pre-analytical phase of pathology include specimen collection and tracking, processing, embedding, cutting, and staining (Fig. and Table ).

Tracking samples

It has become vital to create automated systems that can assist with tracking and managing the workflow of specimens due to the rising volume of specimens received. To track the location, status, and stage of processing of each specimen, barcode scanning equipment and laboratory information management systems (LIMS) are utilized . Labs can decrease the possibility of human error and accelerate the turnaround time of diagnostic tests by automating the specimen workflow. Automated systems, for instance, can generate barcode labels that are applied to the specimen container for tracking and can notify laboratory staff when a specimen is received. The specimen’s location and status in the LIMS can be updated by scanning the barcode as it passes through each stage of processing. This enables the laboratory staff to keep track of each specimen’s progress, identify any delays or problems that require attention, and know who performed a given action on the specimen and when.
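The scan-and-update loop described above can be reduced to a minimal data model: every barcode scan appends a timestamped event, and the specimen's current location and its audit trail fall out of that event list. The class, field, and station names below are illustrative assumptions for the sketch, not the API of any particular LIMS.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ScanEvent:
    station: str        # hypothetical workflow stage, e.g. "accessioning", "processing"
    operator: str       # who performed the action
    timestamp: datetime # when the barcode was scanned

@dataclass
class Specimen:
    barcode: str
    events: list = field(default_factory=list)

    def scan(self, station: str, operator: str) -> None:
        """Record a barcode scan: updates status and extends the audit trail."""
        self.events.append(ScanEvent(station, operator, datetime.now(timezone.utc)))

    @property
    def current_station(self) -> str:
        """Location/status is simply the station of the most recent scan."""
        return self.events[-1].station if self.events else "not received"

    def audit_trail(self) -> list:
        """Who did what, in order - the chain-of-custody view."""
        return [(e.station, e.operator) for e in self.events]

# Each scan along the workflow updates the specimen's status and logs the operator.
s = Specimen(barcode="SP-2024-000123")
s.scan("accessioning", "tech_A")
s.scan("processing", "tech_B")
print(s.current_station)  # -> processing
print(s.audit_trail())    # -> [('accessioning', 'tech_A'), ('processing', 'tech_B')]
```

A real LIMS adds persistence, barcode validation, and alerting on stalled specimens, but the append-only event log shown here is the core idea that makes delays and responsibility traceable.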
PathTracker™ (SPOT Imaging, Sterling Heights, MI, USA) is a laboratory solution for bulk barcode scanning that incorporates technology to acquire, process, analyze, and log all the barcodes in the field of view, with a reported scanning time of 30 s for a 150-cassette processing basket. Any damaged or poorly printed barcodes are flagged, and PathTracker™ then provides a set of correction tools to repair them automatically or manually, thus ensuring continuous workflow. FinderFLEX (LOGIBIOTECH, Alseno, Italy) is a robotic unit for handling and scanning cytohistological samples. Thanks to a multi-articulated mechanical arm, FinderFLEX can handle and insert slides, macrosection slides, biopsy cassettes, super mega cassettes, and vials into the appropriate racks in a fully automated and secure way. The operator simply has to turn on the device and log in to add any new samples for storage. Using a latest-generation barcode scanner, FinderFLEX also rapidly scans barcodes, QR codes, and Data Matrix 2D codes, communicating directly with the LIS to ensure a systematic and traceable sample management and handling process. FinderFLEX identifies and manages the samples directly from their standard racks and containers, significantly reducing handling times, also thanks to the automatic transmission of the gripper fingers. The device is also equipped with simple and intuitive software and a touchscreen panel, which the operator can access in total safety in case of an emergency.

Tissue processing

The importance of standardization in tissue processing within anatomic pathology cannot be overstated. It ensures uniformity by reducing discrepancies and confirming that any variations are due to the samples themselves. This uniform approach simplifies quality control, making it easier to detect and rectify any issue, such as the presence of contaminants .
Moreover, it enhances the precision of diagnostic tests by preventing changes in tissue structure or composition that could influence subsequent analyses. Finally, it optimizes lab workflow, enhancing efficiency and saving resources. In the past, pathologists and technicians would spend countless hours manually preparing tissue samples for diagnosis. Tissue fixation and processing can now be carried out rapidly, precisely, and with minimal human involvement thanks to the development of automated methods and tools. The Tissue-Tek Xpress® × 120 tissue processor (Sakura Finetek, Tokyo, Japan) allows continuous streamlining of the histology workflow, using vacuum infiltration to offer consistent results in a short time, distributing cases uniformly, decreasing workloads, and processing large tissue specimens in 2.5 h. Running numerous protocols simultaneously on a single instrument is made possible by the HistoCore PEGASUS Plus tissue processor (Leica Biosystems, Wetzlar, Germany), a completely integrated system with the capacity to record each cassette individually, including cassette ID, amount, and color, as well as basket ID, user ID, and reagent information. Compared to manual processing, automated tissue fixation and processing have a number of benefits. First, since all steps are tightly regulated by the instrument, automation reduces the possibility of errors and variability in tissue processing. This can yield more precise and reliable diagnostic findings, enhancing patient care and outcomes. Second, automation delivers quicker processing times, since the instrument can regulate the timing of each step to maximize efficiency. Finally, automation frees up laboratory staff to concentrate on important duties such as quality control.
Automation in tissue embedding

One of the most critical steps in the histology procedure is embedding; after the tissue processing stage, this laborious operation is performed manually and requires proper training and experience. The correct orientation of the tissue within the paraffin is of paramount importance, since a badly oriented specimen will result in an uninformative section and can lead to tissue loss at cutting, with detrimental consequences for the patient. The technician embeds surgical specimens and biopsies one at a time, making sure they are positioned correctly, which is frequently a laborious and time-consuming process. To provide optimal conditions for the cutting phase, this operation requires trained specialists with good manual dexterity. Compared to manual embedding, automated embedding systems offer a number of benefits, such as improved productivity, standardized processing, and less manual labor. By embedding tissues automatically as part of the processing protocols, the Synergy system (Milestone Medical, Sorisole, Italy) eliminates the need to manually reopen the cassettes and reposition the tissues. The Synergy system consists of a purpose-designed rack, specialized molds, and pads. Through the use of a single tissue processing and embedding methodology, the sponges used for the pads guarantee the specimens’ correct orientation and facilitate cutting at the microtome stage. The Tissue-Tek AutoTEC® a120, in conjunction with Tissue-Tek® Paraform® cassettes and Tissue-Tek® Paraform® Tissue Orientation Gels (Sakura Finetek, Tokyo, Japan), is a component of Sakura’s SMART automation concept, designed to automate manual work and produce a continuous flow in the lab. Such gels are made to securely hold small tissue samples and keep them oriented.
A complete system for automating cassette embedding, with a throughput of up to 120 cassettes per hour, is provided by the Tissue-Tek® Paraform® Sectionable Cassette System once the tissue is correctly oriented at grossing. This system locks the specimen during processing and embedding, minimizing tissue loss and eliminating the need for specimen reorientation. These automated embedding systems appear to be superior to manual embedding, especially in terms of productivity and uniformity. Automated embedding can increase the accuracy and reliability of diagnostic testing by lowering the possibility of human error and variability. It must be kept in mind, however, that tissues vary greatly in size, shape, and consistency, and not all may be suitable for automatic embedding. Some delicate or irregularly shaped samples may require manual embedding to ensure proper orientation and preservation.

Automatic microtome

Microtomes, the cornerstone of pathology labs since the nineteenth century, have radically transformed tissue analysis by producing ultra-thin sections for detailed examination of cellular structures and disease pathology. Despite their indispensable role, microtome operation remains an artisanal task, demanding skillful handling and precise adjustments. The critical challenges of section thickness variation and tissue distortion call for innovative approaches and advanced automation to ensure reliable, reproducible results. The automated microtome slices the tissue sample into thin sections with a motorized cutting blade, and the desired section thickness can be set from the instrument’s control panel. The instrument’s automation also ensures that section thickness is constant, lowering the possibility of errors and inaccurate diagnostic findings.
In this regard, the automatic microtome AS-410M, developed by Dainippon Seiki (Nagaokakyo, Japan), automatically performs high-precision, high-quality histological cuts according to the pre-established requirements of each case or tissue. The section is then transferred to a slide, where it is deposited and stretched; subsequently, the slide is stored in a drying chamber from which it can be collected. The sections obtained are very homogeneous and of high quality. In addition, the equipment may include roughing modules, cut quality control, slide printing, and connection to the Laboratory Information System (LIS) for full traceability of the samples. Approximate production is 250 blocks in a 7-h work shift, with the possibility of running 24 h a day. Sakura’s Tissue-Tek AutoSection® Automated Microtome offers one-touch trimming and customizable sectioning, coupled with numerous integrated safety measures. It aligns the block with the blade edge, ensuring precise XYZ positioning. This system enables consistent block orientation, regardless of prior trimming or sectioning on other microtomes, thus conserving both tissue and technicians’ efforts. Some limitations of this technology are that extremely hard or brittle materials might be challenging to cut consistently; moreover, very small biopsies might still require human hands and expertise to avoid the loss of precious tissue.

Automation in slide staining and coverslipping

The adoption of automated staining technology has accelerated the processing of large sample volumes while minimizing human error, enhancing consistency, and improving the efficiency and dependability of staining procedures. Hematoxylin and eosin (H&E)-stained slides represent the cornerstone of morphological diagnosis, and their importance cannot be overestimated .
Every step of the process can be automated, resulting in greater reproducibility, precision, and reliability; notably, it has been shown that automated individual slide staining protocols, as opposed to batch-stained slides, might be preferable for digital pathology . An example of an individual slide staining system is the Ventana HE 600 (Roche Diagnostics, Basel, Switzerland). Immunohistochemistry (IHC), which uses labeled antibodies, is a potent diagnostic method in pathology that enables the identification of particular antigens in tissue sections. IHC staining has become more efficient thanks to automation, which has optimized incubation periods, temperature ranges, and reagent concentrations, elements crucial for precise antigen–antibody reactions. Automated systems also reduce background noise and non-specific staining, which raises the signal-to-noise ratio and the overall quality of the stained slides. Modern automated staining systems have been created by different companies to meet the various needs of pathology labs. One example is the VENTANA BenchMark line of automated slide stainers (Roche Diagnostics, Basel, Switzerland), which provides complete IHC and in situ hybridization (ISH) staining solutions. The BOND-PRIME automated staining platform from Leica Biosystems (Wetzlar, Germany) can adapt to different workflow demands (batch, continuous, single slide, or STAT cases, or a combination of these) for both IHC and ISH. Another example is the Tissue-Tek Genie® system from Sakura Finetek (Tokyo, Japan), a fully automated, random-access stainer for IHC and ISH, with independent staining stations for handling slides with different antibodies and probes simultaneously and at any time. A key step in the preparation of a high-quality histological glass slide is coverslipping. The quality of coverslipping is important, since air bubbles, excess or insufficient mounting medium, and dried mounted slides can impair the diagnosis.
There are three types of coverslipping methods: the classic glass coverslip, the liquid method, and the film method. The film method is the only automated one and has been demonstrated to be the fastest, with significantly fewer air bubbles and staining alterations compared with the other two methods , thus making it the best method for producing glass slides for digital scanners.

Collaborative robots

In numerous situations, it is challenging to automate manual processes. Devices, even those manufactured by the same company, often lack sufficient coordination to transfer materials. A common daily laboratory task involves moving sections between rack systems, such as transferring samples from a staining platform to a coverslipping device . The process can be time-consuming and may result in material loss due to the risk of components falling or breaking. There is ample opportunity for enhancing production flow and intelligently integrating the various steps, in addition to the need for further process development. Collaborative robotics has driven the increasing adoption of robotic systems for material transfer across processes. Collaborative robots, or “cobots,” feature sensors that facilitate safe human–robot interaction without necessitating protective barriers. Flexible, camera-assisted gripping devices also contribute to the functionality of these systems, allowing them to operate effectively. The Tissue-Tek SmartConnect® from Sakura (Tokyo, Japan) represents a cutting-edge advancement in laboratory automation, bridging the gap between human expertise and efficient, reliable processes. This collaborative robot has been designed to work seamlessly alongside laboratory technicians, assisting in various tasks while promoting accuracy and productivity. Once the Tissue-Tek Xpress® × 120 is loaded through SmartConnect, automated tissue processing begins.
SmartConnect then independently transfers the magazines to the Tissue-Tek AutoTEC® a120 embedder. Ultimately, SmartConnect delivers standardized, high-quality, embedded blocks prepared for microtomy. Laboratories can therefore improve workflow, lower human error, and boost overall effectiveness by putting such a system in place. Additionally, the incorporation of cutting-edge technology into these systems, such as machine learning and artificial intelligence, might result in even more precise and accurate treatment of samples, enhancing the overall outcomes. Such robots can benefit the lab by: eliminating manual and accidental errors as well as contamination risks; simplifying routine activities and improving processes and workflows; increasing productivity and efficiency; ensuring complete sample tracking and traceability, guaranteeing their quality; reducing repetitive manual processes performed by health care staff, freeing up more time for strategic activities with high added value; and helping improve patient satisfaction and, most importantly, patient safety.

The application fields of automation in the analytical phase of pathology include digital pathology and the analytical process performed by computational pathology algorithms (Fig. and Table ), as well as synoptic reporting and data entry templates.

Digital and computational pathology

Digital pathology and computational pathology, crucial domains of modern diagnostic medicine, are redefining how clinicians approach the analysis, diagnosis, and treatment of diseases . Pathology glass slide scanners have revolutionized digital pathology by enabling the conversion of histological samples on glass slides into high-resolution digital images. This has enhanced information accessibility, storage, and sharing, fostering global collaboration among health care professionals. Over time, these scanners have significantly improved in terms of speed, resolution, and capabilities.
Notable products in the market include the NanoZoomer series (Hamamatsu, Hamamatsu City, Japan), Aperio (Leica Biosystems, Wetzlar, Germany), IntelliSite (Philips, Eindhoven, the Netherlands), Pannoramic series (3DHISTECH, Budapest, Hungary), and Axioscan (Zeiss, Oberkochen, Germany). Each of these devices offers impressive image quality, processing speed, and capacity, catering to the diverse needs of diagnostic laboratories and research facilities. The digitization of glass slides using whole slide imaging (WSI), known as “digital pathology,” allows pathologists to view, examine, and share high-resolution digital images of histological and cytological samples. Through the use of this technology, pathologists may now collaborate remotely, consult with specialists around the world, improve diagnostic precision, and speed up patient care . Computational pathology concerns the development and implementation of machine learning algorithms and artificial intelligence (AI) to evaluate digital slides. Using this method, quantifiable data may be extracted from the digitized slides and utilized to discover unique patterns and biomarkers, increasing diagnostic accuracy and enabling tailored treatment . Numerous developments in whole slide imaging technologies, rising processing power, and the accessibility of enormous annotated datasets have all contributed to the growth of digital and computational pathology. Investment and research in these areas have also increased as a result of the growing need for diagnostic solutions that are more effective, accurate, and cost-efficient. The diagnosis of cancer is one of the main applications of digital and computational pathology.
When it comes to the automated diagnosis and categorization of different tumor forms, including breast, lung, and prostate cancer, machine learning algorithms have achieved extraordinary results. These algorithms can examine digital histopathology images to find neoplastic cells, differentiate benign tumors from malignant ones, and even identify the subtypes and grades of the malignancies . In this regard, such technology may reduce the workload for pathologists, reduce interobserver variability, and enable more reliable and precise diagnoses by automating these procedures . In order to pinpoint potential beneficial uses of AI in pathology, Heinz et al. conducted an anonymous online survey involving 75 domain experts in computational pathology from both academic and industrial backgrounds . The survey results suggested that the most promising future application is predicting treatment response directly from standard pathology slides. Indeed, among different applications in translational medicine, digital pathology is being actively investigated to predict response and identify patients most likely to respond to treatment. In the era of immuno-oncology, the selection of patients who may benefit the most from immune checkpoint inhibitor (ICI)-based therapies like PD-1/PD-L1 blockade is a major and still unsolved issue . Notably, besides PD-L1 expression on tumor and immune cells, the immune contexture represented by tumor-infiltrating lymphocytes (TILs) has been demonstrated to have strong predictive potential . Interestingly, Park et al. have developed an AI-based algorithm for the analysis and quantification of TILs in the tumor microenvironment, capable of defining three immune phenotypes (IP): inflamed, immune-excluded, and immune-desert . These authors demonstrated that patients with inflamed tumors have a better prognosis in terms of both overall survival (OS) and progression-free survival (PFS) and, in particular, that patients with inflamed neoplasms and high expression of PD-L1 show a significant improvement in survival compared to patients with high expression of PD-L1 but non-inflamed tumors.
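The three immune phenotypes are defined by where TILs sit relative to the tumor. The published algorithm is a trained deep learning model; the sketch below is only a toy rule-based stand-in with invented density thresholds, meant to make the phenotype definitions concrete.

```python
# Toy rule-based version of the three immune phenotypes (inflamed,
# immune-excluded, immune-desert). The density threshold is invented
# for illustration; the published approach is a trained AI model.

def immune_phenotype(intratumoral_til_density, stromal_til_density,
                     threshold=100.0):
    """Classify a tumor region from TIL densities (cells/mm^2)."""
    if intratumoral_til_density >= threshold:
        return "inflamed"         # TILs infiltrate the tumor itself
    if stromal_til_density >= threshold:
        return "immune-excluded"  # TILs confined to surrounding stroma
    return "immune-desert"        # few TILs anywhere

print(immune_phenotype(250.0, 300.0))  # inflamed
print(immune_phenotype(20.0, 300.0))   # immune-excluded
print(immune_phenotype(10.0, 15.0))    # immune-desert
```

Even this toy version illustrates the point made in the text: the classification depends on spatially resolved densities that image analysis can measure automatically but that cannot be quantified reliably by eye.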
Such results underscore the fact that the application of image analysis offers increased accuracy and efficiency by automatically measuring multiple parameters that are impossible to assess by eye. Besides tumor pathology, computational pathology is also being investigated and applied in critical but often neglected fields like transplantation pathology, a highly specialized area that examines both post-transplant graft biopsies for rejection or graft damage and organ donor biopsies for organ allocation, as well as in many different fields of functional and non-neoplastic pathology . Overall, it appears clear that digital and computational pathology provide very useful methods for managing and interpreting massive datasets from various sources, such as genomics, proteomics, and clinical data, in the age of big data. Through the use of machine learning algorithms, integrative data analysis becomes possible, enabling a greater understanding of disease causes and the discovery of novel opportunities for diagnosis, prognosis, and treatment. One of the primary obstacles to the integration of digital pathology into clinical practice, as perceived by administrators, is its cost. In this regard, Ho and colleagues elaborated a financial projection for digital pathology implementation at a large health care organization in order to estimate potential operational cost savings . The projected savings were based on two main benefits associated with the use of digital pathology: (1) potential improvements in workflow/productivity and lab consolidation; and (2) avoided treatment costs due to reduced rates of interpretive errors by general, non-subspecialist pathologists. The authors projected that the total cost savings over 5 years could reach approximately $18 million.
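The break-even logic behind such a projection is simple arithmetic. In the sketch below, only the roughly $18M five-year savings figure comes from the study discussed in the text; the implementation cost is a hypothetical input.

```python
# Back-of-the-envelope ROI check for a digital pathology rollout.
# Only the ~$18M five-year savings figure comes from the cited study;
# the implementation cost below is a hypothetical input.

def roi(projected_savings, implementation_cost):
    """Return net benefit and ROI ratio over the projection horizon."""
    net = projected_savings - implementation_cost
    return net, net / implementation_cost

savings_5yr = 18_000_000  # ~$18M projected savings (from the study)
cost_5yr = 10_000_000     # hypothetical acquisition + running costs

net, ratio = roi(savings_5yr, cost_5yr)
print(f"net benefit: ${net:,}; ROI: {ratio:.0%}")  # net benefit: $8,000,000; ROI: 80%
```

As the text notes, the investment remains attractive as long as total acquisition and implementation costs stay below the projected savings, i.e., as long as `net` stays positive.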
This suggests that if the costs of acquiring and implementing digital pathology do not exceed this value, the return on investment becomes attractive to hospital administrators. Currently, different integrated digital pathology systems are being implemented around the world, providing clear examples of the feasibility of implementing digital pathology workflows both in small and large pathology departments supporting large and distributed health care organizations with complex patient demographic profiles ; official guidelines have also been published . Finally, the value of digital pathology in the education of anatomic pathology is beyond doubt, with a growing number of resources like digital pathology atlases. Essential skills like identifying features, providing differential diagnoses, annotating, taking photographs, describing, and presenting are all improved through the use of such resources. The way these resources are used seems to play a crucial role in overcoming the reluctance to use digital tools among certain learners. Regularly integrating these resources into unidentified case discussions, educational collections, and tutorials has the potential to dramatically improve and speed up the learning process . Despite significant advancements and prospective applications, digital and computational pathology still have a number of issues that need to be resolved. Because medical data are sensitive and professionals must share images and information, data privacy and security concerns arise.
To ensure consistency and interoperability across various systems and organizations, it is also crucial to standardize digital imaging processes, data formats, and annotation strategies. Machine learning algorithms must also undergo thorough validation and testing before being integrated into clinical processes in order to guarantee their dependability and clinical utility ; strategies for preventing model accuracy losses in the context of artifacts must also be developed . We believe that the new generation of pathologists, besides having solid and comprehensive anatomic pathology training, will also need to expand their cultural background to include at least the basic principles of computational pathology and image analysis in order to bridge the cultural gap between medicine, computer science, and data analytics. This will not mean that pathologists will have to become a sort of hybrid professional, but surely they will need to have the ability to collaborate with computer scientists to understand and overcome the possible limitations of new technological approaches and, importantly, to be the main actors in this paradigm shift. The future of digital and computational pathology is still bright, despite these difficulties, as it will make it possible to significantly increase diagnostic accuracy and provide a more thorough understanding of disease processes by combining new imaging modalities with machine learning algorithms.
Structured synoptic reporting
The practice of structured reporting in pathology is of utmost significance as it fosters uniformity and comprehensiveness in recording vital cancer data. This standardization not only amplifies the clarity and usefulness of reports for immediate patient care but also guarantees that invaluable data is systematically captured for secondary purposes such as research, quality assurance, and public health management. The International Collaboration on Cancer Reporting (ICCR) has played a pivotal role in propelling this global standardization, thereby contributing to enhanced patient outcomes and breakthroughs in cancer research .
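Structured reporting lends itself directly to machine-readable representation: a synoptic report is essentially a fixed set of named data elements, which also makes completeness checkable by software. The sketch below uses illustrative element names; it does not reproduce any actual ICCR dataset.

```python
# Minimal sketch of a structured synoptic report: required data elements
# plus a completeness check. Element names are illustrative only and do
# not reproduce any actual ICCR dataset.

REQUIRED_ELEMENTS = ["tumor_site", "histologic_type", "grade",
                     "margin_status", "pT_stage"]

def missing_elements(report):
    """Return the required elements that are absent or empty in a report dict."""
    return [e for e in REQUIRED_ELEMENTS if not report.get(e)]

report = {
    "tumor_site": "breast",
    "histologic_type": "invasive carcinoma, NST",
    "grade": "2",
    "margin_status": "negative",
    # "pT_stage" deliberately omitted
}
print(missing_elements(report))  # ['pT_stage']
```

This is exactly what free-text reporting cannot offer: with named elements, an incomplete report is detectable automatically before sign-out, and the captured values are directly reusable for research and registries.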
The ICCR envisions improving patient outcomes through internationally standardized pathology reporting. The formulation of evidence-based datasets, encompassing all significant and current reporting information for any specific cancer, results in more exhaustive pathology cancer reports, refined cancer staging, and the optimization of treatment protocols for cancer patients. Beyond the development of datasets, the ICCR has pinpointed two additional key areas of focus for the future. The first is the translation of datasets into multiple languages to expedite the adoption of reporting standards in both developed and low-to-middle-income countries (LMICs). The second is the conversion of dataset standards into machine-readable formats to facilitate their electronic implementation and global data interoperability. The creation of evidence-based datasets, which include all essential and contemporary reporting details for each specific cancer, not only leads to more thorough pathology reports on cancer but also improves cancer staging and fine-tunes treatment approaches for cancer patients . Furthermore, these datasets represent the basis for the creation of nationwide networks between pathology laboratories, as is the case with the Pathological Anatomy National Automated Archive (PALGA) that has been operating in the Netherlands since 1971. The aim of such an organization is to promote communication and information exchange between participating laboratories and to provide potentially useful data for health care professionals in the interest of patient care and research .
The application fields of automation in the analytical phase of pathology include storage and biobanking of tissues and specimens and digital imaging archiving (Fig. and Table ).
Storage and biobanking of tissue and specimens
Tissue and surgical specimens should be kept in a way that protects their integrity and averts deterioration or contamination. Depending on the specimen type and the intended purpose, this can entail freezing, refrigeration, or formalin fixation . Professional associations such as the Joint Commission on Accreditation of Health Care Organizations (USA) and the College of American Pathologists (USA) advise that tissue blocks and slides be kept for a long enough time to ensure that the patient is treated properly. The UK’s Royal College of Pathologists advises keeping blocks for life and histology slides and smears for 10 years . Different companies have created automated histology cassette storage and management solutions to improve traceability, speed up archiving and retrieval, protect sensitive patient tissue and biopsy blocks, and cut down on sorting and storage time . To preserve lab security, increase productivity, and optimize procedures, built-in automation must be integrated. Reliable laboratory information system (LIS) integration and user-friendly onboard software boost performance and lower errors. Complete traceability and straightforward cassette retrieval are also necessary for maintaining an efficient operation. Additionally, a safe, secure storage facility that is continuously monitored is needed to ensure the preservation, integrity, and quality of samples . To make sure that each task is managed and monitored in a correct and timely manner, many solutions have been put forth by vendors, such as Arkive BC™ (Menarini Diagnostics, Florence, Italy), which may be easily interfaced with middleware or integrated into the LIS as needed. Authorized technicians may comprehensively oversee all areas of the operations thanks to the user-friendly interface of the onboard software.
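The retention guidance above (blocks kept for life, slides and smears for 10 years) maps naturally onto a small policy table. The sketch below encodes only the RCPath figures mentioned in the text; the date handling is simplified (leap-day edge cases ignored).

```python
# Sketch of a retention-policy lookup based on the RCPath figures cited
# in the text (blocks: indefinitely; slides/smears: 10 years).

from datetime import date

RETENTION_YEARS = {
    "block": None,  # None = retain for life / indefinitely
    "slide": 10,
    "smear": 10,
}

def disposal_date(item_type, accession_date):
    """Earliest disposal date for an item, or None if kept indefinitely."""
    years = RETENTION_YEARS[item_type]
    if years is None:
        return None
    # Simplified: same calendar day N years later (leap days ignored).
    return accession_date.replace(year=accession_date.year + years)

print(disposal_date("slide", date(2020, 3, 1)))  # 2030-03-01
print(disposal_date("block", date(2020, 3, 1)))  # None
```

In an automated archive, such a lookup is what lets the system flag cassettes and slides eligible for disposal while guaranteeing that blocks are never purged.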
For histology/cytology slides, the same manufacturer has produced a product named Arkive SL™, a device for loading slides to be archived and for retrieving archived slides. The SLTrack tool automatically records a full audit trail of each slide and SL item in the lab, enabling users to trace, identify, and retrieve samples quickly and efficiently. Additional units can be added to build a modular system, which increases overall capacity without limiting the number of slides that can be retained for short-, medium-, and long-term storage requirements. SmartCABINET and ClientCABINET (LOGIBIOTECH, Alseno, Italy) are smart units for the automated and traceable storage of cytohistological samples. They form an intelligent automated system that can help operators perform all phases of the storage and retrieval of cytohistological samples. Thanks to the installed software, “pick-to-light” technology, Wi-Fi connection, and the use of samAPP, each storage and retrieval operation is carried out in a secure, traceable, and recorded manner. They are modular and flexible, as numerous identical units can be added to each SmartCABINET, and each of these can in turn command an unlimited number of ClientCABINETs to adapt to any storage volume requirement and space available.
Digital imaging archiving
In pathology, digital image archiving describes the procedure of keeping whole slide images (WSIs) and related diagnostic data for analysis and future use. The benefit of digital imaging archiving is that the data is always accessible and independent of traditional archives . A digital archive makes information easier to find and faster to retrieve, more resistant to deterioration, allows earlier case comments to be reviewed, and is simple to share with coworkers . The methodology, high-volume scanning, and particularly the enormous storage capacity required, as well as the associated costs, pose unique challenges when developing a fully digitized slide library .
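The kind of full audit trail that systems like SLTrack maintain can be pictured as an append-only event log per slide. The sketch below is a minimal model of that idea; the event names and fields are hypothetical and do not reflect any vendor's actual schema.

```python
# Minimal append-only audit trail for slide tracking, in the spirit of
# the commercial systems described above. Event names and fields are
# hypothetical, not any vendor's actual schema.

from datetime import datetime, timezone

class SlideAuditTrail:
    def __init__(self):
        self._events = []  # append-only log; entries are never modified

    def record(self, slide_id, action, operator):
        self._events.append({
            "slide_id": slide_id,
            "action": action,  # e.g. "stored", "retrieved"
            "operator": operator,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        })

    def history(self, slide_id):
        """All events for one slide, in order: full traceability."""
        return [e for e in self._events if e["slide_id"] == slide_id]

trail = SlideAuditTrail()
trail.record("S-0001", "stored", "tech_A")
trail.record("S-0001", "retrieved", "tech_B")
print([e["action"] for e in trail.history("S-0001")])  # ['stored', 'retrieved']
```

The append-only design is the point: because past events are never edited, the log answers "who handled this slide, and when" at any later date, which is the basis of the sample traceability the text emphasizes.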
Medical management systems that store and handle digital images and data in a vendor-neutral format are called vendor-neutral archives (VNAs). Regardless of vendor or manufacturer, a VNA may save WSIs and associated data from digital pathology systems and link with other health care IT systems like EHR and LIS. VNAs can store images on dedicated hardware or link to third-party systems’ images. Central VNAs enable image backup, disaster recovery, business continuity, and interoperability with external organizations and health information exchanges. When new imaging technologies like radiology PACS are introduced, they can reduce picture data migration. Federated VNA may be cheaper and faster to install due to their lower hardware infrastructure requirements, but they may have more trouble connecting to multiple image sources . Digital imaging archiving enables pathologists to analyze enormous amounts of data and find patterns and trends for research and therapeutic decision-making. Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT) is crucial. With over 350,000 ideas and a million relationships, SNOMED CT is the most comprehensive, multilingual clinical health care vocabulary in the world . SNOMED topography (SMOMED T) and SNOMED morphology (M) codes could be used to choose cases from a digital archive, which could help minimize the archive’s size and cost while preserving the benefits of quick and easy retrieval of WSIs . Tissue and surgical specimens should be kept in a way that protects their integrity and averts deterioration or contamination. Depending on the specimen type and the intended purpose, this can entail freezing, refrigeration, or formalin fixation . Professional associations such as the Joint Commission on Accreditation of Health Care Organizations (USA) and the College of American Pathologists (USA) advise that tissue blocks and slides be kept for a long enough time to ensure that the patient is treated properly. 
The UK’s Royal College of Pathologists advises keeping blocks for life and histology slides and smears for 10 years . Different companies have created automated histology cassette storage and management solutions to improve traceability, speed up archiving and retrieval, protect sensitive patient tissue and biopsy blocks, and cut down on sorting and storage time . To preserve lab security, increase productivity, and optimize procedures, built-in automation must be integrated. Reliable laboratory information system (LIS) integration and user-friendly software on board boost performance and lower errors. Complete traceability and straightforward cassette retrieval are also necessary for maintaining an efficient operation. Additionally, a safe, secure storage facility that is constantly watched over is needed to ensure the preservation, integrity, and quality of samples . To make sure that each task is managed and monitored in the right and timely manner, many solutions have been put forth by vendors, such as Arkive BC™ (Menarini Diagnostics, Florence, Italy), which may be easily interfaced with middleware or included into the LIS as needed. Authorized technicians may comprehensively oversee all areas of the operations thanks to the user-friendly interface of the onboard software. For histology/cytology slides, the same manufacturer has produced a product named Arkive SL™, a device for loading slides to be archived and the retrieval of archived slides. The SLTrack tool automatically records a full audit trail of each slide and SL item in the lab, enabling users to trace, identify, and retrieve samples quickly and efficiently. Additional units are added to build a modular system, which increases overall capacity without limiting the number of slides that can be retained for short-, medium-, and long-term storage requirements. SmartCABINET and ClientCABINET (LOGIBIOTECH, Alseno, Italy) are smart units for the automated and traceable storage of cytohistological samples. 
They are an intelligent automated system which can help operators perform all phases of the storage and retrieval of cytohistological samples. Thanks to the software installed, “pick-to-light” technology, Wi-Fi connection, and the use of samAPP, each storage and retrieval operation is carried out in a secure, traceable, and recorded manner. They are modular and flexible, as numerous identical units can be added to each SmartCABINET, and each of these can in turn command an infinite number of ClientCABINETs to adapt to any storage volume requirement and space available. In pathology, digital image archiving describes the procedure of keeping WSIs and related diagnostic data for analysis and future use. The benefit of digital imaging archiving is that the data is always accessible and independent of traditional archives . A digital archive makes information easier to find and faster to retrieve, resistant to deterioration, able to view earlier case comments, and simple to share with coworkers . The methodology, high-volume scanning, and particularly the enormous storage capacity required, as well as the associated costs, provide unique problems when developing a fully digitized slide library . Medical management systems that store and handle digital images and data in a vendor-neutral format are called vendor-neutral archives (VNAs). Regardless of vendor or manufacturer, a VNA may save WSIs and associated data from digital pathology systems and link with other health care IT systems like EHR and LIS. VNAs can store images on dedicated hardware or link to third-party systems’ images. Central VNAs enable image backup, disaster recovery, business continuity, and interoperability with external organizations and health information exchanges. When new imaging technologies like radiology PACS are introduced, they can reduce picture data migration. 
Federated VNAs may be cheaper and faster to install due to their lower hardware infrastructure requirements, but they may have more trouble connecting to multiple image sources . Digital image archiving enables pathologists to analyze enormous amounts of data and find patterns and trends for research and therapeutic decision-making. Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT) is crucial here. With over 350,000 concepts and a million relationships, SNOMED CT is the most comprehensive, multilingual clinical health care vocabulary in the world . SNOMED topography (SNOMED T) and SNOMED morphology (SNOMED M) codes could be used to choose cases from a digital archive, which could help minimize the archive’s size and cost while preserving the benefits of quick and easy retrieval of WSIs . Automation in surgical pathology has demonstrated immense potential for enhancing the accuracy, efficiency, and overall quality of patient care. Through the integration of advanced technologies such as robotics, AI, and machine learning, pathology laboratories can reduce human error, streamline workflows, and expedite the diagnostic process. As these innovations continue to evolve, it is essential for the medical community to embrace and adapt to these changes while addressing any ethical and legal concerns that may arise. The future of surgical pathology is undeniably intertwined with the advancements in automation, paving the way for more accurate diagnoses, improved patient outcomes, and a more profound understanding of diseases. The cited platforms and vendors are meant to serve as examples stemming from the authors’ knowledge and experience. Such examples are not intended as endorsements and might not accurately represent the latest technological advancements.
Alterations in the proteomes of HepG2 and IHKE cells inflicted by six selected mycotoxins
This study revolves around mycotoxins, which are secondary metabolites of different fungal species that can contaminate food and feed. Because of their toxicity to humans and animals, they represent a serious health threat and are partially regulated by legal limits (Khan et al. ; Eskola et al. ). The basis for such regulations, e.g., in terms of maximum levels in food, is a comprehensive risk assessment (More et al. ). A decisive part of risk assessment in the adverse outcome pathway (AOP) framework is the elucidation of toxicity pathways (Ankley et al. ), which are defined as “cellular response pathways that, when sufficiently perturbed, are expected to result in adverse health effects” (Krewski et al. ). AOPs are an overarching concept used in risk assessment to characterize the chain of biological events leading from the initial interaction with a toxic compound to adverse outcomes at the organism or even population level, also without the use of animal tests (Ankley et al. ; Allen et al. ). The process of identifying such toxicodynamic properties is not straightforward, since the investigated toxins exhibit different modes of action (MoA), and choosing an appropriate assay sometimes turns into a game of chance. Nevertheless, decades of research have identified several distinct toxicity pathways for some of the most relevant mycotoxins (Awuchi et al. ). However, research on the bioactivity of mycotoxins has mainly been based on targeted assays investigating certain mechanisms. In 2018, the European Food Safety Authority (EFSA) stated in its Scientific Colloquium 24 that omics techniques can be a valuable addition to identify toxicity pathways in the AOP framework (Aguilera et al. ). Furthermore, the EFSA specifically mentioned omics techniques as a tool to characterize the MoA of mycotoxins (Beatriz and Nolwenn ).
The big advantage of such methodologies is the comprehensive investigation of any possible alteration in the considered biological system (Gutierrez Reyes et al. ). In addition, Cimbalo et al. described the usefulness of transcriptomic and proteomic data for the characterization of mycotoxins’ cellular effects and emphasized missing data for several compounds. A further advantage of such untargeted study designs over specific assays is the drastically reduced likelihood of overlooking otherwise undetected effects. The depth of knowledge about the different mycotoxins’ toxicity pathways is unevenly distributed. For a few compounds, cellular mechanisms have been precisely elucidated, while for many others, little more than adverse effects on certain organs has been reported. For instance, trichothecenes such as deoxynivalenol (DON) and nivalenol (NIV), primarily produced by Fusarium spp., are well-known ribotoxins in eukaryotes: They inhibit protein synthesis by binding to the ribosomal large subunit (LSU, also referred to as 60S) and, therefore, impede the formation of peptide bonds (McCormick et al. ; Cundliffe et al. ). On the other hand, aflatoxins like aflatoxin B1 (AFB1), mainly formed by Aspergillus spp., are known to induce hepatocellular carcinoma and to act mutagenically and teratogenically (Kensler et al. ). Although several studies observed inflammatory effects and oxidative stress induced by these mycotoxins, their precise cellular mechanisms have not yet been fully elucidated (Cimbalo et al. ; Frangiamone et al. ; Wen et al. ). For the Aspergillus and Penicillium spp. derived mycotoxin ochratoxin A (OTA), the production of reactive oxygen species (ROS), damage to deoxyribonucleic acid (DNA), inhibition of protein synthesis, and cell cycle arrest are described as affected cellular processes (Liu et al. ; Frangiamone et al. ).
Citrinin (CIT) is produced by the same fungal genera as OTA and is described to have similar effects, such as cell cycle arrest and oxidative and inflammatory stress, but there are also indications of genotoxic and mutagenic effects through DNA damage (Oliveira Filho et al. ). The IARC ranks CIT as “Not classifiable as to its carcinogenicity to humans” in Group 3 (IARC ). Research on the bioactivity of penitrem A (Pen A) is very scarce. The tremorgenic mycotoxin is produced by certain Aspergillus, Claviceps and Penicillium spp. and is described to be cytotoxic and to affect amino acid homeostasis and the urea cycle in HepG2 cells (Kalinina et al. ; Gerdemann et al. ). Furthermore, the production of ROS in human neutrophils was observed (Berntsen et al. ). Our study aims to identify alterations in the proteome of HepG2 cells that allow us to characterize the main cellular targets of mycotoxins and to investigate their MoA. We selected the six mycotoxins AFB1, OTA, CIT, DON, NIV, and Pen A (structures in Online Resource , Figure ) that are either regulated, relevant because of their high occurrence, or have shown noteworthy behavior in previous experiments. The human hepatoblastoma cell line HepG2 was used to mimic the liver as the main organ of xenobiotic metabolism, in accordance with a previous metabolic profiling study (Gerdemann et al. ). Moreover, the nephrotoxic compounds OTA and CIT were applied to immortalized human kidney epithelial cells (IHKE, Schwerdt et al. ; Oliveira Filho et al. ). Due to proposed combinatory effects, a mixture of both mycotoxins was used as well (Schulz et al. ; Knecht et al. ). The cells were treated with sub-cytotoxic concentrations that only marginally impaired overall cellular viability (maximum decrease in cell viability of 20%), based on previous assays of the cellular redox activity status (Gerdemann et al. ; Zingales et al. ; Kalinina et al. ; Knecht et al. ; Bittner et al. ).
This section is based on the reporting guidelines for proteomics experiments of the Human Proteome Organization (HUPO, Taylor et al. ). Chapters on chemicals and reagents, cell culture, and sample preparation (based on the filter-aided sample preparation method by Wiśniewski et al. and as previously described in Müller et al. ) are given in Online Resource .
Cell treatment
1 × 10^6 HepG2 cells or 2 × 10^6 IHKE cells were seeded onto cell culture dishes of 3.5 cm or 6 cm diameter, respectively. After 48 h of growth, the cell culture medium was replaced with serum-free medium supplemented with buffer and antibiotics for a further 24 h. Afterwards, the medium was replaced with fresh serum-free medium containing the mycotoxins at different concentrations (Table ), and the cells were incubated for 24 h. In the case of AFB1, an additional experiment with metabolically induced HepG2 cells was performed. For this, cells were pretreated with 10 µM β-naphthoflavone (β-NF) in serum-free medium 16 h prior to mycotoxin treatment. In addition, corresponding solvent controls treated with 1% acetonitrile (ACN) were included in the experiments. Three independent cell passages were used, each with two replicates prior to cell seeding ( n = 3 × 2).
HPLC–MS
Samples were analyzed using an Elute high-performance liquid chromatography (HPLC) pump coupled to an Impact™ II quadrupole time-of-flight (QTOF) mass spectrometer equipped with an Apollo II source (Bruker, Bremen, Germany). 45 µL of sample was injected onto a Peptide Mapping 2.1 × 150 mm column with 120 Å pore size and 2.7 µm particle size (Agilent, Santa Clara, CA, USA). The HPLC gradient of acetonitrile and water, both with 0.1% formic acid, had 100 min of active gradient time, and the mass spectrometer was operated in data-dependent acquisition (DDA) mode. Detailed method information is given in Online Resource , Table .
Data processing
Proteins were identified and quantified in a label-free quantification (LFQ) approach using MaxQuant version 2.4.3, processed with default values if not specified otherwise (Cox et al. ; Cox and Mann ). Only reviewed human proteins (UniProt proteome: UP000005640, accessed 2 May 2023) were used for identification. The false discovery rate (FDR) was set to 1% for both proteins and peptides. Methionine oxidation and protein N-terminal acetylation were allowed as variable modifications. LFQ with classic normalization and the fast LFQ option with a minimum of 5 and an average of 8 neighbors was performed. Default Bruker QTOF values were used as instrument parameters in MaxQuant. Only unmodified peptides were allowed for quantification.
Statistical analysis
The “proteinGroups.txt” file from MaxQuant was uploaded to Perseus version 2.0.11. Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) annotations were added (accessed 1 April 2024). Proteins identified by site, reverse proteins, and potential contaminants were removed. LFQ intensities were log2(x)-transformed and replicates were grouped. An experiment was defined as the combination of all replicates of treated samples of one mycotoxin with the respective controls. Volcano plots of each experiment were generated by plotting −log10(p value) against the difference of logarithmized LFQ intensities for each protein between treated sample and control, referred to as log2 fold change (log2FC) in the following. The significance of protein abundance changes was determined by t test and a permutation-based FDR. Proteins were considered differentially abundant between samples if they met the criteria of Perseus’ two-sided t test with 250 randomizations, FDR ≤ 0.05, and S0 = 0.1. The number of differentially abundant proteins (DAPs) was determined at this point.
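The per-protein volcano-plot statistic described above can be illustrated with a short Python sketch. Note that this is a simplified stand-in: it uses Welch's t test with a Benjamini–Hochberg adjustment and a plain fold-change cutoff instead of Perseus' permutation-based FDR with S0 correction, and the toy intensity matrices are purely hypothetical.

```python
import numpy as np
from scipy import stats

def volcano_stats(treated, control, fdr=0.05, min_abs_log2fc=0.5):
    """Per-protein volcano-plot statistic on log2 LFQ intensities.

    treated/control: arrays of shape (n_proteins, n_replicates).
    Returns log2 fold changes, raw p-values, and a boolean DAP mask.
    Simplified stand-in for Perseus' permutation-based FDR with S0.
    """
    log2fc = treated.mean(axis=1) - control.mean(axis=1)
    _, pvals = stats.ttest_ind(treated, control, axis=1, equal_var=False)
    # Benjamini-Hochberg step-up adjustment of the p-values
    n = len(pvals)
    order = np.argsort(pvals)
    ranked = pvals[order] * n / (np.arange(n) + 1)
    adj = np.minimum.accumulate(ranked[::-1])[::-1]
    padj = np.empty(n)
    padj[order] = np.minimum(adj, 1.0)
    dap = (padj <= fdr) & (np.abs(log2fc) >= min_abs_log2fc)
    return log2fc, pvals, dap

# Toy data: 20 proteins, 3 replicates per group; protein 0 is upregulated
rng = np.random.default_rng(42)
control = rng.normal(20.0, 0.05, size=(20, 3))
treated = rng.normal(20.0, 0.05, size=(20, 3))
treated[0] += 2.0
log2fc, pvals, dap = volcano_stats(treated, control)
```

Plotting −log10(pvals) against log2fc then yields the volcano plot; in the study itself, the corresponding significance calls were made in Perseus with 250 randomizations and S0 = 0.1.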
Interaction and enrichment analysis
Whole proteome data of each experiment, containing protein identifiers and log2 fold change (FC) values between sample and control, were uploaded to the “proteins with values/ranks” function of STRING DB version 12.0 (string-db.org, Szklarczyk et al. ) and analyzed for functional enrichments. In the case of protein groups, only the first accession was used for these functional analyses. STRING annotates proteins with database terms, e.g., GO annotations and KEGG pathways, and performs a functional enrichment analysis by calculating an enrichment score (ES). This calculation tests whether a group of proteins with a common biological function, in terms of GO or KEGG annotations, is significantly shifted towards the downregulated side, the upregulated side, or both. To calculate the ES, the mean log2FC of the proteins of a certain term is calculated first (mean of term). Second, the mean log2FC of the whole input set (mean of input) is subtracted from this mean of term (mean of term − mean of input). The ES is then calculated as the ratio between “mean of term − mean of input” and the maximum log2FC of the input set, if mean of term > mean of input, or the minimum log2FC, if mean of term < mean of input, multiplied by 10 (Szklarczyk et al. ). The expected proportion of false-positive identifications, the FDR, is calculated based on either the aggregate fold change model (Yu et al. ) or two-sided Kolmogorov–Smirnov testing, depending on the size of each term and its deviation from the mean. Bubble plots of the top functional enrichments, containing term description, ES of the term, FDR, and percentage of quantified proteins of the term, were generated in SRplot (Tang et al. ). In addition, DAPs were picked from the Perseus-generated volcano plots, uploaded as “multiple proteins” into STRING DB, and analyzed for functional enrichments and protein–protein interactions.
All active interaction sources were allowed here and a medium confidence of 0.4 was set as required interaction score. Background proteomes of HepG2 and IHKE cells were generated from deep proteome analysis approaches using high pH reversed phase fractionation with concatenation modified from Wang et al. extended by proteins identified in other in-house shotgun proteomics analyses of the respective cell lines. These proteomes were used as statistical backgrounds for interaction analysis in DAPs in STRING (protein lists in Online Resource ).
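As a concrete illustration, the ES arithmetic described in the enrichment-analysis paragraph above can be reproduced in a few lines of Python. The log2FC values below are hypothetical, and this sketch mirrors only the score itself, not STRING's FDR estimation.

```python
def enrichment_score(term_log2fc, input_log2fc):
    """STRING-style enrichment score (ES) for one annotation term.

    ES = (mean of term - mean of input), divided by the maximum
    log2FC of the input set if the term mean lies above the input
    mean (otherwise by the minimum log2FC), multiplied by 10.
    """
    mean_term = sum(term_log2fc) / len(term_log2fc)
    mean_input = sum(input_log2fc) / len(input_log2fc)
    diff = mean_term - mean_input
    denom = max(input_log2fc) if diff > 0 else min(input_log2fc)
    return 10 * diff / denom

# Hypothetical whole-proteome log2FC values and one upregulated term
proteome = [-1.0, -0.5, 0.0, 0.5, 2.0]
term = [1.5, 2.0]  # e.g., two strongly induced subunits of one complex
es = enrichment_score(term, proteome)  # positive ES: shifted upwards
```

A term whose proteins sit well above the proteome mean thus receives a large positive ES, matching the interpretation of the bubble plots in the Results.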
The six mycotoxins AFB1, OTA, CIT, DON, NIV, and Pen A were applied to the human hepatoblastoma cell line HepG2, and CIT, OTA, and a combination of both were applied to the human kidney epithelial cell line IHKE, in sub-cytotoxic concentrations. The results are described based on two evaluation methods, both assisted by STRING DB. First, bubble enrichment plots display the significant enrichment of protein groups with a common biological function towards the downregulated side, the upregulated side, or both sides of a whole proteome (comparable to gene set enrichment analysis; for detailed information, see chapter Interaction and enrichment analysis). In this case, the biological function is described as terms of Gene Ontology (GO) annotations, which comprise biological processes (BPs), cellular components (CCs), and molecular functions (MFs), as well as Kyoto Encyclopedia of Genes and Genomes (KEGG) terms, which comprise important signaling and metabolic pathways. The bubble plots presented here show only the strongest effects (highest enrichment scores, ESs) of each experiment on the cellular proteomes. The ES describes how distant the specific, term-associated proteins are from the middle of the proteome. In other words, the ES characterizes the intensity of the deregulation of a certain biological function, expressed in the form of up- or downregulated proteins. It should be noted that the ESs for the bubble plots are calculated individually for each experiment (see chapter Interaction and enrichment analysis), and for this reason, the x-axis scales are not comparable between plots.
In addition, the FDR scale and the size of the bubbles, representing the identified percentage of a term's proteins, are individual to each plot. However, all shown effects have an FDR < 0.01. Second, functional enrichments within the group of DAPs from either side were identified and taken into account for the interpretation. In this case, not the whole proteome data were used; instead, only the interactions between significantly altered proteins from either side were analyzed, regardless of their fold change. These results are shown in the Online Resources (HepG2) and 4 (IHKE). All proteomic effects of the investigated mycotoxins are provided in Online Resources and , individually per protein and as terms enriched in the STRING analyses. In addition to the enriched terms, changes in the abundance of individual proteins were considered. From all these effects, the main cellular targets of the mycotoxins' toxicity were derived and potential MoA were illustrated. The overall number of altered proteins in the HepG2 proteome varied with the mycotoxin applied. In terms of the number of DAPs, DON had the strongest effect, inducing significant changes in 17% of all proteins (see Online Resource , Figure ), according to the Perseus analysis (see chapter Statistical analysis). OTA and NIV had comparable effects, with 6.7% and 5.3%, respectively. In comparison, Pen A, CIT, and AFB1 caused fewer significant alterations, with 1.2%, 0.80%, and 0.53%, respectively, but the overall effect of AFB1 was increased to 3.2% by pretreatment of HepG2 cells with β-NF. In IHKE cells, OTA also had a stronger overall effect on the proteome, with 14% DAPs, than CIT, with 0.62%. Their combination led to 2.3% DAPs in the IHKE proteome. The following sections provide an overview of the effects caused by the individual mycotoxins.
Ochratoxin A
HepG2 cells were treated with 200 nM OTA for 24 h (Fig. , left).
The strongest effects were observed in the upregulation of the “MCM complex”, a heterohexamer controlling DNA replication in the late M to early G1 phase of the cell cycle (Lei ). The “CMG complex” showed the same values as the “MCM complex” (ES 2.43, FDR 0.0074). These two terms include the same six modulated proteins (MCM2–MCM7), while three CMG-specific proteins were not identified. These results were supported by terms associated with DNA replication and the MCM complex itself that were enriched in the group of upregulated DAPs. The deregulation of the MCM complex is associated with the development of hepatocellular carcinoma and genomic instability (Lei et al. ). MCM plays a key role in DNA replication, and replicative stress is discussed as a potential cause of the genotoxic properties of OTA. However, the underlying mechanism remains unclear (EFSA Panel on Contaminants in the Food Chain (CONTAM) et al. ; Klotz et al. ). Our results support the mechanism of DNA replication as a relevant target of OTA toxicity, as the upregulation of the MCM complex was the strongest enriched term. Potentially, cells counter-regulate the inhibitory effect of OTA on DNA replication by upregulating the MCM complex. The BPs on ribonucleoside and nucleoside monophosphate biosynthesis were upregulated with ESs of 1.80 and 1.72, respectively, both with FDRs of 0.0016, likewise supported by the DAP results (see Online Resource ). The upregulation of proteins involved in nucleotide biosynthesis could be an indirect effect of the disturbed DNA replication, for which a balanced nucleotide pool is required. The induction of the respective genes might be regulated via the transcription factor c-Myc, as there is a high overlap between its regulated genes, the proteins upregulated by OTA, and those affiliated with nucleoside monophosphate biosynthesis (Liu et al. ). Liang et al.
also described the deregulation of nucleotide metabolism and the cell cycle, as well as of DNA repair mechanisms, and the blockade of RNA synthesis by OTA in human embryonic kidney cells (HEK293), which largely supports our results. They concluded that OTA activated the apoptosis signal-regulating kinase 1 (ASK1) via oxidative stress, which in turn led to apoptosis initiation through the mentioned mechanisms. Several terms of the whole proteome data containing upregulated proteasomal proteins were enriched, with ESs from 1.75 for the CC “proteasome regulatory particle” (FDR < 0.0001) to 1.05 for the CC “proteasome complex” (FDR < 0.0001). The proteasome is a multisubunit protein complex responsible for the intracellular degradation of proteins. Its upregulation might be caused by direct effects of OTA or might represent a nonspecific stress response. On the one hand, the Nrf2 pathway is described to induce the proteasome as a reaction to oxidative stress (Pickering et al. ), which is reported to be a key mechanism of OTA toxicity (Frangiamone et al. ). On the other hand, OTA could bind directly to proteins, as described for human and murine serum albumin (Sueck et al. ; Kuhn et al. ). Malfunctioning modified proteins need to be degraded, which could induce the upregulation of the proteasome. Furthermore, Perugino et al. recently proposed, based on 3-dimensional modeling, the inhibition of a prolyl 3-hydroxylase involved in protein synthesis. This effect might induce protein damage, resulting in an increased requirement for degradation. Akpinar et al. also observed a time-dependent deregulation of proteasomal proteins in human kidney proximal tubule cells (HK-2) caused by 10 µM OTA. Furthermore, within the upregulated DAPs, different extracellular components were identified. Effects with ESs < 1 are not discussed further.
In IHKE cells, OTA induced a strong downregulation of several histones and high mobility group nucleosome-binding proteins, which resulted in the enriched MFs “nucleosomal DNA binding” and “structural constituent of chromatin” in the whole proteome data and within the downregulated DAPs (see Online Resource ). Histone downregulation can be caused by the G1 checkpoint pathway, which in turn can be activated by DNA damage (Su et al. ). On the upregulated side, mainly proteins of extracellular components and various metabolic processes were enriched (see Online Resource ). Taking the results from both cell lines together, OTA seems to affect the cell cycle, as already described (Kőszegi and Poór ). The different responses of the two cell lines could be explained by the very different methods used to obtain the cells (López-Terrada et al. ; Tveito et al. ) and by possibly large differences in the abundance of proteins such as the tumor suppressor p53 in cancer cells (Zhou and Elledge ).
Citrinin
HepG2 cells were treated with 20 µM CIT for 24 h (Fig. , right). Similar to the effects of OTA, CIT strongly induced the six MCM subunits MCM2–MCM7, resulting in an ES of 3.06 and an FDR of 0.0022 for the CC terms “MCM complex” and “CMG complex”. This effect is supported by the respective terms enriched in the DAPs and by several terms regarding DNA replication (see Online Resource ). Remarkably, the same terms of upregulated proteasomal proteins and nucleotide biosynthesis-related proteins as for OTA were enriched by CIT as well. For proteasomal proteins, ESs ranged from 1.75 for the CC “proteasome accessory complex” (FDR < 0.0001) to 1.43 for the CC “proteasome complex” (FDR < 0.0001, not part of the top 15 terms). Concerning nucleotide biosynthesis, the BP “ribonucleoside monophosphate biosynthesis” showed an ES of 1.88 with an FDR of 0.0014, and the respective nucleoside term showed an ES of 1.61 with an FDR of 0.0022.
Supporting these results, terms on the MF “nucleotide binding” were enriched in the analysis of upregulated DAPs (see Online Resource ). Concerning the MCM upregulation and the induction of the proteasome and of nucleotide biosynthesis, CIT showed effects on the proteome of HepG2 cells comparable to those of OTA (see chapter Ochratoxin A). We assume that this observation indicates a similarity in their toxicity pathways in terms of replication and oxidative stress. A comparison of the log2FC values of the six upregulated MCM proteins between the OTA and the CIT experiment by Student's t test revealed a p value of 0.931, indicating a resemblance between the mentioned effects of OTA and CIT. Oxidative stress is a comprehensively analyzed mechanism of CIT and OTA toxicity (Rašić et al. ), but replication stress has so far only been discussed for OTA (EFSA Panel on Contaminants in the Food Chain (CONTAM) et al. ). Thus, the described results reveal a new potential mechanism of CIT toxicity and concurrently suggest the polyketide-derived coumarin part of the molecules (Geisen et al. ) as responsible for this mechanism. The second strongest enriched term was the “catenin complex” (ES 2.85, FDR 0.0045), caused by the downregulation of cadherins (CDH) 1 and 2, catenins α-1 and δ-1, and junction plakoglobin, all of which are junction proteins. As this term was headed by CDH1 (log2FC = −2.45, −log10 p value = 0), which was identified in only one out of six replicates of the CIT treatment, this effect is not discussed in more detail. Proteins of the KEGG term “fructose and mannose metabolism” (ES 2.29, FDR 0.00046) were upregulated, which could affect energy production via glycolysis; ascorbate metabolism and N-glycan biosynthesis are also associated with this pathway (KEGG: hsa00051).
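The similarity check between the OTA and CIT MCM responses mentioned above amounts to a plain two-sample Student's t test on the six log2FC values. The numbers below are hypothetical placeholders, not the study's actual fold changes, which are listed in its Online Resources.

```python
from scipy import stats

# Hypothetical log2FC values for MCM2-MCM7 under each treatment
mcm_ota = [1.10, 0.95, 1.20, 1.05, 0.90, 1.15]
mcm_cit = [1.05, 1.00, 1.25, 0.95, 0.92, 1.18]

t_stat, p_value = stats.ttest_ind(mcm_ota, mcm_cit)
# A large p-value (such as the study's 0.931) gives no evidence that
# the two treatments shift the MCM subunits differently.
```

With only six values per group, such a test can only indicate the absence of a detectable difference, which is consistent with how the resemblance is interpreted in the text.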
On the other hand, some proteins affiliated with the “complement and coagulation cascades” (ES 1.91, FDR 0.00052), mainly complement factors, serine protease inhibitors, and fibrinogens, were downregulated (see Online Resource ). This could affect blood coagulation in vivo (Amara et al. ). However, no such effects of CIT have been described in the literature. Enzymes of “folate biosynthesis” were upregulated (ES 1.93, FDR 0.0087), which could explain the strong enrichment of dihydrobiopterin observed in HepG2 cells (Gerdemann et al. ). This effect might be related to nucleotide synthesis, and thereby probably to replication stress, as the output of folate metabolism also includes nucleotide precursors (Zheng and Cantley ). Gerdemann et al. also postulated the inhibition of the enzymes pyruvate carboxylase (PYC) and succinyl-CoA ligase (SUCL), which are part of the citrate cycle. Our results might also explain this observation, as PYC (log2FC −0.743, −log10 p value 1.29) and SUCL (log2FC −0.646, −log10 p value 1.33 for subunit G2) were the most strongly downregulated proteins of the enriched KEGG term “citrate cycle” (ES 1.43, FDR 0.0034), which was not listed within the top 15 terms. In IHKE cells, only the CC “mitochondrial protein-containing complex” and the KEGG term “systemic lupus erythematosus” were enriched in the whole proteome data (see Online Resource ). The latter term was mainly driven by a downregulation of histones, comparable to OTA. This histone downregulation again indicates a similarity in their MoA, which is supported by observations in both HepG2 and IHKE cells. The experiment with the combination of CIT and OTA did not show any effects pointing towards specific combinatory effects on the proteome at the used concentrations of 15 µM and 20 nM, respectively, beyond the addition of the single compound effects, based on the enriched terms in the whole proteome data (see Online Resource ).
In contrast, the combination showed a smaller number of DAPs than OTA alone (see Online Resource , Figure ). This could be caused by the inhibitory effect of CIT on the uptake of OTA described by Knecht et al. , who reported that 15 µM CIT reduced the uptake of OTA by more than 60% in IHKE cells.
Aflatoxin B1
HepG2 cells were treated with 10 µM AFB1 for 24 h (Fig. , left). Since the metabolic activation of AFB1 was shown to be critical for certain toxic mechanisms (Gerdemann et al. ; van Vleet et al. ), the same experiment was additionally conducted in β-NF-pretreated HepG2 cells (10 µM, 16 h) to induce metabolic activity, especially of CYP1A variants (Westerink and Schoonen ; Gerets et al. ). In this study, only CYP1A1 was observed as induced (see Online Resource , sheet “β-NF proteins”); CYP1A2 was shown to be induced by β-NF in HepG2 cells in former investigations (data not shown) and presumably lacks the abundance to meet the limit of detection. The latter experiment included its own corresponding control, pretreated with β-NF and afterwards treated with solvent control (see chapter Cell treatment). The results are shown in the right-hand part of Fig. . For AFB1, by far the strongest enriched term was the “cytokine–cytokine receptor interaction” (ES 2.79, FDR 0.0057), mainly caused by the upregulated cytokine “growth and differentiation factor 15” (GDF15, log2FC 1.72, −log10 p-value 4.87), but also by tumor necrosis factor receptors and interleukin receptors (see Online Resource ). In β-NF-pretreated cells, the KEGG term “cytokine–cytokine receptor interaction” was again found enriched (ES 2.29, FDR 0.0026) with the same proteins involved, but in this case, enrichment on both ends (up- and downregulation) was identified.
As for AFB1 without pretreatment, GDF15 was the main driver of this term (log2FC 3.72, −log10 p-value 4.27), but it also affected the enriched BP term “regulation of pathway-restricted SMAD protein phosphorylation” (ES 2.79, FDR 0.0046). The strong upregulation of GDF15, further enhanced by pretreatment with β-NF to increase the metabolic activation of AFB1, probably indicates an inflammatory response or other cellular dysfunctions (Wang et al. ). GDF15 is used as a biomarker for cardiovascular disease, cancer and other diseases in humans (Luan et al. ). Therefore, the effect of AFB1 on GDF15 abundance in vivo should be analyzed to prevent false-positive diagnoses of these diseases. Our results indicate that the metabolic activation of AFB1 through phase I metabolism is required for the inflammatory response in HepG2 cells, or at least induces it. The hypothesis of inflammatory processes is supported by the respective enriched terms, which include GDF15 as well as different cytokine receptors. Except for GDF15, no cytokine was detected, which could be related to their low molecular weight and subsequent loss during sample preparation. During inflammatory processes, interleukins are excreted into the cell culture medium, which was removed prior to sample preparation and therefore not analyzed in this study. Possibly, the abundance of cytokine receptors was upregulated in response to a high but undetectable cytokine concentration. Iori et al. observed the same KEGG term “cytokine–cytokine receptor interaction” as the strongest enriched in a transcriptomic approach in bovine liver cells. They proposed an activation of toll-like receptor 2 linked to inflammatory response and oxidative stress. Other transcriptomic studies using the chicken hepatocellular carcinoma cell line LMH and the bovine fetal hepatocyte cell line BFH12 found impaired genes associated with inflammation as well (Choi et al. ; Pauletto et al. ).
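For orientation, the reported log2FC values translate into linear fold changes as 2^log2FC, so the β-NF pretreatment raises the GDF15 induction from roughly 3.3-fold to roughly 13-fold:

```python
def fold_change(log2fc):
    """Convert a log2 fold change into a linear fold change (2 ** log2FC)."""
    return 2 ** log2fc

print(round(fold_change(1.72), 2))   # 3.29: GDF15 after AFB1 alone
print(round(fold_change(3.72), 2))   # 13.18: GDF15 after beta-NF pretreatment + AFB1
```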
Within the upregulated DAPs caused by AFB1 in non-pretreated cells, several terms related to the mitotic cell cycle were identified as enriched, although only 13 proteins (0.43% of all quantified) were identified as significantly upregulated (see Online Resource ). Within these terms, the aurora kinases A and B (AURKA, log2FC 0.63, −log10 p-value 3.28; AURKB, log2FC 0.75, −log10 p-value 3.04) and the inner centromere protein (INCENP, log2FC 0.76, −log10 p-value 3.53) were the key proteins. Aurora kinases and INCENP are part of the chromosomal passenger complex and play a central regulatory role in mitosis and cytokinesis. The deregulation of these processes could lead to the described general cytotoxicity (Cimbalo et al. ), but could also induce chromosomal defects (Ruchaud et al. ) and thereby contribute to the carcinogenicity of AFB1. Further enriched terms in the whole proteome dataset of AFB1 alone describe the downregulation of enzymes involved in glycolysis or nucleotide phosphorylation, with a high overlap between the proteins of these terms: all proteins of the BP “glycolytic process” were also found in the BP “nucleotide phosphorylation” (see Online Resource ). The downregulation of these enzymes matches the decreased concentrations of nucleoside derivatives and several metabolites of glycolysis found after AFB1 treatment in HepG2 cells (Gerdemann et al. ). All other top terms of AFB1 treatment in metabolically induced HepG2 cells describe downregulated proteins of processes or components of the maturation of rRNA or ribosomes. For instance, the BP “endonucleolytic cleavage in 5-ETS of tricistronic rRNA transcript (SSU-rRNA, 5.8S rRNA, LSU-rRNA)” was enriched with an ES of 2.73 and an FDR of 0.0052. None of these terms occurred in the top 15 enriched terms in non-pretreated cells.
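That a set of only 13 upregulated proteins can still yield significantly enriched terms follows from the hypergeometric model typically used for over-representation analysis. A sketch with hypothetical numbers (≈3000 quantified proteins as the universe, a 30-protein GO term, 3 of the 13 DAPs falling into it); the actual universe and term sizes of this study differ:

```python
from math import comb

def hypergeom_pval(universe, term_size, picked, overlap):
    """One-sided enrichment p-value P(X >= overlap) when `picked` proteins are
    drawn from `universe`, of which `term_size` belong to the term."""
    total = comb(universe, picked)
    return sum(
        comb(term_size, k) * comb(universe - term_size, picked - k)
        for k in range(overlap, min(picked, term_size) + 1)
    ) / total

# Hypothetical: 3000 quantified proteins, 30-protein term, 3 of 13 DAPs in it
p = hypergeom_pval(universe=3000, term_size=30, picked=13, overlap=3)
print(p < 0.001)  # True: the term is enriched despite only 13 DAPs
```

Because the expected overlap under the null is only 13 × 30 / 3000 ≈ 0.13 proteins, even three hits produce a very small p-value.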
Ribosome biogenesis and its related terms were strongly represented among the enriched terms within the group of downregulated DAPs as well (see Online Resource ), including precursors of the ribosomal LSU. The CC “preribosome, large subunit precursor” was already found in the experiment with non-pretreated cells. However, the high abundances and ESs of terms related to ribosome biogenesis or, more specifically, rRNA maturation indicate stronger effects after metabolic induction. This suggests that the activation of AFB1 through phase I metabolism enhances its effect on these terms. No such effect on ribosomes or ribosomal activity has been described for AFB1 so far, except for the aforementioned transcriptomics approach by Iori et al. , who found the term “ribosome biogenesis in eukaryotes” enriched. These results point towards a new potential cellular target of AFB1 that could contribute to its hepatotoxicity. Whether the downregulation of ribosomal proteins actually impairs their activity in the form of protein synthesis needs to be investigated in further experiments. Effects with ESs < 1 are not discussed in detail.
Penitrem A
HepG2 cells were treated with 10 µM Pen A for 24 h. The enrichment analysis of the few DAPs (1.2%) revealed no enriched GO or KEGG terms in these groups. However, the analysis of the whole proteome dataset revealed significant enrichments of certain biological functions (Fig. ). The strongest effect was observed in the downregulation of proteins involved in the “cholesterol biosynthetic process” (ES 1.54, FDR 0.0030), while several other terms containing “sterol”, “steroid” or “secondary alcohol” showed a high overlap with this term. The diterpene part of the chemical structure of Pen A might cause the downregulation of proteins involved in these processes, as it resembles the polycyclic backbone of cholesterol.
Comparably, the exogenous steroid hypocholamide regulates the expression of genes involved in cholesterol and fatty acid homeostasis via the liver X receptor (Song and Liao ). Potentially, Pen A activates negative regulatory feedback pathways of cholesterol synthesis and metabolism by binding to this receptor. Inhibited synthesis of cholesterol in the liver in vivo can affect the uptake, metabolism and transport of lipids, but more severe effects on the entire organism are also described, especially during developmental stages (Peeples et al. ). Another term contains downregulated proteins of the “proton-transporting two-sector ATPase complex” (ES 1.09, FDR 0.0056), of which 24 different ATPases or subunits were identified. These also led to the enriched terms concerning proton motive force-driven ATP synthesis. The effect on mitochondrial ATP synthesis was highly specific, as almost all ATPases or subunits were downregulated. Its relevance for cellular energy levels is apparent and could explain the cytotoxic effects of Pen A at higher concentrations (Gerdemann et al. ; Kalinina et al. ). A third effect was the downregulation of valine-, leucine- and isoleucine-degrading enzymes (ES 1.02, FDR < 0.0001). The deregulation of branched-chain amino acid degradation via transamination and oxidative decarboxylation is associated with obesity, insulin resistance and diabetes (Choi et al. ). Few studies have analyzed cellular toxicity pathways of Pen A so far, and these focused on cytotoxicity (Kalinina et al. ), metabolic effects (Gerdemann et al. ) or tremorgenic effects in the central nervous system (Berntsen et al. ). Our study suggests three new potential mechanisms deregulated by Pen A: sterol biosynthesis and metabolism, mitochondrial energy production, and branched-chain amino acid degradation. These results can contribute to a further understanding of the detailed mechanisms behind the (cyto)toxicity of Pen A.
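How unlikely a uniform direction across 24 subunits would be under a null of random up-/downregulation can be gauged with a simple sign test. Note that this treats the subunits as independent, which members of one complex are not, so the number is purely illustrative:

```python
from math import comb

def sign_test_p(n_down, n_total):
    """Two-sided sign-test p-value for n_down of n_total proteins shifting down,
    under the null that up- and downregulation are equally likely (p = 0.5)."""
    tail = sum(comb(n_total, k) for k in range(n_down, n_total + 1)) / 2 ** n_total
    return min(1.0, 2 * tail)

# All 24 identified ATPase complex subunits were downregulated
p = sign_test_p(24, 24)
print(p)  # ~1.2e-07: a uniform direction is very unlikely under the null
```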
Future studies should include the investigation of proteomic alterations in cells of the central nervous system, the main site of action of Pen A toxicity in vivo. For this purpose, e.g., CCF-STTG1 cells could be used, in which Pen A has shown stronger cytotoxicity than in HepG2 cells (Kalinina et al. ).
Trichothecenes
For the two type B trichothecenes DON and NIV, a deviating presentation of the enrichment analysis results was chosen. Due to their well-described ribotoxicity, effects on the abundance of ribosomal proteins were expected. These were observed in the form of upregulated proteins involved in ribosomal biogenesis. However, while the corresponding enriched terms were the strongest on the upregulated side, they did not reach the overall top 15 enriched terms, since the terms on the downregulated side showed much higher ESs. As we still intended to demonstrate the ribotoxicity-related effects of trichothecenes, the following bubble plots are divided into up- and downregulated terms.
Deoxynivalenol
HepG2 cells were treated with 1 µM DON for 24 h. The presented results are divided into enriched terms caused by upregulated (Fig. , left) and downregulated proteins (Fig. , right). On the upregulated side, the strongest enrichments were observed for the CCs “box C/D RNP complex” (ES 2.32, FDR 0.0026) and “preribosome, large subunit precursor” (ES 2.30, FDR < 0.0001) and the BP “maturation of LSU-rRNA” (ES 2.26, FDR < 0.0001), specifically from the tricistronic (SSU-rRNA, 5.8S rRNA, LSU-rRNA) rRNA transcript (ES 2.30, FDR < 0.0001). Several further terms described the biogenesis of ribosomes directly or indirectly (MF: “rRNA methyltransferase activity”; BPs: “ribosomal large subunit biogenesis”, “rRNA methylation”). In addition, the CC term “nucleolar exosome (RNase complex)” and two BP terms on the processing of small nucleolar RNA (snoRNA) were enriched.
All these terms were also found enriched in the group of upregulated DAPs (see Online Resource ), along with several other terms regarding ribosomes or ribosomal biogenesis. DON is a well-known ribotoxin and thereby impairs protein synthesis (McCormick et al. ). HepG2 cells seem to counter-regulate the inhibited ribosomal activity by upregulating proteins required to generate new ribosomes. Remarkably, the binding to the LSU becomes apparent in the respective enriched terms, such as “maturation of LSU-rRNA”. Besides the terms directly related to ribosomes, the terms concerning the box C/D RNP complex, the nucleolar exosome and snoRNA are associated with the biogenesis of ribosomes as well, since all of these are essential for the maturation of rRNA (Kilchert et al. ; Maden and Hughes ; Henras et al. ). On the side of downregulated proteins, the terms with the strongest enrichments are the BPs “regulation of Cdc42 protein signal transduction” (ES 8.01, FDR < 0.0001) and “positive regulation of cholesterol efflux” (ES 7.78, FDR < 0.0001) as well as the KEGG term “neuroactive ligand-receptor interaction” (ES 6.79, FDR < 0.0001). The latter term covered only 1% (3 out of 329) of the proteins of the whole term and was headed by the protein angiotensinogen (AGT, log2FC −1.91, −log10 p-value 8.28). Remarkably, most terms were influenced by a strong downregulation of different apolipoproteins (Apos, see Online Resource ). For example, APOA1 was the strongest downregulated protein of this experiment (log2FC −2.20, −log10 p-value 3.01). Among some other proteins, the downregulated Apos resulted in an affiliation of most terms with lipid metabolism. The same was observed for the group of downregulated DAPs, in which mainly Apos led to enriched terms of lipid metabolism. In addition, some extracellular components and metabolic processes were identified there (see Online Resource ). Adverse effects of DON on lipid metabolism were recently described by Jin et al. , who observed disorders in the livers of mice with high-fat-diet-induced obesity, and by Del Favero et al. , who reported alterations in lipid biosynthesis in human epidermal cells. Previously, weight loss in high-fat-diet-induced mice after DON treatment was also described by Flannery et al. . Our results support these findings, as especially Apos were downregulated. Apos are the protein part of lipoproteins, which represent the transport form of lipids in body fluids. Due to their key role in lipid metabolism, Apo disorders can lead to several illnesses, such as dyslipidemia, obesity or cardiovascular diseases (Albitar et al. ). These results led to two hypotheses to explain the specific downregulation of Apos. The first one suggests that DON inhibits cholesterol synthesis comparably to statins, which are drugs used for people with a high risk of cardiovascular diseases (Alenghat and Davis ). The decreased cholesterol concentration would finally downregulate the synthesis of Apos. The second hypothesis proposes a link to the ribotoxicity of DON. Potentially, Apos are produced in very high amounts in untreated HepG2 cells and, for that reason, are the protein class most affected by an inhibited overall protein synthesis. Both hypotheses should be investigated in further experiments and could shed light on a second main mechanism of trichothecene toxicity.
Nivalenol
HepG2 cells were treated with 0.5 µM NIV for 24 h. The presented results are divided into enriched terms caused by upregulated (Fig. , left) and downregulated proteins (Fig. , right). The strongest upregulated terms were observed for the CC “preribosome, large subunit precursor” (ES 1.51, FDR 0.00046) and the MF “RNA methyltransferase activity” (ES 1.33, FDR 0.0071). Comparably to DON, all terms are directly or indirectly associated with the biogenesis of ribosomes, especially of the LSU.
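Calls such as "strongest downregulated protein" come from a volcano-style classification on log2FC and −log10 p-value. A sketch, with the cutoffs (|log2FC| ≥ 1, p < 0.05) as assumptions, since the study's actual thresholds are defined in its methods section:

```python
from math import log10

def classify_dap(log2fc, neg_log10_p, fc_cut=1.0, p_cut=0.05):
    """Label a protein as an up-/downregulated DAP or not significant ("ns")."""
    if neg_log10_p < -log10(p_cut) or abs(log2fc) < fc_cut:
        return "ns"
    return "down" if log2fc < 0 else "up"

# Volcano-plot rows taken from the text plus one hypothetical filler protein
rows = [("APOA1", -2.20, 3.01), ("AGT", -1.91, 8.28),
        ("GDF15", 1.72, 4.87), ("XYZ1", 0.30, 0.50)]
print({gene: classify_dap(fc, p) for gene, fc, p in rows})
# {'APOA1': 'down', 'AGT': 'down', 'GDF15': 'up', 'XYZ1': 'ns'}
```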
The “Cajal body” (ES 1.22, FDR < 0.0001) is a ribonucleoprotein particle (RNP) involved in the maturation of spliceosomes and ribosomes (Liang and Li ). Again, within the group of upregulated DAPs, mainly ribosome-related terms were found enriched (see Online Resource ). The downregulated side is also comparable to DON and dominated by the downregulation of Apos, with APOA1 as the second strongest downregulated protein (log2FC −1.80, −log10 p-value 1.93). This is supported by the result of the enrichment analysis within downregulated DAPs only, which also included extracellular components, the endoplasmic reticulum as well as several metabolism-related terms (see Online Resource ). However, for NIV, AGT was the strongest downregulated protein (log2FC −1.89, −log10 p-value 5.16) and was found to mainly affect the KEGG term “neuroactive ligand-receptor interaction” and the BP “regulation of systemic arterial blood pressure by hormone”. Angiotensin, the product of AGT cleavage, is mainly known to regulate blood pressure, but the precursor AGT was also reported to be involved in lipid metabolism, which could explain its co-downregulation with Apos (Kim et al. ). The BP “vitamin biosynthetic process” was mainly driven by CYP27A1 and PSAT1 (see Online Resource ), both of which are involved in vitamin synthesis. As they are not part of the same pathway, this term is not discussed in detail. Besides the discussed terms, two enriched MFs concerning proteoglycan binding were observed. The very high overlap between the enriched terms after DON and NIV treatment underlines the similarity of their MoA. This was expected, since both mycotoxins are type B trichothecenes that differ only in the hydroxy group at position 4 of nivalenol, which is a hydrogen in deoxynivalenol (Online Resource , Figure ).
However, even this small distinction in the chemical structure seems to result in different biological activities, which also became apparent in a study on cytotoxicity in different cell lines: Nagashima described more than twofold higher concentrations of DON required for 50% inhibition of cell proliferation (IC50) compared to NIV. Previous in vitro bioactivity studies of trichothecenes mainly focused on the inhibition of protein synthesis, apoptosis and inflammation (Rocha et al. ). Our bottom-up proteomics approach also characterized the ribotoxicity as a main cellular target and revealed the ability of HepG2 cells to counter-regulate the inhibited protein synthesis at sub-cytotoxic trichothecene concentrations by upregulating ribosome biogenesis. However, we additionally identified the distinct downregulation of Apos and AGT that could impair the lipid metabolism extensively. Whether the ribotoxicity of DON and NIV is connected to the downregulation of those proteins should be investigated in future studies.
Ochratoxin A
HepG2 cells were treated with 200 nM OTA for 24 h (Fig. , left). The strongest effects were observed in the upregulation of the “MCM complex”, a heterohexamer controlling DNA replication in the late M to early G1 phase of the cell cycle (Lei ). The “CMG complex” showed the same values as the “MCM complex”, an ES of 2.43 and an FDR of 0.0074. These two terms include the same six modulated proteins (MCM2–MCM7), but three CMG-specific proteins were not identified. These results were supported by enriched terms associated with DNA replication and the MCM complex itself in the group of upregulated DAPs. The deregulation of the MCM complex is associated with the development of hepatocellular carcinoma and genomic instability (Lei et al. ). MCM plays a key role in DNA replication, and replicative stress is discussed as a potential cause of the genotoxic properties of OTA.
However, the underlying mechanism remains unclear (EFSA Panel on Contaminants in the Food Chain (CONTAM) et al. ; Klotz et al. ). Our results support DNA replication as a relevant target of OTA toxicity, as the upregulation of the MCM complex was the strongest enriched term. Potentially, cells counter-regulate the inhibitory effect of OTA on DNA replication by upregulating the MCM complex. The BPs on ribonucleoside and nucleoside monophosphate biosynthesis were upregulated with ESs of 1.80 and 1.72, respectively, and FDRs of 0.0016 each, likewise supported by the DAP results (see Online Resource ). The upregulation of proteins involved in nucleotide biosynthesis could be an indirect effect of the disturbed DNA replication, for which a balanced nucleotide pool is required. The induction of the respective genes might be regulated via the transcription factor c-Myc, as there is a high overlap between its regulated genes, the proteins upregulated by OTA and those affiliated with nucleoside monophosphate biosynthesis (Liu et al. ). Liang et al. also described the deregulation of nucleotide metabolism and the cell cycle as well as DNA repair mechanisms and the blockade of RNA synthesis by OTA in human embryonic kidney cells (HEK293), which largely supports our results. They concluded that OTA activated the apoptosis signal-regulating kinase 1 (ASK1) via oxidative stress, which in turn led to apoptosis initiation by the mentioned mechanisms. Several terms of the whole proteome data containing upregulated proteasomal proteins were shown to be enriched, with ESs from 1.75 for the CC “proteasome regulatory particle” (FDR < 0.0001) to 1.05 for the CC “proteasome complex” (FDR < 0.0001). The proteasome is a multisubunit protein complex responsible for the intracellular degradation of proteins. Its upregulation might be caused by direct effects of OTA or might represent an unspecific stress response.
On the one hand, the Nrf2 pathway is described to induce the proteasome as a reaction to oxidative stress (Pickering et al. ), which is reported to be a key mechanism of OTA toxicity (Frangiamone et al. ). On the other hand, OTA could directly bind to proteins, as described for human and murine serum albumin (Sueck et al. ; Kuhn et al. ). Malfunctioning modified proteins need to be degraded, which could induce the upregulation of the proteasome. Furthermore, Perugino et al. recently proposed, based on 3-dimensional modeling, the inhibition of a prolyl 3-hydroxylase involved in protein synthesis. This effect might induce protein damage, resulting in an increased requirement for degradation. Akpinar et al. also observed a time-dependent deregulation of proteasomal proteins in human kidney proximal tubule cells (HK-2) caused by 10 µM OTA. Furthermore, within the upregulated DAPs, different extracellular components were identified. Effects with ESs < 1 are not further discussed. In IHKE cells, OTA induced a strong downregulation of several histones and high mobility group nucleosome-binding proteins, which resulted in the enriched MFs “nucleosomal DNA binding” and “structural constituent of chromatin” in the whole proteome data and within the downregulated DAPs (see Online Resource ). Histone downregulation can be caused by the G1 checkpoint pathway, which in turn can be activated by DNA damage (Su et al. ). On the upregulated side, mainly proteins of extracellular components and various metabolic processes were enriched (see Online Resource ). Taking the results from both cell lines together, OTA seems to affect the cell cycle, as already described (Kőszegi and Poór ). The different responses of the two cell lines could be explained by the very different methods used to obtain the cells (López-Terrada et al. ; Tveito et al. ) and by possibly large differences in the abundance of proteins such as the tumor suppressor p53 in cancer cells (Zhou and Elledge ).
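Drivers of terms like "structural constituent of chromatin" can be pulled out of a DAP table with a simple gene-family filter. A sketch over a hypothetical list of downregulated gene symbols (histone gene names start with H1–H4/H2A/H2B; HMGN marks the high mobility group nucleosome-binding proteins):

```python
import re

# Hypothetical downregulated gene symbols from an IHKE-like experiment
down_genes = ["H2AC4", "H2BC11", "H4C1", "H1-2", "HMGN1", "HMGN2", "ALB", "VIM"]

# Histone families (H1-H4) and HMG nucleosome-binding proteins
CHROMATIN_RE = re.compile(r"^(H1-|H2A|H2B|H3|H4|HMGN)")

chromatin_hits = [g for g in down_genes if CHROMATIN_RE.match(g)]
print(chromatin_hits)  # ['H2AC4', 'H2BC11', 'H4C1', 'H1-2', 'HMGN1', 'HMGN2']
```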
Citrinin
HepG2 cells were treated with 20 µM CIT for 24 h (Fig. , right). Similar to the effects of OTA, CIT strongly induced the six MCM subunits MCM2–MCM7, resulting in an ES of 3.06 and an FDR of 0.0022 for the CC terms “MCM complex” and “CMG complex”. This effect is supported by the respective terms enriched in the DAPs and by several terms regarding DNA replication (see Online Resource ). Remarkably, the same terms of upregulated proteasomal proteins and nucleotide biosynthesis-related proteins as for OTA were also enriched by CIT. For proteasomal proteins, ESs ranged from 1.75 for the CC “proteasome accessory complex” (FDR < 0.0001) to 1.43 for the CC “proteasome complex” (FDR < 0.0001, not part of the top 15 terms). Concerning nucleotide biosynthesis, the BP “ribonucleoside monophosphate biosynthesis” showed an ES of 1.88 with an FDR of 0.0014, and the respective nucleoside term showed an ES of 1.61 with an FDR of 0.0022.
Thus, the described results reveal a new potential mechanism of CIT toxicity and concurrently suggest the polyketide-derived coumarin part of the molecules (Geisen et al. ) as responsible for this mechanism. The second strongest enriched term was the “catenin complex” (ES 2.85, FDR 0.0045), caused by the downregulation of cadherins (CDH) 1 and 2, catenins α−1 and δ−1 and junction plakoglobin, all of which are junction proteins. As this term was headed by CDH1 (log 2 FC = − 2.45, − log 10 p value = 0), which was identified in only one out of six replicates of CIT treatment, this effect is not discussed in more detail. Proteins of the KEGG terms “fructose and mannose metabolism” (ES 2.29, FDR 0.00046) were upregulated, which could affect energy production from glycolysis, but also ascorbate metabolism and N -glycan biosynthesis are associated with this pathway (KEGG: hsa00051). On the other hand, some proteins affiliated to “complement and coagulation cascades” (ES 1.91, FDR 0.00052) were downregulated, which were mainly complement factors, serine protease inhibitors and fibrinogens (see Online Resource ). This could affect the blood coagulation in vivo (Amara et al. ). However, no such effects for CIT have been described in the literature. Enzymes of the “folate biosynthesis” were upregulated (ES 1.93, FDR 0.0087), which could explain the strong enrichment of dihydrobiopterin observed in HepG2 cells (Gerdemann et al. ). This effect might be related to nucleotide synthesis—and thereby probably to replication stress—as the output of folate metabolism also includes nucleotide precursors (Zheng and Cantley ). Gerdemann et al. also postulated the inhibition of the enzymes pyruvate carboxylase (PYC) and succinyl-CoA ligase (SUCL), which are part of the citrate cycle. 
Our results might also explain this result, as PYC (log 2 FC − 0.743, − log 10 p value 1.29) and SUCL (log 2 FC − 0.646, − log 10 p value 1.33 for subunit G2) were the strongest downregulated proteins of the enriched KEGG term “citrate cycle” (ES 1.43, FDR 0.0034) that was not listed within the top 15 terms. In IHKE cells, only the CC “mitochondrial protein-containing complex” and the KEGG term “systemic lupus erythematosus” were enriched in the whole proteome data (see Online resource ). The latter term was mainly driven by downregulation of histones, which was comparable to OTA. However, the histone downregulation still indicates a similarity in their MoA, which is supported by observations in both HepG2 and IHKE cells. The experiment with a combination of CIT and OTA did not show any effects that point towards specific combinatory effects on the proteome in the used concentrations of 15 µM and 20 nM, respectively, beyond the addition of the single compound effects—based on the enriched terms in the whole proteome data (see Online Resource ). In contrast, the combination showed a smaller number of DAPs than OTA alone (see Online Resource , Figure ). This could be caused by the inhibitory effect of CIT on the uptake of OTA described by Knecht et al. . They described, that 15 µM CIT reduced the uptake of OTA by more than 60% in IHKE cells. 1 HepG2 cells were treated with 10 µM AFB 1 for 24 h (Fig. , left). Since the metabolic activation of AFB 1 was shown to be critical for certain toxic mechanisms (Gerdemann et al. ; van Vleet et al. ), the same experiment was additionally conducted in β-NF pretreated HepG2 cells (10 µM, 16 h) to induce the metabolic activity especially of CYP1A variants (Westerink and Schoonen ; Gerets et al. ). 
In this study, only CYP1A1 was observed as induced (see Online Resource , sheet “β-NF proteins”), but CYP1A2 was shown to be induced by β-NF in HepG2 cells in former investigations (data not shown) and presumably lacks abundance to meet the limit of detection. The latter experiment included its own corresponding control pretreated with β-NF and afterwards treated with solvent control (see chapter Cell treatment). The results are shown in the right-hand part of Fig. . For AFB 1 , by far the strongest enriched term was the “cytokine–cytokine receptor interaction” (ES 2.79, FDR 0.0057), mainly caused by the upregulated cytokine “growth and differentiation factor 15” (GDF15, log 2 FC 1.72, − log 10 p value 4.87), but also by tumor necrosis factor receptors and interleukin receptors (see Online Resource ). In β-NF pretreated cells, again the KEGG term “cytokine–cytokine receptor interaction” was found enriched (ES 2.29, FDR 0.0026) with the same proteins involved, but in this case, enrichment on both ends (up- and downregulation) was identified. Like for AFB 1 without pretreatment, GDF15 was the main driver for this term (log 2 FC 3.72, − log 10 p value 4.27), but also affected the enriched BP term “regulation of pathway-restricted SMAD protein phosphorylation” (ES 2.79, ES 0.0046). The strong upregulation of GDF15, even enhanced by pretreatment with β-NF to increase metabolic activation of AFB 1 , probably indicates an inflammatory response or other cellular dysfunctions (Wang et al. ). GDF15 is used as a biomarker for cardiovascular disease, cancer and other diseases in humans (Luan et al. ). Therefore, the effect of AFB 1 on GDF15 abundance in vivo should be analyzed to prevent false-positive diagnosis of these diseases. Our results indicate that the metabolic activation of AFB 1 through phase I metabolism is required for the inflammatory response in HepG2 cells or at least induces it. 
The hypothesis of inflammatory processes is supported by respective enriched terms that include GDF15 as well as different cytokine receptors. Except for GDF15, no cytokine was detected, which could be related to their low molecular weight and subsequent loss during sample preparation. During inflammatory processes, interleukins are excreted to cell culture media which was removed prior to sample preparation and, therefore, not analyzed in this study. Eventually, the abundance of cytokine receptors could have been upregulated because of a high, but not detectable cytokine concentration. Iori et al. observed the same KEGG term “cytokine–cytokine receptor interaction” as strongest enriched in a transcriptomic approach in bovine liver cells. They proposed an activation of the toll-like receptor 2 linked to inflammatory response and oxidative stress. Other transcriptomic studies using the chicken hepatocellular carcinoma cell line LMH and the bovine fetal hepatocyte cell line BFH12 found impaired genes associated with inflammation as well (Choi et al. ; Pauletto et al. ). Within the upregulated DAPs caused by AFB 1 in non-pretreated cells, several terms related to the mitotic cell cycle were identified as enriched, although only 13 proteins (0.43% of all quantified) were identified as significantly upregulated (see Online Resource ). Within these terms, the aurora kinases A and B (AURKA, log 2 FC 0.63, − log 10 p value 3.28; AURKB, log 2 FC 0.75, − log 10 p value 3.04) and the inner centromere protein (INCENP, log 2 FC 0.76, − log 10 p value 3.53) were the key proteins. Aurora kinases and INCENP are parts of the chromosomal passenger complex and play a central regulatory role in mitosis and cytokinesis. The deregulation of these processes could lead to the described general cytotoxicity (Cimbalo et al. ), but could also induce chromosomal defects (Ruchaud et al. ) and thereby contribute to the carcinogenicity of AFB 1 . 
Further enriched terms in the whole proteome dataset of AFB 1 alone describe the downregulation of enzymes involved in glycolysis or nucleotide phosphorylation, with a high overlap between the proteins of these terms: all proteins of the BP “glycolytic process” were also found in the BP “nucleotide phosphorylation” (see Online Resource ). The downregulation of these enzymes matches the decreased concentration of nucleoside derivatives and several glycolytic metabolites found after AFB 1 treatment in HepG2 cells (Gerdemann et al. ). All other top terms of AFB 1 treatment in metabolically induced HepG2 cells describe downregulated proteins of processes or components of the maturation of rRNA or ribosomes. For instance, the BP “endonucleolytic cleavage in 5-ETS of tricistronic rRNA transcript (SSU-rRNA, 5.8S rRNA, LSU-rRNA)” was enriched with an ES of 2.73 and an FDR of 0.0052. None of these terms occurred in the top 15 enriched terms in non-pretreated cells. Ribosome biogenesis and its related terms, including precursors of the ribosomal LSU, were also strongly represented among the enriched terms within the group of downregulated DAPs (see Online Resource ). The CC “preribosome, large subunit precursor” was already found in the experiment with non-pretreated cells. However, the high abundances and ESs of terms related to ribosome biogenesis or, more specifically, rRNA maturation indicate stronger effects after metabolic induction. This suggests that the activation of AFB 1 through phase I metabolism enhances its effect on these terms. However, no such effect on ribosomes or ribosomal activity has been described for AFB 1 so far, except for the aforementioned transcriptomics approach by Iori et al. , who found the term “ribosome biogenesis in eukaryotes” enriched. These results point towards a new potential cellular target of AFB 1 that could contribute to its hepatotoxicity.
Whether the downregulation of ribosomal proteins actually impairs their activity in the form of protein synthesis needs to be investigated in further experiments. Effects with ESs < 1 are not discussed in detail. HepG2 cells were treated with 10 µM Pen A for 24 h. The enrichment analysis of the few DAPs (1.2%) revealed no enriched GO or KEGG terms in these groups. However, the analysis of the whole proteome dataset revealed significant enrichments of certain biological functions (Fig. ). The strongest effect was observed in the downregulation of proteins involved in the “cholesterol biosynthetic process” (ES 1.54, FDR 0.0030), while several other terms containing “sterol”, “steroid” or “secondary alcohol” showed a high overlap with this term. The diterpene part of the chemical structure of Pen A might cause the downregulation of proteins involved in these processes, as it resembles the polycyclic backbone of cholesterol. Similarly, the exogenous steroid hypocholamide regulates the expression of genes involved in cholesterol and fatty acid homeostasis via the liver X receptor (Song and Liao ). Potentially, Pen A activates negative regulatory feedback pathways of cholesterol synthesis and metabolism by binding to this receptor. Inhibited synthesis of cholesterol in the liver in vivo can affect the uptake, metabolism and transport of lipids, but more severe effects on the entire organism have also been described, especially during developmental stages (Peeples et al. ). Another term contains downregulated proteins of the “proton-transporting two-sector ATPase complex” (ES 1.09, FDR 0.0056), of which 24 different ATPases or subunits were identified. These also led to the enriched terms concerning proton motive force-driven ATP synthesis. The effect on mitochondrial ATP synthesis was highly specific, as almost all ATPases or subunits were downregulated.
Its relevance for cellular energy levels is apparent and could explain the cytotoxic effects of Pen A at higher concentrations (Gerdemann et al. ; Kalinina et al. ). A third effect was the downregulation of valine-, leucine- and isoleucine-degrading enzymes (ES 1.02, FDR < 0.0001). The deregulation of branched-chain amino acid degradation via transamination and oxidative decarboxylation is associated with obesity, insulin resistance and diabetes (Choi et al. ). Few studies have analyzed cellular toxicity pathways of Pen A so far, and these focused on cytotoxicity (Kalinina et al. ), metabolic (Gerdemann et al. ) or tremorgenic effects in the central nervous system (Berntsen et al. ). Our study suggests three new potential mechanisms deregulated by Pen A, namely sterol biosynthesis and metabolism, mitochondrial energy production and branched-chain amino acid degradation. These results can contribute to a more detailed understanding of the mechanisms behind the (cyto)toxicity of Pen A. Future studies should include the investigation of proteomic alterations in cells of the central nervous system, the main site of action of Pen A toxicity in vivo. For this purpose, e.g., CCF-STTG1 cells could be used, in which Pen A has shown stronger cytotoxicity than in HepG2 cells (Kalinina et al. ). For the two type B trichothecenes DON and NIV, a different presentation of the enrichment analysis results was chosen. Due to their well-described ribotoxicity, effects on the abundance of ribosomal proteins were expected. These were observed in the form of upregulated proteins involved in ribosome biogenesis. The corresponding enriched terms were the strongest on the upregulated side, but not within the overall top 15 enriched terms, since the terms on the downregulated side showed much higher ESs. As we still intended to demonstrate the ribotoxicity-related effects of trichothecenes, the following bubble plots are divided into up- and downregulated terms.
HepG2 cells were treated with 1 µM DON for 24 h. The presented results are divided into enriched terms caused by upregulated (Fig. , left) and downregulated proteins (Fig. , right). On the upregulated side, the strongest enrichments were observed for the CCs “box C/D RNP complex” (ES 2.32, FDR 0.0026) and “preribosome, large subunit precursor” (ES 2.30, FDR < 0.0001) and the BP “maturation of LSU-rRNA” (ES 2.26, FDR < 0.0001), specifically from the tricistronic (SSU-rRNA, 5.8S rRNA, LSU-rRNA) rRNA transcript (ES 2.30, FDR < 0.0001). Several further terms described the biogenesis of ribosomes directly or indirectly (MF: “rRNA methyltransferase activity”; BPs: “ribosomal large subunit biogenesis”; “rRNA methylation”). In addition, the CC term “nucleolar exosome (RNase complex)” and two BP terms on the processing of small nucleolar (sno(s)) RNA were enriched. All these terms were also found enriched in the group of upregulated DAPs (see Online Resource ), as were several other terms regarding ribosomes or ribosome biogenesis. DON is a well-known ribotoxin and thereby impairs protein synthesis (McCormick et al. ). HepG2 cells seem to counter-regulate the inhibited ribosomal activity by upregulating proteins required to generate new ribosomes. Remarkably, the binding to the LSU becomes apparent in the respective enriched terms, like the “maturation of LSU-rRNA”. Besides the terms directly related to ribosomes, the terms concerning the box C/D RNP complex, the nucleolar exosome and sno(s) RNA are associated with the biogenesis of ribosomes as well, since all of these are essential for the maturation of rRNA (Kilchert et al. ; Maden and Hughes ; Henras et al. ).
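Whether a term such as “box C/D RNP complex” is over-represented among the regulated proteins is, at its core, a one-sided hypergeometric question. A minimal sketch with invented counts (the study used dedicated enrichment software, and its ES/FDR statistics are not reproduced by this toy test):

```python
from math import comb

def hypergeom_enrichment_p(N, K, n, k):
    """One-sided p-value that a term with K members in a background of
    N quantified proteins contains >= k hits among n regulated proteins."""
    total = comb(N, n)
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / total

# invented counts: background of 3000 proteins, a 30-member term,
# 100 regulated proteins, 5 of which belong to the term
p = hypergeom_enrichment_p(N=3000, K=30, n=100, k=5)
```

With ~1 hit expected by chance, five hits give a small p-value, which would then be FDR-corrected across all tested terms.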
On the side of downregulated proteins, the terms with the strongest enrichments were the BPs “regulation of Cdc42 protein signal transduction” (ES 8.01, FDR < 0.0001) and “positive regulation of cholesterol efflux” (ES 7.78, FDR < 0.0001) as well as the KEGG term “neuroactive ligand-receptor interaction” (ES 6.79, FDR < 0.0001). The latter term included only 1% (3 out of 329) of the identified proteins of the whole term and was headed by the protein angiotensinogen (AGT, log 2 FC − 1.91, − log 10 p value 8.28). Remarkably, most terms were influenced by a strong downregulation of different apolipoproteins (Apos, see Online Resource ). For example, APOA1 was the most strongly downregulated protein of this experiment (log 2 FC − 2.20, − log 10 p value 3.01). Together with some other proteins, the downregulated Apos linked most terms to lipid metabolism. The same was observed for the group of downregulated DAPs, in which mainly Apos led to enriched terms of lipid metabolism. In addition, some extracellular components and metabolic processes were identified there (see Online Resource ). Adverse effects of DON on lipid metabolism were recently described by Jin et al. , who observed disorders in livers of high-fat-diet-induced obesity mice, and by Del Favero et al. , who reported alterations in lipid biosynthesis in human epidermal cells. Previously, weight loss in high-fat-diet-induced mice after DON treatment was also described by Flannery et al. . Our results support these findings, as especially Apos were downregulated. Apos are the protein part of lipoproteins, which represent the transport form of lipids in body fluids. Due to their key role in lipid metabolism, Apo disorders can lead to several illnesses, such as dyslipidemia, obesity or cardiovascular diseases (Albitar et al. ). These results led to two hypotheses to explain the specific downregulation of Apos.
The first hypothesis suggests that DON inhibits cholesterol synthesis comparably to statins, which are drugs used for people with a high risk of cardiovascular diseases (Alenghat and Davis ). The decreased cholesterol concentration would ultimately downregulate the synthesis of Apos. The second hypothesis proposes a link to the ribotoxicity of DON. Potentially, Apos are produced in very high amounts in untreated HepG2 cells and, for that reason, they are the protein class most affected by an inhibited overall protein synthesis in HepG2 cells. Both hypotheses should be investigated in further experiments and could shed light on a second main mechanism of trichothecene toxicity. HepG2 cells were treated with 0.5 µM NIV for 24 h. The presented results are divided into enriched terms caused by upregulated (Fig. , left) and downregulated proteins (Fig. , right). The strongest upregulated terms were observed for the CC “preribosome, large subunit precursor” (ES 1.51, FDR 0.00046) and the MF “RNA methyltransferase activity” (ES 1.33, FDR 0.0071). Comparable to DON, all terms are directly or indirectly associated with the biogenesis of ribosomes, especially of the LSU. The “Cajal body” (ES 1.22, FDR < 0.0001) is a ribonucleoprotein particle (RNP) involved in the maturation of spliceosomes and ribosomes (Liang and Li ). Again, within the group of upregulated DAPs, mainly ribosome-related terms were found enriched (see Online Resource ). The downregulated side is also comparable to DON and dominated by the downregulation of Apos, with APOA1 as the second most strongly downregulated protein (log 2 FC − 1.80, − log 10 p value 1.93). This is supported by the result of the enrichment analysis within downregulated DAPs only, which also included extracellular components, the endoplasmic reticulum as well as several metabolism-related terms (see Online Resource ).
However, for NIV, AGT was the most strongly downregulated protein (log 2 FC − 1.89, − log 10 p value 5.16) and was found to mainly affect the KEGG term “neuroactive ligand-receptor interaction” and the BP “regulation of systemic arterial blood pressure by hormone”. Angiotensin, the product of AGT cleavage, is mainly known to regulate blood pressure, but the precursor AGT has also been reported to be involved in lipid metabolism, which could explain its co-downregulation with the Apos (Kim et al. ). The BP “vitamin biosynthetic process” was mainly driven by CYP27A1 and PSAT1 (see Online Resource ), both of which are involved in vitamin synthesis. As they are not part of the same pathway, this term will not be discussed in detail. Besides the discussed terms, two enriched MFs concerning proteoglycan binding were observed. The very high overlap between the enriched terms after DON and NIV treatment underlines the similarity in their MoA. This was expected, since both mycotoxins are type B trichothecenes that differ only at position 4, where nivalenol carries a hydroxy group and deoxynivalenol a hydrogen (Online Resource , Figure ). However, even this small difference in chemical structure seems to result in different biological activities, which also became apparent in a study on cytotoxicity in different cell lines: Nagashima described more than twofold higher concentrations of DON required for 50% inhibition of cell proliferation (IC 50 ) compared to NIV. Previous in vitro bioactivity studies of trichothecenes mainly focused on the inhibition of protein synthesis, apoptosis and inflammation (Rocha et al. ). Our bottom-up proteomics approach also characterized the ribotoxicity as a main cellular target and revealed the ability of HepG2 cells to counter-regulate the inhibited protein synthesis at sub-cytotoxic trichothecene concentrations by upregulating ribosome biogenesis.
However, we additionally identified the distinct downregulation of Apos and AGT that could extensively impair lipid metabolism. Whether the ribotoxicity of DON and NIV is connected to the downregulation of those proteins should be investigated in future studies. The presented work investigated the effects of six selected mycotoxins on the proteomes of human hepatoblastoma cells (HepG2) and human epithelial kidney cells (IHKE). The aim was the identification of main cellular targets and the underlying MoA. The cells were treated with sub-cytotoxic concentrations of the mycotoxins to induce proteomic alterations without directly activating acute cell death mechanisms. An overview of the effects on the cellular proteomes is depicted in Fig. . For instance, the trichothecenes DON and NIV induced a specific upregulation of proteins that are involved in the biogenesis of ribosomes. In the most strongly enriched terms, the binding of these mycotoxins to the LSU became apparent. On the downregulated side, certain terms regarding lipid metabolism were enriched, mainly driven by decreased Apos. OTA and CIT likewise revealed some commonalities, inducing the upregulation of the MCM complex and nucleotide biosynthesis, presumably indicating replication stress. The shared proteasome upregulation by OTA and CIT could be a rather unspecific response towards oxidative stress or indicate a direct interaction of these mycotoxins with proteins. The effects on DNA replication, proteasome and nucleotide synthesis suggest a similarity between the MoA of OTA and CIT that could be caused by their coumarin-derived backbone. However, CIT also seems to affect primary metabolic pathways such as fructose, mannose and folate metabolism. AFB 1 induced the upregulation of GDF15 and some cytokine receptors, pointing towards an inflammatory response. Within the upregulated DAPs, a specific effect on mitosis and cytokinesis was identified.
Furthermore, after pretreatment of HepG2 cells with β-NF to induce metabolic activity and thereby generate AFB 1 phase I metabolites in vitro, several proteins involved in ribosome biogenesis were specifically downregulated, which was reflected in the respective terms. Pen A mainly affected sterol metabolism, but showed further effects on mitochondrial energy production and branched-chain amino acid degradation. The effects of CIT and OTA on IHKE proteomes differed from those in HepG2 cells, but still supported their proposed MoA in terms of replication stress. In conclusion, the investigated mycotoxins caused diverse responses in the cellular proteomes. On the one hand, some well-known toxicity pathways were observed as strongly affected biological functions, e.g., in the form of ribosome biogenesis upregulated by trichothecenes or the inflammatory response after AFB 1 treatment. On the other hand, novel potential targets were identified, like the cholesterol metabolism affected by Pen A, ribosomal proteins and the cell cycle affected by AFB 1 , Apos modulated by trichothecenes and alterations in DNA replication proteins induced by OTA and CIT. Certainly, the presented study exhibits some limitations. Approximately 3000 proteins were quantified in the datasets, which evidently represent only a part of the proteome. The described effects were presumably observed in abundant proteins, which limits the overall completeness of proteomic analyses in general. Alterations within the least abundant proteins are not captured by such methods. For this purpose, more powerful instruments or deep proteome approaches are required. In addition, results from in vitro experiments only allow a limited prediction of the in vivo situation, as several toxicokinetic and -dynamic factors are not taken into account. However, the presented results still enable the identification of cellular effects of mycotoxins, caused by well-described as well as by potentially new MoA.
Therefore, the study encompasses novel aspects of mycotoxins’ cellular targets that should improve the elucidation of their toxicodynamic properties. Thus, this work represents significant progress for mycotoxin research within the AOP framework. Furthermore, it underlines the high potential of omics techniques to characterize biological activities of compounds of interest in an unprecedented way. Future studies should focus on a more detailed elucidation of the proposed effects, for example by investigating concentration and time dependence. As the method confirmed previously described MoA, it can be applied to investigate the effects of less well-characterized mycotoxins and can also be transferred to other types of cells or even ex vivo samples. Furthermore, single mechanisms could be analyzed in highly specific assays. This would signify a major step towards a deeper elucidation of the bioactivity of mycotoxins. Below is the link to the electronic supplementary material. Supplementary file1 (DOCX 152 KB) Supplementary file2 (XLSX 147 KB) Supplementary file3 (XLSX 15405 KB) Supplementary file4 (XLSX 6611 KB)
Cardiac magnetic resonance assessment of left ventricular structural and functional changes after bariatric surgery in obese individuals with preserved left ventricular ejection fraction | 7f01bd54-df00-490a-94cb-02092c816637 | 11839351 | Surgical Procedures, Operative[mh] | Materials and methods. 1.1 Study population: Obese individuals scheduled for laparoscopic sleeve gastrectomy at our hospital between December 2020 and December 2022 were prospectively enrolled. Inclusion criteria: (1) body mass index (BMI) ≥ 28.0 kg/m²; (2) meeting the indications for bariatric surgery . Exclusion criteria: (1) history of cardiovascular disease, such as cardiomyopathy or valvular heart disease; (2) LVEF < 50%; (3) chronic liver or kidney disease, malignancy, or other conditions that may affect the cardiovascular system; (4) alcohol addiction or drug abuse; (5) contraindications to MRI. All obese participants underwent CMR before surgery and at 1 and 12 months after surgery, and clinical data were collected. Age- and sex-matched healthy controls were also enrolled, with inclusion criteria of (1) 18.5 kg/m² < BMI < 24.0 kg/m² and (2) normal physical examination and CMR, and exclusion criteria of (1) history of cardiovascular disease or other chronic diseases that may affect the cardiovascular system, (2) hypertension, diabetes, or dyslipidemia, (3) surgery within the previous 3 months, and (4) contraindications to MRI. The study complied with the Declaration of Helsinki (2013 revision) and was approved by the Ethics Review Committee of West China Hospital, Sichuan University (approval No. 2016355); all participants provided written informed consent. 1.2 Image acquisition: Scanning was performed on a Siemens 3.0 T MR scanner (Magnetom Skyra, Siemens Healthcare, Medical Solutions, Erlangen, Germany) with an 18-channel body phased-array coil. Cine imaging used a balanced steady-state free precession sequence, covering contiguous short-axis slices from the left ventricular base to the apex plus two-, three-, and four-chamber long-axis views. Parameters: repetition time 3.4 ms; echo time 1.3 ms; field of view 320–380 mm²; slice thickness 8 mm; no interslice gap; flip angle 40°–50°; matrix 256 × 144; temporal resolution 37–42 ms; 25 reconstructed phases. 1.3 Image analysis. 1.3.1 Conventional cardiac function: Images were analyzed with CVI42 post-processing software (cvi42® version 5.13.5, Circle Cardiovascular Imaging, Canada). At end-systole and end-diastole, the software automatically delineated the left ventricular endocardial and epicardial contours, with manual adjustment according to standardized post-processing recommendations , yielding left ventricular end-diastolic volume (LVEDV), left ventricular end-systolic volume (LVESV), left ventricular mass (LVMASS), and LVEF. Based on preoperative LVEF, obese participants were divided into an LVEF ≥ 60% group and a 50% ≤ LVEF < 60% group. 1.3.2 Myocardial strain and strain rate: At end-diastole, the software automatically delineated the left ventricular contours on long- and short-axis images, with manual adjustment. Short-axis images yielded global radial strain (GRS), global circumferential strain (GCS), systolic global radial strain rate (GRSR-S), systolic global circumferential strain rate (GCSR-S), diastolic global radial strain rate (GRSR-D), and diastolic global circumferential strain rate (GCSR-D); long-axis images yielded global longitudinal strain (GLS), systolic global longitudinal strain rate (GLSR-S), and diastolic global longitudinal strain rate (GLSR-D). 1.4 Reproducibility analysis: Two radiographers, each with more than 3 years of post-processing experience, performed the analysis. One measured all datasets and, 1 month later, re-analyzed 20 randomly selected cases to assess intra-observer agreement; the other measured the same 20 cases to assess inter-observer agreement. 1.5 Statistical analysis: SPSS 23.0 and GraphPad Prism 10.2.2 were used. Normality of continuous data was assessed with the Shapiro-Wilk test. Baseline comparisons between the obese groups and controls used one-way ANOVA or the Kruskal-Wallis H test, with Bonferroni-corrected post hoc pairwise comparisons. Comparisons between baseline and postoperative time points used one-way repeated-measures ANOVA or the Friedman test, with Bonferroni correction. Correlations were assessed with Pearson or Spearman rank analysis. Intra- and inter-observer agreement was assessed with the intraclass correlation coefficient (ICC); ICC > 0.75 indicated good reproducibility. P < 0.05 was considered statistically significant. Sample size was calculated with G*Power 3.1.9.7 (repeated-measures ANOVA, within factors; effect size 0.25; α = 0.05; power (1 − β) = 0.80; 2 groups; 3 measurements; correlation among repeated measures 0.5; nonsphericity correction 1), yielding a minimum total of 28 obese participants, with at least 14 per group. Results. 2.1 Baseline comparison between the obese groups and controls: Seventy-five obese individuals scheduled for bariatric surgery and 46 age- (t = −0.365, P = 0.716) and sex-matched (χ² = 0.126, P = 0.832) healthy controls were enrolled; by preoperative LVEF, the obese participants comprised an LVEF ≥ 60% group (n = 43) and a 50% ≤ LVEF < 60% group (n = 32). LVEDV and LVMASS were greater in the LVEF ≥ 60% group than in controls (all P < 0.05), while GLS, GRSR-D, GCSR-D, and GLSR-D were lower (all P < 0.05). In the 50% ≤ LVEF < 60% group, LVEDV, LVESV, and LVMASS were greater than in controls (all P < 0.05), and LVEF, GRS, GCS, GLS, GRSR-S, GLSR-S, GRSR-D, GCSR-D, and GLSR-D were lower (all P < 0.05). 2.2 Postoperative weight change: Thirty-eight obese participants completed CMR at both 1 and 12 months after surgery; by preoperative LVEF, 20 were in the LVEF ≥ 60% group and 18 in the 50% ≤ LVEF < 60% group. BMI differed significantly across postoperative time points in the LVEF ≥ 60% group (F = 76.668, P < 0.001) and likewise in the 50% ≤ LVEF < 60% group (F = 104.237, P < 0.001).
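Observer agreement in this study is summarized with the intraclass correlation coefficient (computed in SPSS). As an aside, a consistency-type ICC(3,1) can be sketched directly from the two-way ANOVA decomposition; the ratings below are invented:

```python
def icc_3_1(ratings):
    """ICC(3,1): two-way mixed effects, consistency, single measurement.

    ratings: one row per subject, one column per observer/session.
    """
    n, k = len(ratings), len(ratings[0])          # subjects, observers
    grand = sum(map(sum, ratings)) / (n * k)
    row_means = [sum(r) / k for r in ratings]
    col_means = [sum(r[j] for r in ratings) / n for j in range(k)]
    ss_total = sum((x - grand) ** 2 for r in ratings for x in r)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)   # subjects
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)   # observers
    ms_rows = ss_rows / (n - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# two observers measuring the same four (invented) subjects
icc = icc_3_1([[10, 11], [12, 12], [15, 14], [11, 10]])
```

An ICC above 0.75 is conventionally read as good reproducibility, the criterion used in this study.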
2.3 Left ventricular structural and functional changes in the LVEF ≥ 60% group: LVESV, GRS, GCS, and GLS did not differ significantly across time points (all P > 0.05), whereas LVEDV, LVMASS, LVEF, GRSR-S, GCSR-S, GLSR-S, GRSR-D, GCSR-D, and GLSR-D did (all P < 0.05). One month after surgery, LVMASS, LVEF, GRSR-S, GCSR-S, and GLSR-S were lower than before surgery (all P < 0.05). Twelve months after surgery, LVEDV, LVMASS, LVEF, and GLSR-S were lower than before surgery (all P < 0.05), while GRSR-D, GCSR-D, and GLSR-D were higher than at 1 month (all P < 0.05). 2.4 Left ventricular structural and functional changes in the 50% ≤ LVEF < 60% group: Significant time effects were observed for LVEDV, LVESV, LVMASS, LVEF, GRS, GCS, GCSR-S, GRSR-D, GCSR-D, and GLSR-D (all P < 0.05). One month after surgery, LVMASS was lower than at baseline (P = 0.003), and GCS and GLSR-D were higher (both P < 0.05). Twelve months after surgery, LVEDV, LVESV, and LVMASS were lower than at baseline (all P < 0.05); LVEF, GRS, GCS, GRSR-D, GCSR-D, and GLSR-D were higher than at baseline (all P < 0.05); and GCSR-S was lower than at 1 month (P < 0.05). 2.5 Correlation analysis: Twelve months after surgery, the change in GCSR-S correlated negatively with the change in LVESV in the LVEF ≥ 60% group (r = −0.499, P = 0.025). In the 50% ≤ LVEF < 60% group, the changes in GRS (r = −0.492, P = 0.038), GRSR-S (r = −0.593, P = 0.009), GCSR-S (r = −0.647, P = 0.004), and GRSR-D (r = −0.504, P = 0.033) correlated negatively with the change in LVESV, and the change in GCSR-S correlated negatively with the change in LVEDV (r = −0.510, P = 0.031). 2.6 Reproducibility analysis: Intra-observer ICCs (95% confidence intervals) for all CMR parameters ranged from 0.924 (0.819, 0.969) to 0.999 (0.997, 1.000), and inter-observer ICCs from 0.913 (0.795, 0.965) to 0.989 (0.972, 0.996), indicating good reproducibility. Discussion. Using CMR, this study analyzed longitudinal changes in left ventricular structure and function at baseline and after bariatric surgery in obese individuals across different LVEF ranges. The main findings were: (1) obese individuals showed left ventricular remodeling (LV enlargement and increased LVMASS) and reduced systolic and diastolic myocardial function; (2) after bariatric surgery, left ventricular remodeling reversed (smaller LV and reduced LVMASS); (3) in the LVEF ≥ 60% group, global strains (GRS, GCS, GLS) did not change significantly after surgery; (4) in the 50% ≤ LVEF < 60% group, global strains (GRS, GCS) and diastolic global strain rates (GRSR-D, GCSR-D, GLSR-D) increased after surgery. Consistent with previous studies , we found LV enlargement and increased LVMASS in obese individuals, indicating left ventricular remodeling. Obese individuals achieve a higher cardiac output by increasing stroke volume, which raises LV filling pressure and volume and leads to LV enlargement; with chamber dilatation, wall stress increases and myocardial contraction increases compensatorily, causing wall thickening and increased LVMASS . HOMIS et al. noted that cardiac remodeling may lead to myocardial fibrosis, ventricular stiffening, and reduced systolic function, and that obesity is associated with LV systolic dysfunction even when LVEF is normal. LIU et al. found reduced LV strain and strain rates in obese individuals, suggesting subclinical impairment of LV myocardial function. Consistent with these reports, LVEF was preserved in our obese participants, yet subclinical LV systolic and diastolic function was reduced. Obesity increases LV preload and afterload, raising wall stress and causing LV dilatation and compensatory hypertrophy, a process that increases myocardial stiffness and leads to systolic and diastolic dysfunction . A meta-analysis of early cardiac structural changes after bariatric surgery found reduced LV size and LVMASS postoperatively. Bariatric surgery improves LV structure and function and may help prevent future adverse cardiovascular events . In line with these reports, LV size and LVMASS decreased after surgery in our cohort, indicating reverse remodeling. Beyond structural improvement, bariatric surgery can reduce cardiovascular morbidity and mortality in obese individuals by inducing metabolic and hemodynamic changes . In this study, LVEF, GRSR-S, GCSR-S, and GLSR-S decreased 1 month after surgery in the LVEF ≥ 60% group. This may be because obese individuals require a higher cardiac output, with compensatorily increased myocardial work; 1 month after surgery, as the obese state improved, compensatory cardiac work decreased, and LVEF and the systolic global strain rates therefore fell. At 12 months, the diastolic strain rates (GRSR-D, GCSR-D, GLSR-D) had improved markedly compared with 1 month, and GCS, GRS, GLS, and GRSR-S showed a trend toward improvement, suggesting improved subclinical diastolic function; longer follow-up is needed to determine whether bariatric surgery improves subclinical systolic function in patients with preoperative LVEF ≥ 60%. Previous studies reported increased global strain and strain rates after bariatric surgery, with improvement of subclinical LV systolic and diastolic function. The results in our 50% ≤ LVEF < 60% group are consistent with these reports: GCS improved significantly at 1 month, and GRS, GCS, GRSR-D, GCSR-D, and GLSR-D improved significantly at 12 months compared with baseline. Weight loss has been reported to benefit subclinical systolic function in obese individuals and to be valuable for reducing heart failure risk . Bariatric surgery reduces adipose tissue and modulates its secretory effects, exerting cardioprotective actions, and can promote cardiac functional improvement and lower cardiovascular risk by improving metabolic status . This study has several limitations. First, the sample size was small and follow-up lasted only 1 year; longer follow-up in large multicenter studies is needed. Second, obese individuals with preoperative LVEF < 50% were not analyzed. Third, only longitudinal changes in left ventricular structure and function were examined; the other cardiac chambers were not analyzed. Fourth, healthy controls were matched only for age and sex, without considering other confounders, so residual confounding bias is possible. Fifth, although 75 obese individuals were enrolled, only 38 were analyzed postoperatively because of loss to follow-up, carrying a considerable risk of selection bias; future studies will take measures to reduce this risk. In summary, after bariatric surgery, the changes in left ventricular structure and function in obese individuals are a gradual, dynamic process, and the pattern of change is related to preoperative LVEF. * * * Author Contribution PU Qian is responsible for conceptualization, formal analysis, investigation, methodology, validation, visualization, writing--original draft, and writing--review and editing. TANG Lu is responsible for data curation, investigation, methodology, validation, and writing--review and editing. PENG Pengfei is responsible for data curation, investigation, methodology, and validation. MING Yue is responsible for data curation, investigation, and validation. YANG Huiyi, YUE Shuting, and LI Zheng are responsible for investigation and validation. CHENG Zhong and CHEN Yi are responsible for project administration and resources. SUN Jiayu is responsible for conceptualization, funding acquisition, project administration, resources, software, supervision, and writing--review and editing. All authors consented to the submission of the article to the Journal. All authors approved the final version to be published and agreed to take responsibility for all aspects of the work. Declaration of Conflicting Interests All authors declare no competing interests.
Rapid molecular diagnostics of tuberculosis resistance by targeted stool sequencing | df77b37b-cb13-42ec-b5ed-7ec43dff98f9 | 9118838 | Pathology[mh] | Only 7.1 million (71%) of the estimated 10 million individuals with tuberculosis (TB) accessed care in 2019 . A large case detection gap exists for people living with HIV (PLHIV), children, and patients with drug-resistant tuberculosis . Multi-drug resistant (MDR) tuberculosis (resistance to at least isoniazid and rifampicin) now represents 3.3% of new tuberculosis cases and 18% of previously treated cases globally and continues to rise as a proportion of detected tuberculosis cases . There is an urgent need for rapid, comprehensive detection of drug resistance of Mycobacterium tuberculosis ( M. tuberculosis) complex strains to guide appropriate treatment regimens . Early identification of patients with multidrug-resistant tuberculosis, rapid molecular drug resistance testing (mDST), and linkage to care is paramount to decreasing transmission of MDR M. tuberculosis complex strains. The etiology of the case detection gap in low and middle-income countries is multifactorial, but in part is due to challenges with sputum collection in children and PLHIV . Young children and PLHIV are often unable to physically provide sputum samples; thus, procedures such as sputum induction or gastric aspiration are required to collect diagnostic specimens for pulmonary tuberculosis . A growing body of evidence demonstrates that M. tuberculosis can be found in the stool of patients with tuberculosis. Identification of M. tuberculosis complex in stool specimens by polymerase chain reaction (PCR), typically with the GeneXpert® MTB/RIF (Xpert), has demonstrated sensitivity between 60 and 70% against culture on respiratory specimens in children and adults . Hence, stool is now accepted as a diagnostic specimen to detect M. tuberculosis complex in children and PLHIV who have difficulty producing sputum . 
Sensitivity may be improved through specialized DNA extraction protocols . In contrast, stool culture of M. tuberculosis complex strains has a sensitivity of under 30% against respiratory culture, limiting the utility of phenotypic drug susceptibility testing (pDST) from stool specimens . Therefore, reliable methods for resistance prediction based on stool specimens are urgently needed. Herein, we share the results of an investigation to assess the feasibility and accuracy of targeted amplicon-based next-generation sequencing (tNGS) with the Deeplex® Myc-TB assay (Genoscreen, Lille, France) on DNA obtained by a specialized stool DNA extraction method, using an adjusted version of the MP Fast DNA kit for soil (MP Biochemicals, Solon, OH) . We evaluated the performance of tNGS with DNA isolated from stool specimens provided by participants from a prospective cohort of patients treated for TB in Eswatini ( n = 66; 56 with and 10 participants without M. tuberculosis complex DNA detected in stool by real-time quantitative PCR), and an independent German validation cohort of participants with culture-confirmed TB ( n = 21). We present the first evidence that tNGS not only detects M. tuberculosis complex DNA from stool samples in relation to the amount of DNA present, but also provides full mDST predictions for at least 13 anti-tuberculosis drugs. Study population and setting For the Eswatini cohort (cohort one), samples were obtained from a prospective study cohort including child and adult tuberculosis patients, at or within two weeks of treatment initiation, and their asymptomatic household contacts. Between 2014 and 2019, outpatients were recruited from tuberculosis clinics at the Mbabane Government Hospital, Baylor Children’s Foundation Clinic in Mbabane, and the Raleigh-Fitkin Memorial Hospital in Manzini. Study data was captured by trained research assistants using uniform case report forms. 
Respiratory specimens were provided by expectorated or induced sputum in adults and by induced sputum or gastric aspiration in children unable to expectorate. Participants were considered to have confirmed tuberculosis if a respiratory specimen was positive by Xpert or by liquid culture with Mycobacteria Growth Indicator Tubes (MGIT, Becton Dickinson, Franklin Lakes, NJ, USA), and probable tuberculosis if radiographs, clinical symptoms, and response to therapy were compatible with tuberculosis. For the German cohort (cohort two), stool and sputum samples from adult patients with culture-confirmed pulmonary tuberculosis were prospectively collected, following informed consent for participation in a prospective cohort, at the Medical Clinic of the Research Center Borstel, Germany, between 2018 and 2019. Patients at the Medical Clinic in Borstel are commonly referred for initiation of TB treatment or diagnostics when the diagnosis is considered at other facilities. The objective of this study was to assess the feasibility and accuracy of tNGS on M. tuberculosis complex DNA isolated from stool in patients diagnosed with tuberculosis. A cross-sectional, convenience sample of specimens that were qPCR-negative or covered a range of M. tuberculosis DNA concentrations detected by qPCR was selected to evaluate tNGS performance. Furthermore, samples from both cohorts were analyzed for concordance between mDST by stool tNGS and pDST from sputum.

Laboratory methods

Cohort One: Consistent with Eswatini national guidelines, each participant provided two sputum specimens. One was tested by Xpert MTB/RIF (2014–2019) or Xpert Ultra (2019) in accordance with manufacturer instructions. The second specimen was used for culture at the National Tuberculosis Research Laboratory (NTRL) in Mbabane, Eswatini. Sputum cultures were performed with the Mycobacterium Growth Indicator Tube (MGIT) 960 system, a liquid medium for the cultivation of mycobacteria, according to manufacturer instructions.
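The confirmed/probable case definitions given earlier in this section reduce to a small decision rule. The sketch below encodes them for clarity; the function and field names are our own illustrative choices, not identifiers from the study database.

```python
# Toy encoding of the study's case definitions; argument names are
# illustrative, not taken from the study's case report forms.

def classify_case(xpert_positive: bool, culture_positive: bool,
                  clinically_compatible: bool) -> str:
    """Return 'confirmed', 'probable', or 'not TB' per the study criteria."""
    if xpert_positive or culture_positive:
        return "confirmed"          # respiratory specimen positive by Xpert or MGIT
    if clinically_compatible:       # radiographs, symptoms, response to therapy
        return "probable"
    return "not TB"

print(classify_case(True, False, False))
print(classify_case(False, False, True))
```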
Phenotypic DST was performed with the MGIT system for isoniazid, rifampicin, pyrazinamide, streptomycin, ethambutol, and, when indicated, fluoroquinolones, amikacin, and capreomycin. Phenotypic DST on solid culture (Löwenstein-Jensen) and for second-line drugs such as moxifloxacin or bedaquiline could not be performed due to limitations in laboratory capacity in Eswatini.

Cohort Two: Each participant provided two sputum samples and a stool sample on the same day as admission for tuberculosis care. Specimens were stained for acid-fast bacilli and analyzed by microscopy and for the presence of M. tuberculosis complex DNA by Xpert Ultra, if not already performed at the referring hospital. Solid culture (Löwenstein-Jensen) and liquid culture (MGIT) were performed in addition to phenotypic DST for isoniazid, rifampicin, ethambutol, and pyrazinamide. In the case of drug resistance to isoniazid and rifampicin, comprehensive second-line pDST was performed for levofloxacin, moxifloxacin, bedaquiline, linezolid, clofazimine, cycloserine/terizidone, delamanid, amikacin, kanamycin, capreomycin, PAS, and prothionamide (representative of thiamides). Second-line DST was performed in MGIT and interpreted based on World Health Organization (WHO) critical concentrations. For cycloserine, pDST was performed on solid medium using a critical concentration of 30 mg/L.

For both cohorts, stool was frozen at −80 °C within 12 h of collection. In Eswatini, stool was frozen without preservatives in 2-g aliquots prior to DNA isolation. In Germany, stool was aliquoted as 500 mg stool in one ml 20% glycerol/PBS. Stool was thawed in batches and DNA was isolated as previously described. In brief, 500 mg of stool was processed using the MP Fast DNA kit for soil (MP Biomedicals, Solon, OH) with a six-minute homogenization via bead-beating disruption on the SI-D238 Disruptor Genie (Scientific Industries, Inc., Bohemia, NY).
The isolated DNA was tested with a previously described qPCR or with the Diarella MTB/NTM/MAC kit (Gerbion, Kornwestheim, Germany) following the manufacturer instructions, and quantified using H37Rv standard curves. Isolated DNA was sent to the Molecular and Experimental Mycobacteriology group, Research Center Borstel, Borstel, Germany, for tNGS analysis. The Deeplex® Myc-TB assay targets the full sequences (i.e., coding sequence plus part of the promoter region) or the most relevant regions of 18 drug resistance-associated genes (rpoB, ahpC, fabG1, katG, inhA, pncA, embB, gyrA, gyrB, rrs, eis, tlyA, gidB, rpsL, ethA, rv0678, rrl, rplC), combined with genomic targets for mycobacterial species identification (hsp65) and M. tuberculosis complex strain genotyping (CRISPR locus). After Deeplex® Myc-TB amplification as instructed by the manufacturer (24-plexed PCR using a single master mix), amplicon libraries were prepared using the Nextera XT kit and sequenced with 150-bp paired-end reads on a NextSeq 500 instrument (Illumina, San Diego, California, USA). Analyses were performed using the integrated bioinformatics pipeline v1.3 implemented in the Deeplex® Myc-TB web application. In short, NGS reads were automatically mapped to M. tuberculosis H37Rv reference sequences using Bowtie 2, and variants were called with a limit of 3% read proportion, depending on coverage depth. Samples were then classified in accordance with breadth of target coverage and categorized by quality as ND, −, +, ++, or +++. Detected variants were automatically associated with drug resistance or susceptibility, or with phylogenetic lineage, by comparison with integrated reference variants using the curated ReSeqTB database. When variants were not included in the database, mutations were defined as uncharacterized.
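The sample-quality and variant-calling logic just described can be sketched as follows. Only the 3% variant-frequency limit comes from the text; the breadth-of-coverage cut-offs are invented placeholders, since the assay's actual thresholds are not given here.

```python
# Illustrative sketch only: the breadth cut-offs below are invented
# placeholders; Deeplex® Myc-TB applies its own classification rules.

def coverage_category(breadth: float) -> str:
    """Map the fraction of targets covered (0..1) to a quality label."""
    if breadth == 0.0:
        return "ND"          # no target detected
    if breadth < 0.25:
        return "-"
    if breadth < 0.50:
        return "+"
    if breadth < 0.95:
        return "++"
    return "+++"

def filter_variants(calls, min_fraction=0.03):
    """Keep variants supported by at least 3% of mapped reads."""
    return [v for v in calls if v["fraction"] >= min_fraction]

calls = [
    {"gene": "rpoB", "change": "I491F", "fraction": 0.98},
    {"gene": "rv0678", "change": "G65E", "fraction": 0.06},
    {"gene": "gyrA", "change": "D94G", "fraction": 0.01},  # below the 3% limit
]
print(coverage_category(0.97), [v["change"] for v in filter_variants(calls)])
```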
Furthermore, a 401-bp segment of the hsp65 gene is used as the primary reference for mycobacterial species identification, the direct repeat region for spoligotype identification of MTBC strains, and an internal control sequence to monitor PCR inhibition. The species identification can also be used as a control for mixed infections, as the software reports more than just the best match. Mixed infection is also signaled by a phylogenetic variant detected at less than 95%, indicating the simultaneous presence of one strain harboring this variant at this percentage and another strain sharing the same sequence as the reference at this position, present at approximately 100% minus this percentage. The association between the qPCR cycle threshold category and a successful Deeplex® Myc-TB result was evaluated using a Cochran-Armitage test for trend. Cohen's kappa statistic was used to compare Deeplex® Myc-TB results on stool to sputum susceptibility results. The sequencing data have been deposited in the European Nucleotide Archive (ENA) database (accession number PRJEB47403, https://www.ebi.ac.uk/ena/browser/view/PRJEB47403?show=reads).
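The mixed-infection heuristic described in the methods, where a phylogenetic variant seen at under 95% implies a second strain at roughly the complementary proportion, can be sketched as a simplified toy (this is not the Deeplex® pipeline itself):

```python
# Toy illustration of the mixed-infection signal described above:
# a lineage-defining variant at < 95% read proportion suggests a
# second strain making up roughly the remaining share of reads.

def mixed_infection_estimate(variant_fraction: float, threshold: float = 0.95):
    """Return (is_mixed, estimated minority-strain proportion)."""
    if variant_fraction < threshold:
        return True, round(1.0 - variant_fraction, 2)
    return False, 0.0

print(mixed_infection_estimate(0.92))  # minority strain at ~8% of reads
print(mixed_infection_estimate(0.99))  # effectively a single strain
```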
Study cohorts

Cohort one included 66 patients diagnosed with tuberculosis (Table ): 56 participants with and 10 participants without M. tuberculosis complex DNA detected in stool by qPCR. The cohort was predominantly female (59%), with a median age of 31 years (interquartile range [IQR] 22–36; 10/66 were aged less than 19 years), and 67% were PLHIV, with a median CD4+ T cell count of 248 cells/ml (IQR 121–346). The majority had confirmed tuberculosis (96%), with 83% confirmed by Xpert, 73% by MGIT culture on respiratory specimens, and 85% by stool qPCR. The results of respiratory diagnostic testing compared with the stool qPCR are described in Table . Among participants positive by stool qPCR, 10/56 (18%) had a concentration of > 100 femtograms per microliter (fg/μl) (approximately 2316 CFU of M. tuberculosis), 25/56 (45%) were between 1 and 100 fg/μl, and 21/56 (37%) had < 1 fg/μl of M. tuberculosis DNA (approximately 63 CFU of M. tuberculosis). All 21 participants of cohort two (Table ) were HIV-negative adults with culture-confirmed tuberculosis, predominantly male (90%), with a median age of 30 years (IQR 22–39). Xpert Ultra detected M. tuberculosis complex DNA in sputum of 86% of the patients (16 positive and 2 trace) (Table ). Of the 16 German participants positive by stool qPCR, 2 (13%) had concentrations > 100 fg/μl, 7 (44%) were between 1 and 100 fg/μl, and 7 (44%) had < 1 fg/μl of M. tuberculosis DNA (Fig. B).

Performance of tNGS

Each DNA specimen isolated from stool was evaluated by tNGS. In the Eswatini cohort (Fig. A), of ten specimens that were negative by stool qPCR, tNGS results were negative in nine and positive in one. Overall, tNGS detected M. tuberculosis complex DNA in 38/56 (68%) of samples that were positive by M. tuberculosis complex qPCR. Of the 38 samples with tNGS results, 28 (74%) had sufficient reads for the prediction of drug resistance for up to 13 anti-tuberculosis drugs. There was a concentration-dependent relationship for tNGS drug resistance prediction: a resistance report could be produced for 7/10 (70%), 18/25 (72%), and 3/21 (14%) of samples with stool qPCR concentrations of > 100 fg/μl, 1–100 fg/μl, and < 1 fg/μl, respectively (p = 0.0004). This was confirmed by a logistic regression model, which demonstrated a strong association between increasing M. tuberculosis DNA concentrations and successful tNGS drug resistance prediction (Additional file : Table S1). There was no association between the timing of stool collection within the study enrollment window and successful tNGS (Additional file : Table S2). The quality of tNGS results increased with M. tuberculosis DNA concentration, with a median average coverage depth of 2866.1 in samples with qPCR concentrations of > 100 fg/μl, 1298.4 in samples with qPCR concentrations of 1–100 fg/μl, and 51.1 in samples with a qPCR concentration of < 1 fg/μl. In cohort two (Fig. B), tNGS detected M. tuberculosis complex DNA and produced a resistance report in 12/16 (75%) of samples that were positive by M. tuberculosis complex qPCR: 2/2 (100%), 7/7 (100%), and 3/7 (43%) of samples with stool qPCR M. tuberculosis DNA concentrations of > 100 fg/μl, 1–100 fg/μl, and < 1 fg/μl, respectively (p = 0.02).

Detailed resistance analysis

Cohort one: Among cohort one participants with paired tNGS mDST results from stool and pDST results from sputum, there was a high degree of concordance (k = 0.82) between the two assays (Fig. A and Additional file : Table S3). In 18 specimens with paired mDST and pDST results for isoniazid and ethambutol, concordance was substantial to almost perfect (k = 0.73 and k = 1, respectively). Second-line pDST results for fluoroquinolones, amikacin, and capreomycin in participants with first-line drug resistance detected by Xpert or pDST were available for three participants and were concordant with mDST results from stool. Of the 28 samples from cohort one (Fig. A) with sufficient sequencing quality, six (21%) were classified as multidrug-resistant, four of which harbored the rpoB I491F mutation. One sample was classified as extensively drug-resistant based on the identification of a fluoroquinolone resistance mutation and a mutation in rv0678; the latter has been defined as a marker for bedaquiline and clofazimine resistance. The pDST and stool mDST for this patient also demonstrated fluoroquinolone resistance. In addition, three other samples with the rpoB I491F mutation also had the mutation in rv0678 and are therefore likely to be bedaquiline- and clofazimine-resistant. In Eswatini, pDST testing for these medications was not available. Among the 18 specimens with paired molecular and phenotypic results for rifampicin resistance, two were resistant by both methods, including one specimen with the rpoB I491F mutation. However, two additional specimens with the rpoB I491F mutation identified via mDST tested susceptible by pDST. The remaining 14 specimens were susceptible by both methods. Overall concordance between mDST and pDST for rifampicin resistance was substantial (k = 0.61).
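Both reported statistics can be reproduced from the counts above. The sketch below recomputes the Cochran-Armitage trend test on the cohort-one resistance-report rates (3/21, 18/25, 7/10 across increasing concentration bins) and Cohen's kappa for the rifampicin 2×2 table (2 resistant by both, 2 resistant by mDST only, 14 susceptible by both), using only the standard library.

```python
import math

# Cochran-Armitage test for trend: resistance-report success by
# qPCR concentration bin (<1, 1-100, >100 fg/ul), scores 0, 1, 2.
def cochran_armitage(successes, totals, scores):
    n = sum(totals)
    p = sum(successes) / n
    t = sum(x * s for x, s in zip(successes, scores))
    expected = p * sum(m * s for m, s in zip(totals, scores))
    var = p * (1 - p) * (
        sum(m * s * s for m, s in zip(totals, scores))
        - sum(m * s for m, s in zip(totals, scores)) ** 2 / n
    )
    z = (t - expected) / math.sqrt(var)
    return z, math.erfc(abs(z) / math.sqrt(2))  # two-sided normal p-value

z, p = cochran_armitage([3, 18, 7], [21, 25, 10], [0, 1, 2])

# Cohen's kappa for the 18 paired rifampicin results:
# rows = mDST (R, S), columns = pDST (R, S).
def cohens_kappa(table):
    n = sum(sum(row) for row in table)
    po = sum(table[i][i] for i in range(2)) / n
    pe = sum(
        sum(table[i]) * sum(row[j] for row in table)
        for i, j in ((0, 0), (1, 1))
    ) / n ** 2
    return (po - pe) / (1 - pe)

kappa = cohens_kappa([[2, 2], [0, 14]])
print(f"trend p = {p:.4f}, kappa = {kappa:.2f}")  # trend p = 0.0004, kappa = 0.61
```

Both values match the reported p = 0.0004 and k = 0.61, which is a useful sanity check that the published counts and statistics are internally consistent.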
Notably, all of the specimens with an rpoB I491F mutation also had an rv0678 M146T mutation that confers resistance to bedaquiline and clofazimine.

Cohort Two: The German validation set (cohort two) included 21 participants with culture-confirmed tuberculosis. Stool-based mDST results were completely interpretable for drug resistance in 11 of 21 (52%) stool specimens and partially interpretable in one additional sample (B1) (Fig. B). Concordance of stool-based mDST with sputum pDST was high (k = 0.84), including for the drugs used in multidrug-resistant tuberculosis treatment regimens, for which few pDST data were available for the Eswatini samples (Fig. B and Additional file : Table S3). Six of these 12 (50%) were classified by mDST in stool as rifampicin-resistant, and one sample had an unknown mutation, rpoB L430P, and showed resistance by pDST. Two of the six samples with rifampicin resistance showed additional resistance: one was classified as pre-XDR-TB based on the identification of a fluoroquinolone resistance mutation, and another as XDR-TB due to the combination of the fluoroquinolone resistance-mediating mutation gyrA D94G and a bedaquiline resistance-mediating mutation in rv0678. The pDST for these participants confirmed the genotypically predicted resistance. For clofazimine, mDST in stool identified an rv0678 G65E mutation, but the corresponding sputum specimen was determined to be susceptible by pDST (B12). However, this mutation is flagged as a mutation with a minimal level of confidence and was also called at a low frequency of 6%.

Mixed infections

The tNGS assay also generates data on mixed infections (e.g., with two M. tuberculosis complex strains), heteroresistance, spoligotype, and phylogenetic lineage classification.
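As an illustration of the resistance categories used in this section, the sketch below applies WHO-style definitions (MDR = isoniazid plus rifampicin resistance; pre-XDR = MDR/RR plus fluoroquinolone resistance; XDR = pre-XDR plus resistance to bedaquiline or linezolid) to a set of per-drug resistance calls. The rule set is a deliberate simplification for illustration, not the Deeplex® reporting logic.

```python
# Simplified WHO-style classifier over per-drug resistance calls.
# e.g. gyrA D94G -> fluoroquinolones, rv0678 -> bedaquiline/clofazimine.

def classify(resistant: set) -> str:
    rr = "rifampicin" in resistant
    mdr = rr and "isoniazid" in resistant
    fq = "fluoroquinolones" in resistant
    bdq_lzd = bool(resistant & {"bedaquiline", "linezolid"})
    if rr and fq and bdq_lzd:
        return "XDR-TB"
    if rr and fq:
        return "pre-XDR-TB"
    if mdr:
        return "MDR-TB"
    if rr:
        return "RR-TB"
    return "not MDR/RR"

print(classify({"isoniazid", "rifampicin", "fluoroquinolones", "bedaquiline"}))
print(classify({"isoniazid", "rifampicin"}))
```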
Within cohort one, four participants were found to have mixed infections (one by species identification and three by lineage-specific SNP analysis) with the minority population detected by the species identification match at 8% of the total (Additional file : Table S4). Multiple lineages were detected in one patient with multidrug-resistant tuberculosis. SNP-based lineage prediction was possible for 28 samples, with two identified as belonging to lineage 1, three to lineage 2, ten to lineage 4.3 and 13 which could not be further classified except as not H37Rv (Additional file : Table S4). Within cohort two, two of 13 participants were found to have mixed infections (one by blast and one by lineage-specific SNP analysis) with the minority population detected by blast at 6% of the total (Additional file : Table S5). Overall, SNP-based lineage prediction was possible for 13 samples, with one identified as belonging to lineage 1, four to lineage 2, one to lineage 3, one with markers for lineage 1, 7 ( M. tuberculosis ), 5, 6 ( M. africanum ), animal lineages or M. canettii and six which could not be further classified except as being other than H37Rv (Additional file : Table S5). Cohort one included 66 patients diagnosed with tuberculosis (Table ); 56 participants with and 10 participants without M. tuberculosis complex DNA detected in stool by qPCR. The cohort was predominantly female (59%) with a median age of 31 (Interquartile (IQR 22 to 36) years (10/66 were aged less than 19 years) and 67% were PLHIV with a median CD4 + T cell count of 248 cells/ml (IQR 121–346). The majority had confirmed tuberculosis (96%) with 83% confirmed by Xpert, 73% by MGIT culture on respiratory specimens, and 85% by stool qPCR. The results of respiratory diagnostic testing compared with the stool qPCR are described in Table . Among participants positive by stool qPCR, 10/56 (18%) had a concentration of > 100 femtogram per microliter (fg/μl) (approximately 2316 CFU of M. 
tuberculosis ), 25/56 (45%) were between 1 and 100 fg/μl, and 21/56 (37%) had < 1 fg/μl of M. tuberculosis DNA (approximately 63 CFU of M. tuberculosis ) . All 21 participants of cohort two (Table ) were HIV-negative adults with culture-confirmed tuberculosis, predominantly male (90%) with a median age of 30 years (IQR 22–39). Xpert Ultra detected M. tuberculosis complex DNA in sputum of 86% of the patients (16 positive and 2 trace) (Table ). Of the 16 German participants positive by stool qPCR, 2 (13%) had concentrations > 100 fg/μl, 7 (44%) were between 1 and 100 fg/μl, and 7 (44%) had < 1 fg/μl of M. tuberculosis DNA (Fig. B). Each DNA specimen isolated from stool was evaluated by tNGS. In the Eswatini cohort (Fig. A), of ten specimens that were negative by stool quantitative qPCR, tNGS results were negative in nine and positive in one. Overall, tNGS detected M. tuberculosis complex DNA in 38/56 (68%) of samples that were positive by M. tuberculosis complex qPCR. Of the 38 samples with tNGS results, 28 (74%) had sufficient reads for the prediction of drug resistance in up to 13 anti-tuberculosis drugs. There was a concentration-dependent relationship for tNGS drug resistance prediction; it was possible for 7/10 (70%), 18/25 (72%) and 3/21 (14%) of samples with stool qPCR concentrations of > 100 fg/μl, 1 to 100 fg/μl and < 1 fg/μl to produce a resistance report, respectively ( p = 0.0004). This was confirmed by a logistic regression model, which demonstrated a strong association between increasing M. tuberculosis DNA concentrations and successful tNGS drug resistance prediction (Additional file : Table S1). There was no association between the timing of stool collection within the study enrollment window and successful tNGS (Additional file : Table S2). The quality of tNGS results increased with M. 
tuberculosis DNA concentrations, with a median average coverage depth of 2866.1 in samples with qPCR concentrations of > 100 fg/μl, of 1298.4 in samples with qPCR concentrations of 1–100 fg/μl, and 51.1 in samples with a qPCR concentration of < 1 fg/μl. In cohort two (Fig. B), tNGS detected M. tuberculosis complex DNA and produced a resistance report in 12/16 (75%) of samples that were positive by M. tuberculosis complex qPCR; 2/2 (100%), 7/7 (100%) and 3/7 (20%) of samples with stool qPCR M. tuberculosis DNA concentrations of > 100 fg/μl, 1–100 fg/μl and < 1 fg/μl, respectively ( p = 0.02). Cohort one Among cohort one participants with paired tNGS mDST results in stool and pDST results from sputum, there was a high degree of concordance ( k = 0.82) between the two assays (Fig. A and Additional file : Table S3). In 18 specimens with paired mDST and pDST results for isoniazid and ethambutol, concordance was substantial to almost perfect ( k = 0.73 and k = 1, respectively). The second-line pDST results for fluoroquinolones, amikacin and capreomycin in participants with first-line drug resistance detected by Xpert or pDST were available in three participants and were concordant with mDST results from stool. Out of the 28 samples from cohort one (Fig. A) with sufficient sequencing quality, six (21%) were classified as multidrug-resistant, four of which harbored the rpoB I491F mutation. One sample was classified as extensively drug-resistant based on the identification of a fluoroquinolone resistance mutation and a mutation in rv0678 ; the latter mutation has been defined as a marker for bedaquiline and clofazimine resistance . The pDST and stool mDST on this patient also demonstrated fluoroquinolone resistance. In addition, three other samples with the rpoB I491F mutation also had the mutation in rv0678 and are therefore likely to be bedaquiline and clofazimine resistant. In Eswatini, pDST testing for these medications was not available. 
Among the 18 specimens with paired molecular and phenotypic results for rifampicin resistance, two were resistant by both methods, including one specimen identified with the rpoB I491F mutation. However, two additional specimens identified with the rpoB I491F mutation via mDST were tested susceptible by pDST. The remaining 14 specimens were susceptible by both methods. Overall concordance between mDST and pDST for rifampicin resistance was substantial ( k = 0.61). Notably, all of the specimens with an rpoB I491F mutation also had a rv0678 M146T mutation that confers resistance to bedaquiline and clofazimine . Cohort Two The German validation set (cohort two) included 21 participants with culture-confirmed tuberculosis. Stool-based mDST results were completely interpretable for drug resistance in 11 of 21 (52%) stool specimens and partially interpretable in one additional sample (B1) (Fig. B). Concordance between stool-based mDST with sputum pDST was high ( k = 0.84) including the drugs used in multidrug-resistant tuberculosis treatment regimens, for which few pDST data were available for the Eswatini samples (Fig. B and Additional file : Table S3). Six out of these 12 (50%) were classified by mDST in stool as rifampicin-resistant and one sample had an unknown mutation in rpoB L430P, which showed resistance by pDST. Two out of the six samples with rifampicin resistance showed additional resistances: one was classified as pre-XDR-TB based on the identification of a fluoroquinolone resistance mutation and another one defined as XDR-TB due to a combination of fluoroquinolone resistance-mediating mutation gyrA D94G and bedaquiline resistance-mediating mutation in rv0678 . The pDST on these participants confirmed the genotypically predicted resistances. For clofazimine, mDST in stool identified a rv0678 G65E mutation, but the corresponding sputum specimen was determined to be susceptible by pDST (B12). 
However, this mutation is flagged as a mutation with a minimum of confidence and was also called with low frequency of 6%. Among cohort one participants with paired tNGS mDST results in stool and pDST results from sputum, there was a high degree of concordance ( k = 0.82) between the two assays (Fig. A and Additional file : Table S3). In 18 specimens with paired mDST and pDST results for isoniazid and ethambutol, concordance was substantial to almost perfect ( k = 0.73 and k = 1, respectively). The second-line pDST results for fluoroquinolones, amikacin and capreomycin in participants with first-line drug resistance detected by Xpert or pDST were available in three participants and were concordant with mDST results from stool. Out of the 28 samples from cohort one (Fig. A) with sufficient sequencing quality, six (21%) were classified as multidrug-resistant, four of which harbored the rpoB I491F mutation. One sample was classified as extensively drug-resistant based on the identification of a fluoroquinolone resistance mutation and a mutation in rv0678 ; the latter mutation has been defined as a marker for bedaquiline and clofazimine resistance . The pDST and stool mDST on this patient also demonstrated fluoroquinolone resistance. In addition, three other samples with the rpoB I491F mutation also had the mutation in rv0678 and are therefore likely to be bedaquiline and clofazimine resistant. In Eswatini, pDST testing for these medications was not available. Among the 18 specimens with paired molecular and phenotypic results for rifampicin resistance, two were resistant by both methods, including one specimen identified with the rpoB I491F mutation. However, two additional specimens identified with the rpoB I491F mutation via mDST were tested susceptible by pDST. The remaining 14 specimens were susceptible by both methods. Overall concordance between mDST and pDST for rifampicin resistance was substantial ( k = 0.61). 
Notably, all of the specimens with an rpoB I491F mutation also had a rv0678 M146T mutation that confers resistance to bedaquiline and clofazimine . The German validation set (cohort two) included 21 participants with culture-confirmed tuberculosis. Stool-based mDST results were completely interpretable for drug resistance in 11 of 21 (52%) stool specimens and partially interpretable in one additional sample (B1) (Fig. B). Concordance between stool-based mDST with sputum pDST was high ( k = 0.84) including the drugs used in multidrug-resistant tuberculosis treatment regimens, for which few pDST data were available for the Eswatini samples (Fig. B and Additional file : Table S3). Six out of these 12 (50%) were classified by mDST in stool as rifampicin-resistant and one sample had an unknown mutation in rpoB L430P, which showed resistance by pDST. Two out of the six samples with rifampicin resistance showed additional resistances: one was classified as pre-XDR-TB based on the identification of a fluoroquinolone resistance mutation and another one defined as XDR-TB due to a combination of fluoroquinolone resistance-mediating mutation gyrA D94G and bedaquiline resistance-mediating mutation in rv0678 . The pDST on these participants confirmed the genotypically predicted resistances. For clofazimine, mDST in stool identified a rv0678 G65E mutation, but the corresponding sputum specimen was determined to be susceptible by pDST (B12). However, this mutation is flagged as a mutation with a minimum of confidence and was also called with low frequency of 6%. The tNGS assay also generates data on mixed infections (e.g. with two M. tuberculosis complex strains), heteroresistance, spoligotype, and phylogenetic lineage classification. 
Within cohort one, four participants were found to have mixed infections (one by species identification and three by lineage-specific SNP analysis) with the minority population detected by the species identification match at 8% of the total (Additional file : Table S4). Multiple lineages were detected in one patient with multidrug-resistant tuberculosis. SNP-based lineage prediction was possible for 28 samples, with two identified as belonging to lineage 1, three to lineage 2, ten to lineage 4.3 and 13 that could not be further classified except as not H37Rv (Additional file : Table S4). Within cohort two, two of 13 participants were found to have mixed infections (one by BLAST and one by lineage-specific SNP analysis) with the minority population detected by BLAST at 6% of the total (Additional file : Table S5). Overall, SNP-based lineage prediction was possible for 13 samples, with one identified as belonging to lineage 1, four to lineage 2, one to lineage 3, one with markers for lineage 1, 7 ( M. tuberculosis ), 5, 6 ( M. africanum ), animal lineages or M. canettii and six that could not be further classified except as being other than H37Rv (Additional file : Table S5). In these observational cohorts of outpatients diagnosed with confirmed and probable tuberculosis in Eswatini and Germany, we demonstrate for the first time that comprehensive mDST from stool samples is possible by combining a specific DNA extraction method with targeted genome sequencing . The performance and accuracy of tNGS for molecular resistance prediction from stool samples was confirmed in our validation cohort, pointing towards stool as a diagnostic opportunity to complete rapid DST through tNGS when analysis in sputum fails or when sputum is not available. The data obtained also indicated that a simple pre-screening procedure based on qPCR standardized quantitative levels of M. tuberculosis complex DNA can be used to select samples with the highest chance of successful tNGS. 
This evidence highlights the potential to expand the role of stool as a specimen for the diagnosis of tuberculosis by allowing for rapid comprehensive mDST of first-line and second-line anti-tuberculosis drugs. Stool is now recommended as a tuberculosis diagnostic specimen by the World Health Organization for use with the Xpert assay, but resistance testing with this assay is limited to rifampicin and misses relevant mutations such as I491F in rpoB . Following a novel stool DNA extraction method , the tNGS assay provided sequence-based drug resistance information on 57% (41/72) of specimens positive by stool qPCR across both cohorts investigated in this study. The rate of tNGS M. tuberculosis complex resistance detection from stool DNA increased to 77% (34/44) when the testing was limited to specimens with a qPCR concentration of > 1 fg/μl. Similar to reductions in performance of line probe assays , WGS , and tNGS with smear-negative respiratory samples, we found a reduction in M. tuberculosis complex resistance detection by tNGS in specimens with a qPCR DNA concentration of < 1 fg/μl. This provides a potential threshold for triaging stool samples on which tNGS can reliably be performed, thereby reducing costs associated with unsuccessful runs. The DNA isolation described in this study was performed in a tuberculosis research laboratory in Eswatini, demonstrating that it can be implemented in other high-burden settings. Likewise, the Deeplex® Myc-TB assay streamlines sequencing requirements and has now been implemented in national drug resistance surveys in sub-Saharan Africa , suggesting that this approach may also be suitable for high-burden settings. Indeed, implementation of tNGS through the SeqMDRTB_NET network project ( https://ghpp.de/de/projekte/seqmdrtb-net/ ) in multiple Sub-Saharan countries, including Eswatini, is currently underway. 
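The qPCR-based pre-screening rule described above reduces to a single cut-off on DNA concentration. A minimal sketch, assuming the 1 fg/μl threshold from the text; the function name and specimen data are illustrative, not part of the study's pipeline:

```python
TNGS_THRESHOLD_FG_PER_UL = 1.0  # M. tuberculosis complex DNA cut-off reported in the text

def triage_for_tngs(qpcr_fg_per_ul):
    """Return True if a stool specimen's qPCR DNA concentration suggests tNGS is likely to succeed."""
    return qpcr_fg_per_ul > TNGS_THRESHOLD_FG_PER_UL

# Illustrative specimen concentrations in fg/μl (not study data)
specimens = {"A1": 4.2, "A2": 0.3, "A3": 1.5}
candidates = [s for s, conc in specimens.items() if triage_for_tngs(conc)]  # → ['A1', 'A3']
```

Such a triage step would route only high-yield specimens to sequencing, which is the cost argument made above.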
One important characteristic of the Deeplex® Myc-TB assay is the ability to interrogate 18 genomic regions involved in resistance development to 13 anti-tuberculosis drugs in clinical M. tuberculosis complex strains. This is crucial in areas that are affected by the epidemic spread of drug-resistant strains with resistance mutations not detected by other conventional mDST assays such as Xpert . For example, in Eswatini more than 50% of the multidrug-resistant M. tuberculosis complex strains carry the I491F mutation in rpoB , which is not detected by the Xpert and line probe assays endorsed by the WHO . As a consequence, strains with this mutation, which confers clinical resistance to rifampicin, are typically misclassified as sensitive by Xpert, line probe, and liquid pDST . This leads to delayed detection of patients affected by multidrug-resistant tuberculosis, ineffective treatment, and ongoing transmission of the I491F rpoB outbreak strains . This effect is evidenced by the increase of the I491F rpoB outbreak strains in Eswatini, from 30% in 2008/2009 to 60% in the recent drug resistance survey . Of equal concern is the fact that more than 50% of the I491F rpoB MDR M. tuberculosis complex outbreak strains also have an rv0678 M146T mutation, which confers bedaquiline and clofazimine resistance . tNGS performed directly on sputum and now on stool samples can overcome this diagnostic challenge. The capacity to perform targeted sequencing on M. tuberculosis complex DNA isolated from stool also has important implications for evaluating the impact of mixed infections on patient outcomes. In this study, 12% (6/51) of patients from both cohorts with M. tuberculosis complex detected had evidence of mixed infections, which are unlikely to be detected by liquid culture media after the growth of the predominant strain. 
Further studies are needed to determine to what extent tuberculosis patients are affected by mixed infection, potential differences in the detection of mixed infections in stool and sputum samples, and the impact of mixed infection on diagnostics and treatment outcomes. Although the data presented in our study represent an important new area of research for tuberculosis diagnostics and drug susceptibility testing, our study also has limitations. As this nested study capitalized upon an existing biorepository, there may be selection bias in the samples analyzed. The sample size was modest and limited to two distinct clinical populations. Further, we could not determine whether discordance between mDST and pDST results was due to strain differences in sputum and stool or inherent differences in molecular vs. phenotypic methods; the debate over whether the molecular or phenotypic susceptibility result should be considered the reference standard for some tuberculosis medications such as rifampicin or bedaquiline is ongoing. As M. tuberculosis culture performs poorly on stool, a comparison of mDST and pDST results on stool was not indicated. Finally, while these findings underline the potential impact of tNGS on stool samples as an additional diagnostic procedure, additional studies with a direct comparison of tNGS from stool and sputum will be needed to more accurately establish the target population, perhaps high-risk populations such as people living with HIV (PLHIV) and children, most likely to benefit from stool testing clinically. In conclusion, these findings represent an advance for tuberculosis diagnostics by demonstrating proof of principle that stool is a diagnostic specimen that can support rapid comprehensive mDST to inform clinicians on the choice of drugs for an individualized treatment regimen, a critically important advancement for patients with multidrug-resistant tuberculosis. 
The approach described in our work has the potential to increase access to comprehensive mDST for patients unable to provide sputum samples or who have greater concentrations of M. tuberculosis complex detected by stool PCR than in sputum specimens. In light of the rapid rollout of new treatment regimens for patients with multidrug-resistant tuberculosis, expanding access to targeted sequencing technology in high-burden settings must be a priority in the fight to end tuberculosis.

Additional file 1:
Table S1. Logistic regression models comparing the relationship between cycle threshold value (CT value) and femtogram per microliter (fg/μl) of MTB DNA detected by qPCR with successful detection of MTB by tNGS and successful resistance detection by tNGS.
Table S2. Logistic regression models assessing for an association between the timing of stool collection within the study window (up to 14 days from TB treatment initiation) and detection of MTB by qPCR, successful detection of MTB by tNGS, and resistance detection by tNGS.
Table S3. Comparison of genotypic DST vs. phenotypic DST resistance detection.
Table S4. Mixed infection and lineage data from cohort one.
Table S5. Mixed infection and lineage data from cohort two.
Short dentin etching with universal adhesives: effect on bond strength and gingival margin adaptation

The evolution of adhesive dentistry has spurred the development of versatile adhesives termed “universal,” “multimode,” or “multipurpose” . These adhesives offer application flexibility with a choice between self-etch and etch-and-rinse modes . In a systematic review by Cuevas-Suárez et al. , mild universal adhesives showed consistent bonding performance to dentin across different strategies, indicating their suitability for a multimode approach. The optimal application mode (self-etch or etch-and-rinse) for effective dentin bonding remains debated . While past studies favored universal adhesives in self-etch mode for long-term in vitro performance , recent in vivo research has yielded conflicting results . Authors have highlighted the importance of functional monomers like 10-methacryloyloxydecyl dihydrogen phosphate (10-MDP) in facilitating chemical bonding to dentin post-phosphoric acid application . Universal adhesives have primarily been studied on non-etched or fully demineralized dentin , yet they present potential for resin-dentin bonding on selectively etched substrates, irrespective of dentin condition. Short dentin etching, a novel technique, aims to improve resin-dentin bonding by safeguarding hydroxyapatite crystals within deep dentin collagen spaces . This preservation can be achieved by using high-molecular-weight chelating agents or by reducing the acidity of traditional etchants . However, these chelating agents are less commonly available for clinical use compared to widely used 30–40% H3PO4 . By reducing the etching time of H3PO4 , higher calcium ratios in the hybrid layer can be maintained, potentially enhancing the stability of resin-dentin bonding with universal adhesives . Short dentin etching, as evidenced in three studies , holds promise for enhancing bond strength. 
While two studies used a mild ethanol-based adhesive, which was previously reported in a systematic review to have comparable performance in both etch-and-rinse and self-etch methods , the third study focused on comparing universal adhesives with short dentin etching against their self-etch protocols, without referencing the standard 15-second etching time . Further research is necessary to explore the potential of short dentin etching with different universal adhesives to improve retention rates in etch-and-rinse mode, thereby boosting bond strength and leveraging the chemical bonding advantages of these adhesives . Subgingival cavities below the cementoenamel junction (CEJ) present challenges in restorative dentistry. Bonding to etched enamel is effective, but dentin poses difficulties due to its organic composition, tubular structure, permeability, and lower surface energy . The presence and thickness of cementum further complicate adhesion in these areas . To relocate the cervical margin above the CEJ, recommendations include using a traditional 3-step etch-and-rinse adhesive, simultaneous etching of thin interproximal enamel and dentin for a brief period, or employing 2-step self-etch adhesives without selective enamel etching . A recent systematic review suggests that bonding protocols and adhesive types do not significantly impact bond strength and marginal adaptation for deep subgingival margins . Self-etch or universal adhesives in self-etch or selective enamel etch mode offer benefits for elevating deep margins to avoid over-etching dentin with etch-and-rinse adhesives . The concept of short dentin etching may help address the issue of over-etching in these scenarios. In this study, the aim was to assess the immediate and post-aging bond strength of two universal adhesives (HEMA-containing ethanol-based and HEMA-free isopropanol-based) using self-etch and two etch-and-rinse methods (15-second and 3-second etching). 
Marginal adaptation at dentin/cementum margins was also evaluated before and after aging, alongside dentin etching patterns observed through SEM. The study sought to test several null hypotheses: (1) The type of universal adhesive used would not affect bond strength or marginal adaptation under the same strategy and aging condition. (2) The adhesive strategy, rather than the duration of phosphoric acid etching, would not affect bond strength or marginal adaptation under the same aging condition. (3) Aging conditions would not affect bond strength or marginal adaptation under the same adhesive strategy. (4) No correlation would exist between bond strength data and marginal adaptation values in any subgroup.

Materials used

This study aimed to investigate two mild universal adhesives with variations in monomer composition and solvent types. The adhesives under investigation were: (1) Tetric N Universal, HEMA-containing ethanol-based universal adhesive (Ivoclar Vivadent, Amherst, NY, USA), and (2) Prime&Bond Universal, HEMA-free isopropanol-based universal adhesive (Dentsply DeTrey GmbH, Konstanz, Germany). For detailed information on the materials used in the study, please refer to Table .

Sample size calculation for μTBS test

The required sample size for μTBS testing was determined using GPower software (Ver. 3.1.9.7; GPower, Kiel, Germany). The calculation was based on a previous study with a similar design , considering the mean and standard deviation of self-etch and short dentin etching groups after aging (31.21 ± 6.87 and 42.97 ± 7.12, respectively). A two-tailed test with an effect size of 1.68, a significance level (α) of 0.05, 80% power, and an allocation ratio of 1 were considered. The calculated sample size per subgroup was 7. 
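The quoted effect size can be cross-checked from the cited means and standard deviations. A sketch in pure Python: Cohen's d via the pooled SD, plus the standard two-sample normal-approximation sample-size formula, which slightly underestimates the exact t-test calculation performed by GPower (7 per subgroup):

```python
import math

def cohens_d(m1, sd1, m2, sd2):
    """Cohen's d for two equal-size groups, using the pooled SD."""
    pooled_sd = math.sqrt((sd1 ** 2 + sd2 ** 2) / 2)
    return abs(m1 - m2) / pooled_sd

# Post-aging μTBS (MPa) of the self-etch and short-etch groups from the cited study
d = cohens_d(31.21, 6.87, 42.97, 7.12)  # ≈ 1.68, matching the effect size used

# Normal-approximation n per group for α = 0.05 (two-tailed) and power = 0.80
z_alpha, z_beta = 1.95996, 0.84162      # standard-normal quantiles z_{0.975}, z_{0.80}
n_approx = math.ceil(2 * ((z_alpha + z_beta) / d) ** 2)  # → 6; the exact t-based test gives 7
```

The one-unit gap between the approximation and GPower's result comes from GPower using the noncentral t distribution rather than the normal approximation.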
Sample size calculation for marginal adaptation test

The sample size calculation for the marginal adaptation test was based on a previous study that assessed the marginal adaptation of a universal adhesive used in both self-etch and etch-and-rinse modes with cervical margins of Class II cavities . The mean and standard deviation of the gap were considered (15.79 ± 3.04 and 9.94 ± 2.78, respectively). A two-tailed test with an effect size of 2, a significance level (α) of 0.05, 80% power, and an allocation ratio of 1 were taken into account. The calculated sample size per subgroup was 6. An additional specimen was added to each group to accommodate the difference in study design.

Microtensile bond strength testing

Selection and preparation of teeth

For this research, 84 human upper molars were selected for μTBS testing. These molars were extracted because of periodontal disease and were similar in size. They were carefully examined under a stereomicroscope (Olympus model SZ-PT, Tokyo, Japan) to ensure they were free of caries and cracks. Soft tissue and calculus were removed using an ultrasonic scaler, and the teeth were stored in a 0.5% Chloramine T solution. All teeth were used within six months of extraction. Written consent was obtained from the patients, and the Scientific Research Ethics Committee (KFSIRB200-260) approved the use of the teeth for research. To streamline the process of preparing and restoring the teeth, the tooth roots were securely positioned vertically in cylindrical containers with an internal diameter of 29 mm and a height of 35 mm using a centralizing tool. Epoxy resin was poured into these cylinders, filling them up to 2 mm below the CEJ. A specially designed jig device was used to ensure consistent and accurate positioning of each tooth during the fixation process. 
To expose the mid-coronal dentin surfaces without damaging the pulp chamber, the occlusal surfaces of all teeth were cut parallel to the occlusal table and perpendicular to the long axis of the tooth. This was achieved by using a slow-speed diamond saw (Isomet 4000, Buehler Ltd., Lake Bluff, IL, USA) with coolant.

Experimental design and restorative procedures

After tooth preparation, the teeth were rinsed and dried. They were then randomly divided into two groups of 42 teeth each using simple randomization via Excel, based on the type of universal adhesive. Each group was further divided into three subgroups, each consisting of 14 teeth, based on the adhesive strategy: self-etch, etch and rinse with 37% phosphoric acid (N-Etch, Ivoclar Vivadent) applied for 15 s, and etch and rinse with a 3-second acid application. To create a smear layer, the tooth surfaces were polished in a circular motion using 600-grit silicon carbide paper, ensuring a consistent and standardized smear layer formation with continuous water flow for 60 s. Care was taken to rinse and dry the dentin surfaces without excessive drying. The two universal adhesives were then applied to each subgroup, air-thinned and light-cured (LED curing light, Elipar Deep Cure; 3M ESPE, St. Paul, MN, USA) with a power intensity of 1200 mW/cm² following the manufacturer’s instructions (Table ). For the etch and rinse specimens, after rinsing the etchant, the surfaces were air-dried for 10 s using an oil-free air flow three-way syringe, held at a 45-degree angle, and positioned approximately 1.5 cm away from the target area. The air pressure was set to 1 bar using a pressure regulator . In order to ensure consistency in building resin composite blocks, a custom-made Teflon mold with a rounded split design and a central square aperture (measuring 6 mm × 6 mm and 4 mm in height) was created. This Teflon mold was accurately positioned over the bonding surfaces using a specialized centralization tool . 
A 4.0 mm-thick layer of nanohybrid resin composite (TPH Spectra ST LV, Dentsply DeTrey GmbH) was applied to restore the specimens. The composite material was applied in two 2 mm-thick horizontal increments using a gold-plated instrument (Zeffiro, Lascod SpA, Italy). Each increment was light-cured separately from the occlusal surface, according to the manufacturer’s recommendations. The curing process was monitored using a radiometer (Demetron L.E.D. Radiometer, Kerr Corp., Orange, CA, USA) after every five specimens. To achieve a smooth surface and improved adaptation of resin composite, a clear polyester Mylar strip, 10 mm wide, was applied to the top layer. A transparent glass slide and a 500-gram weight were then placed on the strip for half a minute. After this period, the weight and glass slide were removed, and the surface was cured by pressing a light tip closely against the polyester strip. After removing the Teflon mold, an additional round of light curing was performed for 20 s on all restorations from the side. The specimens were then stored in distilled water at 37 ± 1 °C in an incubator for 24 h. All tooth preparation and restoration procedures were conducted by a single operator throughout the study using magnifying loupes (×4 loupes, Amtech, Wenzhou, China) and LED headlight illumination (HLP05, Amtech).

Artificial aging

In each subgroup of adhesive strategies, specimens were randomly allocated to two distinct aging conditions, with seven specimens assigned to each condition. The initial condition involved immediate testing following a 24-hour incubation in sterile water at 37 ± 1 °C. The second condition included both thermal cycling and mechanical loading procedures. Thermal cycling was performed using an SD Mechatronik Thermocycler from Germany, subjecting the specimens to 10,000 cycles to replicate a year of clinical service, in accordance with ISO 11405 guidelines . 
The cycling temperatures ranged between 5 °C and 55 °C (within a ± 2 °C tolerance range), with a 25-second dwell time and a 5-second transfer interval between baths . Mechanical loading was carried out using a four-station multi-modal ROBOTA chewing simulator (ROBOTA Model ACH-09075DC-T, Ltd., AD-Tech Technology Co., Germany) operated by a servo motor. A force equivalent to 5 kg, corresponding to 49 N of chewing force, was applied. This testing regime was repeated 150,000 times to simulate one year of clinical chewing conditions, as recommended by a previous systematic review . Thermal cycling preceded mechanical loading in the testing sequence . Post-aging, all specimens were examined for damage under an optical microscope. Each tooth within a subgroup was identified by a specific color and numbered 1 to 7, with the central area of the resin composites marked before sectioning for testing. A schematic illustration of the experimental grouping and all the steps involved in specimen preparation for the μTBS test is presented in Fig. .

Specimen preparation

Specimens were prepared as rectangular beams by cutting them perpendicular to the bonded interface with a slow-speed diamond saw and water coolant. Each beam had a cross-sectional area of 1 mm², comprising resin composite on top and coronal dentin on the bottom. Dimensions were precisely measured using a digital caliper with 0.01 mm accuracy. Five central beams were randomly selected from each specimen for testing. For the microtensile bond strength evaluation, the beams were secured in Geraldeli’s jig and attached to an Instron universal testing machine (Model: 3345, Norwood, MA, USA). They were fixed in place with cyanoacrylate-based glue (Zapit, DVA Inc, Corona, CA, USA) and connected to the machine via a 500 N load cell. A tensile load was gradually applied at a cross-head speed of 0.5 mm/minute until the beams failed. The bond strength was calculated in MPa using Bluehill Lite software (Instron, Norwood). 
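The bond strength computed by the testing software reduces to peak failure load over bonded cross-section (1 N/mm² = 1 MPa), averaged over the five beams per tooth as described in the statistical analysis. A sketch with hypothetical beam measurements (the loads and dimensions below are illustrative, not study data):

```python
def micro_tbs_mpa(peak_load_n, width_mm, thickness_mm):
    """Microtensile bond strength: failure load (N) / bonded area (mm²) = MPa."""
    return peak_load_n / (width_mm * thickness_mm)

# Hypothetical five central beams from one specimen: (peak load N, width mm, thickness mm)
beams = [(32.0, 1.0, 1.0), (28.5, 0.95, 1.0), (35.1, 1.0, 1.05),
         (30.0, 1.0, 1.0), (27.4, 0.98, 1.0)]
per_beam = [micro_tbs_mpa(*b) for b in beams]
tooth_mean = sum(per_beam) / len(per_beam)  # one μTBS value per tooth, as in the analysis
```

Measuring each beam with a 0.01 mm caliper, as described above, matters because the area appears in the denominator: a 5% dimension error propagates directly into the reported MPa value.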
After testing, the fragments were removed from the jig and inspected under a stereomicroscope (Olympus model SZ-PT) at 40× magnification to identify the failure mode, which could be adhesive, cohesive within the resin or dentin, or mixed. Specimens that failed before testing were documented but excluded from further statistical analysis. All test procedures were carried out by a skilled operator who was unaware of the restorative steps.

Marginal adaptation

Teeth selection, fixation, and cavity preparation procedures

Forty-two upper molars were selected and fixed as described in the μTBS study. A standardized set of occluso-mesial preparations was performed using a medium-grit diamond bur and a high-speed handpiece with water coolant. The preparations had consistent dimensions: a bucco-lingual width of 3 mm and an occlusal depth of 3 mm measured from the cavosurface margin of the cavity. For the box part, the base had a mesio-distal dimension of 1.5 mm, a bucco-lingual width of 3 mm, and extended 1 mm below the CEJ . Accurate measurements were obtained using a graduated periodontal probe. After the preparation, a thorough examination of the cavities was conducted. The teeth were randomly assigned to two groups ( n = 21) based on the type of universal adhesive used. Within each group, the teeth were further divided into three subgroups ( n = 7) based on the adhesive strategy for the bond strength test. Each subgroup’s teeth were marked with specific colors and sequentially numbered from 1 to 7.

Restorative procedures

After preparing the cavities, selective etching was done on the occlusal and proximal enamel margins using 37% phosphoric acid for 15 s, followed by rinsing and drying. In the dentin acid-etched subgroups of each universal adhesive group, the proximal gingival dentin margins were etched with phosphoric acid for either 15 or 3 s, followed by rinsing and drying. 
The universal adhesive was applied to all cavity surfaces, air-thinned, and light-cured according to the manufacturer’s instructions. To ensure proper sealing, Tofflemire retainers and metal matrix bands were placed around each tooth, extending beyond the gingival margin of the cavity. An Ivory matrix holder no. 1 with a rubber piece on each prong of the retainer was securely fastened over the mid-mesial and mid-distal surfaces, pressing the Tofflemire matrix-band against the two proximal surfaces of each tooth. Visual and tactile inspection with magnification and an explorer confirmed a complete seal at the gingival margins. Subsequently, all teeth were restored using the same resin composite material used in the bond strength test. The composite was inserted into the cavity in three 2 mm-thick horizontal increments and cured for 20 s from the occlusal surface. After removing the matrix-band, an additional 20-second curing was performed from the proximal surface. Finishing and polishing were carried out using Al2O3 discs (Extra-Thin Sof-Lex discs, 3M ESPE) and a low-speed handpiece with water cooling. The specimens were then subjected to ultrasonic cleaning after being removed from their fixation blocks. It is important to note that a single operator performed all the preparation and restoration procedures using magnification. A schematic illustration of the experimental grouping and all the steps involved in specimen preparation for the marginal adaptation test is presented in Fig. .

Marginal adaptation evaluation using SEM

For a detailed protocol regarding the recording of restoration margins, SEM evaluation, and scoring, please refer to another study . In summary, the mesial surfaces of all teeth were cleaned, and addition silicone impression materials were utilized to make impressions. These impressions were allowed to polymerize for 12 h and then filled with epoxy resin. 
The replicas were air-dried for 24 h at room temperature, mounted on aluminum stubs, and coated with a layer of gold using a sputter-coater. To examine the restoration/gingival margin interface, an SEM (JSM-6510LV, JEOL Ltd., Tokyo, Japan) was employed at a magnification of 30× to obtain an overall proximal view. Image analysis software was used to analyze and measure each section of the restoration/gingival dentin interface at a magnification of 200×. The marginal integrity of each restoration and gingival dentin was evaluated by determining the percentage of continuous margin (%CM), which represented the length of the perfectly sealed margin relative to the total length of both perfect and imperfect margins, measured in micrometers. Margins were classified as either continuous/gap-free or discontinuous/gap based on a predefined protocol . All SEM examinations and measurements were conducted by a single operator who was unaware of the restorative procedures. The intraexaminer reliability of the measurements was assessed by having the same examiner repeat the measurement procedures after a two-week interval, using the intraclass correlation coefficient (ICC).

Artificial aging

Following the initial assessment of the margins, all teeth underwent thermal cycling and mechanical loading according to the specific parameters outlined in detail in the bond strength section.

Evaluation of marginal adaptation after aging

Following the artificial aging procedure, the restoration/gingival dentin interfaces were reevaluated to assess their marginal adaptation. The same techniques and criteria used in the initial pre-aging evaluations were applied.

Dentin etching patterns

This test utilized a total of nine teeth, which underwent fixation and cutting until reaching the mid-coronal dentin, as described in the bond strength section. The teeth were trimmed 2 mm below the CEJ. 
Following this, the root portion of each section was embedded in epoxy resin blocks measuring 5 mm in height for easier manipulation, ensuring that the mid-coronal dentin surface faced upwards. A smear layer was formed on all dentin surfaces. The nine dentin sections were then randomly divided into three groups based on the three adhesive strategies employed. Further subdivisions were made within each group based on the universal adhesive used ( n = 1 for each). Additionally, one specimen within each group was designated as a control (without adhesive application) (Fig. ).
Group 1: Occlusal surfaces were either untreated (control) or treated with universal adhesives (self-etch strategy) without curing.
Group 2: One dentin disc was etched for 15 s with phosphoric acid, while the other two received universal adhesive without curing after the same etching duration.
Group 3: One disc was etched for 3 s with phosphoric acid, and the other two were treated with universal adhesive without curing after the same etching duration.
The resin monomers were rinsed off, and the specimens were then dehydrated using a series of ascending ethanol concentrations (50%, 70%, 80%, 90%, and 3 × 100%) . Specimens were mounted, coated, and analyzed using SEM at 2,000× and 5,000× magnifications.

Statistical analysis

Bond strength values (MPa) were calculated as the mean μTBS of five beams per tooth. SPSS software (version 20) was used for statistical analysis, which revealed a normal distribution of μTBS values, allowing for parametric tests. A three-way ANOVA assessed the effects of universal adhesive type, adhesive strategy, and aging condition on bond strength, with post-hoc analysis using the Bonferroni adjustment (α = 0.05). Cross-tabulations and the Chi-Square test were used to analyze the distribution of failure types. Pre-test failure data were analyzed using independent t-tests for universal adhesive type and aging condition, and a one-way ANOVA for adhesive strategy. 
The ICC was used to evaluate the examiner’s measurement reliability for %CM data. A two-way ANOVA analyzed the effects of universal adhesive type, adhesive strategy, and their interactions on %CM values within each aging condition. Paired-sample t-tests examined the effect of aging on %CM values for each restorative system, as the difference between paired groups was normally distributed (α = 0.05). Pearson’s correlation coefficient assessed the correlation between μTBS and %CM values. 
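The two outcome measures can be tied together in a short pure-Python sketch: %CM as sealed margin length over total measured margin length, and Pearson's r as used to test the μTBS–%CM association. All numbers below are illustrative, not study data:

```python
import math

def percent_continuous_margin(continuous_um, gap_um):
    """%CM = perfectly sealed margin length / total margin length × 100 (lengths in μm)."""
    return 100.0 * continuous_um / (continuous_um + gap_um)

def pearson_r(x, y):
    """Pearson correlation coefficient for two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

cm = percent_continuous_margin(2400.0, 600.0)  # → 80.0 (%CM for one hypothetical margin)
# Illustrative paired subgroup values: μTBS in MPa vs. %CM
r = pearson_r([30.1, 35.4, 42.9], [72.0, 81.5, 90.2])
```

A positive r across subgroups would indicate that interfaces with higher bond strength also tend to seal the gingival margin better, which is the relationship hypothesis (4) tests.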
The calculated sample size per subgroup was 6. An additional specimen was added to each group to accommodate the difference in study design. Selection and preparation of teeth For this research, 84 human upper molars were selected for μTBS testing. These molars were extracted because of periodontal disease and were similar in size. They were carefully examined under a stereomicroscope (Olympus model SZ-PT, Tokyo, Japan) to ensure they were free of caries and cracks. Soft tissue and calculus were removed using an ultrasonic scaler, and the teeth were stored in a 0.5% Chloramine T solution. All teeth were used within six months of extraction. Written consent was obtained from the patients, and the Scientific Research Ethics Committee (KFSIRB200-260) approved the use of the teeth for research. To streamline the process of preparing and restoring the teeth, the tooth roots were securely positioned vertically in cylindrical containers with an internal diameter of 29 mm and a height of 35 mm using a centralizing tool. Epoxy resin was poured into these cylinders, filling them up to 2 mm below CEJ. A specially designed jig device was used to ensure consistent and accurate positioning of each tooth during the fixation process. To expose the mid-coronal dentin surfaces without damaging the pulp chamber, the occlusal surfaces of all teeth were cut parallel to the occlusal table and perpendicular to the long axis of the tooth. This was achieved by using a slow-speed diamond saw (Isomet 4000, Buehler Ltd., Lake Bluff, IL, USA) with coolant. Experimental design and restorative procedures After tooth preparation, the teeth were rinsed and dried. They were then randomly divided into two groups of 42 teeth each using simple randomization via Excel, based on the type of universal adhesive. 
Each group was further divided into three subgroups, each consisting of 14 teeth, based on the adhesive strategy: self-etch, etch and rinse with 37% phosphoric acid (N-Etch, Ivoclar Vivadent) applied for 15 s, and etch and rinse with a 3-second acid application. To create a smear layer, the tooth surfaces were polished in a circular motion using 600-grit silicon carbide paper, ensuring a consistent and standardized smear layer formation with continuous water flow for 60 s. Care was taken to rinse and dry the dentin surfaces without excessive drying. The two universal adhesives were then applied to each subgroup, air-thinned, and light-cured (LED curing light, Elipar Deep Cure; 3M ESPE, St. Paul, MN, USA) at an irradiance of 1200 mW/cm² following the manufacturer’s instructions (Table ). For the etch and rinse specimens, after rinsing the etchant, the surfaces were air-dried for 10 s using an oil-free air flow three-way syringe, held at a 45-degree angle, and positioned approximately 1.5 cm away from the target area. The air pressure was set to 1 bar using a pressure regulator. To ensure consistency in building resin composite blocks, a custom-made Teflon mold with a rounded split design and a central square aperture (measuring 6 mm × 6 mm and 4 mm in height) was created. This Teflon mold was accurately positioned over the bonding surfaces using a specialized centralization tool. A 4.0 mm-thick layer of nanohybrid resin composite (TPH Spectra ST LV, Dentsply DeTrey GmbH) was applied to restore the specimens. The composite material was applied in two 2 mm-thick horizontal increments using a gold-plated instrument (Zeffiro, Lascod SpA, Italy). Each increment was light-cured separately from the occlusal surface, according to the manufacturer’s recommendations. The curing process was monitored using a radiometer (Demetron L.E.D. Radiometer, Kerr Corp., Orange, CA, USA) after every five specimens.
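With the curing light's irradiance fixed at 1200 mW/cm², the radiant exposure delivered per cure is just irradiance × time. A small sketch of that arithmetic (the 20-s figure corresponds to the supplementary side cure described in the protocol; per-increment times followed the manufacturers' instructions):

```python
def radiant_exposure_j_cm2(irradiance_mw_cm2: float, seconds: float) -> float:
    """Radiant exposure H (J/cm^2) = irradiance (W/cm^2) x exposure time (s)."""
    return irradiance_mw_cm2 / 1000.0 * seconds

# e.g. a 20-s exposure at 1200 mW/cm^2 delivers ~24 J/cm^2
print(radiant_exposure_j_cm2(1200, 20))
```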
To achieve a smooth surface and improved adaptation of the resin composite, a clear polyester Mylar strip, 10 mm wide, was applied to the top layer. A transparent glass slide and a 500-gram weight were then placed on the strip for half a minute. After this period, the weight and glass slide were removed, and the surface was cured by pressing the light tip closely against the polyester strip. After removing the Teflon mold, an additional round of light curing was performed for 20 s on all restorations from the side. The specimens were then stored in distilled water at 37 ± 1 °C in an incubator for 24 h. All tooth preparation and restoration procedures were conducted by a single operator throughout the study using magnifying loupes (×4 loupes, Amtech, Wenzhou, China) and LED headlight illumination (HLP05, Amtech). Artificial aging In each subgroup of adhesive strategies, specimens were randomly allocated to two distinct aging conditions, with seven specimens assigned to each condition. The initial condition involved immediate testing following a 24-hour incubation in sterile water at 37 ± 1 °C. The second condition included both thermal cycling and mechanical loading procedures. Thermal cycling was performed using an SD Mechatronik thermocycler (Germany), subjecting the specimens to 10,000 cycles to replicate a year of clinical service, in accordance with ISO 11405 guidelines. The cycling temperatures ranged between 5 °C and 55 °C (within a ± 2 °C tolerance range), with a 25-second dwell time and a 5-second transfer interval between baths. Mechanical loading was carried out using a four-station multi-modal ROBOTA chewing simulator (Model ACH-09075DC-T; AD-Tech Technology Co. Ltd., Germany) operated by a servo motor. A force equivalent to 5 kg, corresponding to 49 N of chewing force, was applied. This testing regime was repeated 150,000 times to simulate one year of clinical chewing conditions, as recommended by a previous systematic review.
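As an aside, the cycling parameters above imply a substantial machine time per batch; a rough estimate (assuming two baths per cycle and ignoring loading overhead — an illustration, not a protocol value):

```python
dwell_s, transfer_s, cycles = 25, 5, 10_000
seconds_per_cycle = 2 * (dwell_s + transfer_s)  # hot + cold bath per 5-55 degC cycle
total_hours = cycles * seconds_per_cycle / 3600
print(f"~{total_hours:.0f} h (~{total_hours / 24:.1f} days) of thermocycling")
```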
Thermal cycling preceded mechanical loading in the testing sequence. Post-aging, all specimens were examined for damage under an optical microscope. Each tooth within a subgroup was identified by a specific color and numbered 1 to 7, with the central area of the resin composites marked before sectioning for testing. A schematic illustration of the experimental grouping and all the steps involved in specimen preparation for the μTBS test is presented in Fig. . Specimen preparation Specimens were prepared as rectangular beams by cutting them perpendicular to the bonded interface with a slow-speed diamond saw and water coolant. Each beam had a cross-sectional area of 1 mm², comprising resin composite on top and coronal dentin on the bottom. Dimensions were precisely measured using a digital caliper with 0.01 mm accuracy. Five central beams were randomly selected from each specimen for testing. For the microtensile bond strength evaluation, the beams were secured in Geraldeli’s jig and attached to an Instron universal testing machine (Model 3345, Norwood, MA, USA). They were fixed in place with cyanoacrylate-based glue (Zapit, DVA Inc, Corona, CA, USA) and connected to the machine via a 500 N load cell. A tensile load was gradually applied at a cross-head speed of 0.5 mm/minute until the beams failed. The bond strength was calculated in MPa using Bluehill Lite software (Instron, Norwood). After testing, the fragments were removed from the jig and inspected under a stereomicroscope (Olympus model SZ-PT) at 40× magnification to identify the failure mode, which could be adhesive, cohesive within the resin or dentin, or mixed. Specimens that failed before testing were documented but excluded from further statistical analysis. All test procedures were carried out by a skilled operator who was unaware of the restorative steps.
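The MPa value reported by the software is simply the failure load divided by the bonded cross-sectional area of each beam; a sketch with hypothetical caliper readings:

```python
def utbs_mpa(failure_load_n: float, width_mm: float, thickness_mm: float) -> float:
    """Microtensile bond strength: stress at failure = load / bonded area (N/mm^2 == MPa)."""
    return failure_load_n / (width_mm * thickness_mm)

# hypothetical beam of ~1 mm^2 cross-section failing at 32.5 N
print(round(utbs_mpa(32.5, 1.02, 0.98), 1))  # -> 32.5
```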
Teeth selection, fixation, and cavity preparation procedures Forty-two upper molars were selected and fixed as described in the μTBS study. A standardized set of occluso-mesial preparations was performed using a medium-grit diamond bur and a high-speed handpiece with water coolant. The preparations had consistent dimensions: a bucco-lingual width of 3 mm and an occlusal depth of 3 mm measured from the cavosurface margin of the cavity. For the box part, the base had a mesio-distal dimension of 1.5 mm, a bucco-lingual width of 3 mm, and extended 1 mm below the CEJ.
Accurate measurements were obtained using a graduated periodontal probe. After the preparation, a thorough examination of the cavities was conducted. The teeth were randomly assigned to two groups ( n = 21) based on the type of universal adhesive used. Within each group, the teeth were further divided into three subgroups ( n = 7) based on the adhesive strategy for the bond strength test. Each subgroup’s teeth were marked with specific colors and sequentially numbered from 1 to 7. Restorative procedures After preparing the cavities, selective etching was done on the occlusal and proximal enamel margins using 37% phosphoric acid for 15 s, followed by rinsing and drying. In the dentin acid-etched subgroups of each universal adhesive group, the proximal gingival dentin margins were etched with phosphoric acid for either 15 or 3 s, followed by rinsing and drying. The universal adhesive was applied to all cavity surfaces, air-thinned, and light-cured according to the manufacturer’s instructions. To ensure proper sealing, Tofflemire retainers and metal matrix bands were placed around each tooth, extending beyond the gingival margin of the cavity. An Ivory matrix holder no. 1 with a rubber piece on each prong of the retainer was securely fastened over the mid-mesial and mid-distal surfaces, pressing the Tofflemire matrix band against the two proximal surfaces of each tooth. Visual and tactile inspection with magnification and an explorer confirmed a complete seal at the gingival margins. Subsequently, all teeth were restored using the same resin composite material used in the bond strength test. The composite was inserted into the cavity in three 2 mm-thick horizontal increments and cured for 20 s from the occlusal surface. After removing the matrix band, an additional 20-second curing was performed from the proximal surface. Finishing and polishing were carried out using Al₂O₃ discs (Extra-Thin Sof-Lex discs, 3M ESPE) and a low-speed handpiece with water cooling.
The specimens were then subjected to ultrasonic cleaning after being removed from their fixation blocks. It is important to note that a single operator performed all the preparation and restoration procedures using magnification. A schematic illustration of the experimental grouping and all the steps involved in specimen preparation for the marginal adaptation test is presented in Fig. . Marginal adaptation evaluation using SEM For a detailed protocol regarding the recording of restoration margins, SEM evaluation, and scoring, please refer to another study . In summary, the mesial surfaces of all teeth were cleaned, and addition silicone impression materials were utilized to make impressions. These impressions were allowed to polymerize for 12 h and then filled with epoxy resin. The replicas were air-dried for 24 h at room temperature, mounted on aluminum stubs, and coated with a layer of gold using a sputter-coater. To examine the restoration/gingival margin interface, a SEM (JSM-6510LV, JEOL Ltd., Tokyo, Japan) was employed at a magnification of 30× to obtain an overall proximal view. Image analysis software was used to analyze and measure each section of the restoration/gingival dentin interface at a magnification of 200×. The marginal integrity of each restoration and gingival dentin was evaluated by determining the percentage of continuous margin (% CM), which represented the length of the perfectly sealed margin relative to the total length of both perfect and imperfect margins, measured in micrometers. Margins were classified as either continuous/gap-free or discontinuous/gap based on a predefined protocol . All SEM examinations and measurements were conducted by a single operator who was unaware of the restorative procedures. The intraexaminer reliability of the measurements was assessed by having the same examiner repeat the measurement procedures after a two-week interval, using the intraclass correlation coefficient (ICC). 
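The %CM metric defined above reduces to a length ratio; a minimal sketch (the segment lengths in micrometers are hypothetical):

```python
def percent_cm(continuous_um: float, discontinuous_um: float) -> float:
    """Percentage of continuous margin: gap-free length over total evaluated length."""
    return 100.0 * continuous_um / (continuous_um + discontinuous_um)

print(percent_cm(2520.0, 480.0))  # -> 84.0
```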
Artificial aging Following the initial assessment of the margins, all teeth underwent thermal cycling and mechanical loading according to the specific parameters outlined in detail in the bond strength section. Evaluation of marginal adaptation after aging Following the artificial aging procedure, the restoration/gingival dentin interfaces were reevaluated to assess their marginal adaptation. The same techniques and criteria used in the initial pre-aging evaluations were applied.
This test utilized a total of nine teeth, which underwent fixation and cutting until reaching the mid-coronal dentin, as described in the bond strength section. The teeth were trimmed 2 mm below the CEJ. Following this, the root portion of each section was embedded in epoxy resin blocks measuring 5 mm in height for easier manipulation, ensuring that the mid-coronal dentin surface faced upwards. A smear layer was formed on all dentin surfaces.
The nine dentin sections were then randomly divided into three groups based on the three adhesive strategies employed. Further subdivisions were made within each group based on the universal adhesive used ( n = 1 for each). Additionally, one specimen within each group was designated as a control (without adhesive application) (Fig. ). Group 1: Occlusal surfaces were either untreated (control) or treated with universal adhesives (self-etch strategy) without curing. Group 2: One dentin disc was etched for 15 s with phosphoric acid, while the other two received universal adhesive without curing after the same etching duration. Group 3: One disc was etched for 3 s with phosphoric acid, and the other two were treated with universal adhesive without curing after the same etching duration. The resin monomers were rinsed off, and the specimens were then dehydrated using a series of ascending ethanol concentrations (50%, 70%, 80%, 90%, and 3 × 100%). Specimens were mounted, coated, and analyzed using SEM at 2,000× and 5,000× magnifications. Bond strength values (MPa) were calculated as the mean μTBS of five beams per tooth. SPSS software (version 20) was used for statistical analysis, which revealed a normal distribution of μTBS values, allowing for parametric tests. A three-way ANOVA assessed the effects of universal adhesive type, adhesive strategy, and aging condition on bond strength, with post-hoc analysis using the Bonferroni adjustment (α = 0.05). Cross-tabulations and the chi-square test were used to analyze the distribution of failure types. Pre-test failure data were analyzed using independent t-tests for universal adhesive type and aging condition, and a one-way ANOVA for adhesive strategy. The ICC was used to evaluate the examiner’s measurement reliability for %CM data. A two-way ANOVA analyzed the effects of universal adhesive type, adhesive strategy, and their interactions on %CM values within each aging condition.
Paired-sample t-tests examined the effect of aging on %CM values for each restorative system, as the difference between paired groups was normally distributed (α = 0.05). Pearson’s correlation coefficient assessed the correlation between μTBS and %CM values. μTBS results Table presents the mean μTBS values, standard deviations, and coefficients of variation for all the examined subgroups. A three-way ANOVA confirmed significant effects of all variables on bond strength values (adhesive type: p < 0.001, adhesive strategy: p = 0.04, and aging: p < 0.001). All interactions between variables were insignificant except for the interaction between adhesive type and adhesive strategy (adhesive type and adhesive strategy: p = 0.007, adhesive type and aging: p = 0.852, adhesive strategy and aging: p = 0.986, adhesive type, adhesive strategy, and aging: p = 0.938). The coefficient of variation varied across the subgroups, ranging from 14.18 to 25.2%. In comparing the μTBS values of the two universal adhesives under the same adhesive strategy immediately, no significant differences were found, except for the Prime&Bond Uni self-etch subgroup. This subgroup exhibited significantly lower bond strength compared to the other adhesive strategy subgroups within the same adhesive category, as well as all immediate subgroups of Tetric Uni adhesive ( p < 0.05). After aging, the Tetric Uni subgroups, which showed no significant differences among themselves, exhibited significantly higher bond strength compared to all Prime&Bond Uni subgroups. Among the Prime&Bond Uni subgroups after aging, the etch and rinse strategy subgroups had higher bond strength compared to the self-etch strategy subgroups, with statistical significance observed in the etch and rinse 3s subgroup. It is clear that aging had a detrimental effect on all universal adhesives tested, regardless of the adhesive strategy used. 
Failure patterns Table illustrates the distribution of failure modes and pre-test failures across all subgroups, presented as percentages. A notable interaction was observed between failure patterns and aging ( p = 0.006). However, no significant interactions were found between failure patterns and adhesive type or adhesive strategy ( p = 0.832, p = 0.706). Regardless of the subgroup analyzed, adhesive failure patterns predominated, followed by mixed patterns. After aging, adhesive failures increased, while cohesive failures decreased. Pre-test failure was analyzed in relation to adhesive type and aging condition using an independent t-test, but no significant differences were found ( p = 0.36, p = 0.46, respectively). Additionally, a one-way ANOVA revealed no statistically significant variation in pre-test failure rates with regard to adhesive strategy ( p = 0.16). Marginal adaptation results The level of agreement between the two sets of % CM data measurements was high, as indicated by an ICC value of 0.96, suggesting strong consistency within the examiner. Therefore, the average of both sets was used for further analysis. Table presents the average % CM values and standard deviations for the different universal adhesives tested using various adhesive strategies, both immediately and after aging. Statistical analysis using a two-way ANOVA showed that neither adhesive type nor adhesive strategy had a significant effect on the results ( p = 0.482 and p = 0.312, respectively). Moreover, no statistically significant interaction between these variables was found ( p = 0.766). When comparing the immediate % CM values to the aged values, immediate measurements were higher across all subgroups. Further statistical examination using paired-sample t-tests revealed significant differences between immediate and aged values for all subgroups ( p < 0.05) (Table ). 
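The paired design above reduces to a one-sample t-test on the immediate-minus-aged differences. A sketch with made-up %CM values for one subgroup of seven specimens:

```python
from math import sqrt
from statistics import mean, stdev

# hypothetical immediate vs. aged %CM for one subgroup (n = 7)
immediate = [92.1, 88.4, 90.3, 85.7, 91.0, 87.2, 89.5]
aged = [81.3, 79.8, 84.1, 76.5, 80.2, 78.9, 82.0]

diffs = [i - a for i, a in zip(immediate, aged)]
t = mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))
# two-tailed critical value for df = 6 at alpha = 0.05 is 2.447
print(round(t, 2), "significant" if abs(t) > 2.447 else "not significant")
```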
Representative SEM images for marginal adaptation evaluations, depicting both continuous and discontinuous margins, are shown in Fig. . The Pearson correlation coefficient between μTBS and %CM values revealed a moderately positive significant relationship (r = 0.472, p < 0.001). Dentin etching patterns evaluation The non-etched dentin surfaces displayed a dense smear layer covering the orifices of dentin tubules (Fig. , A). Dentin etching for 15 s resulted in the complete removal of both the smear layer and smear plugs (Fig. , B and C). However, when dentin was etched for only 3 s, the smear layer was only partially dissolved, leaving some residual smear plugs (Fig. , D and E). When examining dentin surfaces treated with the two tested universal adhesives in self-etch mode, a residual smear layer and smear plugs remained (Fig. , F-I), indicating that both mild adhesives were unable to fully dissolve the smear layer in self-etch mode. In contrast, using both universal adhesives in etch and rinse mode with a 15-second etching time almost completely eliminated the smear layer and smear plugs (Fig. , J-M). When the adhesives were used in etch and rinse mode with a 3-second etching time, the smear layer was dissolved, but some smear plugs remained (Fig. , N-Q).
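The correlation reported above can be reproduced from paired observations with the standard Pearson formula; a sketch with hypothetical specimen-level pairs:

```python
from math import sqrt

# hypothetical paired muTBS (MPa) and %CM observations
utbs = [38.2, 41.5, 29.7, 33.1, 45.0, 36.4]
pcm = [78.0, 83.5, 70.2, 76.8, 88.1, 74.9]

n = len(utbs)
mx, my = sum(utbs) / n, sum(pcm) / n
cov = sum((x - mx) * (y - my) for x, y in zip(utbs, pcm))
sx = sqrt(sum((x - mx) ** 2 for x in utbs))
sy = sqrt(sum((y - my) ** 2 for y in pcm))
r = cov / (sx * sy)
print(round(r, 3))  # strongly positive for this toy data
```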
In comparing the μTBS values of the two universal adhesives under the same adhesive strategy immediately, no significant differences were found, except for the Prime&Bond Uni self-etch subgroup. This subgroup exhibited significantly lower bond strength compared to the other adhesive strategy subgroups within the same adhesive category, as well as all immediate subgroups of Tetric Uni adhesive ( p < 0.05). After aging, the Tetric Uni subgroups, which showed no significant differences among themselves, exhibited significantly higher bond strength compared to all Prime&Bond Uni subgroups. Among the Prime&Bond Uni subgroups after aging, the etch and rinse strategy subgroups had higher bond strength compared to the self-etch strategy subgroups, with statistical significance observed in the etch and rinse 3s subgroup. It is clear that aging had a detrimental effect on all universal adhesives tested, regardless of the adhesive strategy used. Table illustrates the distribution of failure modes and pre-test failures across all subgroups, presented as percentages. A notable interaction was observed between failure patterns and aging ( p = 0.006). However, no significant interactions were found between failure patterns and adhesive type or adhesive strategy ( p = 0.832, p = 0.706). Regardless of the subgroup analyzed, adhesive failure patterns predominated, followed by mixed patterns. After aging, adhesive failures increased, while cohesive failures decreased. Pre-test failure was analyzed in relation to adhesive type and aging condition using an independent t-test, but no significant differences were found ( p = 0.36, p = 0.46, respectively). Additionally, a one-way ANOVA revealed no statistically significant variation in pre-test failure rates with regard to adhesive strategy ( p = 0.16). The level of agreement between the two sets of % CM data measurements was high, as indicated by an ICC value of 0.96, suggesting strong consistency within the examiner. 
Therefore, the average of both sets was used for further analysis. Table presents the average % CM values and standard deviations for the different universal adhesives tested using various adhesive strategies, both immediately and after aging. Statistical analysis using a two-way ANOVA showed that neither adhesive type nor adhesive strategy had a significant effect on the results ( p = 0.482 and p = 0.312, respectively). Moreover, no statistically significant interaction between these variables was found ( p = 0.766). When comparing the immediate % CM values to the aged values, immediate measurements were higher across all subgroups. Further statistical examination using paired-sample t-tests revealed significant differences between immediate and aged values for all subgroups ( p < 0.05) (Table ).
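The immediate-versus-aged comparison of % CM values was made with paired-sample t-tests. As a minimal sketch (the scores below are invented, not the study data), the paired t statistic is simply the mean within-subgroup difference divided by its standard error:

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(before, after):
    """Paired-sample t statistic: mean difference over its standard error."""
    diffs = [b - a for b, a in zip(before, after)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / sqrt(n))

# Hypothetical % continuous-margin scores for one subgroup (not the study data).
immediate = [82.0, 76.5, 88.0, 79.0, 85.5, 74.0]
aged      = [70.0, 68.5, 75.0, 71.0, 73.5, 66.0]

t = paired_t(immediate, aged)
print(f"t = {t:.2f} with df = {len(immediate) - 1}")
# A large positive t, compared against the t distribution with n - 1 degrees
# of freedom, supports a drop from immediate to aged values.
```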
Hydroxyethyl methacrylate (HEMA) is a hydrophilic monomer that enhances the wetting properties of dental adhesives on dental surfaces . However, its hydrophilicity makes it prone to hydrolysis and water absorption . HEMA in universal adhesives can negatively interact with MDP, reducing demineralization and impairing the chemical bond between MDP and dental surfaces . Cochinski et al. found that two HEMA-free universal adhesives had lower bonding performance than a single HEMA-free adhesive, which showed bonding properties similar to a control with 10-MDP and HEMA. This suggests that adhesive characteristics, such as pH and solvent composition, influence bonding performance. Additionally, a study found that the same HEMA-free adhesive tested in the present study showed better bond strength with the etch-and-rinse technique compared to self-etch, indicating it may benefit from short dentin etching . In this study, the etch-and-rinse strategy was performed using the air-drying technique, as recommended by the manufacturer of Tetric Uni. For Prime&Bond Uni, both air and blot drying were suggested, but only air drying was used to ensure standardization. Restorative system adaptation is influenced by factors like resin composite shrinkage, which causes stress and marginal gaps . Cavity size, type, and layering technique were standardized to account for shrinkage in the marginal adaptation test. The study found that while the type of universal adhesive and adhesive strategy influenced bond strength, they did not affect adaptation, partially rejecting the first and second null hypotheses. In contrast to Prime&Bond Uni, the bond strength of Tetric Uni was unaffected by the adhesive strategy used.
Etching dentin removes the smear layer and exposes collagen for resin impregnation, but also depletes calcium phosphate, which can hinder chemical bonding . In the etch-and-rinse mode, the primary bonding mechanisms are resin diffusion and hybridization . The self-etch mode, however, uses acidic monomers to condition dentin without depleting calcium phosphate, enhancing chemical bonding with MDP monomers . Research suggests that the bonding of mild universal adhesives to dentin is consistent across strategies , which may explain the lack of impact of adhesive strategy on Tetric Uni’s bond strength. Despite containing 10-MDP and PENTA monomers, Prime&Bond Uni exhibited the weakest bond strength in self-etch mode. 10-MDP forms a stable ionic bond with dentin calcium, improving adhesion , but PENTA’s erythritol phosphate group may hinder calcium bonding due to steric effects, despite strengthening the polymer network. The poor performance in self-etch mode could also be due to the isopropanol solvent, which has a lower dielectric constant than ethanol. This can increase the pKa of acidic monomers, reducing hydrolyzed species and impairing calcium interaction . In contrast to the results of the current study, Hardan et al. reported comparable bond strength values for the same HEMA-free adhesive tested in the current study when used in self-etch mode and after phosphoric acid etching for 3 s. The difference could be explained by various factors in experimental design, including the type of teeth used; Hardan et al. utilized bovine incisors. Additionally, the substrates for bonding were different, not only in terms of the type of teeth but also in that they bonded to the buccal dentin. Furthermore, the aging process differed from that of the current study. All of these factors could account for the disparities in results between the two studies. The study found that aging significantly impacted bond strength and marginal adaptation, rejecting the third null hypothesis.
This decline is attributed to hybrid layer deterioration and mechanical stress from the mismatch in thermal expansion between tooth and restoration . Despite differing compositions, all universal adhesives contained water, which can lead to hydrolysis of polymeric resins and enzymatic degradation of collagen fibrils after evaporation . Simplified hydrophilic adhesives act as semi-permeable membranes, promoting water absorption and accelerating interface degradation and hydrolysis . Repeated load cycling and temperature changes can cause micro-separations between the dentin surface and bonding agent, or plastic deformation of the adhesive interface . Stress from these changes is concentrated at the interface between the bonding agent and the top of the hybrid layer, with occasional fractures at the bottom. Both adhesives contain 10-MDP, which forms water-insoluble calcium salts. These salts may not immediately affect bond strength on self-etched dentin but are believed to protect the hybrid layer from hydrolytic degradation over time . However, when 10-MDP is mixed with other resin monomers in dentin adhesives, its ability to enhance bond stability is uncertain due to hydrolytic degradation of the ester component and other methacrylate monomers . Tetric Uni contains HEMA, which may compete with 10-MDP for calcium binding sites on apatite crystals, potentially weakening the chemical bond . Enzymatic activity plays a significant role in bond degradation. Even adhesives without a separate etching step fail to prevent the activation of dentin matrix metalloproteinases (MMPs) . Self-etch adhesives can expose and activate latent cysteine cathepsins, increasing enzymatic activity . This suggests that adhesives used in either etch-and-rinse or self-etch modes may impact bond durability due to proteolytic dentin activity. Prime&Bond Uni exhibited lower bond strength than Tetric Uni after aging, regardless of adhesive strategy.
Prime&Bond Uni lacks HEMA, a monomer that improves dentin wetting . Without HEMA, adhesives can phase-separate when exposed to water, causing nano-leakage in the polymerized adhesive layer . Tetric Uni’s HEMA and ethanol enhance wetting, reducing thickness and viscosity to maintain expanded collagen fibrils after solvent evaporation, improving monomer penetration into dentin. The etch-and-rinse strategy used in the study with air drying can collapse collagen fibrils, hindering adhesive diffusion . Prime&Bond Uni’s solvent, isopropanol, has a lower hydrogen bonding capacity than ethanol, making it less effective at breaking interpeptide hydrogen bonds that stabilize the matrix and fibrils. Isopropanol also has a lower stiffening rate than ethanol, potentially increasing matrix shrinkage and reducing resin infiltration . This could explain the significant drop in bond strength for Prime&Bond Uni compared to Tetric Uni after aging. The study found similar bond strength and marginal adaptation for both etching durations in the etch-and-rinse strategy for both adhesives. Previous studies suggested that 3-second H 3 PO 4 etching could enhance bonding , but this was not observed here, even after aging. Failure modes were only influenced by aging, and while some studies link bond strength to failure mode , others do not . Pre-test failure distribution was unaffected by study variables, suggesting random preparation issues . A significant, moderately positive correlation was found between bond strength and adaptation values in every subgroup, leading to the rejection of the fourth null hypothesis. Contrary to the previous findings, a recent study reported that there was no significant correlation between μTBS and in vitro marginal gap formation in dentin ; thus, further investigation is needed.
The dentin-etching patterns observed at the interfaces of the tested bonds show that the mild universal adhesives have limitations in penetrating the smear layer created by the 600-grit silicon carbide paper. Unetched remnants of the smear layer were found after applying the adhesives in the self-etch mode for 20 s, as recommended by the manufacturer. In contrast, etching for 15 s completely removed the smear layer, and a 3-second H 3 PO 4 etching also effectively eliminated most of it. While variations in dentin etching patterns and smear layer presence were noted across adhesive strategies, the impact of the adhesive strategy on bond strength depended on the adhesive composition. Despite differences in smear layer removal, no significant effect was observed on marginal adaptation for any of the adhesives. These findings highlight the need for further investigation. The study had limitations. First, applying phosphoric acid for only 3 s in larger cavities may be impractical. Second, the limited range of universal adhesives tested affects the generalizability of the results. Future research should explore etching times longer than 3 s but less than 15 s, as well as investigate enzymatic activity in dentin following various etching durations. Additionally, comparing this activity across different universal adhesives in the self-etch mode would be valuable. The choice of adhesive strategy significantly influenced the dentin bond strength of the HEMA-free, isopropanol-based universal adhesive, with etch and rinse demonstrating superior performance over self-etch. Bonding strategies exhibited consistent gingival margin integrity, regardless of aging. Aging compromised both the bond strength and adaptation of the tested universal adhesives. Three and 15-second etching times yielded comparable results, with extended H 3 PO 4 application providing no additional advantages. |
The Rheuma-VOR subanalysis reveals the substantial need for rheumatological care | 0fc58dbb-c192-428f-87c8-1922ae4bde43 | 11485139 | Internal Medicine[mh] | Various models of early-arthritis and screening clinics already exist to shorten the time from the onset of first symptoms to diagnosis and treatment initiation . It stands to reason that regional factors could influence the patient population and thus the structure of an early-arthritis clinic. Enormous progress has been made in the treatment of rheumatic diseases in recent years. With early diagnosis of rheumatoid arthritis, the disease course can be favorably influenced: long-lasting remission (no detectable disease activity) without the development of bone erosions or end-organ damage is potentially achievable. Timely diagnosis is therefore of great importance. With a delayed start of treatment due to the massive care deficit, however, irreversible damage to the joints and organs may already have occurred . The prospective multicenter network study "Rheuma-VOR", developed from ADAPTHERA and expanded to several federal states, pursued the goal of detecting three of the most common rheumatic diseases early and improving the quality of care . The results of the completed study were recently published . Current estimates suggest that in Germany alone, up to 1.2 million people suffer from rheumatoid arthritis (RA), psoriatic arthritis (PsA), or axial spondyloarthritis (axSpA) . The latency between the onset of first symptoms and diagnosis is one year for rheumatoid arthritis, three years for psoriatic arthritis, and as long as five years for axial spondyloarthritis .
The so-called "window of opportunity", in which targeted therapy (treat-to-target principle) can decisively improve the course and outcome of the disease, probably lies within the first months after disease onset . Delayed diagnosis can lead to enormous socioeconomic follow-up costs, for example through early reduced earning capacity or retirement . The early detection of sometimes acutely progressing systemic diseases from the spectrum of connective tissue diseases and vasculitides was not a primary aim of the Rheuma-VOR study. However, the referring physicians frequently supplemented the Rheuma-VOR screening forms with individual notes beyond the listed questions. In addition to elevated ANA titers (> 1:2560), symptoms such as Raynaud's phenomenon, facial erythema, weight loss, fever, and vasculitic skin lesions were recorded. Since these symptoms and laboratory parameters can point to an underlying inflammatory systemic disease, these patients were also scheduled for the clinic in order to recognize or avert a potentially dangerous course in time. The core approach of the Rheuma-VOR network is the concept of coordinated cooperation between primary care providers, rheumatologists, hospitals, and the respective rheumatology centers. Primary care providers could fill out a one- to two-page screening form with one of the three suspected diagnoses (RA, PsA, axial SpA). These forms were reviewed in the state-specific coordination office. In Lower Saxony, in addition to the faxed screening form, patients were contacted by telephone to record further key symptoms. If there was a strong suspicion of an underlying rheumatic disease, patients were referred to office-based rheumatologists participating in Rheuma-VOR, to the MVZ Weserbergland in Bad Pyrmont, or to the rheumatology outpatient clinic of the Medizinische Hochschule Hannover (Fig. ).
As part of the Rheuma-VOR project, a "Rheumabus" tour was held twice in Lower Saxony during the project period (2018 and 2019) as an open-access screening event. The tour planned for 2020 had to be cancelled because of the pandemic. Towns in northern Lower Saxony were primarily targeted. The aim was early detection of the three rheumatic diseases RA, PsA, and axial SpA. In total, more than 400 patients were screened during the tours; for 139 patients with suspected active rheumatic disease, a short-term appointment for further rheumatological work-up was arranged. As a further means of low-threshold patient recruitment, and to avoid long travel distances for patients, a monthly triage clinic was held both at the MVZ Weserbergland and on the premises of the rheumatology department of the Nord-West Krankenhaus Sanderbusch. Appointment requests from the Bad Pyrmont region were triaged at the MVZ in a screening clinic. Patients with appointment requests at the Sanderbusch hospital, or with requests to the Rheuma-VOR coordination office from the northern regions of Lower Saxony, were assessed in Sanderbusch in a 15-minute appointment by a rheumatology physician using history taking and a focused clinical examination. In both screening clinics, patients with suspected RA, PsA, or axSpA were forwarded to a specialist appointment at one of the cooperating rheumatology institutions. Both the screening forms submitted by primary care providers and the Rheumabus tours and triage clinics in Sanderbusch and Bad Pyrmont also yielded suspected diagnoses of rheumatic diseases other than the three conditions (RA, PsA, or axSpA) specified for the Rheuma-VOR project.
For connective tissue diseases, vasculitides, and polymyalgia rheumatica, timely diagnosis and initiation of treatment likewise represent an important factor for the further course of the disease. For this reason, appointments were also arranged for these cases in order to avert severe disease courses. The diagnoses made were not the primary endpoint of the data collection of the Rheuma-VOR project. If none of the three required diagnoses (RA, PsA, SpA) could be made, the patients counted as "unconfirmed" cases within the project. In Lower Saxony, a subset of the "unconfirmed" cases could be worked up. During the project, 2849 screening faxes were received in Lower Saxony. For 1915 of the incoming requests (67.2%), evaluation of the screening questions on the form raised the suspicion of an inflammatory rheumatic disease, so an appointment with a rheumatologist was arranged. The waiting time from receipt of the screening form to the rheumatology specialist appointment averaged 34 days in Lower Saxony. In total, 232 patients (12%) did not attend the scheduled appointment. Of the 1915 arranged appointments, 773 patients were diagnosed with RA (n = 496), PsA (n = 158), or axSpA (n = 119). Of the 910 "unconfirmed" diagnoses, data or alternative diagnoses are available to us for 245 patients (Table ). In 73 of the 245 patients (29.8%), a diagnosis was made that either corresponded to a degenerative joint disease or fulfilled the criteria for a chronic generalized pain syndrome. In 64 patients (26.1%), various forms of arthritis and spondyloarthritis were diagnosed that did not correspond to the ICD-10 diagnoses required for Rheuma-VOR. Diagnoses from the spectrum of connective tissue diseases and vasculitides were made most frequently (40.5%).
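The cohort percentages reported in this paragraph can be reproduced directly from the raw counts. A minimal sketch using only the figures stated in the text:

```python
# Consistency check of the cohort figures reported above (all from the text).
faxes = 2849                 # screening faxes received in Lower Saxony
appointments = 1915          # appointments arranged
no_shows = 232
confirmed = 496 + 158 + 119  # RA + PsA + axSpA diagnoses
unconfirmed = 910
with_followup = 245          # "unconfirmed" cases with a known final diagnosis

assert confirmed == 773
print(f"appointments/faxes: {appointments / faxes:.1%}")          # reported 67.2%
print(f"confirmed/appointments: {confirmed / appointments:.1%}")  # reported 40.4%
print(f"no-shows: {no_shows / appointments:.0%}")                 # reported 12%
print(f"followed-up 'unconfirmed': {with_followup / unconfirmed:.1%}")  # just under 27%
```

All derived percentages match the values reported in the text.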
In 22 patients, the diagnosis of undifferentiated connective tissue disease was made; in 12 cases, Sjögren's syndrome was diagnosed; systemic lupus erythematosus was diagnosed six times. Polymyositis, dermatomyositis, antisynthetase syndrome, large-vessel vasculitis, systemic sclerosis with lung involvement, and granulomatosis with polyangiitis were each diagnosed once. The largest share of the rheumatological diagnoses was polymyalgia rheumatica, which was diagnosed in 49 patients (20%) (Fig. ). Within the Rheuma-VOR project, 1915 appointments were arranged in Lower Saxony. In 773 cases (40.4%) of the presentations, the diagnosis of RA, PsA, or SpA could be made. The main focus of the present analysis was on the 910 "unconfirmed" diagnoses. For just under 27% of these cases (245 patients), the diagnosis actually made in the "unconfirmed" case is available to us. Among these 245 patients, 163 diagnoses (66.5%) correspond to definite rheumatological diseases (various forms of arthritis, spondyloarthritis, connective tissue diseases, and vasculitides). It can be assumed that a similar distribution might also exist among the 665 "unconfirmed" diagnoses for which no final diagnosis is available. The available data show that presentation to a rheumatologist was indicated in the majority of patients. Despite rapid appointment allocation (34 days on average), 232 patients (12%) did not attend the scheduled appointment. The most frequent reason for cancellation was the long travel distance to rheumatologists among older, immobile patients. The large number of incoming faxes with free-hand notes from primary care providers and the large number of diagnoses from the rheumatic disease spectrum point to a major structural care problem.
Although regional care networks with rapid, collegial specialist referral have already formed in response to the scarce rheumatological care capacity, these seem to function primarily in areas with practicing rheumatologists. In districts without rheumatologists, rapid colleague-to-colleague referral is much less practicable; a long-term solution must be created here. The prevalence of polymyalgia rheumatica increases with age. For rheumatologists, the diagnosis can be made quickly on the basis of typical symptoms such as heaviness of the shoulder and pelvic girdle, markedly elevated systemic inflammatory parameters, and occasionally constitutional symptoms such as weight loss, night sweats, and fever. A possible concomitant large-vessel vasculitis, which occurs in 5–30% of cases , is further evaluated by the rheumatologist with imaging when suspected. Late-onset rheumatoid arthritis must also be ruled out as a differential diagnosis. To what extent this was the case in the 49 patients diagnosed with polymyalgia rheumatica cannot be determined from the available data, which represents a limitation of this work. Because of the limited outpatient rheumatological care capacity in Lower Saxony, with clusters around some cities and a complete lack of rheumatologists in some districts, it is often proclaimed that polymyalgia rheumatica can also be treated by primary care providers. The fact that polymyalgia rheumatica was diagnosed in 49 patients within the Rheuma-VOR project, however, shows that it cannot be taken for granted that this disease is recognized and treated by primary care providers.
Polymyalgia rheumatica, a frequent diagnosis for rheumatologists, is a disease that some office-based primary care providers see only very rarely. The uncertainty about possibly overlooking another rheumatic disease, and thus worsening the course through a delayed start of treatment, can be correspondingly great. This concern is not entirely unfounded, since polymyalgia rheumatica can be associated with a large-vessel vasculitis in 5–30% of cases; such courses must be detected in time. Late-onset rheumatoid arthritis must also be distinguished as a differential diagnosis. The analysis shows that the need for rheumatological presentations is high, and for a large proportion of the referrals, presentation for work-up of a possible concomitant large-vessel vasculitis or of differential-diagnostic late-onset rheumatoid arthritis is justified. Given the scarce resources, the frequently diagnosed pain syndrome and osteoarthritis represent conditions that might be better detected in the primary care setting with the aid of tools and questionnaires, so that these patients are not channeled primarily into the bottleneck of the rheumatological consultation but are instead referred directly for pain-medicine evaluation. The diagnosis of undifferentiated connective tissue disease was made 22 times. In the university outpatient clinic, these diagnoses were made in patients with elevated ANA titers (> 1:640) and clinical symptoms compatible with the presence of a connective tissue disease (e.g., arthritis, sicca symptoms, Raynaud's phenomenon, vasculitic skin lesions, abnormal capillary microscopy) while ENA testing was still negative. However, no validated classification criteria exist for undifferentiated connective tissue disease, so the diagnosis made in other institutions need not correspond to the same criteria.
This represents a further limitation of the work, as it is not certain that the diagnosis of undifferentiated connective tissue disease would likewise have been made in another practice. As long as the capacity for rapid rheumatological consultation remains scarce, good selection of patients by primary care providers sets the crucial course for further care. Rural structures and long travel distances make timely diagnosis and initiation of treatment difficult, especially for older, immobile patients. The Rheumabus tours conducted in various towns in Lower Saxony were very well received. For immobile patients, the threshold for presenting for a screening examination at the parked Rheumabus in their home town, without a prior appointment, is lower. For the regular care of immobile patients who require immunosuppression for rheumatic disease, models of a mobile rheumatology practice may also need to be discussed in the future in regions with a low density of rheumatologists. Telemedicine can prospectively contribute to improved care. A first contact between patient and rheumatologist, however, can hardly be replaced by a video consultation because of the necessary clinical examination, including assessment of joint status (taking inflammatory rheumatic joint diseases as an example). Follow-up or routine check-up appointments, by contrast, would be possible with the aid of telemedicine. In summary, the data of our subanalysis show that Rheuma-VOR could also improve the (early) diagnosis of other inflammatory diseases and accelerate specialist referral, even though this program was not primarily aimed at these diseases.
The rheumatological care deficit in rural regions will intensify in the coming years as a result of demographic change. Screening forms for early-arthritis clinics should be modified and optimized to additionally record symptoms typical of connective tissue diseases and vasculitides, constitutional (B) symptoms, and inflammatory markers. The limited resource of the rheumatological consultation should be used optimally through better preselection of the patient population.
Endoscopic semi-blunt dissection technique is safe and effective for treating gastric submucosal tumors from the muscularis propria | 5803cf18-2dc9-482d-b500-d3bcca9bc0bf | 11823065 | Surgical Procedures, Operative[mh] | Gastrointestinal stromal tumors (GISTs) and leiomyomas originating from the muscularis propria of the gastric wall are the most common gastric submucosal tumors (SMTs). However, imaging methods such as endoscopic ultrasound (EUS) and computed tomography (CT) have difficulty differentiating these two tumors. In recent years, many guidelines have recommended resecting GISTs after they have been histologically diagnosed, regardless of the tumor diameter . According to the Chinese SMT consensus, for patients whose tumors measure ≤ 2 cm in diameter, are suspected of being a GIST or neuroendocrine tumor with a low risk of recurrence and metastasis and can possibly be completely resected, direct endoscopic resection can be performed . In recent years, endoscopic treatment has been gradually used for resecting gastric SMTs. The majority of gastric tumors originating from the muscularis propria are resected via endoscopic resection, with a complete resection rate ranging from 92.4% ~ 100% . Needle knives are the most commonly used instrument during endoscopic treatment. Based on the growth characteristics of the tumors originating from gastric muscularis propria, the conventional resection method involves fully extending the needle-shaped knife head, which allows it to more easily penetrate the muscularis propria while stripping the muscle layer of the tumor. 
In practice, we found that during the operation, when the needle knife is retracted, the metal surface of the knife tip can be placed against loose tissue for high-frequency electric dissection, while the endoscope can be used to carry the head end of the plastic knife handle along the fissure created by the high-frequency electric incision for blunt push dissection, reducing damage to the muscularis propria. We named this technique the semi-blunt dissection method. No studies have compared the treatment efficacy and safety of the conventional method and semi-blunt dissection. This study compared the treatment efficacy and safety of the two methods for treating gastric tumors originating from the muscularis propria, especially GISTs. Participants A total of 113 patients who underwent endoscopic resection of gastric SMTs originating from the muscularis propria between 2017 and 2022 were retrospectively analyzed. This study was approved by the Ethics Committee of Peking University People’s Hospital. The inclusion criteria for the study were as follows: (1) age ≥ 18 years; (2) SMT evaluated by endoscopy, with EUS or CT assessment revealing that the tumor originated from the muscularis propria and that at least half of the tumor was protruding into the gastric cavity; and (3) a tumor diameter ≥ 10 mm and ≤ 40 mm. The exclusion criteria for the study were as follows: (1) upper gastrointestinal lesions measuring < 10 mm or > 40 mm by EUS; (2) ≥ 1/2 of the tumor protruding out of the gastric cavity; (3) portal hypertension; (4) a history of upper gastrointestinal surgery; and (5) a history of c-kit inhibitor use (for GISTs). Procedure Two physicians with more than five years of experience in endoscopic submucosal excavation performed the operation. An Olympus 290 endoscope was used, and the treatment instrument, a DualKnife (KD-650 L/U/Q, Olympus) was used throughout the whole procedure without replacement.
This procedure was performed under anesthesia with tracheal intubation. The process includes: 1) Lesion marking; 2) A small submucosal injection; 3) Mucosal incision; 4) Removal of the submucosa around the tumor to expose the edge of the muscularis propria tumor; 5) Careful removal of the tumor, minimizing the damage to the muscularis propria. At this point, different methods were used for Group A and Group B (Fig. ). The conventional method was used for Group A, which consisted of 73 patients; the knife head was fully extended during the cutting and peeling process, after which the remainder of the conventional surgical protocol was conducted. Group B, which consisted of 40 patients, underwent the semi-blunt dissection method. First, if the capsule had no ulcerations, the knife head was retracted. Then, the contact surface of the metal tip of the knife head was pressed to the cutting surface with slight pressure, and a high-frequency current was applied to create a blunt separation fissure on the cutting surface, while the plastic knife handle was used for blunt pushing and dissection. This method was used when the boundary of the tumor was clear and the tissues beneath the tumor were loose. The remaining surgical procedures were performed according to the conventional method. 6) Full-thickness wounds (gastric wall damage was defined as perforation of the gastric muscularis propria) were closed with titanium clips and/or a ligation device (a kind of nylon loop) by endoscopic suturing. If the wound could not be closed endoscopically, laparoscopic suturing with threads was employed. 7) Removal of the resection specimen through the oral cavity. 8) Other operations: Peritoneocentesis was performed when pneumoperitoneum caused a significant increase in abdominal pressure. In some patients, suspension with dental floss was used to expose the submucosal dissection surface and the edges of tumors fully encased within the muscularis propria.
Histological diagnosis
After the specimen was fixed in formalin, it was divided into 3 mm sections to determine the maximum tumor diameter and resection margin. Histopathological results were confirmed by hematoxylin and eosin (H&E) staining and immunohistochemistry (IHC). For GISTs, tumor risk was categorized according to the modified Fletcher classification. The histological examinations were performed by a pathologist with more than 8 years of experience.

Definitions
(1) Histological resection: R0 resection was defined as resection with a clear margin under the microscope; R1 resection as gross tumor resection with a microscopically positive margin; and R2 resection as residual tumor visible to the naked eye. (2) Complete endoscopic resection was defined as resection of the entire tumor without residual tumor; this included endoscopic R0 and R1 resection. (3) Delayed bleeding was defined as postoperative bleeding with hematemesis or melena and a decrease in hemoglobin of 20 g/L. (4) Recurrence was defined as a submucosal tumor-like bulge found on endoscopy, a clearly visible tumor on CT, or biopsy results at the resection site suggestive of recurrent tumor cells.

Follow up
Patients with a pathological diagnosis of GIST underwent endoscopy at 4 and 12 weeks after surgery, endoscopy and CT examination at 24 weeks after surgery, and endoscopy and CT examination every year thereafter. Patients with pathologically diagnosed leiomyoma underwent endoscopy at 4 and 12 weeks and then endoscopy and CT examination every year thereafter.

Statistical analysis
SPSS 22.0 software was used for the statistical analysis. Categorical variables were expressed as counts and percentages and were compared using the chi-square test or Fisher's exact test, as appropriate; the t-test was used to compare continuous variables expressed as means. P < 0.05 was considered to indicate statistical significance.
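The "as appropriate" choice between the chi-square and Fisher's exact test is conventionally driven by the expected cell counts; the sketch below illustrates that decision rule with synthetic counts (not the study data), and the threshold of 5 is the usual rule of thumb rather than anything stated by the authors.

```python
# Illustrative sketch of the categorical-test selection described above:
# chi-square when expected cell counts are adequate, Fisher's exact test
# otherwise. The 2x2 table used here is synthetic, not the study data.
import numpy as np
from scipy import stats

def compare_proportions(table):
    """Return (test_name, p_value) for a 2x2 contingency table."""
    table = np.asarray(table)
    # chi2_contingency also yields the expected frequencies
    chi2, p, dof, expected = stats.chi2_contingency(table, correction=False)
    if (expected < 5).any():
        # small expected counts: fall back to Fisher's exact test
        odds, p = stats.fisher_exact(table)
        return "fisher", p
    return "chi-square", p

name, p = compare_proportions([[30, 43], [5, 35]])  # synthetic counts
print(name, round(p, 4))
```

With these synthetic counts all expected frequencies exceed 5, so the chi-square branch is taken; shrinking any margin below that threshold switches the function to Fisher's exact test.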
Perioperative patient characteristics and histology of gastric submucosal tumors (Table )
The conventional method was used for Group A, which consisted of 73 patients; Group B, which consisted of 40 patients, underwent the semi-blunt dissection method. There was no significant difference between the two groups in age, sex, or lesion location. Intraoperatively, the maximum diameter of gastric muscularis propria damage was significantly greater in Group A than in Group B (1.06 ± 0.48 cm vs. 0.46 ± 0.09 cm, p < 0.001); there was no statistically significant difference between the two groups in the use of endoscopic or laparoscopic suturing. Postoperatively, the average length of hospitalization in Group A was longer than that in Group B (7.66 ± 2.90 days vs.
5.80 ± 1.96 days, p < 0.001); there was no significant difference between the two groups in the duration of postoperative fever, and no delayed bleeding or perforation occurred in either group. No recurrence was observed during the follow-up period. On histological evaluation, the maximum pathological size of the resected lesions in Group B was significantly greater than that in Group A (1.95 ± 1.43 cm vs. 1.26 ± 0.70 cm, p = 0.006); there was no significant difference between the two groups in histological diagnosis or the percentage of histologically positive resection margins.

Perioperative patient characteristics and histology of GISTs (Table )
Group A, treated with the conventional method, consisted of 37 patients; Group B, treated with the semi-blunt dissection method, consisted of 16 patients. Among the 53 patients with GISTs, there was no significant difference in age, sex, or lesion location between the two groups. The maximum diameter of gastric muscularis propria damage in Group A was significantly greater than that in Group B (1.20 ± 0.49 cm vs. 0.48 ± 0.07 cm, p < 0.001). There was no significant difference between the two groups in the use of endoscopic or laparoscopic suturing. The average duration of hospitalization in Group A was significantly longer than that in Group B (8.43 ± 3.55 days vs. 6.25 ± 1.98 days, p = 0.006). There was no significant difference in the duration of postoperative fever between the two groups, no delayed bleeding or perforation occurred, and no recurrence was observed during the follow-up period. On histological evaluation, there was no significant difference between the two groups in histological diagnosis, maximum pathological diameter of the resected lesion, or the percentage of histologically positive resection margins.
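The group comparisons above can be sanity-checked from the reported summary statistics alone. The sketch below uses `scipy.stats.ttest_ind_from_stats` with a pooled-variance two-sample t-test; the authors' SPSS settings may differ in detail, but the reported conclusions are reproduced.

```python
# Recomputing two of the reported comparisons from summary statistics
# (mean, SD, n). Assumes a pooled-variance two-sample t-test; the
# authors' exact software settings may differ slightly.
from scipy import stats

n_a, n_b = 73, 40  # Group A (conventional), Group B (semi-blunt)

# Maximum diameter of muscularis propria damage (cm): 1.06±0.48 vs 0.46±0.09
res_damage = stats.ttest_ind_from_stats(1.06, 0.48, n_a, 0.46, 0.09, n_b)

# Length of hospitalization (days): 7.66±2.90 vs 5.80±1.96
res_stay = stats.ttest_ind_from_stats(7.66, 2.90, n_a, 5.80, 1.96, n_b)

print(res_damage.pvalue, res_stay.pvalue)  # both < 0.001, as reported
```

Both p-values fall below 0.001, matching the significance levels reported in the results.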
In our study, the semi-blunt dissection group had less gastric muscularis propria damage and a shorter length of hospitalization, so the associated costs were reduced. There are many reports on the endoscopic treatment of gastric tumors originating from the muscularis propria, most of which have focused on comparing endoscopic and laparoscopic resection ; however, few reports have compared different endoscopic resection methods. Blunt dissection is commonly used in surgical operations and refers to the use of the scalpel handle, hemostatic forceps or fingers to separate soft tissues. It is often used to strip loose connective tissues, such as those seen in normal tissue gaps, loose adhesions, and benign tumors or cysts in the extraperitoneal space. This technique can prevent accidental injury to nerves and blood vessels and reduce the loss of tissue function. There are few reports on the application of blunt dissection in endoscopic treatment, and the methods vary. Most such studies are case reports. One reported that, during dissection, the tissue under the tumor was directly stripped using titanium clips supplemented by a rubber band; however, this method is only suitable for cases in which only a small part of the tumor remains attached and the tissue under the tumor is relatively loose, or for locations where operation is difficult . Another case reported the use of a lab-made scissor-like blunt dissection instrument for the incision and dissection of the esophageal submucosal tunnel. Yet another case report described aspiration of a gastric fundus lesion into a transparent cap during lesion dissection, which itself was performed using the anterior and posterior movement of the endoscope coupled with pushing of the transparent cap . Blunt dissection of the esophageal submucosal tunnel through the transparent cap has also been reported .
Compared with blunt dissection alone, the semi-blunt dissection method used in this study involves high-frequency, shallow electrocautery to create a gap for blunt dissection with the plastic knife handle, facilitating rapid blunt dissection; even at the edge of the tumor, slightly compact connective tissue can be separated with this method. At the same time, appropriate selection of the electrosurgical parameters enables coagulation of small blood vessels at the cutting surface, which reduces the risk of bleeding relative to blunt dissection alone, keeps the dissection field clear, and reduces both the difficulty of subsequent dissection and the risk of perforation. In the analysis of all patients, the maximum diameter of the resected lesions in Group A was smaller than that in Group B (1.26 ± 0.70 cm vs. 1.95 ± 1.43 cm, p = 0.006), but the maximum diameter of gastric muscularis propria damage in Group A was greater than that in Group B (1.06 ± 0.48 cm vs. 0.46 ± 0.09 cm, p < 0.001). Thus, despite the larger lesions, there was significantly less gastric muscularis propria damage in Group B than in Group A. The procedure used for Group A involved a sharp incision. Because the tumor was embedded in the muscularis propria of the stomach with little tissue between the tumor margin and the healthy tissue, submucosal injection was difficult, and puncture was highly likely, as the needle-like knife was kept protruded to make the sharp incision. In this situation, use of an insulated-tip (IT) knife could reduce the risk of perforating the muscularis propria, but it would increase the cost for the patient. In Group B, careful sharp incisions were performed when the tumor boundary was not clear.
When the tumor boundary was clear and the tissues beneath the tumor were loose, the needle knife head was retracted and semi-blunt dissection was performed, which not only reduced the risk of the knife tip piercing the muscularis propria but also allowed the wound to close easily; furthermore, there was no need to substitute an IT knife, keeping the costs to the patient relatively low. The average postoperative hospitalization time for Group B was shorter than that for Group A (5.80 ± 1.96 days vs. 7.66 ± 2.90 days, p < 0.001). Because the wound area in Group B was small, the wound was easy to close, the postoperative dietary restrictions were shorter, the hospitalization time was shortened, and the associated costs were reduced. Among the 53 patients with GISTs, Group B also experienced less damage to the gastric wall (1.20 ± 0.49 cm in Group A vs. 0.48 ± 0.07 cm in Group B, p < 0.001) and a shorter mean duration of hospitalization (8.43 ± 3.55 days vs. 6.25 ± 1.98 days, p = 0.006), suggesting that the semi-blunt dissection method can also reduce the wound surface area, shorten the length of hospitalization, and reduce the cost for patients with GISTs. When excising muscularis propria tumors, novice doctors often experience a greater psychological burden when a perforation occurs. The proposed method is more suitable for such doctors because of the increased safety conferred by retraction of the knife tip . The operation is relatively simple, the risk of perforating the muscularis propria during surgery is low, closure is easy, the generation of a large pneumoperitoneum is avoided, and the length of hospital stay is reduced. Across the entire patient cohort, the R1 resection rate did not significantly differ between Groups A and B (17.8% vs. 5%, p > 0.05).
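The R1-rate comparison can be checked by reconstructing the counts from the reported percentages (13/73 ≈ 17.8% and 2/40 = 5%; these counts are our inference, not figures taken directly from the paper's tables) and applying Fisher's exact test:

```python
# Checking the reported R1-resection comparison for the whole cohort.
# The counts 13/73 and 2/40 are reconstructed from the stated
# percentages (17.8% and 5%); they are an inference, not values
# quoted directly from the paper's tables.
from scipy.stats import fisher_exact

r1_a, n_a = 13, 73   # Group A: 13/73 ≈ 17.8% R1 resections
r1_b, n_b = 2, 40    # Group B: 2/40 = 5% R1 resections

table = [[r1_a, n_a - r1_a],
         [r1_b, n_b - r1_b]]
odds_ratio, p = fisher_exact(table, alternative="two-sided")
print(round(p, 3))  # p > 0.05, consistent with the reported non-significance
```

The two-sided p-value stays above 0.05, in line with the authors' conclusion that the resection-margin outcome did not differ between the methods.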
Among the 53 GIST patients, for whom resection margin requirements are higher, there was also no significant difference in the R1 resection rate (35.1% vs. 12.5%, p > 0.05), and no patient in either group underwent R2 resection. This finding suggests that the effect of the two resection methods on the resection margin did not differ significantly. In most cases, the reason for a histological resection grade other than R0 was capsule injury caused by the electrosurgical knife during endoscopic resection. Many studies have reported that, in the treatment of gastric tumors originating from the muscularis propria, the rate of R1 resection is greater with endoscopic than with laparoscopic resection; nevertheless, in previous studies of endoscopic resection of GISTs, the overall postoperative recurrence rate was low (0–2.7%) . Another study showed that lesion size and mitotic count, but not R1 resection, were risk factors for recurrence . In this study, none of the patients who underwent R1 resection, including those with GISTs, experienced recurrence or metastasis during follow-up, which is consistent with previous findings. However, all the patients with GISTs in this study had a very low risk of recurrence. Therefore, these results suggest only that, in patients with a very low recurrence risk, including those with low-recurrence-risk GISTs, the effects of the two endoscopic treatment methods on the resection margin do not differ significantly. Patients with higher-recurrence-risk GISTs who have undergone R1 endoscopic resection need to be included in further research. There are still some limitations to this study. It was a single-center retrospective study; future prospective studies are needed to further evaluate the efficacy and safety of semi-blunt dissection. Among SMTs, GISTs are more likely to be malignant than leiomyomas and therefore should be the greater focus of studies on the effect of endoscopic treatment.
The number of GIST patients enrolled in this study was small and should be increased in future studies. In addition, the GIST patients enrolled all had a low or very low risk of recurrence, so it is difficult to fully assess the true treatment efficacy and recurrence risk of the two methods for resecting intermediate-risk GISTs. Follow-up studies are needed to enroll patients who have undergone endoscopic resection of intermediate-risk GISTs and to compare the two endoscopic treatment methods. In conclusion, the semi-blunt dissection method has certain advantages in the endoscopic resection of gastric tumors originating from the muscularis propria, including a smaller extent of gastric muscularis propria damage and a shorter postoperative hospital stay.
Identifying research gaps and priorities for African family medicine and primary health care

The Primary Care and Family Medicine Network (PRIMAFAMED) represents a well-established regional network of academic family medicine departments in the sub-Saharan African region. Country representatives who participated in the 2023 PRIMAFAMED meeting in Johannesburg, South Africa, on 15 and 16 August 2023 revisited the recommendations from a 2014 network meeting, which described research priorities based on what was known almost a decade ago. Nineteen people from 10 African countries and two European countries participated in the workshop. The authors of this report are journal editors from two African primary health care (PHC) journals, the African Journal of PHC and Family Medicine (PHCFM) and the South African Family Practice Journal (SAFP), who facilitated the workshop during the 2023 meeting. A three-step process led to this report on the final consensus.

The analysis of African family practice research
Two recent reports described an analysis of research published during 2020–2022 in the PHCFM and SAFP journals. Although publications had a median number of three authors, most research was derived from only one institution or discipline, indicating a need for collaborative and interdisciplinary research. Most authors were from South Africa (80%), implying a relative lack of published research from other countries in sub-Saharan Africa. The research mainly focused on health services, with little on broader PHC issues such as community engagement or multisectoral action. Clinical research focused on infectious diseases, non-communicable diseases, and maternal and women's health, with little focus on mental health care, injury and trauma, palliative care and rehabilitation.
Service delivery research addressed person-centredness and comprehensiveness of care, and noticeable gaps included research on continuity, care coordination, effectiveness and efficiency. There was a neglect of research on children, and almost all studies were descriptive, with little publication of observational or experimental work.

The World Health Organization's Afro Regional perspective
Dr Karamagi highlighted several implications for education and research on PHC needs for African health systems. He emphasised the link between a PHC approach, universal health coverage (UHC) and meeting the sustainable development goal of 'Good health and well-being for all ages'. He argued that this requires 'tangible hardware' (health workforce, infrastructure and products), 'tangible software' (delivery and information systems, finance and governance processes) and, importantly, 'intangible software' (relationships, networks, values and norms to inform beliefs and practices). In terms of research priorities, he highlighted the need to develop and evaluate different models of care to strengthen service delivery, to consider what kind of health workforce is needed, and the contribution of family physicians to strengthening healthcare teams.

Research priorities from the 6th Primary Care and Family Medicine Network meeting
The authors of the 2014 report highlighted strengths such as growing support and leadership to drive research activities and an established culture of networking and collaboration. Weaknesses that hindered the growth of primary care research included limited capability and capacity, failure to publish and disseminate findings, poor coordination, lack of innovation in research projects and study designs, and lack of support from policymakers in academic and government spheres.
The report highlighted strategies to build capacity for primary care research from three perspectives: regional and international networks, individual countries and educational institutions and family medicine departments.
The combined matrix in , which amalgamated the WHO PHC measurement framework and primary care research domains, was used to integrate the group work feedback.

Basic research
The groups did not specify typical basic research issues, such as the need to develop research tools or instruments. However, they highlighted the need to diversify our methodological approaches, including building bridges with complementary research fields, such as public health, and ensuring that interdisciplinary collaborations investigate cross-cutting focus areas. This will enable research capacity-building and the development of research methods and paradigms, such as implementation science and action research.

Clinical research
Although all aspects of the burden of disease are relevant, there may be a need to consider multimorbidity, mental health, and violence and trauma as relatively neglected areas. In addition, more attention should be given to the extremes of age, in terms of children and older adults.
Researchers should continue their work on disease prevention and behaviour change, and focus more on palliative care and rehabilitation. These new priority areas also offer opportunities for interdisciplinary research teams.

Health services research
Service delivery processes
There is a desire to describe different PHC models across various contexts, mainly focusing on defining the package of services and the role of family physicians and other providers in these models of care. There was a specific emphasis on raising the profile of family physicians to allow policymakers to grasp the additional value they bring, compared to non-specialist primary care clinicians, in strengthening teams and services. Focus areas also include community engagement and participation, understanding palliative care and rehabilitation in these models of care, and the leadership capabilities needed in service delivery. The groups also considered broader societal and environmental forces, such as planetary health, which may impact the quality and resilience of PHC services and facilities.

Service delivery outputs
The groups agreed that research must centre on the core primary care functions, such as access, coordination, continuity, comprehensiveness and person-centredness.

Health systems research
PHC components
We must expand our focus to broader PHC research, including community empowerment and multisectoral action as key PHC components. These components comprise: integrated health services, focusing on combining primary care and public health functions to deliver comprehensive care; multisectoral policies and actions to address broader determinants of health, such as social, economic and environmental factors; and the empowerment of individuals and communities to advocate for health-promoting policies and to collaborate actively in the development of health and social services.
Health system structures
Research on the financing of primary care could build on Starfield's previous work, especially from the African region in the post-COVID-19 era. More updated evidence is needed on the funding of PHC.

Health system inputs
The group work especially highlighted the following facets of health system inputs: the health workforce (team composition, tracking graduates and other human resources for health issues), health information (using local and global data to inform local priorities and planning, especially to assess community health needs and to implement community-orientated primary care) and digital technologies for health (including mobile apps and drones to enhance rural care and agricultural practices).

Health system objectives (outcomes and impact)
Participants confirmed the need to continue evaluating the role of family physicians in improving health system performance and health outcomes.

Educational research
The call to transform health professions education to better cater to population health needs necessitates a departure from conventional undergraduate and postgraduate teaching. This transformation entails a more comprehensive approach to integrating community orientation into the curriculum through experiences embedded in PHC services. Rather than solely focusing on individual patient care, this new paradigm emphasises the broader societal and environmental factors impacting health outcomes. By embracing transformative education, health professionals can better understand population health dynamics and collaborate more effectively with PHC stakeholders to address these complex issues. A differentiated approach to models of care should span both the public and private healthcare sectors, which warrants work on understanding family physician roles, training and career paths in these different models and sectors.
Health system inputs

The group work especially highlighted the following facets of health system inputs: health workforce (team composition, tracking graduates and other human resources for health issues), health information (using local and global data to inform local priorities and planning, especially to assess community health needs and to implement community orientated primary care) and digital technologies for health (including mobile apps and drones to enhance rural care and agricultural practices).

Health system objectives (outcomes and impact)

Participants confirmed the need to continue evaluating the role of family physicians in improving health system performance and health outcomes.
The call to transform health professions education to better cater to population health needs necessitates a departure from conventional undergraduate and postgraduate teaching. This transformation entails a more comprehensive approach to integrating community orientation into the curriculum through experiences embedded in PHC services. Rather than solely focusing on individual patient care, this new paradigm emphasises the broader societal and environmental factors impacting health outcomes. By embracing transformative education, health professionals can better understand population health dynamics and collaborate more effectively with PHC stakeholders to address these complex issues. A differentiated approach to models of care should span both public and private healthcare sectors, which warrants work around understanding family physician roles, training and career paths in these different models and sectors.

The group work findings were organised according to capacity-building topics and interventions, and the stakeholders or actors who should help implement the proposed interventions.

Capacity-building topics

Foundational topics include building capability across more diverse study designs and methods, and data analysis using software packages. Competencies in academic writing and peer-reviewing skills were suggested, as well as grant writing and science communication to influence policy and advocacy. There is a need to develop capacity around using data in clinical governance, such as in clinical audits and quality improvement. Skills in coordinating larger multicentre teams and projects will attract more substantive funding and produce higher-quality evidence. Such teams may also include methods experts with specific expertise in study designs to answer complex questions.
There is a particular need to build postgraduate research supervision and examination capacity, especially around doctoral research.

Capacity-building strategies

Interventions at the collective level include educational interventions, such as online and conference workshops, and the sharing of resources. There is a need to foster collaborative research projects, including practice-based research networks and multi-country projects. The concept of research lab models such as those encountered in laboratory sciences was suggested. In these models, the principal investigator grows a laboratory team centred around a nucleus of established and senior researchers supervising a mixed group of early career researchers ranging from master's to postdoctoral levels. These models help build a network of researchers at different career stages, facilitating peer and near-peer learning and mentoring. These models may be adapted to PHC research teams, allowing multidisciplinary teams at institutional and national levels to develop around specific areas of interest. Such teams may also serve as incubators for research by bringing together researchers and clinician-scholars with different areas of expertise, including implementation science methodologies. Postgraduate research interventions linked to academic institutions include collaborations around building supervision capacity, like initiatives by the South African Academy of Family Physicians' (SAAFP) PhD special interest group, the Consortium for Advanced Research Training in Africa (CARTA) and other global South-South collaborations. Other interventions include co-badged degrees, cohort PhD programmes, and shared supervision models. Growing a scholarly nucleus of emerging and established researchers at departmental, national and regional levels will help to strengthen the discipline and increase its profile. There is also a need to develop research supervision expertise to support dissertation and publication outputs.
This range of research and supervision-related collaboration models will attract more substantive funding to support capacity building, more complex research designs and dedicated time for clinician-scholars to focus on their research.

Stakeholders with the potential to implement these interventions

The Primary Care and Family Medicine Network is a central stakeholder to guide the implementation of these proposed interventions. It should expand its current offering of workshops to include online platforms to share resources and increase the reach of its listserv. The participants suggested that the network should evaluate the impact of capacity-building activities in supporting researchers at different career stages. Such evaluations should seek the views of previously underrepresented countries. At departmental and national levels, family medicine departments should cultivate research teams consisting of clinician-scholars and researchers from different disciplines and career stages. Departments should also map supervision capacity within and across universities to identify centres of expertise and those needing support. Such a mapping exercise will develop a supervisor database. Academic departments should explore the options available for clinicians and jointly appointed academics to ensure access to dedicated research funding and time to grow as clinician-scholars. Lastly, departments should also examine the possibilities in their institutions to fund article processing charges (APCs) and data analysis software access. The workshop participants listed suggestions for scholarly journals in family medicine and PHC, including supporting authors from PRIMAFAMED with reduced APCs. They also suggested a new special collection on research methods and supervision competencies to build on the previously published PHCFM series.
The journals could also commission reviews on under-researched areas and commentaries that summarise published research findings in the African region.
Much has changed since the 2014 report, including the 2018 Astana PHC recommitment, the 2022 WHO PHC measurement framework release, and a global pandemic that has challenged the foundations of all nations and their health and economic sectors. When comparing the two snapshots of 2014 and 2023 from the PRIMAFAMED perspective, it is worth reflecting on our research priorities and whether we need to adjust our strategies to meet the changing needs of our network and region. describes core priorities and strategies to inform the network's agenda over the next decade. This report is limited as it captures the output of a single workshop whose participants only represented part of the sub-Saharan African region. Other methods, such as the recently used Delphi design, may be considered for future consensus-building activities. However, the workshop and its report represent the work of key PRIMAFAMED role players with the agency to implement the identified strategies.
This workshop report provides an updated PRIMAFAMED assessment of current research and capacity-building priorities for family medicine and primary care in African PHC-orientated health systems. Research priorities have expanded to a comprehensive PHC perspective. Despite some progress, there remain opportunities for the network, its affiliated journals, and other partners and stakeholders to strengthen primary care research capability and capacity in the African region.
Prognostic value of isolated tumor cells and micrometastasis of lymph nodes in invasive urinary bladder cancer

Determining lymph node (LN) metastasis in surgically resected specimens is an important part of pathological examination. It is usually determined by examination of hematoxylin and eosin (H&E)-stained slides. With the easy accessibility of immunohistochemistry (IHC), immunostaining for cytokeratin can reveal very small nodal metastases, such as isolated tumor cells (ITC) and micrometastasis. ITC is defined as a single cell or a cluster of fewer than 200 tumor cells, or a deposit less than 0.2 mm in diameter, with little stromal reaction; micrometastasis is a metastasis greater than 0.2 mm but less than 2 mm in size, according to the 8th edition of the American Joint Committee on Cancer (AJCC) staging system. This definition of ITC and micrometastasis is generally applied to cancers of all organs, but nodal staging differs by cancer site. ITCs are staged as N1 or higher in melanoma and Merkel cell carcinoma, whereas in breast and gynecological cancers, ITCs are staged as N0(i+). Micrometastasis was first incorporated into the breast cancer staging system, where it is reported as pN1mi; in other organs, however, micrometastasis has so far been staged as pN1. The significance of ITC and micrometastasis in many cancer sites is unknown, and further studies are needed. Urinary bladder cancer (UBC) is the 10th most common and 13th most deadly cancer in the world. Standard treatment for muscle-invasive UBC includes radical cystectomy (RC) and lymphadenectomy, with or without neoadjuvant chemotherapy, according to the NCCN guidelines. Adjuvant chemotherapy (AC) is performed in UBC patients with extravesical extension or lymph node metastasis. There is no established guideline for ITC or micrometastasis in UBC.
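The AJCC size criteria above reduce to a simple rule; the sketch below expresses it as a didactic Python helper (the function name and interface are illustrative, not part of any clinical software):

```python
def classify_nodal_deposit(diameter_mm=None, cell_count=None):
    """Classify a nodal tumor deposit by the AJCC 8th-edition size criteria.

    ITC: a single cell or a cluster of fewer than 200 cells,
         or a deposit no larger than 0.2 mm in diameter.
    Micrometastasis: larger than 0.2 mm but not larger than 2.0 mm.
    Macrometastasis: larger than 2.0 mm.
    """
    if cell_count is not None and cell_count < 200:
        return "ITC"
    if diameter_mm is None:
        raise ValueError("diameter required when cell count is >= 200 or unknown")
    if diameter_mm <= 0.2:
        return "ITC"
    if diameter_mm <= 2.0:
        return "micrometastasis"
    return "macrometastasis"
```

Either criterion (cell count or diameter) suffices for ITC, which is why the function accepts both observations independently.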
Only a few studies, with reported prevalences of 3.3–13.7%, have examined ITC and micrometastasis detected by IHC in UBC. These studies are limited by small cohorts, short follow-up periods, and insufficient analysis of prognostic impact. In this study, nodal micrometastasis and ITC were investigated using IHC in a relatively large UBC cohort that underwent RC with lymphadenectomy. We evaluated the influence of micrometastasis and ITC on TNM staging and clinical outcome, together with related clinicopathological characteristics.

1. Case selection

Patients who received RC with lymph node dissection (LND) for invasive UBC between January 2013 and December 2015 at Ewha Womans University Mokdong Hospital (Seoul, Republic of Korea) were eligible for this study. We excluded patients who received neoadjuvant chemotherapy, had non-urothelial carcinoma histology (n = 5), or died shortly after the operation (n = 2) owing to surgical complications such as post-operative infection and cardiovascular events. The excluded non-urothelial cases comprised 2 squamous cell carcinomas, 2 adenocarcinomas, and one small cell carcinoma, since the pathogenesis and clinical course differ between urothelial and non-urothelial carcinomas. Clinical data, including age, sex, adjuvant chemotherapy (AC) status, data related to recurrence and metastasis, follow-up data, and survival outcome, were assessed by reviewing electronic medical records. A total of 124 cases of invasive UBC treated with radical surgery were included in this study. Operations were performed by three urologic surgeons using similar cystectomy techniques and a standard pelvic lymph node dissection method. AC was given within 12 weeks after surgery in patients with histologically confirmed locally advanced disease (pT3 or pT4), regional LN metastases (pN+), suspected residual tumor (incomplete surgery), or lymphovascular invasion.
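As a reading aid, the AC eligibility criteria just described can be sketched as a simple predicate (a hypothetical helper for illustration only; actual treatment decisions involved clinical judgment):

```python
def eligible_for_adjuvant_chemo(pt_stage, pn_positive=False,
                                residual_tumor=False,
                                lymphovascular_invasion=False):
    """True when any of the study's AC criteria is met: locally advanced
    disease (pT3/pT4), regional LN metastases (pN+), suspected residual
    tumor, or lymphovascular invasion."""
    return (pt_stage in ("pT3", "pT4") or pn_positive
            or residual_tumor or lymphovascular_invasion)
```

The criteria are a plain disjunction, so any single positive finding triggers eligibility.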
The AC protocol comprised three cycles of gemcitabine and cisplatin for pT3/4 disease and six cycles of the same regimen for pN+ disease. For patients with metastasis or advanced disease during the follow-up period, other regimens including methotrexate, vinblastine, adriamycin, and/or cisplatin were applied. Patients were followed every 3 to 4 months for the first two years and every 6 to 12 months thereafter until 5 years. Pathological data were evaluated by reviewing all glass slides of transurethral resection (TUR) and RC specimens. Light microscopic examination was performed using the original H&E slides. The pathologic tumor (pT), node (pN), and TNM stages were determined based on the 8th edition of the AJCC staging system. This study was approved by the Institutional Review Board (IRB) of Ewha Womans University Mokdong Hospital (protocol no. 2018-8-049). The requirement for informed consent was waived by the IRB owing to the retrospective nature of the study. Clinical data were collected from electronic medical records between December 18, 2018 and December 12, 2021, and were completely anonymized.

2. Immunohistochemistry

Pan-cytokeratin IHC was performed for all LNs with negative results on the initial pathological diagnosis; only one additional section was cut for IHC. IHC for pan-cytokeratin (1:100, monoclonal, Novocastra, Newcastle, UK) was performed using a BOND-MAX autoimmunostaining system (Leica Biosystems, Melbourne, Australia) with the BOND Polymer Refine Detection Kit DS9800 (Leica Biosystems, Melbourne, Australia), as cytokeratin IHC has been reported to be a sensitive method for detecting micrometastasis or ITC in axillary node-negative breast cancer. Sections (4-μm-thick) from formalin-fixed, paraffin-embedded specimens were transferred to adhesive slides and dried at 62°C for 30 min. Slides were then deparaffinized. Endogenous peroxidase was quenched by incubating the tissues with 0.3% hydrogen peroxide for 10 min.
Antigen retrieval was performed using the BOND Epitope Retrieval solution for 20 min at 97°C. Sections were incubated with the primary antibody for 15 min, the post-primary antibody for 10 min, and the polymer for 30 min, followed by visualization with 3,3'-diaminobenzidine and counterstaining with hematoxylin.

3. Pathological evaluation of micrometastasis and ITC and grouping of pN stage

Nodal micrometastasis and ITC were detected by IHC and determined according to the criteria in the breast cancer section of the 8th edition of the AJCC cancer staging system. ITC was defined as the presence of a single tumor cell or a malignant cell cluster no larger than 0.2 mm in diameter. Micrometastasis was defined as a tumor cell deposit larger than 0.2 mm but not larger than 2.0 mm in diameter. In this study, the presence of either ITC or micrometastasis in a LN was considered together as "occult LN metastasis". Most lymph nodes were longitudinally sectioned and stained with H&E. Immunostained slides were evaluated by two pathologists (HC and MC) without knowledge of the clinicopathological information. A few discordant IHC results were discussed and reviewed by the two pathologists, and a consensus diagnosis was reached. To evaluate the prognostic impact of occult LN metastasis, the study population was grouped based on the new pN stage after IHC: pN0 (patients with no metastasis after IHC), pNmi (patients with occult LN metastasis after IHC who were originally pN0 on the initial diagnosis with routine H&E slide examination), and pN+ (patients with nodal metastasis on both the initial and IHC diagnoses) groups.

4. Pathological evaluation of histological variants and features

Pathological evaluation of histological variants was performed according to the 2016 WHO classification of urothelial tract tumors.
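The three-way pN grouping described above depends on only two observations per patient, as this minimal sketch makes explicit (the function name is hypothetical):

```python
def assign_pn_group(node_positive_on_he, occult_metastasis_on_ihc):
    """Assign the study's analysis groups from the initial H&E nodal
    status and the cytokeratin IHC result.

    pN+  : nodal metastasis on the initial H&E diagnosis
    pNmi : originally pN0 on H&E, occult metastasis (ITC or
           micrometastasis) found on IHC
    pN0  : node-negative on both H&E and IHC
    """
    if node_positive_on_he:
        return "pN+"
    return "pNmi" if occult_metastasis_on_ihc else "pN0"
```

Note that the IHC result is irrelevant once the initial H&E diagnosis is positive; additional occult deposits in pN+ patients change the node count, not the group.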
In the present study, all existing histological subtypes (variants) and divergent differentiation of urothelial carcinoma (UC) were specified, including squamous differentiation, glandular differentiation, micropapillary, plasmacytoid, sarcomatoid, nested, lymphoepithelioma-like, and microcystic variants. The existence of more than one histological variant type in addition to conventional UC was designated as mixed UC, and more than two histological variant types in one case was regarded as multiple variant differentiation. Tumor differentiation was graded as low or high grade. Budding-like tumor cell clusters (budding-like clusters) were defined as isolated single tumor cells or small clusters of fewer than 20 cells near the tumor border in more than one field at 200× magnification. These clusters were also seen within retraction artifacts and as small nests in the interstitial tissue; their extent was roughly estimated. Necrosis was considered significant when the necrotic portion exceeded 50% of the total tumor area.

5. Outcome and statistical analysis

Disease recurrence was defined as local tumor recurrence within the urinary tract and/or regional lymph nodes and/or distant metastasis. Recurrence-free survival (RFS) was defined as the time from RC to recurrence, the last follow-up, or death of any cause. Cancer-specific survival (CSS) was defined as the time from RC to the last follow-up or death from UBC. Overall survival (OS) was defined as the time from RC to the last follow-up or death of any cause. Distributions of categorical variables between the groups were compared by the chi-square or Fisher exact tests. Continuous variables were compared by Student's t-test or the Mann-Whitney test. Survival curves were estimated by the Kaplan-Meier method and compared by log-rank tests.
Univariate and multivariate Cox proportional hazard regression models were used to evaluate the risk of each clinicopathological parameter for disease recurrence, cancer-specific death and overall survival. P ≤ 0.05 was considered statistically significant. All statistical tests were performed using IBM SPSS software version 23.0.0 (SPSS Inc., Chicago, IL, USA).
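For orientation, the Kaplan-Meier product-limit estimate behind the RFS/CSS/OS curves can be computed as below. This is a minimal pure-Python sketch (standard handling of censoring, no confidence intervals), not the SPSS routine the authors used:

```python
from collections import Counter

def kaplan_meier(durations, events):
    """Kaplan-Meier (product-limit) survival estimate.

    durations[i] is the follow-up time; events[i] is True for an observed
    event (recurrence or death, depending on the endpoint) and False for
    censoring at that time. Returns the (event_time, S(t)) steps of the curve.
    """
    deaths, censored = Counter(), Counter()
    for t, observed in zip(durations, events):
        (deaths if observed else censored)[t] += 1
    n_at_risk = len(durations)
    survival, steps = 1.0, []
    for t in sorted(set(durations)):
        d = deaths[t]
        if d:                                  # the curve drops only at event times
            survival *= 1.0 - d / n_at_risk
            steps.append((t, survival))
        n_at_risk -= d + censored[t]           # remove events and censored subjects
    return steps
```

For example, three patients with events at months 1, 2, and 3 yield survival steps of 2/3, 1/3, and 0, while a patient censored at month 2 would simply leave the risk set without lowering the curve.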
1. Clinicopathologic characteristics of patients

 summarizes the clinicopathologic characteristics of 124 patients.
The median age was 63.7 years (range, 27–87 years; standard deviation [SD], ±10.4). Patients were predominantly male (83.2%, 102 out of 124). The histologic type was pure conventional UC in 54.0% (67 of 124) and mixed UC in 46.0% (57 of 124). Among mixed UC, 7.3% (9 of 124) showed multiple variant histology. Among mixed UC with one variant histology, the micropapillary variant (n = 14, 11.3%) was the most common, followed by squamous differentiation (n = 13, 10.5%), the plasmacytoid variant (n = 7, 5.6%), a discohesive pattern (n = 4, 3.2%), glandular differentiation (n = 3, 2.4%), and the nested variant (n = 3, 2.4%). Other rare variants of UC in the study cohort included lymphoepithelioma-like (n = 1), microcystic (n = 1), and sarcomatoid (n = 1) variants. Among histologic features, budding-like clusters were observed in 66.1% (82 of 124). Lymphovascular invasion was observed in 50 (40.3%) patients. Perineural invasion was present in 16 (12.9%) patients. AC was performed in 32.3% (40 of 124). No pT1 patient received AC; six pT2 (14.3%), 29 pT3 (82.9%), and five pT4 (83.3%) patients received AC. Seven patients eligible for AC (six pT3 and one pT4) did not receive it. During a median follow-up period of 80 months (range, 3.6–106.5 months; SD, ±27.9 months), 45 patients (36.3%) had tumor recurrence, including 24 local recurrences and/or 26 distant metastases. Death from any cause occurred in 26.6% (33 of 124), and 21.8% (27 of 124) died of UBC.

2. Detection of occult LN metastasis and shift of pN

Standard pathological examination revealed that 23 patients were node-positive and 101 patients were node-negative. IHC newly detected microscopic metastasis in 19 patients. In the originally node-negative group, occult LN metastasis (pNmi) was detected in 12.9% (13 of 101), comprising 11 ITCs and two micrometastases found in 1 or 2 LNs (mean, 1.2 nodes).
The rest of the originally node-negative group, excluding occult LN metastasis, was considered truly node-negative (pN0, n = 88). In the originally node-positive group (pN+ group, n = 23), 26.1% (6 of 23) were found to have additionally detected LN metastases, comprising 3 ITC and 3 micrometastases identified in 1 to 4 LNs (average, 2.75 nodes). Changes of pN stage are illustrated in . After IHC, the pN stage was upstaged in 11.3% (14 of 124) of patients, including 10 patients from pN0 to pN1, 3 from pN0 to pN2, and 1 from pN1 to pN2. The average number of resected LNs in all patients was 18.6 (range, 1–53; SD ± 9.8). Some specimens labelled “pelvic lymph node” contained predominantly fat tissue with few lymph nodes, which could reduce the detection rate of occult metastasis, but there was no significant difference in the average number of resected LNs among the pN0, pNmi and pN+ groups (pN0, 17.6; pNmi, 23.2; pN+, 19.7; p = 0.134). 3. Pathological characteristics of occult LN metastasis summarizes the association of pathologic characteristics with occult LN metastasis. The incidence of occult LN metastasis was significantly higher in mixed UC than in pure conventional UC ( p = 0.002). Among the histologic variants, discohesive pattern and glandular differentiation were significantly associated with occult LN metastasis ( p = 0.006 and p = 0.043, respectively). The micropapillary variant tended to have occult LN metastasis without reaching statistical significance. The plasmacytoid variant was significantly associated with node-positive metastasis (pN+ vs. pN0; p = 0.033), though no case of plasmacytoid variant showed occult LN metastasis. Budding-like clusters showed a significant association with occult LN metastasis (pNmi vs. pN0; p = 0.013) as well as node-positive metastasis (pN+ vs. pN0; p < 0.001). Lymphovascular invasion was significantly associated with node-positive metastasis (pN+ vs. pN0; p < 0.001), but not with occult LN metastasis (pNmi vs. pN0; p = 0.999). 4.
Prognostic significance of occult LN metastasis and other clinicopathologic parameters Five-year RFS rates of the pN0, pNmi, and pN+ groups were 89.6%, 61.5%, and 46.2%, respectively, with five-year CSS rates of 89.6%, 61.5%, and 60.9%, and five-year OS rates of 86.4%, 61.5%, and 60.9%, respectively. Kaplan-Meier analysis showed significantly worse CSS ( p = 0.002) and OS ( p = 0.017) in the pNmi group than in the pN0 group, and a trend toward worse RFS that did not reach significance ( p = 0.107; ). The pN+ group also had significantly worse RFS ( p = 0.007), CSS ( p = 0.001), and OS ( p = 0.01) than the pN0 group. However, clinical outcome was not significantly different between the pNmi and pN+ groups (RFS, p = 0.721; CSS, p = 0.958; OS, p = 0.958). Since node metastasis affects the decision to perform AC at a low tumor stage, the effect of occult LN metastasis on prognosis was analyzed in subgroups (pT1-2 and pT3-4 stage). No pN0 or pNmi patient in the pT1-2 subgroup received AC. It was found that 12.5% (11 of 88) of pN0 and 33% (3 of 9) of pNmi patients died of the disease, whereas only 25% (1 of 4) of pN+ patients in the pT1-2 subgroup died of the disease. Among pNmi patients in the pT3-4 subgroup (n = 4), 3 patients received AC and 3 patients died of the disease. Among pN+ patients in the pT3-4 subgroup (n = 19), 9 patients died of the disease. All pN+ patients received AC in this study cohort. In both subgroups, there was no significant difference in RFS, CSS, or OS among the pN0, pNmi, and pN+ groups (pT1-2 subgroup, p = 0.665, p = 0.076, and p = 0.457, respectively; pT3-4 subgroup, p = 0.137, p = 0.168, and p = 0.168, respectively). Based on univariate Cox analysis, the pN+ group ( p = 0.008), higher pT stage ( p = 0.042), advanced TNM stage (original version, p = 0.039; new version, p = 0.037), and necrosis ( p = 0.004) were significantly associated with increased disease recurrence .
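The log-rank comparisons reported above can be illustrated with a minimal pure-Python implementation. The data and the `logrank` helper below are illustrative assumptions, not the study's code; ties are handled only through the standard count-based variance term.

```python
import math

def logrank(times_a, events_a, times_b, events_b):
    """Two-sample log-rank test: compare observed vs. expected events
    in group A at each distinct event time (event=1, censored=0)."""
    data = [(t, e, "a") for t, e in zip(times_a, events_a)] + \
           [(t, e, "b") for t, e in zip(times_b, events_b)]
    event_times = sorted({t for t, e, _ in data if e == 1})
    obs_a = exp_a = var = 0.0
    for t in event_times:
        n = sum(1 for tt, _, _ in data if tt >= t)                 # at risk, total
        n_a = sum(1 for tt, _, g in data if tt >= t and g == "a")  # at risk, A
        d = sum(1 for tt, e, _ in data if tt == t and e == 1)      # events at t
        d_a = sum(1 for tt, e, g in data if tt == t and e == 1 and g == "a")
        obs_a += d_a
        exp_a += d * n_a / n
        if n > 1:
            var += d * (n_a / n) * (1 - n_a / n) * (n - d) / (n - 1)
    chi2 = (obs_a - exp_a) ** 2 / var
    p = math.erfc(math.sqrt(chi2 / 2))  # upper tail of chi-square with 1 df
    return chi2, p

# Toy data: all events in group A occur earlier than in group B
chi2, p = logrank([1, 2, 3], [1, 1, 1], [4, 5, 6], [1, 1, 1])
```

With the clearly separated toy groups above, the statistic is about 5.05 and the p-value falls below 0.05, mirroring how the pNmi vs. pN0 survival differences were assessed.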
Regarding CSS, the pNmi ( p = 0.004) and pN+ ( p = 0.001) groups, higher pT stage ( p = 0.001), advanced TNM stage (original version, p = 0.001; new version, p < 0.001), mixed UC ( p = 0.003), lymphovascular invasion ( p = 0.003), budding-like clusters ( p = 0.012), and necrosis ( p = 0.001) were significantly associated with an increased risk of the event. In multivariate Cox analysis, the pNmi group (CSS, p = 0.040), the pN+ group (RFS, p = 0.013; CSS, p = 0.026; OS, p = 0.039) and higher pT stage (RFS, p = 0.036; CSS, p = 0.002; OS, p = 0.006) independently increased the risk of disease recurrence and cancer mortality . AC independently decreased the risk of disease recurrence ( p = 0.028), cancer-specific death ( p = 0.034) and overall death ( p = 0.023). However, pNmi was not an independent prognostic factor for disease recurrence (RFS, p = 0.202) or overall death ( p = 0.128). In several studies, IHC has been used to detect LN micrometastasis in UBC patients undergoing RC. Yang et al .
found only one micrometastasis (0.62%) among 159 negative LNs after CAM5.2 and AE1/AE3 IHC in high-grade muscle-invasive UC of the urinary bladder. This corresponded to 1 (5.6%) out of 19 patients who were originally N0 on routine H&E staining. The authors concluded that standard H&E staining would be adequate and that routine IHC was not useful for nodal staging in UBC, although survival analysis was not performed . Jenson et al . reported that micrometastasis was found in 1 (0.56%) out of 173 negative LNs in pT1-T3 UBC (corresponding to 1 of 10 patients) and that it did not correlate with survival or prognosis . In a prospective study, Matsumoto et al . found micrometastasis in 4 (8.5%) out of 47 pN0 patients who underwent RC with extended lymphadenectomy . However, the 2-year RFS was not significantly different between the node-negative and micrometastasis groups after IHC . Recently, Cuck et al . found that 2 out of 61 patients (3.3%) with muscle-invasive UBC showed micrometastasis. However, clinical outcome could not be evaluated because the patients died of postoperative complications . Studies of micrometastasis detected by real-time reverse transcription-PCR (RT-PCR) using RNA extracted from LNs showed that PCR-positive cases (micrometastasis) were found in 20–35% of the group originally node-negative by standard pathological examination . The CSS of this micrometastasis (pNmi) group was significantly lower than that of the pN0 group in univariate analysis, although without statistical significance in multivariate analysis . Gazquez et al . found RT-PCR-detected micrometastasis in 25.7% (19 of 74 patients) of the node-negative group by conventional histological analysis. After 100 months of follow-up, they observed a trend toward lower RFS and CSS, although without statistical significance .
Because previous studies showed inconclusive results, the prognostic significance of occult LN metastasis was investigated in the present study in a larger cohort of UBC patients (n = 124) with a longer follow-up period (median, 80 months) than previous studies. Pan-cytokeratin IHC newly identified microscopic LN metastasis in 12.9% of the originally node-negative patients and in 26.1% of the originally node-positive patients. The pathologic node stage was upstaged in 11.3%. UBC patients with occult LN metastasis (pNmi) had significantly worse CSS than truly node-negative patients (pN0), and the survival curves of pNmi patients were similar to those of node-positive patients (pN+). In the pT1-2 subgroup, more pNmi than pN+ patients (33% vs 25%) died of disease, although the difference was not statistically significant, probably because of the small number of pNmi and pN+ cases in the pT1-2 subgroup (n = 13). Since pNmi patients in the pT1-2 subgroup were initially reported as node-negative, they did not receive AC, while all pN+ patients received AC. The lack of AC might be one reason why the clinical outcome was worse in pNmi than in pN+ patients in the pT1-2 subgroup. Therefore, UBC patients (especially pT1-2) with occult LN metastasis might require postoperative management similar to that of node-positive UBC patients. In this study, occult LN metastasis independently increased the risk of cancer mortality in UBC patients. Among all patients, the pNmi group showed worse clinical outcomes than the pN0 group in RFS, CSS, and OS, although only CSS reached statistical significance. In addition, the pNmi group was not an independent risk factor for RFS or OS. Because the pNmi group included many early-stage cancer patients, a sufficient follow-up period is critical to reveal the prognostic significance of occult LN metastasis.
In the case of breast cancer, studies with follow-up periods over 20 years showed that axillary LN micrometastasis was correlated with survival , while studies with follow-up periods of 5–8 years failed to reveal the prognostic relevance of axillary LN micrometastasis . Performing IHC on the LNs of all UBC cases to detect occult LN metastasis is not practical because of the significant additional cost. It is therefore necessary to select cases with histologic features related to occult metastasis. So far, the relationship between occult LN metastasis and the histopathologic features of UBC has not been reported. In this study, mixed UC, UBC with discohesive pattern, glandular differentiation, and budding-like clusters were significantly associated with occult LN metastasis. If UBC has a mixed histology, discohesive pattern, glandular differentiation, or budding-like clusters, IHC might be helpful for detecting occult LN metastasis. High tumor budding is a poor prognostic factor in colorectal cancer and is associated with lymph node metastasis . There are a few studies on tumor budding in UBC, and measurement methods or criteria have not yet been established. Tumor budding was found in 87% of UBC when using a cut-off of 5 or more clusters of 5 or fewer cells in a 400x field of view . It was 17.4% when using a cut-off of 10 or more clusters of 5 or fewer cells in a 200x field of view, and 73.6% when using a cut-off of 14 buddings in a 400x field of view . These studies reported an association of tumor budding with a low survival rate. In UBC, it is difficult to assess tumor budding by the criteria used in colorectal cancer. First, the invasive front is usually unclear in UBC because the tumor is fragmented in a TUR specimen. In some cases, no residual tumor is present in the RC specimen after TUR. Second, UBC histology often shows small tumor nests.
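To make the cut-off-based definitions concrete, here is a small hypothetical helper applying this study's budding-like cluster criterion (clusters of fewer than 20 cells present in more than one 200x field). The function name, input format, and example counts are assumptions for illustration only.

```python
def has_budding_like_clusters(cluster_sizes_per_field, max_cells=20, min_fields=2):
    """cluster_sizes_per_field: one list per 200x field, each containing
    tumor-cell counts of the isolated clusters near the tumor border."""
    qualifying_fields = sum(
        1 for field in cluster_sizes_per_field
        if any(0 < size < max_cells for size in field)
    )
    # "more than one field" in the study's definition -> at least 2 fields
    return qualifying_fields >= min_fields

positive = has_budding_like_clusters([[3, 25], [1], [40]])  # fields 1 and 2 qualify
negative = has_budding_like_clusters([[25], [30]])          # no field qualifies
```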
Recently, the poorly differentiated cluster (a tumor cluster defined as 5 or more tumor cells without gland formation) has been reported to be associated with a poor prognosis in colorectal cancer . Accordingly, we evaluated the clinicopathological significance of poorly differentiated clusters in UBC. In this study, poorly differentiated clusters were named budding-like clusters and defined as isolated single tumor cells or small clusters composed of fewer than 20 cells within the tumor area in more than one field at 200x magnification. Budding-like clusters were significantly associated with LN metastasis, including occult LN metastasis, in UBC. In conclusion, cytokeratin IHC identified nodal micrometastasis and ITC in 12.9% of the originally node-negative UBC patients. Nodal micrometastasis and ITC independently increased the risk of cancer mortality in UBC. Moreover, patients with occult LN metastasis showed a tendency toward lower survival in the pT1-2 subgroup. In the future, the clinical significance of occult LN metastasis in early-stage UBC should be clarified by larger-scale studies. IHC might be selectively used to detect micrometastasis and ITC in UBC with specific pathological features such as mixed UC, UC with discohesive pattern, glandular differentiation, lymphovascular invasion, and budding-like clusters.
Obstetrician–gynecologists’ perspectives towards medication use during pregnancy: A cross-sectional study | 2f006442-a761-4195-b279-ef496fa8aedf | 9678598 | Gynaecology[mh] | Pregnant women undergo unique physiological changes that may affect the pharmacokinetic properties of various medications. Around 40% of pregnant women uses either over-the-counter (OTC) or prescribed medications during their pregnancy to treat chronic or acute conditions, such as nausea, vomiting, diabetes, asthma, and hypertension. Pharmacological agents contribute to significant, preventable congenital abnormalities, leading to a rise in public health concerns about using medications during pregnancy. To produce such an effect, the medication must possess certain properties that allow it to cross the placenta, including but not limited to being unbound, weak base, lipid-soluble, and having a low molecular weight. Also, the fetus’s stage of development is a crucial point to consider when using medication during pregnancy. Most pregnant women know that medication use during pregnancy is paramount, which leads them to seek medical advice before taking any medication. A vast majority of studies evaluated pregnant women’s knowledge and attitudes towards using medicines during their pregnancy. One of which was conducted in Saudi Arabia in 2014, which concluded that women claim to receive inadequate medication-related information from physicians and pharmacists; instead, they rely on medication leaflets to attain such information. Obstetrician–gynecologists are frequently faced with inadequate and imprecise information to make decisions for clinical management. Although some medications’ teratogenicity potential is well known, there is limited information on the safety of many other medications used during pregnancy due to ethical considerations. Pregnant and lactating women are typically excluded from clinical trials. 
A study published in 2010 in the United States examined obstetrician–gynecologists' knowledge and informational resources regarding the safety of medication use during pregnancy. Results showed that the number of years in practice was associated with their responses to medication safety questions. Most responders indicated sufficient access to helpful information regarding medication teratogenicity potential. However, more than half of the participants selected the lack of a single comprehensive source of information as the most significant barrier. Another study evaluating community pharmacists' knowledge about medication safety during pregnancy in Saudi Arabia found a significant difference between age groups and countries of graduation in knowledge test scores. To the best of our knowledge, no studies have assessed the knowledge of obstetrician–gynecologists in Saudi Arabia and their access to information about the risks of medication use during pregnancy. Such a study is highly warranted given the effect of physicians' knowledge and practice on patients' health. Therefore, this study aims to assess obstetrician–gynecologists' knowledge of medication teratogenicity potential, their frequently used resources, and their residency training's contribution to medication use during pregnancy. The present study is a cross-sectional, survey-based study targeting licensed obstetrician-gynecologists practising in Saudi Arabia. Saudi and non-Saudi practitioners were eligible to fill out the questionnaire. Over 6 months, data were collected using a validated self-administered web-based questionnaire developed by the American College of Obstetricians and Gynecologists. The questionnaire is organized into 5 domains. The first domain (7 items) includes the participants' demographic data. The second domain focused on assessing knowledge about prescription medications, OTC medications, dietary supplements, and herbal products in the first trimester (23 items).
The third domain covered the references used to obtain appropriate and updated information on medication use during pregnancy (15 items). The fourth domain addressed physicians' attitudes toward medication use during pregnancy (6 items). The last domain concerned participants' rating of their training in medication use during pregnancy (6 items). The questions utilized in the questionnaire included multiple choice, check-all-that-apply, Likert-like scale, and fill-in-the-blank questions. With almost 350 clinicians registered as obstetrician–gynecologist specialists or consultants in Saudi Arabia, the required sample size was calculated to be 184 (95% confidence level, 5% margin of error) as follows: SS = [Z² × p(1 − p)]/C² = [(1.96)² × 0.5 × (1 − 0.5)]/(0.05)² = 384.16; adjusted SS = SS/[1 + (SS − 1)/Pop] = 384.16/[1 + (384.16 − 1)/350] ≈ 184. King Saud University Medical City's Institutional Review Board approved this study (19/0929). Following ethical approval, an online survey was sent to the departments of Obstetrics & Gynecology in 6 large hospitals around the Kingdom to be distributed among their employees. Reminders were sent to non-responders, and visits were conducted to some sites with low response rates. Data were analyzed using SPSS version 25. Categorical variables were presented as numbers and percentages, while continuous variables were presented as mean and SD if normally distributed; otherwise, median and IQR were used. The Shapiro–Wilk test was used to assess normal distribution. Analyses were tested for significance using an α of 0.05. A total of 60 obstetrician–gynecologists completed the survey, with a response rate of 33%. The flowchart for the inclusion and exclusion process is shown in Figure . Most participants were female (72%), with a median age of 42. The median duration of practice among the participants was 13 years.
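The sample-size calculation above (Cochran's formula with a finite-population correction) can be reproduced as follows; the function names are illustrative, not from the study.

```python
def cochran_sample_size(z=1.96, p=0.5, c=0.05):
    """Cochran's formula: z = z-score, p = expected proportion,
    c = margin of error."""
    return z ** 2 * p * (1 - p) / c ** 2

def finite_population_correction(ss, population):
    """Adjust an infinite-population sample size for a finite population."""
    return ss / (1 + (ss - 1) / population)

ss0 = cochran_sample_size()                  # 384.16
ss = finite_population_correction(ss0, 350)  # ~183.4, rounded up to 184
```

Rounding the corrected value up gives the 184 reported in the text.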
Around 40% were full-time hospital practitioners, and most (85%) worked in the central region (i.e., Riyadh). Seventy per cent of the participants reported providing routine care/gynecologic exams. Characteristics of the participants included in the study are presented in Figure and Supplemental Digital Content (Appendix 1, http://links.lww.com/MD/H763 ). 3.1. Assessment of medication use during the first trimester of pregnancy Participants' assessment of 23 selected medications regarding fetal safety if taken during the first trimester is presented in Supplemental Digital Content (Appendix 2, http://links.lww.com/MD/H764 ). Regarding prescription medications (Fig. ), the majority (87%) agreed that Isotretinoin is contraindicated, while 8.3% were not sure. For Alprazolam, 25% considered it unsafe, 35% indicated that it required a risk-benefit assessment, and 30% were unsure. Most participants (76.7%) considered acetaminophen safe to use. Regarding dietary supplements (Fig. ), 75% stated that vitamin A supplements are not safe during the first trimester. Around two-thirds (60%) of respondents were unsure about the safety of herbal remedies during pregnancy. 3.2. Information resources utilized by obstetrician-gynecologists Online databases (e.g., Lexi and Micromedex) were the top resources utilized by obstetrician-gynecologists to obtain information about the teratogenicity of medications (45%), followed by pharmacist consultation, the FDA label, and conversations with colleagues (21.7%). Further information is provided in Table . 3.3. Obstetrician–gynecologists' attitudes toward medication use during pregnancy A Likert-like scale was used to assess the proportion of obstetrician-gynecologists agreeing or disagreeing with various statements related to information on the use of medications during pregnancy.
Forty-eight per cent strongly agreed that liability is a concern if there were to be an adverse pregnancy outcome following the use of medications. Additionally, 41% agreed that there is a lack of sufficient information about the safety of medication use during pregnancy, while 31% reported a lack of accessibility to the available information. Interestingly, 26.7% reported a lack of time to communicate the available information to patients as one of the drawbacks. Additional details are provided in Table . 3.4. Obstetrician–gynecologists' rating of their training Participants were asked to rate their training on medication use during pregnancy, and the results are presented in Table . Those who had been in practice for more than 15 years were significantly more likely to rate themselves as well qualified ( P < .05). Training was rated as adequate for prescribed medications by 58.3%, OTC medications by 45%, and dietary supplements or herbal remedies by 32% of participants ( P < .05). To our knowledge, this is the first study in the nation to assess obstetrician–gynecologists' knowledge of medications' teratogenicity potential as well as the impact of their residency training on their decisions. The resources routinely used were also assessed. For a medication to be desirable, it must fulfill the following criteria: safe, effective, and indicated.
During pregnancy, women should refrain from taking medications as much as possible because of the teratogenicity risk. However, certain medical conditions require urgent or ongoing treatment, and the decision to use medications is not without apprehension. Thus, obstetrician-gynecologists play a vital role in identifying when medications are warranted and which are safe to give during each trimester, in addition to adequately counseling patients. To assist in decision-making, the Food and Drug Administration (FDA) formerly stratified medications' teratogenic effects into 5 categories (A, B, C, D, and X), with safety decreasing from A to X. However, it is challenging to assess the risk-benefit ratio using this classification, and in 2015 the FDA updated its pregnancy and lactation labeling rule to overcome this issue. Nevertheless, even with the new FDA stratification, it is extremely challenging for physicians to make treatment decisions in this population. This is because the same medication can cause different fetal harms when taken in different trimesters, and because pregnant women are excluded from clinical trials for ethical reasons, leaving great uncertainty. Therefore, safety information is commonly obtained from other sources, such as animal experiments, nonclinical data, case reports, and epidemiological data, all of which have substantial limitations, adding to the ambiguity of treatment decisions in this population. In this study, participants' level of knowledge regarding medication teratogenicity potential was assessed and revealed great variation. Most respondents reported inaccessibility to current information about medication teratogenicity risk and a lack of sufficient data, emphasizing the need for updated, accessible references to aid clinical decisions. A multidisciplinary team that includes clinical pharmacists, as medication specialists, in Obstetrics and Gynecology services would be of great benefit.
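The former FDA letter categories mentioned above can be captured as a simple lookup; the one-line descriptions below are paraphrases of the pre-2015 labeling rule, not quotations, and the structure is illustrative.

```python
# Pre-2015 FDA pregnancy letter categories (paraphrased descriptions).
FDA_PREGNANCY_CATEGORIES = {
    "A": "Controlled human studies show no fetal risk",
    "B": "No evidence of risk in humans; animal data reassuring or absent",
    "C": "Risk cannot be ruled out; adequate human data lacking",
    "D": "Positive evidence of human fetal risk; benefit may still justify use",
    "X": "Contraindicated in pregnancy; risks clearly outweigh any benefit",
}

def category_note(letter):
    """Return the paraphrased description for a letter category."""
    return FDA_PREGNANCY_CATEGORIES.get(letter.upper(), "Unknown category")
```

A flat lookup like this also makes the classification's weakness visible: the letters order categories, but they do not encode a usable risk-benefit comparison, which is what the 2015 rule replaced them with.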
Clinical pharmacists' contributions to the field have been reported in the literature, highlighting their role in preventing toxicity and death. Their expertise allows them to help select appropriate medications and adequately counsel patients regarding the safety of different treatment modalities, dietary supplements, and herbals. This is supported by previous evidence showing that clinical pharmacy services in Obstetrics and Gynaecology were associated with a high level of physician satisfaction and better patient care. When assessing participants' knowledge about the safety of medications in the first trimester, the vast majority reported that Isotretinoin is contraindicated and acetaminophen is safe, consistent with the published literature. In contrast, responses varied for Alprazolam, which may be attributed to the weak evidence and lack of consensus on its effect on the fetus. Although Alprazolam falls into Category D and may be detrimental to the fetus, prospective studies with a large sample size to assess its effect may be difficult to conduct. Moreover, 75% of responders stated that vitamin A dietary supplements are not safe in the first trimester, far higher than in a study conducted among community pharmacists, in which 48.4% reported it unsafe. As for the safety of herbals, participants showed a lack of sufficient knowledge of their use in this patient population. This uncertainty is alarming, as the prevalence of herbal medicine use among pregnant women in the Middle East ranges from 7% to 55%. These medications may harm the mother and child; thus, healthcare practitioners' education is essential in this regard, as it also contributes to proper patient education. Several limitations exist in our study. The response rate remained low despite many reminders and visits to our participants.
This may be explained by obstetrician-gynecologists' heavy workload and busy services, which hindered the data collection process. In addition, most responders were from the central region, limiting the generalizability of the results. Since the study used self-administered questionnaires, social desirability bias may arise. It is also important to note that there was no way of determining whether responders relied on their actual knowledge or consulted reference sources when filling out the questionnaire. A nationwide, paper-based study is recommended to overcome the limitations mentioned above and confirm the results of this study. Our study found that obstetrician-gynecologists vary in their knowledge of the teratogenicity risk of medications and herbal remedies. These findings highlight the need to emphasize this topic during their training years and the importance of making this information readily available to healthcare providers in an updated form. This work was supported by the College of Prince Sultan Bin Abdulaziz for Emergency Medical Services Research Center, Deanship of Scientific Research, King Saud University, Riyadh, Saudi Arabia. Conceptualization: Mashael Alshebly and Sultan Alghadeer. Data curation: Bana Almadi. Formal analysis: Abdullah M. Mubarak. Funding acquisition: Sultan Alghadeer. Investigation: Haya Alturki and Jeelan Alghaith. Methodology: Sultan Alghadeer. Supervision: Mashael Alshebly and Sultan Alghadeer. Validation: Mashael Alshebly and Abdulrahman Alwhaibi. Visualization: Mashael Alshebly and Abdullah M. Mubarak. Writing – original draft: Haya Alturki and Jeelan Algaith. Writing – review and editing: Bana Almadi and Abdulrahman Alwhaibi.
Improved methodology for tracing a pulse of
Tree photosynthesis feeds soil biota with carbon (C) through aboveground litter-fall and a roughly equally large below-ground flux to roots and associated organisms, notably mycorrhizal fungi . Insight into the quantitative role of the plant below-ground C flux to specific soil organisms and soil processes requires isotope tracer studies, which are challenging to perform in the field because of the size of trees . The pioneers used radioactive 14 C, an approach further developed by . Advances in isotope ratio mass spectrometry (IRMS) and wavelength-scanned cavity ring-down spectroscopy have promoted the use of stable 13 C, with no need to consider radiation safety. Using 13 C makes it possible to apply elaborate laboratory methods targeting a range of soil organisms. High tracer levels have enabled labelling of phospholipid fatty acid (PLFA) biomarkers of specific groups of soil microorganisms, as well as their RNA, DNA and other macromolecules, in laboratory settings or in field studies of low-stature plants (e.g., , , , , ). Short-term (pulse) labelling followed by frequent sequential sampling of target tree organs and soil biota enables calculation of C turnover rates (e.g., , ). Studies of plant mesocosms or of small plants in the field can use more elaborate designs for labelling plants and tracing plant photosynthate into soils (e.g., ) than studies of trees at the ecosystem patch scale. This scale is desirable in studies of ecosystem C budgets and for realistic predictions of interactions among trees and between trees and soil biota, but it is much more costly and faces special technical challenges. The prime challenge in pulse-labelling studies is to achieve labelling well above the natural-abundance variations of 13 C in trees and in the recipient soil organisms, processes and compounds of interest.
Furthermore, since 13 CO 2 is expensive, as much of it as possible should be assimilated. A second important challenge is to ensure that the below-ground system studied is undisturbed and reflects the natural connection with the labelled tree canopy, i.e., that area-based budget estimates of above- and below-ground C are correct. Two different approaches are used for pulse-labelling of trees with 13 CO 2 in field settings . The single-tree method uses a chamber that encloses the tree crown and is sealed around the lower part of the stem (e.g., , ). It is ideal for providing data from replicate trees, especially regarding above-ground processes. For studying the flux of C to below-ground components in the field, chambers enclosing several trees (thus representing an ecosystem patch) are more appropriate . However, this also means that large chambers are needed, especially where tree root systems overlap considerably, as is common . We have previously pulse-labelled 50 m 2 patches of young boreal Pinus sylvestris L. forest with 4 to 5 m tall trees . In such studies, the air in the chambers used for labelling is open to the respiratory efflux from roots and other soil biota. At this scale, it is not feasible to keep the concentration of CO 2 , [CO 2 ], or the atom% 13 C constant, nor are these constant under natural conditions. One cannot effectively scrub away the large background of 12 CO 2 and replace it with 13 CO 2 , because the respiratory efflux will continuously add new un-labelled CO 2 , especially from the soil . Thus, both respiration and tracer additions add CO 2 to the chamber air, while uptake through photosynthesis removes it. As a result, the chamber-air [CO 2 ] changes depending on the balance between these processes. Nevertheless, it is desirable to keep the [CO 2 ] within a reasonable range relative to naturally occurring levels and variations. We have earlier used a single release of 13 CO 2 and 1.5–3.5 h long incubations (e.g., ).
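In schematic form, the chamber-air balance described above can be written as follows (our notation, not taken from the original paper; S(t) is the tracer release rate, R the total plant and soil respiratory input, and P(C) the gross photosynthetic uptake):

```latex
\begin{align*}
\frac{dC}{dt} &= R + S(t) - P(C),\\
\frac{dC_{13}}{dt} &= a_R\,R + a_S\,S(t) - a(t)\,P(C),
  \qquad a(t) = \frac{C_{13}}{C},
\end{align*}
```

where C is the chamber [CO 2 ], C 13 its 13 C component, a_R ≈ 0.011 the natural 13 C abundance of respired CO 2 , and a_S ≈ 0.99 the purity of the tracer. The atom% of the chamber air therefore rises stepwise at each release and is diluted between releases by un-labelled respiration.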
Here, we report a method to substantially increase the level of labelling of CO 2 in chambers by making five consecutive releases of 13 CO 2 during periods of 4 to 4.5 h while maintaining the [CO 2 ] within a reasonable range. We thereby avoided high [CO 2 ] levels that would push photosynthesis towards its maximum rate (A max ) and potentially alter the C allocation patterns in the studied system. We compare the levels of labelling in below-ground components and fluxes in this study with those obtained previously using a single release of 13 CO 2 in a full-scale study conducted in 2007 and in a pilot study made in 2006 . Based on the amounts of 13 CO 2 added and a change in chamber volume, we predicted a fourfold increase in label in target organisms and processes between 2007 and 2012. In studies of this kind, it is often overlooked that roots of un-labelled trees occur under the canopy of the labelled tree or labelled group of trees. This neglect is based on the assumption that the distribution of the root system of a tree can be predicted as the vertical projection of its crown, which is not correct since root systems often overlap considerably (e.g., ). Ignoring this fact results in a mismatch between the above- and below-ground C budgets in studies employing 13 CO 2 labelling. Using data on the horizontal extent of tree roots, we elaborate on how the influence of roots from un-labelled trees outside the chambers varies depending on the size of the chambers used. Site studied We studied a young, naturally regenerated boreal P. sylvestris L. forest also studied by . It is located 60 km NW of Umeå, Sweden, at Åheden (64°14′N, 19°46′E, at 175 m a.s.l.). The soil is podzolized coarse silt. It has a 1–3 cm thick organic mor-layer with a C:N ratio of 37 and a pH H2O of 4.4. Trees were, on average, 3.3 m tall and had a diameter at breast height of ~4 cm; larger trees were close to 5 m tall. Some of the trees had cones and were, in that respect, mature.
Method of labelling We established three plots in late July 2012. For 13 C labelling, we later raised 5 m tall, octagonal plastic chambers over two of the plots, each covering a 50 m 2 patch of the forest ecosystem; each chamber thus enclosed a volume of 250 m 3 . Their design, temperature control, air circulation, etc., were described in detail in and . The third plot was not labelled but served as a control for measuring background variations in the natural abundance of 13 C. Such measures are important in studies using a low level of labelling, but much less so when a high level of labelling is used. Of the two plots to be 13 C-labelled, one was treated with nitrogen (N) by adding the equivalent of 150 kg N ha −1 as Ca(NO 3 ) 2 pellets on 27 July, i.e., 3 weeks before the 13 C labelling. The purpose was to compare the effects of N on 13 C distribution with those in previous studies . However, in those studies, effects of N on C allocation were not observed in the short term (first month) but were profound after a year . Given the lack of immediate effects of N on below-ground C allocation, we here used the 13 C-labelled N plot as a replicate of the 13 C-labelled plot. In the pilot study in 2006, we used a single release of 5 L of 13 CO 2 at ≥95 atom% 13 C, which resulted in 3.7 atom% 13 CO 2 in the chamber air and a [CO 2 ] of ~360 parts per million (p.p.m. or μmol mol −1 ) directly after the release . In the full-scale study in 2007, we used a single release of 25 L of 13 CO 2 at 99 atom% 13 C into each chamber . This resulted in an overall enrichment of ~17 atom% 13 CO 2 in the chamber air and a [CO 2 ] of ~500 p.p.m. directly after the release . Subsequently, uptake of tracer, along with dilution by respiratory release of un-labelled CO 2 from plants and soil organisms, led to a decrease down to 280–375 p.p.m. during labelling periods of 1.5–3.5 h.
Despite variations in the duration of labelling in chambers run in parallel on the same day, or one or a few days later, plant uptake of 13 C varied little, with 6.9 ± 0.7 g 13 C in unfertilized control plots ( N = 4) and 7.0 ± 0.4 g 13 C in N-fertilized plots ( N = 4). These small differences were the result of our decision to adjust the duration of labelling individually for each chamber. We thereby took into account the decline in photosynthetic uptake of CO 2 as the [CO 2 ] in the chamber air decreased , which causes the rate of CO 2 uptake to approach the rate at which plant and soil respiration simultaneously add CO 2 to the chamber air. Note that the chambers used in 2006 and 2007 were 4 m tall, as compared with the 5 m tall chambers used in 2012. Here, we tested whether it was possible to obtain a larger traceable pulse of 13 C by making several sequential releases of tracer and by doubling the duration of the labelling period. Based on the forecast by the Swedish Meteorological and Hydrological Institute, we pre-selected a cloud-free day for the labelling. This started on the morning of 17 August 2012. We placed the plastic chambers over the two plots and then released five consecutive 25 L pulses of 13 CO 2 (99 atom% 13 C, Cambridge Isotope Laboratories, Inc, Tewksbury, MA, USA) 45 min apart into the chambers. We monitored atom% 13 CO 2 and [CO 2 ] inside the chambers using wavelength-scanned cavity ring-down spectroscopy (Picarro G1101i, Picarro, Sunnyvale, California, USA). With two chambers but only one instrument, we alternated the readings between the chambers, causing 10–20 min long gaps in the readings . Comparison with previous studies For comparison with the previous study by and , we report the ratio between the maximum label in August 2012 (this study) and the maximum observed in August 2007 .
The maximum value in this context was the highest mean value based on the two plots, from which we took three (soil respiration) to five (ectomycorrhizal (ECM) root tips, microbial cytoplasm C and PLFA) replicates per plot on each sampling day (3, 4, 7, 14 and 21 days after labelling). We thus compared the amount of label found in the below-ground components by calculating the ratio between the 2012 maximum and the 2007 maximum, using the highest mean values from the two plots labelled in August 2012 and the four plots labelled in August 2007. We also extended the comparison to include the pilot study conducted in 2006 at Rosinedalsheden . Sampling and analyses For comparison with the net ecosystem exchange (NEE) estimated at Rosinedalsheden (64°09′N, 19°05′E, 145 m above sea level) in 2007 , we used the initial CO 2 draw-down rates at 475 p.p.m. CO 2 after each release of 13 CO 2 to calculate a mean per plot. The NEE was calculated as described by . We also compared the soil respiratory efflux from the inner 10 m 2 in this study with data from 2007 . Data on NEE and soil respiratory efflux (the sum of root and heterotrophic respiration) are important parameters to consider when plots from different locations and years are compared; if these parameters differ between studies, the comparison cannot be made with confidence. For studies of below-ground processes and biota, we used the central 10 m 2 of the labelled plots to minimize the influence of C from un-labelled trees outside the 50 m 2 plots. The 13 C abundance of the ECM root tips, soil respiratory efflux, microbial cytoplasm C and PLFA biomarkers for soil microorganisms was determined multiple times during the month after labelling, using methods described previously . In brief, ECM root tips (average diameter <0.3 mm) were extracted from fresh soil samples on the day of sampling, cleaned under a dissecting microscope and freeze-dried.
ECM root tips were analysed on an elemental analyser (EA) coupled to an IRMS (Europa Scientific Ltd, Crewe, UK). Soil respiratory efflux was sampled using cylindrical 0.046 m 2 head spaces. Five gas samples taken at 2-min intervals were analysed on a gas chromatography IRMS (Europa Scientific Ltd, Crewe, UK). The δ 13 C value of the soil respiratory efflux was estimated using the Keeling plot method (see ). The δ 13 C abundance of microbial cytoplasm was determined by the chloroform fumigation-extraction method , followed by EA-IRMS analysis of freeze-dried soil extracts, whereas in the previous studies the C in salt extracts was wet-oxidized to CO 2 using dichromate and then analysed by GC-IRMS. PLFAs were extracted and analysed at James Hutton Limited (Aberdeen, Scotland, UK) following the ‘Bligh and Dyer’ single-phase chloroform:methanol:water extraction method, as modified by . The δ 13 C values of PLFAs were analysed on a compound-specific IRMS . Taking tree root distribution into account The relative contribution of roots from un-labelled trees outside the chamber to soil processes and biota inside the labelled plot was calculated based on (i) the radius of the chamber (assuming an approximately circular chamber) and (ii) the relative distribution of root C input as a function of distance from the stem .
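As a brief aside on the efflux analysis above: the Keeling plot exploits two-source mixing, so the measured δ 13 C of head-space air is a linear function of 1/[CO 2 ], and the intercept estimates the δ 13 C of the respiratory source. A minimal sketch with synthetic, illustrative values (not data from this study):

```python
import numpy as np

# Two-source mixing: background air (C_bg ppm at d_bg permil) plus respired
# CO2 carrying the source signature d_src. In a labelled plot the efflux is
# strongly 13C-enriched; all numbers here are illustrative only.
d_src, C_bg, d_bg = 250.0, 400.0, -8.5
C = np.array([450.0, 550.0, 700.0, 900.0, 1200.0])  # head-space [CO2] build-up
d_obs = (C_bg * d_bg + (C - C_bg) * d_src) / C      # mass-balance mixing line

# Keeling plot: regress observed delta against 1/[CO2]; the intercept
# (the limit 1/[CO2] -> 0) recovers the source signature d_src.
slope, intercept = np.polyfit(1.0 / C, d_obs, 1)
print(round(intercept, 2))
```

Because the synthetic data lie exactly on the mixing line, the fitted intercept returns the assumed source value.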
Based on observations in nearby pine forest stands , we assumed that the relative root biomass density ( D ( r ) , m −2 ) decreases with distance from the stem ( r ) up to a maximum distance r t (maximum root length) according to:

(1) \begin{equation*} D\left(r,{r}_t\right)=1-\frac{1}{\ln (2)}\,\ln \left(1+\frac{r}{r_t}\right) \end{equation*}

Because of the radial symmetry of the chamber and of the root spatial distribution, a location in the chamber is fully described by its distance from the centre. We therefore used polar coordinates to describe the geometry of the chamber, the tree roots and their distribution . At each focal point x , roots arrive from all angles a and from all distances r , where r < r t . For r < y (the distance from the focal point to the chamber wall along direction a ), the source tree stands inside the chamber, whereas for r > y it stands outside, which happens only for a > a s .
Thus, we need to determine y and a s , where ( q , z ) is the point at which the ray from the focal point meets the chamber wall:

(2) \begin{equation*} {z}^2+{q}^2={r_c}^2 \end{equation*}

(3) \begin{equation*} {\left(x-q\right)}^2+{z}^2={y}^2 \end{equation*}

(4) \begin{equation*} \frac{x-q}{y}=\cos a \end{equation*}

Combining the above equations gives:

(5) \begin{equation*} {\left(y\cos a\right)}^2+{r_c}^2-{\left(x-y\cos a\right)}^2={y}^2 \end{equation*}

which is solved for y .
Because y cannot be larger than r t we get:

(6) \begin{equation*} y\left(x,a,{r}_t,{r}_c\right)=\min \left(x\cos a+\sqrt{x^2\left({\cos}^2 a-1\right)+{r_c}^2},\;{r}_t\right) \end{equation*}

Because y = r t when a = a s we get:

(7) \begin{equation*} {r}_t=x\cos{a}_s+\sqrt{x^2\left({\cos}^2{a}_s-1\right)+{r_c}^2} \end{equation*}

which is solved for a s :

(8) \begin{equation*} {a}_s\left(x,{r}_t,{r}_c\right)=\begin{cases}\pi -\arccos \left(\dfrac{{r_c}^2-{r_t}^2-{x}^2}{2\,{r}_t\,x}\right) & \text{for } x>\left|{r}_c-{r}_t\right| \\[6pt] 0 & \text{for } x<\left|{r}_c-{r}_t\right|\end{cases} \end{equation*}

To calculate the fraction of roots coming from outside the chamber at a point x in the chamber, R ( x ), we integrated the contributions from outside the chamber and divided by the total contributions (from inside and outside) from all distances ( r < r t ) and all angles (0 <
a < π). This calculation also accounts for the proportional increase in contributing area with distance r : for a given point in the chamber, root contributions at distance r come from a circle around that point, and the larger the distance, the larger the circle.

(9) \begin{equation*} R\left(x,{r}_t,{r}_c\right)=\frac{\int_{{a}_s\left(x,{r}_t,{r}_c\right)}^{\pi}\int_{y\left(x,a,{r}_t,{r}_c\right)}^{{r}_t}r\,D\left(r,{r}_t\right)\,dr\,da}{\int_0^{\pi}\int_0^{{r}_t}r\,D\left(r,{r}_t\right)\,dr\,da} \end{equation*}

When the chamber radius ( r c ) is larger than the maximum root length ( r t ), there is, of course, no contribution in the centre of the chamber from trees outside it ( x < r c − r t ).
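Equations (1), (6) and (9) can be evaluated numerically. The sketch below (our own illustrative Python implementation, not the authors' code) uses midpoint quadrature and masks r > y instead of starting the angular integral at a s , which is equivalent because y = r t for a < a s :

```python
import numpy as np

def root_density(r, r_t):
    # Eq. (1): relative root biomass density at distance r from a stem
    return 1.0 - np.log(1.0 + r / r_t) / np.log(2.0)

def y_max(x, a, r_t, r_c):
    # Eq. (6): distance from a point x off-centre to the chamber wall along
    # direction a, capped at the maximum root length r_t
    s = np.sqrt(np.maximum(x**2 * (np.cos(a)**2 - 1.0) + r_c**2, 0.0))
    return np.minimum(x * np.cos(a) + s, r_t)

def outside_fraction(x, r_t, r_c, n=500):
    # Eq. (9): fraction of root input at distance x from the chamber centre
    # that is contributed by trees standing outside a chamber of radius r_c
    a = (np.arange(n) + 0.5) * np.pi / n   # angle midpoints over (0, pi)
    r = (np.arange(n) + 0.5) * r_t / n     # radius midpoints over (0, r_t)
    A, Rr = np.meshgrid(a, r, indexing="ij")
    w = Rr * root_density(Rr, r_t)         # integrand r * D(r, r_t)
    outside = np.where(Rr > y_max(x, A, r_t, r_c), w, 0.0).sum()
    return outside / w.sum()               # the dr*da step sizes cancel
```

With, for example, r c = 4 m (roughly a 50 m 2 chamber) and r t = 3 m, the fraction is zero at the centre and grows towards the chamber wall.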
Such measures are important in studies using a low level of labelling but much less so when a high level of labelling is used. Of the two plots to be 13 C-labelled, one was treated with nitrogen (N) by adding the equivalent of 150 kg N ha −1 as Ca(NO 3 ) 2 in the form of pellets on 27 July, i.e., 3 weeks before the 13 C labelling. This had the purpose of comparing the effects of N on 13 C distribution with those in previous studies . However, in these studies, effects of N on C allocation were not observed in the short term (first month) but were profound after a year . Given the lack of immediate effects of N on below-ground C allocation, we here used the 13 C-labelled N plot as a replicate of the 13 C-labelled plot. In the pilot study in 2006, we used a single release of 5 L of 13 CO 2 at ≥95 atom% 13 C, which resulted in 3.7 atom% 13 CO 2 in the chamber air and a [CO 2 ] of ~360 parts per million (p.p.m. or μmol mol −1 ) directly after the release . In the full-scale study in 2007, we used a single release of 25 L of 13 CO 2 at 99 atom% 13 C into each chamber . This resulted in an overall enrichment of ~17 atom% 13 CO 2 in the chamber air and a [CO 2 ] of ~500 p.p.m. directly after the release . Subsequently, uptake of tracer along with dilution by respiratory release of un-labelled CO 2 from plants and soil organisms led to a decrease down to 280–375 p.p.m. during labelling periods of 1.5–3.5 h. Despite variations in time duration of labelling in chambers ran in parallel the same day or one or a few days later, plant uptake of 13 C varied little, with 6.9 ± 0.7 g 13 C in unfertilized control plots ( N = 4) and 7.0 ± 0.4 g 13 C in N-fertilized plots ( N = 4). These small differences were the result of our decisions to adjust the duration periods of labelling individually for each chamber. 
Thus, we took into account the decline in photosynthetic uptake of CO 2 when the [CO 2 ] in the chamber air decreased , which causes the rate of CO 2 uptake to approach the rate of plant and soil respiration simultaneously adding CO 2 to chamber air. Note that the chambers used in 2006 and 2007 were 4 m tall as compared with the 5 m tall chambers used in 2012. Here, we tested whether it was possible to obtain a larger traceable pulse of 13 C by several sequential releases of tracer and by doubling the duration of the labelling period. Based on the forecast made by the Swedish Meteorological and Hydrological Institute we pre-selected a cloud-free day for the labelling. This started on the morning of 17 August 2012. We placed plastic chambers over two plots. We then released five consecutive 25 L 13 CO 2 (99 atom% 13 C, Cambridge Isotope Laboratories, Inc, Tewksbury, MA, USA) pulses 45 min apart into the chambers. We monitored atom% 13 CO 2 and [CO 2 ] inside the chambers using wave-length scanner cavity ring-down spectroscopy (Picarro G1101i, Picarro, Sunnyvale, California, USA). With two chambers, but only one instrument, we shifted the readings between the chambers, causing 10–20 min long gaps in the readings . For comparison with the previous study by and , we report the ratio between the maximum label in August 2012 (this study) as compared with the maximum observed in August 2007 . The maximum value in this context was the highest mean value based on the two plots, from which we took three (soil respiration) to five (ectomycorrhizal (ECM) root tips, microbial cytoplasm C and PLFA) replicates per plot and days of sampling (at 3, 4, 7, 14 and 21 days) after labelling. We thus compare the amount of labels found in the below ground components by calculating the ratio between maximum in 2012 and maximum in 2007 using the highest mean values from the two plots labelled in August 2012 and four plots labelled in August 2007. 
We also extended the comparison to include the pilot study conducted in 2006 at Rosinedalsheden . For comparison with the net ecosystem exchange (NEE) estimated at Rosinedalsheden (64°09′N, 19°05′E, 145 m above sea level) in 2007 , we used the initial CO 2 draw-down rates at 475 p.p.m. CO 2 after each release of 13 CO 2 to calculate a mean per plot. The NEE was calculated as described by . We also compared the soil respiratory efflux from the inner 10 m 2 in this study with data from 2007 . Data on NEE and soil respiratory efflux (the sum of root and heterotrophic respiration) are important parameters to consider if plots from different locations and years are compared. If these parameters differ between studies, the comparison cannot be made with confidence. For studies of belowground processes and biota, we used the central 10 m 2 of the labelled plots to minimize the influence of C from un-labelled trees outside the 50 m 2 plots. The 13 C abundance of the ECM root tips, soil respiratory efflux, microbial cytoplasm C and PLFA biomarkers for soil microorganisms were determined multiple times during the month after labelling using methods described previously . In brief, ECM root tips (average diameter <0.3 mm) were extracted from fresh soil samples on the day of sampling, cleaned under a dissecting microscope and freeze-dried. ECM root tips were analysed on an elemental analyser (EA) coupled to an IRMS (Europa Scientific Ltd, Crewe, UK). Soil respiratory efflux was sampled using cylindrical 0.046 m 2 head spaces. Five gas samples sampled at 2-min intervals were analysed on a gas chromatography IRMS (Europa Scientific Ltd, Crewe, UK). The δ 13 C value of the soil respiratory efflux was estimated using the Keeling plot method (see ). 
The δ 13 C abundance of microbial cytoplasm was determined by the chloroform fumigation-extraction methodology , followed by EA-IRMS analysis of freeze-dried soil extracts, whereas in the previous studies, the C in salt extracts were wet-oxidized to CO 2 using dichromate and then analysed by GC-IRMS. PLFAs were extracted and analysed at James Hutton Limited (Aberdeen, Scotland, UK) following the methods of ‘Bligh and Dyer’ single phase chloroform:methanol:water extraction system as modified by . The δ 13 C values of PLFAs were analysed on a compound-specific IRMS . The relative contribution of roots from un-labelled trees outside the chamber to soil processes and biota inside the labelled plot was calculated based on (i) the radius of the chamber (assuming an approximately circular chamber) and (ii) the relative distribution of root C input as a function of distance from the stem . Based on observations in nearby pine forest stands , we assumed that the relative root biomass density ( D ( r ) , m −2 ) decreases with distance from the stem ( r ) up to a maximum distance r t (root length) according to: (1) \documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{upgreek} \usepackage{mathrsfs} \setlength{\oddsidemargin}{-69pt} \begin{document} \begin{equation*} D\left(r,{r}_t\right)=1-\frac{1}{\ln (2)}\ \ln \left(1+\frac{r}{r_t}\right) \end{equation*}\end{document} Because of the radial symmetry of the chamber and the root spatial distribution, we only needed to consider a location in the chamber in terms of its distance from the centre. Thus, we used polar coordinates to describe the geometry of the chamber, the tree roots and their distribution . At each focal point x , roots come from all angles, a , and from all distances r , where r < r t . 
For r < y , roots come from both inside and outside trees, whereas for r > y , roots come only from outside trees, which only happens for a < a s . Thus, we need to determine y and a s as follows: (2) \documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{upgreek} \usepackage{mathrsfs} \setlength{\oddsidemargin}{-69pt} \begin{document} \begin{equation*} {z}^2+{q}^2={r_c}^2 \end{equation*}\end{document} (3) \documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{upgreek} \usepackage{mathrsfs} \setlength{\oddsidemargin}{-69pt} \begin{document} \begin{equation*} {\left(x-q\right)}^2+{z}^2={y}^2 \end{equation*}\end{document} (4) \documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{upgreek} \usepackage{mathrsfs} \setlength{\oddsidemargin}{-69pt} \begin{document} \begin{equation*} \frac{x-q}{y}=\cos a \end{equation*}\end{document} Based on the above equations we get: (5) \documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{upgreek} \usepackage{mathrsfs} \setlength{\oddsidemargin}{-69pt} \begin{document} \begin{equation*} {\left(\mathrm{y}\ \cos (a)\right)}^2+{r_c}^2-{\left(x-\mathrm{y}\ \cos (a)\right)}^2={y}^2 \end{equation*}\end{document} which is solved for y . 
Because y cannot be larger than r t we get: (6) \documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{upgreek} \usepackage{mathrsfs} \setlength{\oddsidemargin}{-69pt} \begin{document} \begin{align*} & y\left(x,a,{r}_t,{r}_c\right) \nonumber \\ &\quad =\min \left(\mathrm{x}\ \cos (a)+\sqrt{\left(\ {x}^2\ \left(\cos{(a)}^2-1\right)+{r_c}^2\right)},{r}_t\right) \end{align*}\end{document} Because y = r t when a = a s we get (7) \documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{upgreek} \usepackage{mathrsfs} \setlength{\oddsidemargin}{-69pt} \begin{document} \begin{equation*} {r}_t=\mathrm{x}\ \cos \left({a}_s\right)+\sqrt{\left(\ {x}^2\ \left(\cos{\left({a}_s\right)}^2-1\right)+{r_c}^2\right)} \end{equation*}\end{document} which is solved for a s : (8) \documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{upgreek} \usepackage{mathrsfs} \setlength{\oddsidemargin}{-69pt} \begin{document} \begin{equation*} {a}_s\left(x,{r}_t,{r}_c\right)={\displaystyle \begin{array}{@{}c@{\,}c@{}}\pi - acos\left(\frac{1}{2\ {r}_t\ x}\left({r_c}^2-{r_t}^2-{x}^2\right)\right)& \kern0.5em for \left\lceil \begin{array}{c}x>{r}_c-{r}_t\kern0.5em if\ {r}_c>{r}_t\\{}x>{r}_t-{r}_c\kern0.5em if\kern0.5em {r}_c<{r}_t\end{array}\right\rceil \\{}0& \kern0.5em for \left\lceil \begin{array}{c}x<{r}_c-{r}_t\kern0.5em if\ {r}_c>{r}_t\\{}x<{r}_t-{r}_c\kern0.5em if\kern0.5em {r}_c<{r}_t\end{array}\right\rceil \end{array}} \end{equation*}\end{document} To calculate the fraction of roots coming from outside the chamber, R , at a point x in the chamber, R ( x ), we integrated the contributions from outside the chamber and divided by the total contributions (from inside and outside) from all distances ( r < r t ) and all angles (0 < 
a < π). This calculation also accounts for the proportional increase in contributing area with distance r: for a given point in the chamber, root contributions come from a circle around this point, and the larger the distance from the point, the larger the circle.

$$ R\left(x,r_t,r_c\right)=\frac{\int_{a_s\left(x,r_t,r_c\right)}^{\pi}\int_{y\left(x,a,r_t,r_c\right)}^{r_t} r\,D\left(r,r_t\right)\,dr\,da}{\int_{0}^{\pi}\int_{0}^{r_t} r\,D\left(r,r_t\right)\,dr\,da} \tag{9} $$

When the chamber radius (r_c) is larger than the maximum root length (r_t), there is, of course, no contribution in the centre of the chamber from trees outside it: x < r_c − r_t.

Concentrations of CO2 and atom% 13CO2 during labelling

During labelling, the average [CO2] in both chambers was 367 p.p.m., although it varied from 480 p.p.m. to 205 p.p.m. If the final CO2 draw-down period after the fifth release is excluded, the average [CO2] values were 408 and 404 p.p.m. in the un-fertilized and N-fertilized plots, respectively, which is close to the diurnal average concentration in ambient air of ~400 p.p.m. in 2012. The initial variations in [CO2] inside the chambers of ~100 p.p.m. reflected sequential releases of 13CO2 followed by periods of rapid net uptake. These variations in [CO2] compare with diurnal variations of up to 50 p.p.m. within and below the canopy of 20-m tall forest during mid-August conditions at the ICOS (Integrated Carbon Observation System) tower at Svartberget, 2.5 km north of the site ( www.icos-sweden.se ).
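Stepping back to the chamber-edge geometry, the double integral in Eq. (9) is straightforward to evaluate numerically. The sketch below is illustrative only: the root-density function D(r, r_t) and the parameter values (maximum root length, chamber radius) are assumptions made for this example, not those used in the study.

```python
import math

R_T = 5.0  # assumed maximum root length (m); roots reached ~5 m in nearby stands
R_C = 4.0  # assumed chamber radius (m); a ~50 m^2 footprint has a radius near 4 m

def root_density(r, r_t):
    """Assumed root-density function D(r, r_t), for illustration only:
    linear decline to zero at the maximum root length."""
    return max(0.0, 1.0 - r / r_t)

def y_wall(x, a, r_t=R_T, r_c=R_C):
    """Eq. (6): distance from a point at radius x to the chamber wall along
    direction a (a = 0 points through the centre), capped at r_t."""
    disc = x * x * (math.cos(a) ** 2 - 1.0) + r_c * r_c
    return min(x * math.cos(a) + math.sqrt(disc), r_t)

def a_start(x, r_t=R_T, r_c=R_C):
    """Eq. (8): angle below which no root from outside the chamber reaches x."""
    if x <= abs(r_c - r_t):
        return 0.0
    arg = (r_c * r_c - r_t * r_t - x * x) / (2.0 * r_t * x)
    return math.pi - math.acos(max(-1.0, min(1.0, arg)))  # clamp for float safety

def _radial_integral(lo, r_t, n):
    """Midpoint rule for the inner integral of r * D(r, r_t) from lo to r_t."""
    if lo >= r_t:
        return 0.0
    h = (r_t - lo) / n
    return h * sum((lo + (i + 0.5) * h) * root_density(lo + (i + 0.5) * h, r_t)
                   for i in range(n))

def fraction_from_outside(x, r_t=R_T, r_c=R_C, n=200):
    """Eq. (9): share of root contributions at x coming from outside the chamber."""
    ha = math.pi / n
    a_s = a_start(x, r_t, r_c)
    num = sum(_radial_integral(y_wall(x, (j + 0.5) * ha, r_t, r_c), r_t, n) * ha
              for j in range(n) if (j + 0.5) * ha > a_s)
    den = math.pi * _radial_integral(0.0, r_t, n)
    return num / den

# The "outside" fraction grows from the chamber centre towards its edge.
print(fraction_from_outside(0.5), fraction_from_outside(3.9))
```

With r_t > r_c, even the chamber centre receives some contribution from outside, consistent with the remark above that the zero-contribution region x < r_c − r_t exists only when the chamber radius exceeds the maximum root length.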
Clouds can add short-term variations by instantly reducing the rate of photosynthesis without affecting the rate of soil respiration, hence increasing [CO2] in and below the canopy. Our prime objective was to obtain a high labelling of the below-ground flux of C while keeping the [CO2] within reasonable levels. As discussed above, the average [CO2] inside the chambers was close to the ambient outside. In contrast, had we released all 125 L of 13CO2 instantaneously, the [CO2] would have exceeded 1000 p.p.m., i.e., 2.5 times the ambient. It is not known whether such a temporal anomaly has consequences, but it is good practice to avoid such uncertainties. We consider that the experimentally induced short-term deviations from the natural dynamics of [CO2] during 4.0–4.5 h have no relevant impact on biota and processes in the soil, in which labelled C is observed 3–4 days later and onwards. In both chambers, the CO2 was 23 atom% 13C directly after the first release of labelled CO2 and declined until the subsequent release of tracer. Each additional release increased the atom% 13C of the CO2, up to the maximum of 61 atom% after the fifth and final release in this experiment. The fifth release was followed by a period of draw-down towards 200 p.p.m. CO2 to achieve high assimilation of the label. The full sequence from the first labelling to the completion of the labelling took 40 min longer in the N-fertilized chamber than in the control chamber, possibly because of a lower needle biomass. The average 13C abundance of the CO2 was 42.1 and 41.3 atom% in the control and N-fertilized chambers, respectively. This is roughly four times higher than during the experiments conducted by . For each release of 25 L of 13CO2, the increase in atom% 13C of the CO2 in the chambers became progressively smaller.
This is expected, since respiration of un-labelled CO2 restricts the maximum atom% 13C of the CO2 that can be obtained in the chambers in a pulse-labelling experiment of this kind. We fitted an exponential equation to the data from the five additions in this experiment:

$$ y=a\left(1-e^{-bx}\right) \tag{10} $$

where y is atom% 13C in CO2, x is the litres of 13CO2 added, and a and b are constants. We found a good fit (adjusted R² > 0.999) for the formula:

$$ y=68.56\left(1-e^{-0.017x}\right) $$

Hence, a maximum of 68.56 atom% 13C could be obtained in the chambers under the prevailing experimental conditions. We reached 61 atom% 13C. Approaching the calculated maximum would require accepting diminishing returns on investments in tracer.

Comparison with previous studies

For a comparison with the previous studies to be of interest, it is essential that the forests studied at Åheden and Rosinedalsheden are similar (note that the studies by and were conducted in the forest at Rosinedalsheden). They are both boreal P. sylvestris forests growing on N-poor soils and under climatic conditions of a short summer; snowmelt peaks in late April. Net photosynthesis should peak in July, and a major increase in belowground C allocation occurs in late summer. We note that the age of the trees and soil characteristics like pH, C/N ratio and respiration are similar, and especially that the ecosystem NEE is similar.
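The fitted saturation curve (Eq. (10) with the reported constants) can be checked by direct evaluation; this short Python sketch is ours, not part of the study's analysis:

```python
import math

def atom_percent_13c(litres, a=68.56, b=0.017):
    # Eq. (10) with the fitted constants: y = a * (1 - exp(-b * x)).
    return a * (1.0 - math.exp(-b * litres))

# Predicted atom% 13C of chamber CO2 after each cumulative 25 L release.
for added in (25, 50, 75, 100, 125):
    print(added, round(atom_percent_13c(added), 1))

# The first release predicts ~23.7 atom% (23 atom% was observed directly
# after it), and the full 125 L predicts ~60.4 atom%, close to the observed
# 61 atom% and well short of the 68.56 atom% asymptote.
```

The curve makes the diminishing returns explicit: each successive 25 L buys a smaller increment as the asymptote is approached.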
reported that NEE at midday in the summer of 2007 in the eight plots (four N-fertilized, four unfertilized) studied by at Rosinedalsheden was 1.08 ± 0.06 g CO2 m⁻² h⁻¹ at ~475 p.p.m. CO2. Our estimate for the two plots studied here was 1.16 ± 0.05 g CO2 m⁻² h⁻¹, i.e., close to the mean observed in the previous study. Further, the rates of soil respiration (which is the sum of heterotrophic respiration and tree belowground respiration) were 0.39 ± 0.02 g CO2 m⁻² h⁻¹ at Rosinedalsheden as compared with 0.42 ± 0.03 g CO2 m⁻² h⁻¹ at Åheden. This means that the CO2 exchange differed little between the two sites. Hence, we considered it appropriate to compare the studies despite the distance of 7.2 km between them. Thus, we compare the labelling of three below-ground components and of the soil respiratory efflux as obtained in this study with those in the previous full-scale study and in the pilot study. We note that it took roughly the same time before the maximum labelling of belowground components occurred in the studies, with a range from 3 to 14 days, depending on the object of study. Here, we mainly focus on comparing our results with those from the previous full-scale study. Using the high-tracer labelling approach, the labelling was 5.5 times higher in ECM roots, 2.0 times higher in microbial cytoplasm, 3.6 times higher in the PLFA biomarker 18:2ω6,9 for ECM fungal mycelium, and 3.5 times higher in the soil respiratory efflux. These figures are broadly consistent with the four-fold increase expected from the four-fold higher 13CO2 concentration during labelling. In , we also make a comparison with the results of the very low tracer addition, 5 L of 13CO2, used in a pilot study at Rosinedalsheden in August 2006. This reveals the major differences in labelling of below-ground components after adding 5, 25 or 125 L of 13CO2.
The results also reflect that the chambers were only 4 m tall in 2006 and 2007 as compared with 5 m tall here, which caused the concentrations of tracer to deviate from the relations of 1:5:25 expected if the chambers had been of equal volume. Taking the differences in chamber volume into account, the expected relations are 1:5:20, i.e., a four-fold higher labelling in 2012 as compared with 2007. Frequent sequential releases of 13CO2 result in a broader peak of labelling in target organisms and processes as compared with a single release of tracer. This affects calculations of turnover rates in above-ground components like needles and phloem sap, in particular, but less so in below-ground components, where the tracer reaches a maximum 3–14 days after labelling.

Taking the influence of un-labelled roots into account

The size of chambers, especially their width, is of crucial importance for studies that aim to reflect ecosystem-scale processes, in particular when the labelling of above- and below-ground components is described in detail. The fact that tree roots commonly extend much further from their stems than do the branches is a complication. In nearby P. sylvestris forest stands, tree roots reached ~5 m from tree stems, such that any circular area of 1 m² was occupied by the roots of ~10 trees. also reported that most of the labelled root activity was within 4–5 m of tree stems in a temperate pine forest. With 730 stems ha⁻¹ in their study, the calculated average distance between stems should be 4 m, which means that root systems must have overlapped. We used octagonal chambers covering 50 m², in which we sampled soils, roots and soil biota in the central 10 m². Thus, we conclude that roots of un-labelled trees outside the chambers had an influence on the area studied.
In the following discussion, we assume that the biomass of active roots of trees from outside the chamber decreases towards the centre of labelled plots in the way shown by 15N tracer in and . We also assume that the central 10 m², from which we took samples of tree roots and other soil biota, is circular. Based on these assumptions, we estimate in Eqs. that ~15% of the soil biota and associated soil processes in the central 10 m² is affected by C from un-labelled trees outside the chamber ( and ). Hence, while aboveground parts of trees within the 50 m² of the chamber were all labelled, roots and other biota sampled by soil coring in the central 10 m² would be 85% labelled. Just inside the margin of labelled plots, 60% of the C would come from roots of un-labelled trees and only 40% of the C in below-ground biota and processes would come from labelled trees. Importantly, reducing the area of the patch with labelled trees increases the contributions of un-labelled trees to soil biota and processes. A physical barrier, trenching, would hinder this, but would introduce an undesired input of un-labelled C from dying roots and root-associated organisms, and would also affect the trees inside by severing their roots extending outside the chamber. A barrier would also disturb below-ground interactions among trees. The presence of un-labelled roots from trees outside the chamber confounds attempts to match the above-ground and below-ground C budgets. One can reduce the problem of the impact of roots of un-labelled trees by increasing the area covered by chambers. However, adding 1 m of radius to the 4 m adds 56% to the volume of the chambers, and increasing the 5 m height by 1 m adds 20% to the volume, increasing the quantity of label needed accordingly. Furthermore, a larger chamber volume demands more energy for cooling. At this remote location, we used a mobile diesel-driven engine with a capacity to produce 35 kW.
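Several scaling figures in this and the preceding section (the 1:5:20 tracer relation, and the +56% and +20% volume increments) follow from simple chamber geometry; treating the chambers as prisms over their ~50 m² footprint is our simplifying assumption in this quick arithmetic sketch:

```python
# Tracer relations: 5 L and 25 L of 13CO2 went into 4 m tall chambers
# (2006/2007), while 125 L went into the 5 m chambers used here, so the
# expected concentration relations shift from 1:5:25 to 1:5:20.
doses = (5.0, 25.0, 125.0)                 # L of 13CO2 released
volumes = (50 * 4.0, 50 * 4.0, 50 * 5.0)   # m^3 per chamber (area * height)
conc = [d / v for d, v in zip(doses, volumes)]
relations = [round(c / conc[0]) for c in conc]
print(relations)                           # → [1, 5, 20]

# Enlarging chambers: volume scales with the square of the radius and
# linearly with height, so +1 m on a ~4 m radius adds (5/4)^2 - 1 = 56%,
# while +1 m on a 5 m height adds 6/5 - 1 = 20%.
print(round(((5 / 4) ** 2 - 1) * 100))     # → 56
print(round((6 / 5 - 1) * 100))            # → 20
```

The asymmetry explains why widening chambers, the change that would most reduce the influence of un-labelled roots, is also the most expensive in tracer and cooling capacity.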
Cooling of the two chambers required ~25 kW under full sunlight at midday (25 °C), i.e., 0.05 kW per m³ of chamber air.

Concluding remarks

With our approach of pulsed tracer release, we achieved a significantly higher 13C labelling of different below-ground compartments compared with single-pulse labelling, while keeping [CO2] at reasonably low levels. Since higher labelling was found in all compartments investigated, we assume that this should be the case for other compounds and organism groups as well. Recent developments in molecular biology have opened up new opportunities to identify soil organisms and to study gene expression (e.g., ). If such techniques are combined with stable isotope probing (SIP), it becomes possible to couple the taxonomic specificity of molecular biomarkers (e.g., PLFAs, DNA and RNA) to quantitative measures of ecosystem processes based on SIP. This step requires a high level of labelling, which, until now, has been possible only under laboratory conditions or by using small plants in the field. However, field-scale labelling with trees is desirable from many points of view. Such experiments involve the soil microbial community of interest unaltered by experimental installations and can encompass seasonal variations in tree belowground C flux. Thus, results are directly translatable to the ecosystem level. As we show here, multiple releases of labelled CO2 are a useful method to achieve high labelling of soil processes and organisms under natural conditions in the field. We also highlighted the need to consider the role of un-labelled C from trees outside the chamber. Further improvements are possible (e.g., by increasing the labelled area), but the wish to maintain an undisturbed system, natural levels of [CO2] and a short pulse of tracer places limits on the level of tracer labelling that can be obtained in target organisms, compounds and processes.
Evaluating the quality and equity of patient hospital discharge instructions

The transition from hospital to home is often challenging for patients and their caregivers. Information exchange between inpatient and outpatient providers is notoriously slow, incomplete, and error prone, placing the onus on the patient to understand the events of hospitalization and the care plan once home. Patient understanding of discharge instructions is necessary for patients to assume self-care responsibilities and is tied to both improved clinical outcomes and patient satisfaction. But studies over the past three decades show that patients often do not have a clear understanding of provider recommendations within even a few days of discharge, and routinely overestimate their understanding, placing them at higher risk for adverse medication events and hospital readmission. Written discharge instructions are an evidence-based practice shown to improve patient understanding. Discharge instructions should be tailored to each patient's needs, document provider recommendations such as medication changes and diet, and be provided in language understandable to the patient. Despite their importance, only a handful of studies have evaluated the quality of hospital discharge instructions. Most have been limited to the pediatric population, which might underestimate the medical complexity of adult discharges, and report the presence or absence of certain content domains without judging the quality of the information presented. Providing high-quality discharge instructions for the growing patient population with limited English proficiency (LEP) is challenging.
Patients with LEP experience higher hospital readmission rates for diagnoses that require higher levels of self-management (such as congestive heart failure) and are more likely to report post-discharge problems, including confusion about their discharge instructions. Yet no studies have systematically examined the quality of written discharge instructions for patients with LEP; lower-quality discharge instructions could potentially contribute to inadequate understanding and more adverse outcomes after discharge. Given this research gap, we sought to evaluate the quality of personalized hospital discharge instructions for a diverse group of patients with LEP. Our primary goal was to compare discharge instructions between patients with an English and a non-English language preference (NELP) to identify potential disparities by whether instructions were written in the patient's preferred language, included all content domains recommended by professional groups, and followed best practices for health literacy. Our secondary goal was to assess differences in the quality of discharge instructions by language group and by whether the diagnosis required high or low intensity of self-management (as readmission rates are higher for conditions requiring high self-management). While LEP remains the dominant term in the field, we use NELP because it reflects language preference rather than fluency; preference is what was measured in this study and used in clinical care.

Study setting

The study was carried out at a single urban academic tertiary care safety-net hospital in New England that serves a large population with NELP. On discharge, patients received an After Visit Summary (AVS), which is generated by the electronic medical record (EMR) Epic.
The AVS compiles key discharge information created by the provider, including a medication list with changes and a personalized written summary that prompts the provider to explain the cause of admission (primary diagnosis), care instructions, reasons to seek medical attention ("return precautions"), and medication changes. The AVS template is shown in Appendix . The AVS also pulls in the date, time, provider type, and address of scheduled post-discharge appointments, and information on how patients can contact the hospital with questions after discharge.

Record selection strategy

The patient's preferred language is entered in the EMR by registration staff. Preferred language can be updated by clinicians as needed. Using operational data from 2022, we pulled the preferred-language field in the EMR to identify the top four non-English languages at the hospital: Spanish, Haitian Creole, Cape Verdean (Portuguese) Creole, and Vietnamese. These four language groups, plus a fifth composed of all other non-English languages, comprised the "NELP cohort". The "English cohort" included patients whose preferred language was recorded as English in the EMR. We used case matching to compare discharge records from patients with NELP to matched English-language-preference patients by primary diagnosis. Given previous evidence that readmission rates are higher among patients with NELP with diagnoses requiring high intensity of self-management, we reviewed a list of the most common inpatient diagnoses and classified each as high or low self-management. High self-management diagnoses include those that commonly require complex medication regimens (e.g., COPD exacerbation), monitoring of vital signs (e.g., atrial fibrillation or paroxysmal tachycardia), daily weight measurement (e.g., exacerbation of heart failure), point-of-care labs (e.g., diabetes), or detailed dietary instructions (e.g., renal failure).
We then selected five common "low self-management" diagnoses—sepsis, chest pain, stroke-like symptoms including transient ischemic attack (TIA), low back pain, and pneumonia—to ensure broad applicability to hospitalized patients. To build our overall study cohort, we first identified all discharges to home of adults (age ≥ 18) from general medicine teams between January 1, 2017 and December 21, 2022. Each discharge was considered independently, without eliminating repeat discharges. Discharges against medical advice were excluded. Two discharge records were randomly selected per diagnosis in each of the five linguistic categories in the NELP cohort. For each discharge in the NELP cohort, a discharge record from the English cohort was matched to it based on primary diagnosis and patient age within 10 years. This process selected a total of 200 discharge records, or 20 per diagnosis, including 10 from patients with English language preference and 10 with NELP. Discharge instructions were included in the analysis even if not provided in the patient's preferred language. De-identified versions of the AVS and discharge medication list were provided to the study team by the hospital's Clinical Data Warehouse for Research. In six cases, either the AVS or the physician discharge summary was missing, and a replacement case was selected using the above criteria. If discharge instructions were not written in English, they were professionally translated.

Data analysis

Demographics were abstracted from the EMR and summarized as means for continuous variables (age) and percentages for categorical variables (race, Hispanic origin, and preferred language). Results were compared between the English and NELP cohorts using the t-test and Fisher's exact test, respectively.

Readability of discharge instructions

Word counts of each of the 200 discharge records were evaluated by an online calculator.
Language concordance was a binary outcome indicating whether the language of the instructions matched the patient's preferred language. For reading level, we first edited patient instructions for standardization as explained in Appendix 2. An online calculator was used to determine the reading level per the widely used Flesch Reading Ease score (FRES). As the data did not follow a normal distribution, we calculated the median and interquartile range (IQR). The same was done using the New Dale-Chall Reading Scale (NDCRS), which was converted into U.S. grade-level groupings—45 or above = 4th grade or less (up to age 9), 34–44 = 5th grade to 8th grade (ages 10–13), and 33 or less = 9th grade or above (age 14 and above)—and reported as percentages in each category. We summarized categorical outcomes as percentages and compared them between NELP and English cohorts using Fisher's exact test. Continuous outcomes were summarized as medians and interquartile ranges (IQRs) given the lack of normal distribution and compared between language groups using the Mann–Whitney test. For all analyses, a p-value of ≤ 0.05 was used as the cutoff for statistical significance.

Content of patient discharge instructions

We conducted a literature review to identify instruments that could be used to judge the quality of patient discharge instruction content. All existing options either assessed only for the presence or absence of instruction domains or were integrated with process measures. As no tools were identified to assess the quality of the content, the lead author drafted the Quality of Discharge Instructions-Inpatient (QDI-I) scale based on existing literature, experience as a hospitalist, and input from the senior author. The QDI-I evaluates content across six domains: primary diagnosis, self-care instructions, return precautions (i.e., reasons to seek medical attention), medication changes, reasons for medication changes, and recommended follow-up.
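The NDCRS grade-level grouping described earlier in this section is a simple threshold mapping, shown below with abbreviated band labels; the function name is illustrative only.

```python
def ndcrs_grade_band(score):
    """Map an NDCRS score to the U.S. grade-level grouping used in the study."""
    if score >= 45:
        return "4th grade or less"    # up to age 9
    if score >= 34:
        return "5th to 8th grade"     # ages 10-13 (scores 34-44)
    return "9th grade or above"       # age 14 and above (scores 33 or less)
```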
Each domain is scored from 1 (very poor) to 4 (very good), and scores are summed across the six domains for a possible total of 24; if no medication changes were made, the maximum total score was 20, which was then scaled by multiplying by 1.2. For ease of interpretability, the QDI-I score was also expressed as a percentage of a perfect score. For example, a total score of 18 (out of 24) or 3 (out of 4) for a single domain was reported as 75% of a perfect score. The draft QDI-I was pilot tested by three authors and minor changes were made. The QDI-I is included as Appendix 3. For each of the 200 discharge records, two physician raters independently assigned a QDI-I score for each domain. Disagreements of more than one point were resolved by the lead author (KA), and a mean score was assigned if raters disagreed by one point. For each rater, an overall score was calculated by summing scores across the six domains. Both overall and domain-specific QDI-I scores were summarized as means and standard deviations and compared between English and NELP groups using the t-test. R (version 2023.06.1 + 524) was used for analysis.

Sub-group analyses by language group and intensity of self-management

To assess for differences by language, we also compared results between the English cohort and each of the five language groups individually. We also compared English and NELP cohorts separately within those selected by high- and low-intensity self-management diagnoses. In both cases we applied the same summative statistics and comparative analyses described above.
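The QDI-I scoring arithmetic described earlier in this section can be sketched as follows. One reading of the scaling rule is assumed: when no medication changes were made, five domains are scorable (maximum 20) and the total is rescaled onto the 24-point range by multiplying by 1.2.

```python
def qdi_total(domain_scores, medication_changes=True):
    """Overall QDI-I score from per-domain scores of 1 (very poor) to 4 (very good).

    With medication changes, six domains are scored (maximum 24). Without,
    the assumed reading is that five domains are scored (maximum 20) and the
    total is rescaled by 1.2, so 20 * 1.2 matches the stated maximum of 24.
    """
    assert all(1 <= s <= 4 for s in domain_scores)
    total = sum(domain_scores)
    if not medication_changes:
        total *= 1.2
    return total


def as_percent_of_perfect(total, maximum=24):
    """Express a QDI-I total as a percentage of a perfect score."""
    return 100 * total / maximum
```

This reproduces the worked example in the text: six domain scores of 3 give a total of 18, reported as 75% of a perfect score.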
Demographics

Table displays the demographics for the overall cohort as well as a comparison between the English and NELP cohorts. The average age was 71.1 years and was similar between the two groups (p = 0.50). The breakdown by sex was nearly equal and did not differ by language (p = 1.00). Those with NELP were significantly more likely to identify as Latino (21% versus 4%, p < 0.001) and less likely to be White (7% versus 46%, p < 0.001 for comparison of all racial categories).

Comparison by English and NELP cohorts

Only 8% of patients with NELP received discharge instructions in their preferred language, as compared to 100% in the English language group (p < 0.001; Table ). On average, discharge instructions were shorter in the NELP cohort at 178 words compared to 190 words in the English cohort (p = 0.032).
In both cohorts the raw FRES was nearly identical and corresponded to an eighth- or ninth-grade reading level. Using the NDCRS grade-level categorization, 62.0% of patients with NELP received instructions at a ninth-grade reading level or higher compared to only 51.0% in the English cohort, but the difference did not reach statistical significance (p = 0.15). The overall QDI-I score in the NELP and English cohorts was similar (17.06 vs. 17.10, or 71.1% vs. 71.3%; p = 0.92). On average, patients with NELP received lower quality return precautions compared to those with an English language preference (3.22 vs. 3.55, or 80.5% vs. 88.8%; p = 0.013). Other domains of the QDI-I did not vary significantly between the two cohorts. In both, the highest scoring QDI-I domain was primary diagnosis and the lowest scoring was self-care instructions.

Comparison by language group

While all NELP groups were less likely than English speakers to receive language-concordant discharge instructions, Spanish, the second most common language, had 4 cases (20.0%) receiving language-concordant discharge instructions (Table ). There were no significant differences in word count among the NELP groups compared to the English cohort (Appendix 4). Analysis of the FRES data showed Vietnamese speakers received easier-to-read discharge instructions than English speakers (67.56 vs. 62.86, p = 0.026; Appendix 5). By the NDCRS grade-level categorization, most NELP groups had between 60% and 70% of discharge records in the ninth-grade-and-above group; however, in the Vietnamese group 45.0% of discharge instructions were written at a ninth-grade level or higher compared to 51.0% in the English group (p = 0.81). The overall QDI-I score did not differ significantly between the five language groups and the English cohort, but there was a trend toward higher quality instructions in the Vietnamese and Other language categories (Table ).
Comparing each QDI-I domain, both Haitian Creole and Cape Verdean Creole speakers received return precautions of inferior quality compared to the English cohort (2.95 and 2.88 vs. 3.55, or 73.8% and 72.0% vs. 88.8%; p = 0.006 and p = 0.001 respectively; Fig. ).

Comparison by intensity of self-management

Within the high-intensity self-management diagnoses, patients with NELP received shorter discharge instructions (181 words vs. 213 words, p = 0.048; Appendices 6 and 7). There were no significant differences in reading level or QDI-I score. Within the low-intensity self-management diagnoses, those with NELP had similar word count and reading level as the English cohort. While their overall QDI-I scores were similar (16.68 NELP vs. 16.63 English, or 69.5% vs. 69.3%; p = 0.094), return precautions for those with NELP were scored significantly lower (3.02 vs. 3.49, or 75.5% vs. 87.3%; p = 0.024).
Written discharge instructions are an evidence-based practice to improve the transition home after medical hospitalization. While research suggests that written discharge instructions are integral to patient understanding and effective self-care, the few studies that have attempted to assess their quality have been limited and have excluded an analysis of disparities by language. In this analysis of 200 hospital discharge instructions across a variety of linguistic groups—English and five sub-groups with NELP—we found important areas for improvement in terms of linguistic accessibility, readability, and quality of content. We found that only 8% of patients in the NELP group, most of whom were from minoritized racial and ethnic groups in our sample, received their personalized discharge instructions in their preferred language, as compared to 100% in the English group. For comparison, a quality improvement project at a large urban pediatric hospital that aimed to increase the provision of language-concordant discharge instructions reported a baseline of 18%. While anecdotally such inequities are nearly universal at U.S.
hospitals (and many other countries), prior surveys of hospitals have not collected data on the topic. This systemic inequity is driven by the logistical challenges of providing language-concordant discharge instructions, such as the typical delay required to translate documents when personalized instructions are written only shortly before discharge, as well as the financial cost. Novel solutions are needed to address this inequity. While artificial intelligence is a promising way to produce real-time translations, studies show these methods are not yet reliable enough to be used as part of clinical care. We did not find a statistically significant difference in the quality of discharge instruction content between those with English and non-English language preference except for return precautions, which were less thorough for those with NELP (p = 0.013). We believe it is premature to declare a lack of disparity in the quality of discharge instructions by language in other domains, especially as the clinical significance remains unexplored. First, since the QDI-I scale did not integrate into the numerical score whether instructions were language concordant, most patients with NELP could not functionally receive the written content due to language barriers. Second, our analysis was conducted in a single safety-net hospital that is nationally recognized as a leader in health equity. Providers who elect to work at this hospital may be more likely to spend additional time and effort to overcome communication barriers, which may not represent average provider behavior. This theory is supported by the fact that patients with NELP received equal quality scores but slightly shorter discharge instructions, suggesting that providers used brevity as an approach to improve communication. Third, aggregate analysis may mask differences in quality by language group that we could not detect given the sample size per language group.
The sub-group analysis by language group pointed to potential trends that should be evaluated in future studies. We found that the overall QDI-I score was lower for the Haitian Creole and Cape Verdean Creole speakers, though the differences did not reach statistical significance. As both are creole languages—meaning they originated in spoken form as a mix of other languages—this may reflect an actual or perceived difference in the likelihood of patients being able to read written instructions. In contrast, Vietnamese speakers had a higher overall QDI-I score than English speakers (though still not statistically significant), which may be due to greater involvement of English-speaking family members in medical care. A better understanding of trends specific to each language group could help tailor improvements to the hospital discharge process, such as intensive bedside education or investment in professional translation, to the needs of specific groups. A strength of our analysis was our approach to measuring reading level. Prior studies have applied reading-level calculators to discharge instructions without first standardizing the text. Similar to prior research, we found that most hospital discharge instructions exceeded the reading level recommended by professional organizations. For example, the American Medical Association suggests that patient materials target a sixth-grade reading level. However, we found that the average grade level for discharge instructions in both the English and NELP cohorts was equivalent to an eighth to ninth grade on the FKGL. By the NDCRS, only 49.0% of instructions were at an eighth-grade level or below in the English cohort and even fewer (38.0%) in the NELP cohort. While this difference was not statistically significant, the impact of low readability is likely greater for those who face language barriers. We would like to highlight other limitations of our study.
Because so few patients with NELP received language-concordant discharge instructions, our findings may not be applicable to the rare health system that offers translation. The diagnoses we selected for sampling discharge records were grouped into high- versus low-intensity self-management diseases as a proxy for the complexity of discharge instructions, but these groupings are somewhat subjective. Also, the absolute number of records reviewed per diagnosis and non-English language group was relatively small (n = 20 each), thus limiting the statistical certainty of our findings. In addition, the QDI-I scale was based on existing evidence and guidance from professional societies but was not vetted with patients to evaluate their preferences and satisfaction with discharge instructions. Such efforts would strengthen the QDI-I, along with a study to formally validate the tool, which is currently underway. This study is the first to examine disparities in the provision of personalized written instructions at hospital discharge. Our findings show that only 8% of patients with NELP received instructions in their preferred language; further research to understand the link between this finding and the post-discharge disparities previously found in patients with NELP is needed. We hope these results prompt additional research on inequities in the hospital discharge process, particularly as patients with NELP have been historically excluded from evidence-based transitional care interventions.

Supplementary Material 1:
Appendix 1: Standardized template for providers to write personalized hospital discharge instructions.
Appendix 2: Description of reading scores and methods for standardizing patient instructions analyzed prior to calculation of reading scores.
Appendix 3: Quality of Discharge Instructions-Inpatient (QDI-I) tool.
Appendix 4: Box plot of word count for English cohort and groups with non-English language preference.
Attitudes and Perceptions of Australian Dentists and Dental Students Towards Applications of Artificial Intelligence in Dentistry: A Survey

Introduction

Artificial intelligence (AI) refers to machines that imitate human knowledge and behaviour. The term artificial intelligence was first described by John McCarthy in 1955. In the last two decades, there has been a significant increase in research on applications of AI technologies in healthcare. In 2023, AI in healthcare was worth 14.6 billion US dollars, and the market is expected to be worth almost 102.7 billion US dollars by 2028 ( https://www.marketsandmarkets.com/Market-Reports/artificial-intelligence-healthcare-market-54679303.html , https://www.statista.com/statistics/1334826/ai-in-healthcare-market-size-worldwide/ ). The extensive use of electronic health records and digital imaging has facilitated the availability of huge data sets that have propelled the success of AI applications in healthcare. In clinical dentistry, AI technologies have been developed to aid in the radiologic diagnosis of different oral and maxillofacial pathologies and to provide decision support and analysis of treatment outcomes in various dental disciplines. Several commercially available AI technologies are helpful in routine dental care and serve as reliable tools for providing a second opinion on image and patient data, improving clinical efficiency and thus saving valuable clinical time (e.g., PearlAI in caries detection). AI can process various data types, such as patient history, demographic information and treatment records, to help identify patient preferences and specific needs and to boost patients' motivation and accountability towards their health. AI-enabled smart assistants and chatbots can support dental consultations through teledentistry. Despite remaining challenges, the future implications of AI are promising.
AI has the potential to streamline clinical workflows, standardise diagnosis and treatment, increase the quality of clinical decisions, and improve patient safety by reducing errors. AI can also streamline administrative tasks in clinical practice, freeing up time and resources for more patient-centred initiatives and making healthcare more accessible and affordable. AI-enabled virtual and augmented reality (VR and AR) devices and gamification techniques can also assist with student training. Although AI holds significant promise, its routine adoption in dental practice remains limited. Challenges that impede wider adoption include limited data accessibility, a lack of replicability and robustness in dental AI research, and the limited capabilities of current AI applications. Additionally, there are concerns about data privacy and security, the lack of generalisability of the algorithms, and inherent bias that may skew diagnostic or treatment recommendations. The opaque 'black-box' nature of algorithm design has raised concerns about the transparency and explainability of decisions made by AI algorithms. In addition, overreliance on technology may diminish the human element in dental care, impacting the quality and comprehensiveness of the care delivered. New avenues in AI research have been explored to address these concerns and translate research into effective clinical tools. This latest research includes understanding the interactions between clinicians and AI technologies, assessing the impact of AI technologies on the quality, efficiency and productivity of clinical practice, and understanding how users adopt new technologies such as AI. User acceptance and experience can significantly influence attitudes and perceptions towards adopting new technology.
Based on the perceptions of potential users, evaluations have been conducted to identify usability issues, gaps in understanding, and possible barriers and concerns associated with AI applications in medicine and dentistry . Evidence from recent literature suggests that the adoption of new technologies, particularly AI, can vary significantly among different healthcare settings, which are affected by the societal and cultural contexts and the clinical workflows . Understanding the user experience within the Australian context enables the customisation of AI applications, promoting trust and acceptance among dentists and dental students. This understanding ensures that AI technologies align with local workflows, regulatory frameworks and patient expectations, maximising their effectiveness and utility . Consequently, it is crucial to understand the perceptions and expectations of the Australian dental profession regarding AI technologies. This understanding will facilitate a better grasp of how and when these technologies will be adopted and implemented in routine dental care. This survey aimed to study the attitudes and perceptions of Australian dental practitioners and students about the applications of artificial intelligence in dental practice. Methods A cross‐sectional survey was designed and distributed through the online Qualtrics platform ( www.qualtrics.com ). The study received ethics approval (Human Research Ethics Committee approval number 2021/454) and was conducted in accordance with the Declaration of Helsinki . 
The following inclusion and exclusion criteria were used to invite the participants.

Inclusion criteria:
- Dentists, dental specialists and oral health therapists registered to practice in Australia
- Dental and oral health students studying in any accredited Australian university

Exclusion criteria:
- Dental practitioners other than dentists, dental specialists and oral health therapists
- Dental practitioners not registered to practice in Australia

The dental students studying at Australian universities were contacted by an announcement in their universities' online learning management systems. Dentists and dental specialists were contacted through the research team's known networks and passive snowball recruitment. Social media platforms, including Facebook and LinkedIn, were also used to circulate the anonymised survey link. Participation was voluntary, and no incentives were offered. Informed consent was implied by completion and submission of the survey. The survey remained open from September 2021 until August 2022. Further data collection was stopped when regular monitoring and analysis indicated data saturation; this approach was adopted based on grounded theory in qualitative research. The survey consisted of an anonymised questionnaire containing multiple choice, Likert scale and open-ended questions. The research team developed the questionnaire based on a literature review and similar previously published surveys. The questionnaire was designed to present appropriate questions to dental practitioners and students using the branching option within Qualtrics. The questions covered the participants' awareness and perceptions of the current applications of AI algorithms in dentistry, their impact on dental workflow and the dental curriculum, the potential benefits of AI-based solutions in dentistry, the performance and accuracy of dental AI applications, and concerns about adopting AI solutions in dentistry.
Demographic questions included participants' age, gender, professional qualifications (category), type of practice, years of clinical experience and location of practice (metropolitan/rural/private/public/academia). Student participants were asked to indicate the type (oral health or dental) and entry (undergraduate or postgraduate) of the programme they were enrolled in. The questionnaire was pilot tested to ensure that the questions were relevant, appropriate and accessible. The survey questionnaire is included in Data . Data were transferred from the survey platform to IBM's SPSS statistics software (version 26, IBM, SPSS Inc., Chicago, IL) and analysed. Descriptive statistics summarised the demographic characteristics of the survey respondents. The association between the categorical variables was assessed using the chi‐squared test of independence. Fisher's exact test was used when more than 20% of the cells had an expected frequency of less than 5. The Spearman rank correlation test was used to assess the correlation between variables. The level of significance for all tests was set at p < 0.05. Results A total of 177 responses were received, and after removing incomplete records, 155 responses (87.6%) were used in the data analysis. Table summarises the demographic characteristics of the participants. Of the 155 participants, 64 were dental students (43.2%), seven were oral health students (4.7%) and 77 were registered dentists (52%). The participants' ages ranged from 22 to 85 years (average of dental students 25 ± 3.7 years; dentists 37.8 ± 14.8 years) (Figure ), with equal gender distribution. On average, the dentists practised 4.4 days per week, and most of the dentists practised in the private sector ( n = 42, 27.1%) and in metropolitan areas ( n = 52, 33.5%). The clinical experience of the dentists ranged from 1 to 43 years, with an average of 12.6 (± 11.8) years. 
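The test-selection rule described under Data analysis — a chi-squared test of independence by default, falling back to Fisher's exact test when more than 20% of cells have an expected frequency below 5 — can be sketched with SciPy. The helper function name and the contingency counts below are illustrative assumptions, not values from the study.

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

def association_test(table):
    """Test independence of two categorical variables.

    Uses the chi-squared test, but falls back to Fisher's exact test
    when more than 20% of cells have an expected frequency below 5
    (SciPy's fisher_exact is limited to 2x2 tables).
    """
    table = np.asarray(table)
    chi2, p, dof, expected = chi2_contingency(table)
    if (expected < 5).mean() > 0.20:
        if table.shape == (2, 2):
            _, p = fisher_exact(table)
            return "fisher", p
        raise ValueError("Fisher's exact test requires a 2x2 table in SciPy")
    return "chi2", p

# Hypothetical 2x2 table: awareness of dental AI (yes/no) by group
# (dentists vs. students); counts are illustrative only.
test_used, p_value = association_test([[47, 30], [38, 33]])
print(test_used, round(p_value, 3))
```

The same decision rule applies to any of the categorical cross-tabulations reported in the survey; larger-than-2x2 tables with sparse cells would need an exact test from a package that supports them.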
Among 146 valid responses, participants' self‐assessment of their technological proficiency was 4.02 on a scale of 0–5 (Figure ), with similar average scores for dental students and practitioners. Dentists used several digital technologies in their clinic, such as digital X‐ray imaging (45.2%), digital patient records (43.9%) and OPG machines (38.7%). 3.1 User Experience and Acceptance of Dental AI Applications in Clinical Practice Among the survey participants, 54.8% ( n = 85) were aware of AI applications in the field, and they mainly learned about dental AI applications through lectures, conferences, journals and social media (Table ). Interestingly, a majority ( n = 109, 70.3%) were unable to name a specific dental AI software, with a slightly higher proportion of student participants ( n = 59, 54%) compared to dentists ( n = 47, 43%) in this category. Among the 29.7% ( n = 46) who could name a specific dental AI software, their knowledge primarily came from scholarly and clinical sources. The majority of participants identified dentomaxillofacial radiology (64.5%) and implantology (64.5%) as the most amenable disciplines for AI applications (Figure ). This finding was augmented by the participant's preferences for AI use across various other areas, including diagnostics, general dentistry and emergency triage, suggesting a broad scope for practical integration of AI. AI was found to be most beneficial for tasks such as image processing and diagnosis and less beneficial for administrative tasks and treatment planning. AI was recognised as a supportive tool for clinicians by 91.6% ( n = 142) of the participants, and only 2% ( n = 3) had concerns about potential negative effects. A majority of the participants ( n = 107, 69%) indicated that AI would be beneficial to clinical tasks in dentistry. However, 6% ( n = 9) indicated that either AI would make no difference or have a negative impact. 
The participants held diverse opinions on the expected performance of AI compared to specialists: 35.6% ( n = 52) expected AI performance to equal an average specialist, while 19.9% ( n = 29) thought AI could outperform the best specialist. Additionally, 23.3% ( n = 34) saw AI on par with the least effective specialist, 13% ( n = 19) with the best and 8.2% ( n = 12) predicted that AI would surpass the top specialists. There was no significant difference in expectations between dentist and student participants ( p > 0.05). Regarding the timeline for AI integration, 40% ( n = 60) of all participants indicated that AI applications would be routinely used in dentistry within the next 5–10 years (Figure ), with similar opinions among dentists ( n = 29, 48.3%) and dental students ( n = 31, 51.6%). Interestingly, 23.4% ( n = 18) of dentist participants, compared to only 11.3% ( n = 8) of student participants, considered AI to be already integrated into dental practice. In comparison with dentists, dental students anticipated AI integration in clinical practice in the short term ( p = 0.022). Over a third of participants ( n = 53, 34.2%) expressed excitement about integrating AI into their practice, whereas 17.4% ( n = 27) believed AI would make little difference or had reservations. When faced with a discrepancy between AI's judgement and their own, a majority of participants, 59.6% ( n = 87), indicated they would consult a colleague or an experienced clinician, whereas 8.9% ( n = 13) indicated they would resort to other measures, such as searching scientific literature to resolve the differences (Figure ). More students ( n = 46, 68.7%) than dentists ( n = 35, 48.6%) indicated they would refer to a colleague or a senior clinician. In contrast, more dentists ( n = 27, 37.5%) indicated they would trust their own judgement compared to dental students ( n = 15, 22.4%). The survey also investigated the concerns about AI applications.
The primary concerns identified were job losses to more efficient technology, lack of flexibility in patient care and insurance liability. Additionally, mistrust in the technology and concerns about its accuracy were highlighted. The survey found no statistically significant differences in the responses to concerns about AI applications among dentists and dental students. 3.2 Factors Affecting Attitudes Towards Dental AI Applications The survey suggested that attitudes towards AI in dentistry were shaped by factors such as age, gender, clinical experience, professional qualifications and technological proficiency. Participants who were aware of dental AI applications were significantly older (mean age 35.4 ± 14.81 years) compared to the participants who were not aware of these applications (mean age 28.61 ± 7.35 years) ( p < 0.05). Participants who believed AI's routine use was imminent within the next 5–10 years were older than participants who saw AI as already integrated or expected it to happen within 5 years. Gender differences were observed in the expectations for AI performance and its role in clinical support. More females viewed AI as a clinical tool ( n = 48, 65.8%) and relied on their judgement over AI in discrepancies ( n = 24, 32.9%) compared to males. Male participants were more inclined to use AI applications if available within the year ( M = 19, 24%; F = 11, 14%). Participants familiar with AI applications reported more clinical experience than those unfamiliar ( p = 0.02) (Figure ). Awareness of specific software was higher among specialist dentists compared to GPs ( p < 0.01). In addition, specialists ( n = 6, 54.5%) had a more positive attitude towards AI compared to general practitioners ( n = 11, 16.2%), likely due to higher exposure to AI and its applications within their field ( p < 0.01). Additionally, those who perceived themselves as technologically proficient reported higher awareness of AI's dental applications ( p < 0.01). 
However, this self-assessment about technological proficiency did not significantly influence opinions on the timeline for AI integration in dentistry. 3.3 Correlations Among Survey Variables Significant correlations were observed between the responses received from the participants (Table , Data ). A statistically significant moderate correlation ( r = 0.42) was observed between the participant's awareness of a specific AI software and their likelihood of using AI if it became available in the next year. Participants' perceptions of the impact of AI on dental practice and workflows also correlated with the likelihood of using AI technologies if they became available within the year ( r = 0.53, p < 0.05). A weak correlation between participants' perceptions of the timeline of AI integration and their technological proficiency and clinical experience was observed ( p < 0.05). The negative correlation between the participant's age and their opinion about the timeline for AI integration into dental practice suggested that younger participants anticipated an earlier adoption of AI applications in dentistry.
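The Spearman rank correlations reported in Section 3.3 (e.g. r = 0.42 between awareness of a specific AI software and likelihood of using AI) can be computed on ordinal survey responses with SciPy; the Likert-scale answers below are invented for illustration, not survey data.

```python
from scipy.stats import spearmanr

# Hypothetical Likert responses (1-5) from ten participants:
# awareness of a specific AI software vs. likelihood of using AI
# within the next year. Values are illustrative only.
awareness  = [1, 2, 2, 3, 3, 4, 4, 5, 5, 5]
likelihood = [2, 1, 3, 3, 4, 3, 5, 4, 5, 5]

# spearmanr ranks both variables (handling ties) and correlates the ranks,
# which is appropriate for ordinal data such as Likert items.
rho, p = spearmanr(awareness, likelihood)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```

Because the test operates on ranks, it captures monotonic (not just linear) association, which is why it suits the ordinal awareness and likelihood scales used in the questionnaire.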
Discussion This survey investigated user experience and acceptance in order to understand the knowledge and perceptions of Australian dentists and dental and oral health students towards applications of AI technologies in dentistry. Although several studies have reported perceptions regarding AI applications, understanding of user experience and acceptance is limited in medicine and dentistry, particularly in the Australian context. With an increase in the applications of AI technologies in our day-to-day lives, such as chatbots, AI smart assistants and autonomous vehicles, these technologies are becoming increasingly familiar. It is therefore no surprise that a majority of our participants were familiar with AI applications. However, there was a notable gap in knowledge about specific dental AI software, likely due to the limited adoption of AI technologies in clinical practice. Regarding reliance and trust in dental AI applications, the results indicated a cautious optimism among participants about integrating AI into their practices. While there is a general willingness to adopt AI technologies, hesitance persists, reflecting the need for more tangible demonstrations of the effectiveness of AI in clinical settings. Concerns about data protection and confidentiality have already been identified, and governments worldwide have recognised these concerns and created policies such as the EU's AI Act and the Australian Privacy Principles. Our findings are similar to those of other studies conducted in dental communities in other parts of the globe. Across these studies, there is a generally positive attitude towards the potential benefits of AI in dentistry, such as improved diagnostic accuracy, efficiency and treatment planning. The knowledge regarding dental AI applications varied among dental students and practitioners in these studies.
Lack of technical resources and training, concerns about data privacy, and algorithmic bias were identified as barriers to adopting AI technologies in these studies. The findings indicated a need for more focused educational efforts to enhance the awareness and understanding of dental AI applications as well as their capabilities and limitations. The survey revealed that AI applications were perceived to have a broad scope for integration, particularly in dentomaxillofacial radiology and implantology. The specialisation of current AI algorithms in specific tasks such as caries detection limits their extensive use and integration in clinical practice. However, with the fast-paced research and development in this area, we can expect the availability of more comprehensive AI applications that encompass all aspects of dentistry. Our survey found that while there was considerable openness to adopting AI technologies in dental practice, there was some scepticism about AI decisions, where the participants preferred to consult a colleague or a senior clinician rather than rely on AI. This emphasises the need for AI technologies to be designed to augment rather than replace dentists' expertise. The participants' concerns about the impact of AI technologies on job security are valid and may stem from the potential for AI to automate certain aspects of dentistry. However, it is essential to note that dentistry is a highly skilled profession and relies heavily on human expertise, empathy and nuanced judgement. There is a possibility that AI technologies will redefine job roles in dentistry, allowing dentists to focus on more complex cases or aspects of patient care. This misapprehension by dentists can be overcome by acquiring new skills related to AI and understanding how AI tools can enhance dental practice. The survey participants identified a lack of flexibility in the AI's decisions as a concern.
The ability to make nuanced treatment decisions considering a patient's overall context, including quality of life, socioeconomic background, emotional well-being and personal preferences, is a uniquely human trait. These human experiences are complex, subtle and difficult to quantify and integrate into AI algorithms, leaving such algorithms incapable of incorporating the broader context of a patient's life into treatment decisions. For AI to be more widely adopted, robust regulatory and ethical guidelines are essential for safe clinical practice. The differences in the perceptions between dentists and students underscore the importance of considering the role of professional experience in shaping perceptions of technological advancements. These findings echo those of a recent systematic review on the perceptions of dental students and practitioners regarding dental AI applications. Dentists will likely view new technologies like AI through the lens of how these innovations fit into their practices. In contrast, students still in the formative stages of their careers may not have ingrained views or opinions. Understanding these differences is crucial for effectively communicating about and implementing AI technologies in dental practice and tailoring AI-related education and training for current and future dental professionals. It would be advantageous to conduct longitudinal studies to investigate the integration of AI into dental curricula and assess the effectiveness of different training methods. In addition, research into the impact of demographic factors (ethnic and cultural background of participants, clinician experience) and the long-term effects of AI integration in dentistry on diagnostic accuracy, treatment efficacy, patient outcomes and perceptions will provide deeper insight into the impact of AI technologies on clinical practice.
This study was limited by the sample size and the lack of representation of various demographic categories, including the cultural and ethnic backgrounds of participants. The voluntary nature of the survey may have resulted in selection bias, as only those participants with an interest in this topic may have responded to the survey. In addition, self-reported survey data are subject to response bias and to the influence of the environment and context in which the survey was undertaken. Conclusion The survey explored user experience and acceptance to understand the current landscape of dental AI applications in Australia. General awareness of AI among dental students and practitioners is high, although familiarity with specific dental AI applications remains limited. Our study suggests that the best use of dental AI applications would be as a support system that can provide data-driven insights, allowing dentists to focus on more patient-centred aspects of dental practice. Despite the concerns about the impact of AI on jobs and patient care, the participants foresee the integration of AI in dental care and professional practice in the near future. The study also identified the various factors that affected participants' attitudes towards AI. The opinion of the main stakeholders, including dental practitioners, students, education providers, policymakers and patients, is essential, as it will significantly affect the integration of AI technologies in dentistry. Future strategies for AI implementation should consider ethical and regulatory challenges. Consequently, dental education and training programmes must adapt to include AI literacy, preparing dental practitioners to confidently and efficiently utilise AI technologies. The authors declare no conflicts of interest. Data S1. Table S1. |
Danish general practitioners as gatekeepers for gynaecological patients in regions with different density of resident specialists in gynaecology: in which situations and to whom do they refer? A cross-sectional study | da26bb59-75bc-4b21-920c-fe44fccb9aa7 | 10088933 | Gynaecology[mh] | In many European countries, the General Practitioner (GP) acts as a professional medical front line person between the wishes and needs of the population on the one hand and access to the specialised healthcare system on the other hand . This gatekeeper system and GPs having a list of patients enrolled at their practice to ensure continuity of care has been seen as part of a comprehensive healthcare system and as a tool to ensure equal access for those in need of care . In the course of a year, 86% of the Danish population comes into direct contact with their GP . The composition of the population enrolled at the GPs list and those who actually contact the GP have an impact on the likelihood of referral to the various specialties . Nevertheless, in Danish as well as in international studies, referral percentages are very similar, with 4–6% of GP contacts being referred to a resident specialist or to a Hospital/Outpatient Clinic (HOC) . The GP referral patterns to resident specialists vary. A wide range of external conditions such as local access to resident specialist, social conditions and the general morbidity of those enrolled at the GP practice have been shown to have an impact on the proportion of patients that are referred . Therefore, referrals occur for very different reasons and at different points in time during a patient contact. In addition, in Denmark there is an unequal distribution between health care regions of specialists, which might shift the referral pattern towards hospital care. Within the gynaecological specialty, the GP can refer patients either to a HOC or to a Resident Specialist in Gynaecology (RSG). 
It is unknown in which situations the GP refers gynaecological patients and, also, whether these patients are referred to an RSG or to the HOC. There is also a lack of knowledge as to whether the density of RSG influences the referral pattern; moreover, it is not known whether differences in the density of RSGs result in an inequality in the specialist treatment of gynaecological diseases. The present study investigated the referral patterns for GPs referring gynaecological patients to the RSG or to the HOC in specific situations according to density of RSG. Further, we examined whether patients were referred to the HOC or to the RSG, or whether they were treated by the GP her/himself depending on the density of RSGs for six benign gynaecological diagnoses. Setting The Danish health care system is divided into five administrative regions which are defined geographically as the Capital Region (population ∼1.9 million), the Region of Zealand (∼0.8 million), the Southern Region (∼1.2 million), the Central Region (∼1.3 million), and the Northern Region (∼0.6 million). These regions govern primary and secondary health care services provided by GPs, hospitals, and resident specialists. GPs serve as gatekeepers to secondary care, including referrals to resident specialists and inpatient and outpatient hospital care. The Danish healthcare system is based on free and equal access to treatment and is mainly tax financed. Each region politically decides how many resident specialists they require within each discipline, such as in gynaecology and obstetrics, but the number of female individuals per RSG varies considerably between regions, going from approximately 20,000 in the Capital Region of Denmark to approximately 145,000 in the North Denmark Region. Design This was a cross-sectional study based on questionnaire data from GPs.
Study population A total of 100 GPs were randomly selected from each of the five Danish regions with the help of a distribution key based on the total number of doctors in the respective region. Five hundred GPs were invited to take part in the questionnaire study. Questionnaires The anonymised questionnaire comprised questions about demographic data of the GP, including age and sex. Furthermore, it asked in which situations the GP referred gynaecological patients and to whom (HOC or RSG). Six benign gynaecological diagnoses were provided as examples: (i) excessive and frequent menstruation with regular cycle, (ii) Lichen simplex chronicus, (iii) postmenopausal bleeding, (iv) menopausal and perimenopausal disorder, (v) dyspareunia, and (vi) insertion of (intrauterine) contraceptive device (IUD). The GP was asked which diagnoses (s)he treated her/himself or referred to a RSG or to the HOC. The questionnaires were field tested before use. Three GPs were interviewed regarding their understanding of the questions and thereafter completed by five additional GPs. As the GPs deemed the questions understandable, no changes were made. For a list of questions, see Appendix Table A1 . Data collection The GPs received the questionnaire by postal mail in September 2020. A cover letter containing information on the study and a postage paid return envelope were enclosed with each questionnaire. The returned questionnaires were entered into Research Electronic Data Capture (REDCap) by two independent persons and merged by a third person. Study data were collected and managed using REDCap hosted at the University of Southern Denmark. Data analysis Characteristics of responding GPs were reported as numbers and proportions for each of the five regions. Differences between the responding GPs in each region were tested using Pearson’s Chi-Squared test. Referral patterns of gynaecological patients from GPs overall and for six specific reasons were reported as numbers and proportions. 
Associations between the GPs' reasons for referring to the RSG, the HOC, or both and the density of RSGs were calculated as odds ratios (OR) with 95% confidence intervals (CI) using generalized linear models for the binomial family. Likewise, the associations between GP referral to the RSG or the HOC, or keeping patients in the GP's practice, and the density of RSGs were calculated for the six specific diagnoses. Data analyses were conducted using STATA statistical software 16 (StataCorp, College Station, TX, USA).

Ethical approval

According to the EU's General Data Protection Regulation (article 30), the project was listed at The Record of Processing Activities for Research Projects in the Southern Denmark Region (j. no: 19/19630). According to the Consolidation Act on Research Ethics Review of Health Research Projects (Consolidation Act number 1083 of 15 September 2017, section 14 (2)), notification of questionnaire surveys or medical database research projects to the research ethics committee system is only required if the project involves human biological material. Therefore, this study was conducted without an approval from the committees (J.no.: S-20192000-78).
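The odds-ratio analysis described in the Data analysis section (generalized linear models for the binomial family) can be illustrated for the simplest case: with a single binary covariate (e.g. highest vs. lower RSG density), the exponentiated model coefficient and its Wald 95% CI reduce to the familiar 2×2-table formulas. A minimal sketch with invented counts (not study data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and Wald 95% CI for a 2x2 table:
    a = referred & high density,  b = not referred & high density,
    c = referred & low density,   d = not referred & low density.
    Equivalent to exponentiating the coefficient (+/- z*SE) of a
    binomial-family logit GLM with one binary covariate."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, (lo, hi)

# Invented counts for illustration only.
or_, (lo, hi) = odds_ratio_ci(a=60, b=20, c=35, d=45)
print(f"OR = {or_:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

For a model with additional covariates, the same exponentiation of the fitted coefficient and its confidence bounds applies; only the coefficient estimation differs.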
Of the 500 GPs who received a questionnaire, 347 GPs (69.4%) replied. Of these, 61.4% were female. Regarding age, 51.2% were younger than 50 years, and 76.3% were younger than 60 years. The majority (58.8%) had more than 10 years of professional experience as GPs and most commonly worked in practices with two to three doctors (45.2%). Most practices had both female and male GPs (52.3%).
There were no statistically significant differences in any GP characteristics between regions.

Referral patterns in specific situations

As shown in , 62.9% of GPs referred gynaecological patients to the RSG, 9.6% to hospitals/outpatient clinics, and 27.5% replied that they referred equally to both. In case of suspected malignancy or suspected severe illness, GPs referred mainly to the HOC. The majority of GPs preferred to refer their patients to the RSG with regard to waiting time, patients' wishes, service, and distance. In addition, 85.1% of GPs responded that they would prefer to refer patients to the RSG if waiting time and distance were the same as for HOCs. shows that, in regions with a lower density of RSGs than the highest, GPs less frequently referred patients to the RSG. In relation to waiting time and distance, as the density of RSGs decreased, the probability of being referred to hospital increased.

Referral patterns according to diagnosis

As can be seen from , with regard to the six benign gynaecological diagnoses, GPs were more likely to refer to the RSG than to the HOC and, for all diagnoses other than Postmenopausal bleeding, more likely to carry out the treatment themselves than to refer patients to the HOC. Apart from the diagnoses of Menopausal and perimenopausal disorders and the Insertion of IUD, the general practitioners were more likely to refer patients to the RSG than to perform the treatment themselves. demonstrates, for the six benign gynaecological diagnoses, that GPs in the region with the lowest density of RSGs (Northern Region) referred to an RSG to a lesser extent than in the region with the highest density (Capital Region). On closer inspection of the table, this difference was significant for Excessive and frequent menstruation with regular cycle, Lichen simplex chronicus, Postmenopausal bleeding, Dyspareunia, and Insertion of IUD. Insertion of IUD was more often treated by the GPs themselves in regions where the density of RSGs was not the highest.
The same applied to patients with Lichen simplex chronicus, although these patients were also referred to the HOC more frequently in regions with a lower density of RSG.
Statement of principal findings

This cross-sectional study showed that the referral patterns of GPs were highly dependent on the density of RSGs. The higher the density of RSGs, the more likely that gynaecological patients were referred to the RSG; conversely, the lower the density of RSGs, the more likely that gynaecological patients were referred to the HOC. GPs most often referred their gynaecological patients to the HOC in cases of suspicion of cancer or other severe disease.

Strengths and weaknesses of the study

Because none of the previously existing questionnaires we could find on this topic addressed all the items we wanted to include in this study, we developed a study-specific questionnaire. This ensured that the relevant questions were included and that the context was given. We used paper questionnaires as it was not possible to obtain a list of email addresses of the GPs due to the General Data Protection Regulation (GDPR). Paper questionnaires have shown declining response rates over the past decade. A low response rate may induce selection bias because respondents may differ systematically from non-respondents, and the study population will thus not represent the target population. However, we achieved a fair response rate of 69.4%, with 61.4% females compared to the Danish national average of 58.1%. Thus, the risk of selection bias must be considered low. However, because we did not have access to any information on the targeted study sample, we could not perform a responder–non-responder analysis. For logistic reasons, we selected and invited 100 GPs from each Danish region. This corresponds to 15% of all GPs in Denmark.
However, as the number of GPs in the different regions is not the same in absolute numbers, this resulted in a different percentage of invitations between regions, ranging from 9.7% (Capital Region) to 35.1% (Northern Region). Since GPs in Denmark, regardless of the region in which they practice, have the same education at the respective time in their career, and the distribution of GPs in the regions is almost the same with regard to sex and age, we believe that this study sample is generalisable to the GP population in its entirety. The fact that we found no differences in GP characteristics over the regions strengthens the credibility of our results. The present study was carried out in Denmark under the conditions of the Danish health system. However, the results should be comparable with health systems that are similarly structured (e.g. with the GP as gatekeeper), especially the other Scandinavian countries, where we assume conditions would be similar due to the great cultural proximity.

Findings in relation to other studies

Women with gynaecological problems who are referred to an RSG are always examined by a specialist, but when referred to an HOC, they would often be examined by a doctor who is not yet a specialist but still in training. To compensate for this, HOCs are organized such that doctors in training can always call in a specialist, although this depends on whether the examining doctor decides to call a specialist or not. Due to lack of experience, it may happen that the doctor in training makes misjudgements and does not call a specialist although it would be indicated. Thus, this may delay the correct diagnosis of a serious disease. This difference means that unless all patients have equal access to relevant care, there would be an inequality in the quality of care depending on which part of the country they live in, which, in turn, can have an impact on the health of this group of the population.
Our study demonstrated that GPs prefer to refer their gynaecological patients to the RSG; only 9.6% of GPs refer their patients exclusively to the hospital, although most would refer their gynaecological patients directly to the hospital if they suspect cancer or another severe diagnosis. We examined five geographic regions with different densities of RSGs and found that the referral pattern depends on the density of RSGs. These results are in agreement with previous studies, which have shown that if the number of resident specialists increases, more patients are referred to a resident specialist and, at the same time, fewer patients are referred to hospitals. With regard to the diagnoses examined, the present study shows that the referral pattern is strongly dependent on the density of RSGs in the local region, and for five of the six gynaecological diagnoses examined, there was a significantly lower chance for the patient to be referred to an RSG in the region with the lowest density compared to the region with the highest density of RSGs. The national average distance from the patient's place of residence to the hospital is greater than the average distance from the patient's place of residence to the RSG in the region with the highest density of RSGs. This results in a longer transport time and more costs for the patients who live in the region with the lowest density of RSGs. This can have detrimental effects, as it has been shown in previous studies that there is an association between travel distance and cancer prognosis. We also know that the distance to the hospital is linked to an increasing diagnostic interval for cancer. As far as we know, this has not been investigated in relation to the density of RSGs. However, since the RSG is a specialist, it is not unlikely that such studies would obtain similar results.
When delays in the diagnosis of cancer are discussed, for example, patient delays, GP delays, and system delays are mentioned, but the density of resident specialists has not been taken into account, although it is known that increased availability of specialist care translates into higher referral rates.

Possible mechanisms and implications for clinicians or policy makers

In regions with a lower density of resident specialists in gynaecology, women are less frequently referred to a resident specialist in gynaecology. If there are regions in the same country with different densities of resident specialists in gynaecology, one must assume that the population will have an unequal opportunity to have a specialist examination. This results in an injustice in the healthcare system within the same country. Whether or not this inequality should be accepted is a political decision, but our results indicate that there are significant differences between regions that may have an impact on the gynaecological treatment of women. Clearly, further studies are needed to determine the exact consequences of the difference in referral patterns in terms of treatment outcomes. However, the results from our study should already facilitate the future planning of health care in gynaecology, with the aim of reducing inequality in access to the RSG.
Understanding the mechanisms of food effect on omaveloxolone pharmacokinetics through physiologically based biopharmaceutics modeling | 0c5c2ec8-8044-44ce-9ccb-782872480802 | 11494823 | Pharmacology[mh] | Friedreich ataxia (FA) is a progressive, autosomal recessive neurodegenerative disorder characterized by difficulty with ambulation, coordination, and speech. , In FA, a biallelic trinucleotide (GAA) repeat expansion in the first intron of the frataxin gene leads to impaired transcription and reduced amounts of functional frataxin protein. , Frataxin deficiency results in dysregulation of antioxidant defenses, mitochondrial dysfunction, and impaired nuclear factor (erythroid‐derived 2)‐like 2 (Nrf2) signaling. , , Omaveloxolone, an Nrf2 activator, has been shown to improve mitochondrial function, restore redox balance, and reduce inflammation in FA models. , In a registrational phase II trial (MOXIe; NCT02255435), omaveloxolone significantly improved neurological function versus placebo, with an acceptable safety profile. , Omaveloxolone was approved in the United States and the European Union for the treatment of FA in patients aged ≥16 years. The recommended dosage is 150 mg administered orally once daily (QD) in the form of three 50‐mg capsules or the entire capsule contents sprinkled on and mixed in 2 tablespoons (30 mL) of applesauce, on an empty stomach at least 1 h before (United States and European Union) or 2 h after (European Union) eating. , Food–drug interactions can potentially affect the release (from a given formulation), solubility, dissolution, absorption, first‐pass metabolism, and/or elimination of an oral drug and possibly impact efficacy and safety. Understanding the differences in the pharmacokinetics (PK) of omaveloxolone in various prandial states and the underlying mechanisms explaining these phenomena is essential. 
Physiologically based biopharmaceutics modeling (PBBM) is an evolving tool that has been widely applied to predict the absorption and PK of oral drug products (DP). These models integrate the physicochemical and biopharmaceutics properties of the drug substance (DS), formulation characteristics, and system physiological parameters. By using (biopredictive) dissolution testing as a key input, PBBM enables manufacturing flexibility by delineating a safe space for DP critical quality attributes. Through virtual population studies, PBBM has served as an alternative to clinical PK trials that evaluate food effects, drug–drug interaction (DDI), or formulation changes. Here, a PBBM was developed to predict and explain the effect of a high-fat meal on the PK of a 150-mg dose of omaveloxolone. The model was validated against PK data from dose-ranging, food effect, and DDI clinical studies.

Overview of modeling strategy

The modeling strategy is illustrated in Figure . A physiologically based PK (PBPK) modeling absorption baseline model was first established using intravenous and oral data in monkeys to derive relevant PK distribution parameters and then applied to humans. The in vivo capsule opening time and dissolution were based on the DS and DP performances; these, together with other biopharmaceutical drug properties such as solubility, passive permeability, and the effect of the systemic drug transporter P-glycoprotein (P-gp) on drug efflux, were integrated into the PBBM. Metabolic clearance was specified based on in vitro data and verified with DDI studies. The determination of solubility, precipitation rate, dissolution, permeability, drug efflux, distribution, metabolism, and elimination is described in the supplementary Data (including Figure ).
The PBBM was validated using data from nine clinical scenarios from clinical PK studies 408-C-1703 (NCT03664453), thereafter referred to as study 1703, and 408-C-1806 (NCT04008186), thereafter referred to as study 1806, which tested different doses, prandial states, and DDIs. Parameter sensitivity analyses (PSAs) were run on the validated model to identify the main sources of within- and between-participant variability and the factors limiting omaveloxolone absorption. With the validated model, the effect of a high-fat meal on the PK of omaveloxolone was evaluated and explained mechanistically. The DDI and PBPKPlus modules of GastroPlus v9.8.2 (GastroPlus; Simulations Plus) and the ADMET Predictor v10.3 (APv10.3; Simulations Plus) were used.

Clinical studies

The average participant demographic information from two selected studies with comprehensive PK data was used for validation of the PBPK absorption baseline model. Study 1703 was a phase I, open-label, two-part, food effect (part 1) and dose proportionality (part 2) study of omaveloxolone in healthy adult participants (N = 34). In part 1 (two-period, fixed-sequence, randomized crossover design), participants were randomly assigned 1:1 to one of the two treatment sequences (sequence 1: period 1 fed and period 2 fasted; sequence 2: period 1 fasted and period 2 fed), each with a 1-week washout period. Participants were administered two single doses of omaveloxolone 150 mg, one at the start of each period. During the fed state, participants were provided the US Food and Drug Administration (FDA) high-fat standardized breakfast (800–1000 calories, with ≥50% from fat) prior to dosing. In part 2, participants were randomized 1:1 to receive a single dose of either 50-mg or 100-mg omaveloxolone in a fasted state; data for omaveloxolone 150 mg (taken in a fasted state) from part 1 were included in part 2 analyses. The design of study 1703 is further detailed in the supplementary Data.
Study 1806 was a phase I, open‐label, four‐part, DDI study of omaveloxolone in healthy participants ( N = 61). Participants were treated with omaveloxolone 150 mg QD on days 1 and 13, and oral doses of a cytochrome P450 2C8 (CYP2C8) inhibitor gemfibrozil 600 mg twice daily (part 2), a strong CYP3A4 inhibitor itraconazole 200 mg QD (part 3), or a P‐gP inhibitor and a moderate CYP3A4 inhibitor verapamil 120 mg QD (part 4), on days 10 to 18. Primary end points were maximum plasma concentration ( C max ) and area under the concentration–time curve (AUC) from time 0 extrapolated to infinity (AUC 0‐∞ ), and time to C max ( t max ); AUC from time 0 to the last quantifiable plasma concentration (AUC 0‐t ) was a secondary end point. Model validation For model validation, the prediction performance indicators were calculated for the PK parameters and profiles as described below. Average fold error (AFE) is defined by the following equation: AFE = 10 1 n ∑ log Pred i Obs i The AFE is an indicator of prediction bias. A method that predicted all actual values with no bias would have a value of 1; underpredictions are indicated by an AFE of <1 and overpredictions by AFE of >1. AFE values generally vary between 0 and infinity; a prediction may be considered satisfactory if the AFE is between 0.8 and 1.2, passable if the AFE is 0.5 to 0.8 or 1.2 to 2, and poor if the AFE is 0 to <0.5 or >2. A satisfactory AFE is needed for model validation. Absolute average fold error (AAFE) is defined by the following equation: AAFE = 10 1 n ∑ log Pred i Obs i The AAFE converts negative log fold errors to positive values before averaging them and measures the spread of the predictions. AAFE values vary between 1 and infinity. A method that predicted all actual values perfectly would have a value of 1; one with predictions that were on average twofold off (above 100% or below 50%) would have a value of 2 and so forth. 
A prediction may be considered satisfactory if the AAFE is <1.2, passable if the AAFE is in the range of 1.2 to 2, and poor if the AAFE is >2. A satisfactory AAFE is needed for model validation. Average absolute prediction error (AAPE%) is defined by the following equation:

\[ \mathrm{AAPE\%} = \frac{1}{n}\sum_{i=1}^{n} \left|\frac{\mathrm{Pred}_i - \mathrm{Obs}_i}{\mathrm{Obs}_i}\right| \times 100 \]

AAPE is the measurement of prediction error scaled to percentage units. It approximates (AAFE − 1) × 100. A model is considered satisfactory if the AAPE is <20%, passable if the AAPE is ≥20 to <50%, and poor if the AAPE is ≥50%. Percent predictions within clinical variability (PPWCV) is defined by the following equation:

\[ \mathrm{PPWCV\%} = \frac{n_{\mathrm{YES}}}{n_{\mathrm{total}}} \times 100 \]

For each PK sampling time point 1 to n_total (apart from pre-dose), a binary criterion (yes or no) is determined based on whether the predicted concentration falls within the 95% confidence interval of the measured clinical data. The PPWCV calculated for each PK profile was averaged across all the clinical scenarios tested. The same calculations were performed for PK parameters; the binary criterion for each clinical scenario was based on whether the predicted PK parameter fell within the 95% confidence interval of the measured average value of that parameter. A satisfactory PPWCV is >80%, a passable PPWCV is in the range of ≥65% to 80%, and a poor PPWCV is <65%.

Parameter sensitivity analyses

The PSA for C max, AUC 0-t, and other PK parameters was based on a range of selected DP properties and physiological parameters that could impact omaveloxolone absorption or metabolism by affecting capsule opening time, size of the DS (controlling in vivo dissolution), first-pass gut and liver extraction, and metabolic elimination in vivo (Table ). The analysis was performed using omaveloxolone 150 mg (target dose) in the fasted state (for increased sensitivity with a lower fraction absorbed vs.
the fed state) on a representative population based on the MOXIe registrational study cohort with FA (Friedreich ataxia) (average age: 26 years; average weight: 69 kg).

Simulation design

The PBBM was built using default values based on human fasted or fed physiologies. Since omaveloxolone is not ionized in the physiological pH range, adjustment of surface solubility was not needed. For PBBM validation, populations representative of the clinical trials were created based on the average height and weight of the cohorts. The advanced compartmental and transit model physiologies were adjusted for body weight. All the doses and prandial states tested in the clinical trials were reproduced in the PBBM. The default optimum log D model SA/V 6.1 was applied to calculate absorption scaling factors. Since omaveloxolone is lipophilic, with log P > 5, the absorption scaling factors were increased in the cecum and colon. To predict the food effect in study 1703, the FDA high-fat breakfast option was selected with default zero-order gastric emptying. The gastric emptying time was increased from 3.51 to 5 h to match the time needed for the intragastric volume to fall below 100 mL after a high-fat meal intake. Because CYP3A4, but not CYP2C8, is largely involved in omaveloxolone metabolism, DDI is expected for co-administration of omaveloxolone with CYP3A4 inhibitors. Hence, two DDI simulations based on parts 3 and 4 of study 1806 were performed (Figures ). The DDI module of GastroPlus was used, and simpler modeling options based on reported values for inhibition were also tested for model validation. To account for itraconazole 200 mg co-administration outside of the DDI module, the maximum rate of reaction (Vmax) in the gut and liver for CYP3A4 was reduced by a factor of 3.9. The Vmax for P-gp was set to 0 for verapamil co-administration.
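The static DDI adjustments above amount to rescaling enzyme and transporter capacity. A toy Michaelis–Menten sketch (illustrative values, not GastroPlus internals):

```python
def mm_rate(c, vmax, km):
    # Michaelis-Menten rate for substrate concentration c.
    return vmax * c / (km + c)

VMAX_CYP3A4, KM, C = 100.0, 5.0, 2.0  # arbitrary illustrative units

baseline = mm_rate(C, VMAX_CYP3A4, KM)
# Itraconazole co-administration: CYP3A4 Vmax reduced by a factor of 3.9 (gut and liver).
with_itraconazole = mm_rate(C, VMAX_CYP3A4 / 3.9, KM)
# Verapamil co-administration: P-gp efflux Vmax set to 0 (transport switched off).
pgp_efflux = mm_rate(C, 0.0, KM)

print(round(baseline / with_itraconazole, 2))  # 3.9
print(pgp_efflux)                              # 0.0
```

Because the rate is linear in Vmax, scaling Vmax scales the local metabolic rate by the same factor at any concentration.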
Ethics statement

The clinical studies were designed and monitored in accordance with sponsor procedures, which complied with the ethical principles of good clinical practice and with the Declaration of Helsinki. Each trial was approved by the institutional review board or independent ethics committee at the respective centers. All patients provided written informed consent.

The modeling strategy is illustrated in Figure . A physiologically based PK (PBPK) absorption baseline model was first established using intravenous and oral data in monkeys to derive relevant PK distribution parameters and then applied to humans. The in vivo capsule opening time and dissolution were based on the DS and DP performances; these, together with other biopharmaceutical drug properties such as solubility, passive permeability, and the effect of the systemic drug transporter P-glycoprotein (P-gp) on drug efflux, were integrated into the PBBM. Metabolic clearance was specified based on in vitro data and verified with DDI studies. The determination of solubility, precipitation rate, dissolution, permeability, drug efflux, distribution, metabolism, and elimination is described in the Data (including Figure ). The PBBM was validated using data from nine clinical scenarios from clinical PK studies 408-C-1703 (NCT03664453), hereafter referred to as study 1703, and 408-C-1806 (NCT04008186), hereafter referred to as study 1806, which tested different doses, prandial states, and DDIs. Parameter sensitivity analyses (PSAs) were run on the validated model to identify the main sources of within- and between-participant variability and the factors limiting omaveloxolone absorption. With the validated model, the effect of a high-fat meal on the PK of omaveloxolone was evaluated and explained mechanistically. The DDI and PBPKPlus modules of GastroPlus v9.8.2 (GastroPlus; Simulations Plus) and the ADMET Predictor v10.3 (APv10.3; Simulations Plus) were used.
The average participant demographic information from two selected studies with comprehensive PK data was used for validation of the PBPK absorption baseline model. Study 1703 was a phase I, open-label, two-part, food effect (part 1) and dose proportionality (part 2) study of omaveloxolone in healthy adult participants (N = 34). In part 1 (two-period, fixed-sequence, randomized crossover design), participants were randomly assigned 1:1 to one of the two treatment sequences (sequence 1: period 1 fed and period 2 fasted; sequence 2: period 1 fasted and period 2 fed), each with a 1-week washout period. Participants were administered two single doses of omaveloxolone 150 mg at the start of each period. During the fed state, participants were provided the US Food and Drug Administration (FDA) high-fat standardized breakfast (800–1000 calories, with ≥50% from fat) prior to dosing. In part 2, participants were randomized 1:1 to receive a single dose of either 50-mg or 100-mg omaveloxolone in a fasted state; data for omaveloxolone 150 mg (taken in a fasted state) from part 1 were included in part 2 analyses. The design of study 1703 is further detailed in the (Data ).
Modeling parameters

The main physicochemical and biopharmaceutical properties of omaveloxolone used for model parameterization are shown in Table . The derivation of these parameters (solubility and precipitation [Figure ], dissolution, permeability [Figures and ], P-gp efflux, and metabolism and elimination [Figure ]) is shown in the .

Model validation

The PBPK absorption model and PBBM were validated across nine distinct clinical scenarios in studies 1703 and 1806.
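For concreteness, the AAPE% and profile-level PPWCV indicators defined in the Methods can be sketched as follows (the arrays are illustrative, not study data):

```python
def aape_pct(pred, obs):
    # Average absolute prediction error, in percent of the observed values.
    return 100.0 * sum(abs(p - o) / o for p, o in zip(pred, obs)) / len(pred)

def ppwcv_pct(pred, ci_low, ci_high):
    # Percent of predicted concentrations inside the observed 95% CI per time point.
    hits = sum(lo <= p <= hi for p, lo, hi in zip(pred, ci_low, ci_high))
    return 100.0 * hits / len(pred)

pred = [10.0, 8.0, 5.0, 2.0]          # predicted concentrations (post-dose points)
obs = [9.0, 8.0, 6.0, 2.0]            # observed mean concentrations
print(round(aape_pct(pred, obs), 1))  # 6.9 -> satisfactory (< 20%)
print(ppwcv_pct(pred, [8, 7, 4, 1], [12, 9, 6, 3]))  # 100.0 -> satisfactory (> 80%)
```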
Profile predictions are shown in Figures (right panel), , and ; calculations of model prediction performance indicators are shown in Table and Tables . All predicted PK parameters over the nine clinical scenarios were within the observed clinical variability (Table , Figures [right panel], , and ). The use of the GastroPlus “controlled release undissolved dispersed” model in conjunction with the DS particle size distribution (PSD) allowed the separation of in vivo drug release (capsule opening) from in vivo dissolution of the capsule contents. This approach provided a satisfactory PK profile prediction, with 98% of the predicted concentrations within the clinical variability at each data point. All model prediction performance indicators on AUC and Cmax were satisfactory based on predefined criteria, and the PBBM was considered validated.

Parameter sensitivity analyses

The PSA revealed that the main parameters influencing the PK of omaveloxolone 150 mg were bile salt solubilization, Vmax of CYP3A4, DS PSD, drug permeability, and volume in the small intestine (Figure , Figure ). The model's sensitivity to the DS PSD confirmed the decision to use the measured DS batch particle size for the analysis. Precipitation time, transit times, and P-gp function did not have a substantial effect on omaveloxolone PK. In particular, changing the stomach transit time from 0.25 to 0.5 h or the Weibull lag from 0.12 to 0.5 h did not affect omaveloxolone PK. The main between-participant sources of variability were drug permeability and CYP3A4 expression and function. The main within-participant sources of variability were differences in bile salt concentration during the day in the fasted and fed states and the type of food.

Prediction of food effect

The effects of a high-fat meal on omaveloxolone 150 mg in vivo dissolution, first-pass extraction, and PK are shown in Figure .
The fraction of dose absorbed (Fa) for omaveloxolone and the fraction reaching the portal vein after passing through the gut wall without metabolism (Fa × fraction of drug escaping first-pass gut metabolism [Fg]) in the fasted and fed states are shown in Figure . In both prandial states, the in vivo dissolution of omaveloxolone was limited by solubility in all compartments. However, the Fa of omaveloxolone increased from 52% in the fasted state to 87% in the fed state, attributed to the drug's lipophilicity and strong affinity for bile salt micelles, along with the higher concentration of micelles after a high-fat meal. The resulting increased solubility and faster dissolution contributed to the majority of absorption occurring in the upper gastrointestinal (GI) tract in the fed state. In the fasted state, omaveloxolone absorption occurred along the GI tract, with the highest fraction absorbed in the cecum and colon, as shown by the secondary peaks in the concentration–time plots (Figure ). As there is lower expression of CYP3A4 in the lower versus upper GI tract, the fraction of omaveloxolone lost by first-pass gut extraction was limited in the fasted versus fed state. Fa × Fg was 36% in the fed state versus 29% in the fasted state (Figure ), and hepatic extraction was 33% and 28%, respectively. These phenomena explain the modest increase in AUC (+15%) despite a large increase in Cmax (+350%) in the fed state for omaveloxolone 150 mg observed in the food effect study 1703.

Food effect and dose proportionality results from study 1703

Findings from study 1703 corroborated the results of the PBBM analysis (detailed in the , Tables , Figure ).
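As a rough cross-check of the fed/fasted numbers above, systemic availability can be approximated as Fa × Fg × (1 − hepatic extraction), a standard decomposition rather than the authors' exact calculation:

```python
def availability(fa_fg, hepatic_extraction):
    # Fraction of the dose reaching systemic circulation.
    return fa_fg * (1.0 - hepatic_extraction)

f_fasted = availability(0.29, 0.28)  # Fa x Fg = 29%, hepatic extraction 28%
f_fed = availability(0.36, 0.33)     # Fa x Fg = 36%, hepatic extraction 33%
auc_increase_pct = 100.0 * (f_fed / f_fasted - 1.0)
print(round(auc_increase_pct))  # 16 -> consistent with the ~15% AUC increase observed
```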
Omaveloxolone is currently indicated to be taken on an empty stomach, with capsules either swallowed whole or the contents sprinkled on and mixed into 2 tablespoons of applesauce for patients with swallowing difficulties.
Omaveloxolone displays a unique PK profile in the presence of a standardized FDA high-fat breakfast: AUC was only modestly increased (+15%) despite a substantial rise in Cmax (+350%). The PBBM developed for omaveloxolone provides a mechanistic explanation of this unique food effect. The PBBM, informed by the DS and DP characteristics and drug metabolism, was validated across the doses, prandial states, and DDI scenarios studied. Consistent with permeability and CYP3A4 activity identified as the main between-participant sources of variability from the PSA in this study, Avdeef et al. reported a 60% variability in permeability data obtained from humans across drugs with low and high permeability; furthermore, Lown et al. reported high interparticipant variability in gut CYP3A4 expression, at 11-fold based on protein, 8-fold based on mRNA, and 6-fold based on catalytic activity. These sources of variability should not exist for cross-over trials. Difference in bile salt concentration was identified as a main within-participant source of variability in this study, in line with a previous report. The variability in bile salt lumen concentration has been reported to span several logs, and within-participant variability during the day is high even in the fasted state, covering 2-log differences. Based on the PBBM prediction, omaveloxolone absorption was mainly limited by drug solubility along the GI tract in the fasted state (Figure ). Omaveloxolone belongs to Biopharmaceutics Classification System class 4 (low solubility and low permeability); low permeability restricts absorption along the GI tract even as the drug continues to dissolve. This underscores the importance of permeability as a limiting factor for AUC and Cmax in the fasted state (Figure ). The Fa of omaveloxolone in the fasted state as predicted by the PBBM was approximately 52%.
Consistently, based on the human absorption, metabolism, and excretion study (study 1805), omaveloxolone was the most abundant component (approximately 40%) in the feces of participants who were administered a 150-mg dose of [14C]-omaveloxolone, indicating an Fa of up to 60%. Under fed conditions, omaveloxolone absorption occurred more rapidly due to the drug's affinity for bile salts, which resulted in faster dissolution in the fed state versus the fasted state. The PBBM prediction is corroborated by the individual fasted- and fed-state omaveloxolone PK profiles in study 1703 (Figure ). Under the fasted state, in addition to an initial rise in omaveloxolone plasma concentration, multiple peaks were observed in most individual PK profiles, mainly occurring after 4 h (when the drug should have reached the lower intestine in humans), suggesting that absorption predominantly occurred in the colon. This supports the model prediction that drug dissolution and solubility were the rate-limiting factors for absorption in the fasted state. Under fed conditions, there was notably faster drug absorption, with tmax more frequently observed before omaveloxolone reached the colon (Figure ), indicating that drug absorption predominantly occurred in the upper segments of the GI tract. The faster (within-participant) absorption in the fed state also suggested that higher concentrations of bile salts in the GI tract lumen could partially overcome the limited solubility (and therefore in vivo dissolution) of omaveloxolone. The uniqueness of the food effect observed with omaveloxolone, compared with other medications, is shown by the correlation between fed/fasted Cmax and AUC ratios from 323 food effect studies encompassing a range of compounds and formulations. The linear correlation between log(Cmax ratio) and log(AUC ratio) for investigated drugs proposed by Omachi et al. was applied to the dataset of the present study (Figure ).
Based on the linear regression, when log(AUC ratio) = 0 (i.e., AUC ratio = 1), the corresponding log(Cmax ratio) is expected to be slightly negative at −0.0651 (i.e., Cmax ratio = 0.86 on average). This lower Cmax ratio relative to the AUC ratio (<1) is likely related to prolonged gastric emptying in the fed state, which delays the passage of drug to the small intestine. Cmax and AUC (ratios) are usually expected to be highly correlated, since the AUC is calculated from the integration of concentration–time profiles. In most cases, drugs with solubility-limited absorption and without significant first-pass extraction have increased solubility due to the presence of higher amounts of bile salts in the GI tract lumen under the fed state. This leads to increased drug concentration in close proximity to the absorptive surface of the intestine and a larger amount of drug reaching the systemic circulation (i.e., higher AUC) versus the fasted state. There are, however, physiological factors and methodological issues that can explain deviations in the correlation between Cmax and AUC. PK sampling frequency could miss the Cmax during clinical food effect studies, as the same sampling time points are typically used for the fasted and fed states. In some studies, PK sampling was not frequent enough to capture Cmax in the fed state due to prolonged gastric emptying, which may lead to underestimation of both Cmax and AUC and a false-negative finding of food effect. In addition, in both prandial states, gastric emptying of the solid phases that comprise the drug might not happen all at once, and gastric retention could account for a lower Cmax compared to single-phase emptying. Partial gastric emptying has been reported in the fasted and fed states for various drugs.
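As a one-line numerical check of the regression intercept quoted above (plain arithmetic, not from the study's code):

```python
# At AUC ratio = 1, log10(Cmax ratio) = -0.0651, so the expected Cmax ratio is:
cmax_ratio = 10 ** -0.0651
print(round(cmax_ratio, 2))  # 0.86, matching the value reported in the text
```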
If first-pass extraction is not limiting drug absorption in a saturable way, a lower Cmax due to partial gastric emptying could be associated with the same AUC as when gastric emptying happens in a single phase, leading to decorrelation of Cmax and AUC. Taken together, a correlation between drug fed/fasted Cmax and AUC ratios is expected and justifiable. The presence of outliers in this correlation is particularly intriguing, primarily to understand the mechanism of food effect and to assess the predictive capability of mechanistic models. The correlation depicted in Figure could be used to predict measured fed/fasted Cmax ratios based on the corresponding fed/fasted AUC ratios. Using the correlation to predict measured Cmax ratios would lead to an AFE of 1.00, an AAFE of 1.29, and an AAPE of 27%. The relative standard deviation for the prediction of Cmax based on this correlation was assumed to be 29% from the AAFE value. Assuming that Cmax ratio observations were randomly spread around the predicted Cmax ratios with a log-normal distribution, the probability of observing any Cmax ratio value is calculated by means of sigma (σi) derivation:

\[ \sigma_i = \frac{\left|P_i - O_i\right|}{(\mathrm{AAFE} - 1) \times P_i} \]

In the above equation, the value of σi can help estimate the probability of observing a Cmax ratio (Oi) around the prediction Pi. For σi = 2, the chance to observe Oi based on Pi and the spread of the data approximates 1 in 3; for σi = 3, the chance is 1 in 15; and for σi = 4, the chance is 1 in 160. σi can be used to identify outliers to the correlation between the Cmax ratio and the observed AUC ratio. The σi values for omaveloxolone and the 323 food effect studies from the literature are shown in Figure . The established Cmax ratio versus AUC ratio correlation is good and roughly what would be expected from a log-normal distribution, with 69%, 91%, and 95% of the observed Cmax ratios within 1, 2, and 3 σi of the predictions, respectively.
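A minimal sketch of the σi calculation, using an assumed correlation-predicted Cmax ratio of about 1.0 for omaveloxolone (the predicted value itself is not given in this excerpt, so that number is illustrative):

```python
def sigma(pred_ratio, obs_ratio, aafe):
    # Deviation of the observed ratio from the prediction, in units of the
    # correlation's relative spread (AAFE - 1 approximates the relative SD).
    return abs(pred_ratio - obs_ratio) / ((aafe - 1.0) * pred_ratio)

AAFE = 1.29                # reported spread of the Cmax-ratio correlation
observed_cmax_ratio = 4.5  # fed/fasted Cmax ratio (+350%)
print(round(sigma(1.0, observed_cmax_ratio, AAFE), 1))  # ~12 sigma
```

An observed ratio this many relative standard deviations from the prediction marks omaveloxolone as an extreme outlier, in line with the 11.7 σ reported from the full correlation.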
The unique nature of the food effect on omaveloxolone was evident, as the σi value for omaveloxolone was the largest, at 11.7 (close to 12) away from the prediction, while the second largest outlier σi value was only approximately 7. Interestingly, the other four outliers (besides omaveloxolone) in the Cmax ratio versus AUC ratio correlation in Figure corresponded to only two drugs (CPD3 and progesterone). These two drugs show similarities with omaveloxolone (e.g., lipophilicity [log P] was reported to be 6.6 and 3.9, respectively). Both CPD3 and progesterone showed secondary absorption phases based on average PK profiles in the fasted state, which were not observed, or occurred to a much lower extent, in the fed state. Although the detailed evaluation of these effects goes beyond the scope of this article, the effect of food on the PK of CPD3 was correctly predicted using GastroPlus.

A PBBM was successfully developed and validated for omaveloxolone. Drug absorption for omaveloxolone is solubility and dissolution rate limited. In the fasted state, omaveloxolone is incompletely absorbed, with absorption predominantly occurring in the lower segment of the GI tract, where CYP3A4-mediated first-pass gut extraction is low. In the fed state, the higher amount of bile salt micelles present led to increased solubility of omaveloxolone, attributed to its weakly acidic and highly lipophilic nature, thereby accelerating in vivo dissolution and resulting in predominant absorption in the upper GI tract. In this region, omaveloxolone is subjected to more substantial first-pass gut extraction, causing a notable transient surge in Cmax without a correspondingly large increase in AUC. This food effect on the PK of omaveloxolone deviates from that of other drugs, for which the fed/fasted ratios for Cmax and AUC are generally well correlated.
These findings point to the impact of the fed versus fasted condition on the PK profile of omaveloxolone, reinforcing the importance of physician and patient education on administration and dosing compliance. In silico PBPK modeling and PBBM tools offer promising platforms that integrate drug dissolution, formulation characteristics, precipitation, degradation, first-pass extraction, and metabolism. These tools can accurately forecast the impact of food on oral drugs, such as omaveloxolone, potentially reducing the need for clinical evaluations. Notably, the accurate prediction of a 12-σ outlier event of the Cmax versus AUC ratio in this case study bolsters the credibility of such models for the prediction of food effects. X.J.H.P., S.M.H., H.Z., D.W., L.Q.S., and S.S.-S. wrote the manuscript; X.J.H.P., S.M.H., H.Z., and D.W. designed the research; X.J.H.P., H.Z., and D.W. performed the research; X.J.H.P., S.M.H., H.Z., D.W., L.Q.S., and S.S.-S. analyzed the data; and H.Z. contributed new reagents/analytical tools. This study was funded by Reata Pharmaceuticals, Inc.; Reata was acquired by Biogen in 2023. Xavier J.H. Pepin and Sandra Suarez-Sharp are employees of and hold stock in Simulations Plus, which was commissioned by Reata Pharmaceuticals to conduct the study; Reata was acquired by Biogen in 2023. Scott M. Hynes was an employee and may have held stock in Biogen at the time of development of this publication. Hamim Zahir and Lois Q. Semmens are employees of and may hold stock in Biogen. Deborah Walker was an employee of and held stock and/or stock options in Reata at the time the study was conducted. Data S1. |
Recent Progress in Nanomaterial-Based Fluorescence Assays for the Detection of Food-Borne Pathogens | 0d1e0717-88a4-44fd-8c53-5cdae318c736 | 11644946 | Microbiology[mh] | Food-borne diseases, one of the most significant public health concerns worldwide, pose a serious threat to human health. Differing from infectious diseases, food-borne diseases are illnesses that result from the consumption of contaminated food. There are various types of food-borne pathogens, and they can be divided into infectious and toxin-producing types according to their biological characteristics. Infectious pathogens include pathogenic Escherichia coli, Vibrio parahaemolyticus, Listeria monocytogenes, etc., while toxin-producing pathogens include Staphylococcus aureus, Salmonella, Clostridium botulinum, Bacillus cereus, etc. Although the clinical symptoms of food-borne diseases are often mild and self-limiting, they remain a significant problem due to the large number of people affected each year. According to the World Health Organization, about 10% of people worldwide fall ill each year due to consuming contaminated food, which puts a strain on healthcare systems and undermines national economies, tourism, and trade, thereby hindering socio-economic development. Therefore, developing rapid detection methods to avoid the consumption of foods contaminated with pathogens is necessary. Traditional detection techniques for food-borne pathogens mainly include plate counting, molecular biological detection technology, and immunodetection technology. Currently, the traditional culture-based method is still the gold standard with its high accuracy and sensitivity. However, these methods for food-borne pathogen detection have been limited by short shelf life, the need for large sample volumes, and lengthy processing times.
With the development of nanotechnology and molecular technology, many new rapid detection methods have been proposed and applied to the detection of food-borne pathogens. These methods mainly include the colorimetric method , electrochemical method , Raman detection method , and fluorescence detection method. In contrast to traditional culture-based methods, they can generate fast and reliable results for ensuring food safety, as demonstrated in numerous studies of food-borne pathogen detection. However, owing to the complexity of food matrices, each of these methods has its own limitations. A detailed list of detection methods for food-borne pathogens, including assay time, detection limit, and potential limitations, is given in . Among the novel detection methods, fluorescence assays have become widely used owing to their high sensitivity, high stability, and short detection time . Fluorescence assays are quantitative: they establish a linear relationship between fluorescence intensity and the concentration of the detection target, meeting the requirements of speed, high sensitivity, and selectivity. In this review, we provide a comprehensive discussion of recent developments in fluorescence assays based on different specific recognition strategies for food-borne pathogen detection, covering the latest materials, principles, and properties of fluorescence assays. In addition, we review current advances in fluorescence detection of other targets, offering perspectives and insights for food-borne pathogen detection in the future. The core of a fluorescence assay for food-borne pathogen detection comprises two parts: the fluorescent material and the specific recognition component.
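The quantitative backbone of such assays — a linear relationship between fluorescence intensity and the logarithm of pathogen concentration — can be sketched as an ordinary least-squares calibration together with the common 3.3σ/slope limit-of-detection estimate. The intensity values and blank deviation below are illustrative placeholders, not data from any cited study:

```python
# Hypothetical calibration data: log10(concentration in CFU/mL) versus
# measured fluorescence intensity (arbitrary units, illustrative only).
log_conc = [1, 2, 3, 4, 5, 6]            # 10^1 .. 10^6 CFU/mL
intensity = [120, 235, 348, 470, 590, 702]

# Ordinary least-squares fit: intensity = slope * log10(C) + intercept
n = len(log_conc)
x_mean = sum(log_conc) / n
y_mean = sum(intensity) / n
slope = sum((x - x_mean) * (y - y_mean)
            for x, y in zip(log_conc, intensity)) / sum(
                (x - x_mean) ** 2 for x in log_conc)
intercept = y_mean - slope * x_mean

# Limit of detection via the common 3.3*sigma/slope rule, where sigma is
# the standard deviation of blank measurements (illustrative value here).
sigma_blank = 8.0
lod_log = 3.3 * sigma_blank / slope      # in log10(CFU/mL) units

def concentration(sample_intensity):
    """Back-calculate CFU/mL from a measured intensity via the fit."""
    return 10 ** ((sample_intensity - intercept) / slope)
```

In practice the calibration is built from serial dilutions of a reference strain, and the fitted range defines the linear detection range quoted for each sensor.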
Traditional fluorescent materials such as rhodamine dyes , cyanine dyes , and Alexa dyes have been successfully applied in the food safety field. However, fluorescent dyes have many defects, such as high background signals, a narrow excitation spectrum, a broad and asymmetric fluorescence emission spectrum, and susceptibility to photobleaching , which significantly restrict their application in the field of food safety. Nowadays, novel fluorescent materials for detecting food-borne pathogens, such as quantum dots, metal–organic frameworks, and upconversion nanoparticles, are gradually replacing traditional fluorescent materials. These new materials offer several advantages, including small size, strong adsorption capacity, and high surface reactivity, which enable them to bind many bacteria simultaneously, resulting in higher fluorescence intensity . shows the strategies for food-borne pathogen detection based on different fluorescence materials, together with the limit of detection (LOD), detection time, and detection samples. In addition, the optical properties and applications of the different kinds of fluorescent materials are analyzed and compared.

2.1. Quantum Dots

Quantum dots (QDs) are nanomaterials that have emerged in recent years, with particle sizes of 1–10 nm . Owing to their unique optical properties, they have become very promising fluorescent nanomaterials. In addition, doped QDs obtained by doping with different materials combine the intrinsic luminescence of QDs with that of the dopant ions. The interaction between QDs and target pathogens causes fluorescence quenching or enhancement in the QDs . The main mechanisms include fluorescence resonance energy transfer (FRET), photoinduced electron transfer, inner-filter effects, aggregation effects, static quenching, and dynamic quenching . Ren et al.
reported a fluorescence assay leveraging the high sensitivity and stable fluorescence of CdTe QDs combined with a specific DNA aptamer for the detection of S. typhimurium ( A). In their research, aptamer-coated magnetic particles (Apt-MNPs) were employed as target captors, while CdTe QD-labeled complementary strands served as signal generators. The fluorescence of the CdTe QDs increases linearly over the concentration range of 10 to 10 10 CFU/mL, with a detection limit of 1 CFU/mL. Xue et al. developed a fluorescent sensor for the simultaneous detection of E. coli O157:H7 and Salmonella typhimurium using immunomagnetic nanobeads (MNBs), manganese dioxide nanoflowers (MnO 2 NFs), and QDs ( B). QDs@MnO 2 nanocomposites were obtained from MnO 2 NFs and QDs, followed by modification with antibodies (pAbs) to obtain pAb-QDs@MnO 2 nanocomposites (QM NCs). The target bacteria were first conjugated to MNBs and QM NCs to create the MNB–bacteria–QM complex. Glutathione was then used to reduce MnO 2 to Mn 2+ , rapidly releasing the QDs from the complex. This assay enables simultaneous quantification of E. coli and Salmonella within 2 h, with detection limits of 15 CFU/mL and 40 CFU/mL, respectively. These examples show the potential of quantum dots in food-borne pathogen detection. However, the potential toxicity of quantum dots to humans and the environment and their high cost are key issues to be considered in future research. Although much research has been conducted on QD-based fluorescent nanosensors, many challenges remain, including reducing their toxicity and improving their chemical stability, repeatability, uniformity, and fluorescence performance. Future research should focus on the selection of appropriate doping materials and the optimization of the synthesis and doping processes of QDs .

2.2. Carbon Dots

Carbon dots (CDs) are an emerging fluorescent material that has garnered significant interest as an alternative to conventional QDs .
CDs possess several advantages, including excellent biocompatibility, low toxicity, high water stability, and ease of synthesis . Moreover, their customizable surface functional groups and fluorescence properties make them highly suitable for sensing and detection . CDs feature many hydroxyl and carboxyl groups on their surface, enabling them to conjugate with various biomolecules through these functional groups . During interaction with bacterial cells or their metabolic products, their photoluminescent properties allow for the sensitive identification of bacteria on the basis of either quenched or enhanced fluorescence . Zhao et al. developed a highly sensitive fluorescent immunosensor for detecting E. coli O157:H7 using microspheres labeled with CDs . In this study, CD microspheres were prepared using S. aureus cells as the carrier to incorporate CD particles ( A). The microspheres can be easily combined with various antibodies. Combined with immunomagnetic bead techniques, the CD microsphere immunosensor was established for the specific detection of E. coli O157:H7. Yang et al. prepared CD-encapsulated breakable organosilica nanocapsules (BONs) as advanced fluorescent labels for the ultrasensitive detection of S. aureus . The CDs are entrapped in organosilica shells to form core–shell CDs@BONs ( B). These fluorescent nanocapsules are then conjugated with an anti- S. aureus antibody to specifically recognize S. aureus . Compared with conventional immunoassays using CDs as fluorescent labels, the fluorescent signals are amplified by two orders of magnitude owing to the hundreds of CDs encapsulated in each nanocapsule. In addition, CDs have also been widely used in detecting S. typhimurium , Helicobacter pylori , Salmonella , etc.

2.3. Metal–Organic Frameworks

Metal–organic frameworks (MOFs) are organic–inorganic hybrid crystalline materials formed from metal ions or clusters and organic ligands via coordination bonding .
The special structure endows MOFs with many advantages, including high specific surface area, controllable pore structures, and significant thermal stability , which make MOFs a potential material for food-borne pathogen detection. Qiao et al. designed a fluorescence resonance energy transfer (FRET) nanoprobe using MOFs for the detection of S. aureus ( A). Zirconium (Zr)-based MOFs were used to encapsulate blue-emitting 7-hydroxycoumarin-4-acetic acid (HCAA) and then functioned as the energy donor. They achieved the detection of S. aureus within a dynamic range of 1.05 × 10 3 –1.05 × 10 7 CFU/mL and a detection limit of 12 CFU/mL. Bhardwaj et al. developed a new luminescent probe for S. aureus detection based on the bio-conjugation of an amine-functionalized metal–organic framework with an anti- S. aureus antibody ( B). This innovative biosensor design allowed the detection of S. aureus over a wide concentration range, achieving a notably low limit of detection of 85 CFU/mL. However, as an emerging crystalline material, MOFs still face some challenges. Owing to their complex structure, the stability of MOFs is low, and they are prone to structural decomposition or ligand bond breakage . In addition, their poor conductivity and low thermal, chemical, and water stability also limit their application .

2.4. Upconversion Nanoparticles

Upconversion nanoparticles (UCNPs) are a special class of materials exhibiting anti-Stokes luminescence. While most fluorescent materials are excited by high-energy light and emit lower-energy light, upconversion luminescence can be excited by near-infrared light. This characteristic provides several advantages, including high penetrability and a low autofluorescence background . UCNPs offer additional benefits such as environmentally cleaner synthetic routes, low cost, straightforward detection, and no matrix interference .
Owing to their low autofluorescence background, deep light penetration, non-toxicity, and minimal photodamage to biological samples, UCNPs have proven to be a versatile tool in the past few years . The upconversion luminescence mechanisms can be divided into excited-state absorption, energy-transfer upconversion, and photon avalanche . Zhang et al. presented a new fluorescent aptasensor that integrates DNA walking and a hybridization chain reaction (HCR) to detect S. aureus ( A). The binding of S. aureus to the aptamer caused the DNA walker to move along the surface of the AuNPs, triggering the separation of the probe from the AuNPs, which further triggered the HCR. As a result, the distance between the AuNPs and the upconversion nanoparticles increased, restoring the upconversion fluorescence intensity. The limit of detection is 10 CFU/mL, and the detection time is less than 3 h. Song et al. used an aptamer-modified magnetic nanoparticle as a capture probe and an aptamer-modified upconversion nanoparticle as a signal probe to capture Vibrio parahaemolyticus ( B). The aptamer-modified magnetic nanoparticle, Vibrio parahaemolyticus , and the aptamer-modified upconversion nanoparticle formed a sandwich-like complex, which was rapidly separated from the complex matrix using magnetic force, and the bacterial concentration was determined by fluorescence intensity analysis. In addition, UCNPs have also been widely used for Escherichia coli , Staphylococcus aureus , Shigella , etc. However, UCNPs still face several challenges , including inherent limitations such as low quantum yields and narrow absorption cross-sections, which deserve more attention in future research.

2.5. Others

Many other materials have also been used for food-borne pathogen detection. Gold nanoclusters (AuNCs) are widely used because of their ultra-small size, tunable emission, size-dependent fluorescence, and good biocompatibility .
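Several of the probes above (the MOF donor–acceptor system and the UCNP/AuNP aptasensors) exploit the steep distance dependence of Förster transfer, E = R0^6 / (R0^6 + r^6): efficiency is 50% at the Förster radius R0 and falls off rapidly beyond it, which is why releasing a probe from its quencher restores the fluorescence. A minimal numeric sketch, with an assumed (not source-specified) R0 of 5 nm:

```python
def fret_efficiency(r_nm, r0_nm=5.0):
    """Forster resonance energy transfer efficiency for a donor-acceptor
    separation r_nm, given a Forster radius r0_nm (both in nanometres).
    The radius value here is an illustrative assumption."""
    return r0_nm ** 6 / (r0_nm ** 6 + r_nm ** 6)

# Efficiency is 50% at the Forster radius and drops sharply beyond it,
# so moving the donor away from the quencher turns the signal back on.
close = fret_efficiency(2.5)    # quenched state, donor near acceptor
at_r0 = fret_efficiency(5.0)    # exactly 0.5 at the Forster radius
far = fret_efficiency(10.0)     # released state, fluorescence recovered
```

The sixth-power falloff is what makes these turn-on designs nearly binary: doubling the separation beyond R0 leaves only a few percent of the transfer efficiency.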
Additionally, some other 2D nanomaterials, such as manganese dioxide (MnO 2 ) nanosheets, are generally used as fluorescence quenchers in fluorescent biosensors .

Fluorescence sensors usually consist of two parts: a biosensing (or biorecognition) component and a fluorescence signal. These sensors serve as detection tools that convert information about a chemical or physical property of the system into a useful analytical fluorescence signal . At present, antibodies and aptamers are the two main biosensing components for the specific recognition of food-borne pathogens. Fluorescence signals can be supplied by nanoparticles or fluorescent dyes. However, fluorescent dyes are used more in the detection of heavy metals and have largely been replaced by fluorescent nanoparticles because of their toxicity and high background fluorescence interference. The components of specific recognition include immunology-based, nucleic acid-based, and bacteriophage-based approaches.
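For turn-off readouts, the dependence of the signal on the quencher (or analyte-linked quencher) concentration is commonly linearized with the Stern–Volmer relation F0/F = 1 + K_SV[Q]. The text does not name this model explicitly, so the sketch below, with purely illustrative constants, is only one standard way such quenching data are treated:

```python
def stern_volmer_ratio(ksv, quencher_conc):
    """F0/F for collisional (dynamic) quenching: F0/F = 1 + Ksv * [Q].
    ksv in M^-1, quencher_conc in M; the constants used are illustrative,
    not taken from any sensor described in the text."""
    return 1.0 + ksv * quencher_conc

def quenched_intensity(f0, ksv, quencher_conc):
    """Observed fluorescence F given the unquenched intensity F0."""
    return f0 / stern_volmer_ratio(ksv, quencher_conc)

# With Ksv = 1e4 M^-1, a 100 uM quencher halves the signal.
half = quenched_intensity(1000.0, 1e4, 1e-4)
```

Plotting F0/F against [Q] then gives a straight line whose slope is K_SV, which is how static and dynamic quenching contributions are typically compared.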
Therefore, label-free fluorescence probes, aptamer-based fluorescence sensors, antigen–antibody-based biosensors, and bacteriophage-based fluorescence sensors are discussed below.

3.1. Label-Free-Based Fluorescence Probe

A label-free fluorescence probe is a highly sensitive analytical approach that requires no recognition element such as an antibody or aptamer. At present, several label-free fluorescence probes for detecting food-borne pathogens have been reported. Sang et al. developed a fluorescence probe consisting of fluorescent carbon dots modified with poly(vinylpyrrolidone) and catechol, together with silver nanoparticles, based on Förster resonance energy transfer for the label-free detection of bacteria ( A). Recognition occurs through electrostatic binding between the positively charged fluorescence probe and the negatively charged bacteria. Furthermore, the probe exhibits excellent killing activity against the bacteria ( E. coli and S. aureus ). Fu et al. developed a label-free fluorescent sensor that relies on the competitive reduction of Cu 2+ in CuFe 2 O 4 magnetic particles (MPs) by E. coli and o-phenylenediamine (OPD) ( B). In this system, Cu 2+ in the CuFe 2 O 4 MPs oxidizes OPD to produce fluorescent 2,3-diaminophenazine (OPDox). The presence of E. coli weakens the oxidative ability of the CuFe 2 O 4 MPs, resulting in a decrease in the fluorescence of the system. The detection range for E. coli is 10 3 –10 6 CFU/mL, and the detection limit was calculated to be 5.8 × 10 2 CFU/mL.

3.2. Antigen–Antibody-Based Fluorescence Biosensors

Antibodies are large Y-shaped proteins that are the most commonly used elements in pathogen identification because of their versatility, high sensitivity, and selectivity . They possess remarkable selectivity and affinity for the specific antigen with which they interact .
However, the poor stability and high price of antibodies, especially monoclonal antibodies, limit their application . Zahra et al. constructed a fluorescence immunosensor consisting of graphene oxide and antibody-modified graphene dots, based on FRET, to detect Campylobacter jejuni in food samples ( A). The principle of the immunosensor is the specificity of the antibody for the bacterial cells. In the presence of the target, antibody–bacteria binding inhibited the π-π interaction between the graphene oxide and the antibody-modified graphene dots, relieving the FRET effect and restoring the fluorescence. The immunosensor can complete C. jejuni detection with high sensitivity (LOD = 10 CFU/mL) within 1.5 h. Wang et al. developed an antigen–antibody-based fluorescence biosensor for the detection of E. coli O157:H7 . In this research, carbon dots were utilized as fluorescence donors, while covalent organic frameworks (COFs) served as fluorescence acceptors. An antibody (Ab) specific to E. coli O157:H7 was employed to create a CD-Ab-COF immunosensor by linking the CDs and COFs. When the antibody bound specifically to E. coli O157:H7, the connection between the CDs and COFs was interrupted, restoring the carbon dot fluorescence. The sensor exhibited a linear detection range from 0 to 10 6 CFU/mL, with a limit of detection of 7 CFU/mL.

3.3. Aptamer-Based Fluorescence Biosensors

Aptamers are short oligonucleotide molecules (ssDNA or RNA), typically 25 to 90 bases in length, that are capable of binding strongly and specifically to target molecules . Aptamers can specifically recognize and bind their targets through non-covalent interactions such as hydrogen bonding, van der Waals forces, electrostatic interactions, hydrophobic effects, and π-π stacking .
Compared with antibodies, aptamers possess considerable advantages, such as stability across various temperatures and pH values, ease of production, longer shelf life, fast production, and low batch-to-batch variability, which have propelled their growth in the field of biosensing. In aptamer-based fluorescence biosensors, because most targets are non-fluorescent, various fluorescence signal generation strategies are used. Common strategies include FRET, fluorophore-linked aptamer assays , fluorescent light-up aptamers, and fluorescence anisotropy. For example, Ouyang et al. developed an aptasensor based on upconversion fluorescence resonance energy transfer for S. aureus detection . In this research, AuNPs were functionalized with aptamers, whereas UCNPs were conjugated with DNA complementary to the aptamers (cDNA) ( B). Complementary base pairing between the cDNA and the aptamers brought the UCNPs and AuNPs together, quenching the upconversion fluorescence. The aptamer-functionalized AuNPs preferentially bound to S. aureus and released the UCNPs, restoring the UCNP fluorescence. However, challenges remain for the practical application of these biosensors. Because food is a complex matrix, the sensitivity and selectivity of aptamers are affected by sample conditions, including interfering components, pH, ionic strength, and viscosity .

3.4. Fluorescence Sensors Based on Bacteriophages

Bacteriophages are bacterial viruses consisting of DNA or RNA that can infect host cells and self-replicate within a short period of time. Initially, bacteriophages were used to control bacterial growth. In recent years, a new detection technology based on bacteriophage specificity has emerged.
Due to the continuous extension of visualization technology, a novel fluorescent probe based on bacteriophages for the detection of food-borne pathogens obtained success. Chen et al. used a biotin-expressing T7 bacteriophage combined with avidin-modified FeCo magnetic nanoparticles for the isolation of pathogenic bacteria . The nanoprobe allowed the specific recognition and attachment to E. coli cells, and the isolation efficiency was comparable to that of antibody-labeled pathogenic bacteria. Zhao et al. explored a fluorescence biosensor mediated by phage and Clostridium butyricum Argonaute (CbAgo) for the detection of viable Salmonella typhimurium without the need for complicated DNA extraction and amplification procedures ( C). In this approach, a phage was used to capture viable S. typhimurium , while a lysis buffer was used to lyse the S. typhimurium . Subsequently, CbAgo can cleave the bacterial DNA to yield target DNA that directs a newly targeted cleavage of fluorescent probes. After that, the resulting fluorescent signal accumulates on the streptavidin-modified single microsphere. This entire detection process was then analyzed and interpreted using machine vision and learning algorithms, allowing for highly sensitive detection of S. typhimurium with a limit of detection of 40.5 CFU/mL and a linear range of 50–107 CFU/mL. The stability of phages at different pH and temperatures makes it an advantageous probe in the field of microbial detection . However, it only targets bacteria that can serve as phage hosts, which limits its widespread application. A label-free based fluorescence probe is a highly sensitive analytical method without any recognition element, such as an antibody or aptamer. At present, some label-free-based fluorescence probes to detect food-borne pathogens have been reported. Sang et al. 
developed a fluorescence probe consisting of fluorescence carbon dots modified by poly(vinylpyrrolidone) and catechol and silver nanoparticles based on Forster resonance energy transfer for the detection of bacteria without any label ( A). The fluorescence probe can be recognized through electrostatic binding between the positively charged fluorescence probe and negatively charged bacteria. Furthermore, the fluorescence probe presents excellent bacterial ( E. coli and S. aureus ) killing. Fu et al. developed a label-free fluorescent sensor that relies on the competitive reduction in Cu 2+ in CuFe 2 O 4 magnetic particles (MPs) by E. coli and o-phenylenediamine (OPD) ( B). In this system, Cu 2+ in CuFe 2 O 4 MPs can oxidize OPD to produce 2,3-diaminophenazine (OPDox) with fluorescent properties. The presence of E. coli can weaken the oxidative ability of CuFe 2 O 4 MPs and result in a decrease in the fluorescence of the system. The detection range of E. coli is 103–106 CFU/mL, and the detection limit is calculated to be 5.8 × 10 2 CFU/mL. Antibodies are large Y-type proteins that are the most commonly used elements in pathogen identification due to their versatility, high sensitivity, and selectivity . They possess remarkable selectivity and affinity for the specific antigen with which they interact . However, the poor stability and high price of antibodies, especially monoclonal antibodies, have become one of the reasons limiting their application . Zahra et al. made a fluorescence immunosensor consisting of graphene oxide and antibody-modified graphene dots based on FRET to detect campylobacter jejuni in food samples ( A). The principle of the fluorescence immunosensor is based on the ability of antibody specificity to bacteria cells. In the present target, the conjugated antibody bacteria caused an inhibition on the π-π interaction between graphene oxide and antibody-modified graphene, leading to the fluorescence recovering through releasing the effect of FRET. 
Furthermore, the fluorescence immunosensor can finish campylobacter jejuni detection with high sensitivity (LOD = 10 CFU/mL) within 1.5 h. Wang et al. developed an antigen–antibody-based fluorescence biosensor for the detection of E. coli O157:H7 . In this research, carbon dots were utilized as fluorescence donors, while covalent organic frameworks served as fluorescence acceptors. An antibody (Ab) specific to E. coli O157:H7 was employed to create a CD-Ab-COF immunosensor by linking CDs and COFs. When the antibody was specifically bound with E. coli O157:H7, the connection between CDs and COFs was interrupted, resulting in the restoration of carbon dot fluorescence. The sensor exhibited a linear detection range spanning from 0 to 106 CFU/mL, with a limit of detection of 7 CFU/mL. Aptamers are short oligonucleotide molecules (ssDNA or RNA) that typically range from 25 to 90 bases in length and are capable of binding strongly, specifically to target molecules . Aptamers can specifically recognize and bind with targets through non-covalent interactions such as hydrogen bonding, van der Waals forces, electrostatic interactions, hydrophobic effects, and π-π stacking . Compared with antibodies, aptamers possess enormous advantages like stability across various temperatures and pH, ease of production, longer shelf life, fast production, and low batch variability, which propell its boom in the field of biosensing. In aptamer-based fluorescence biosensors, because most targets are non-fluorescent, various fluorescence signal generation strategies are used to detect fluorescence signals. Common fluorescence signal generation strategies include FRET, fluorophore-linked aptamer assays , fluorescent light-up aptamers, and fluorescence anisotropy. For example, Ouyang et al. developed an aptamer based on upconversion fluorescence resonance energy transfer for S. aureus detection . 
In this work, AuNPs were functionalized with aptamers, whereas UCNPs were conjugated with DNA complementary to the aptamers (cDNA) ( B). Complementary base pairing between the cDNA and the aptamers brought the UCNPs and AuNPs into proximity, quenching the upconversion fluorescence. The aptamer-functionalized AuNPs preferentially bound to S. aureus and released the UCNPs, restoring the UCNP fluorescence. However, challenges remain for the practical application of these biosensors. Because food is a complex matrix, the sensitivity and selectivity of aptamers are affected by sample conditions, including interfering components, pH, ionic strength, and viscosity. Bacteriophages are bacterial viruses consisting of DNA or RNA that infect host cells and are capable of rapid self-replication. Initially, bacteriophages were used to control bacterial growth; in recent years, a new detection technology based on bacteriophage specificity has emerged. With the continuous advancement of visualization technology, novel bacteriophage-based fluorescent probes for the detection of food-borne pathogens have been developed successfully. Chen et al. used a biotin-expressing T7 bacteriophage combined with avidin-modified FeCo magnetic nanoparticles for the isolation of pathogenic bacteria. The nanoprobe allowed specific recognition of and attachment to E. coli cells, and the isolation efficiency was comparable to that of antibody-labeled pathogenic bacteria. Zhao et al. explored a fluorescence biosensor mediated by phage and Clostridium butyricum Argonaute (CbAgo) for the detection of viable Salmonella typhimurium without the need for complicated DNA extraction and amplification procedures ( C). In this approach, a phage was used to capture viable S. typhimurium, while a lysis buffer was used to lyse the captured cells.
Subsequently, CbAgo cleaves the bacterial DNA to yield target DNA that directs a further targeted cleavage of fluorescent probes. The resulting fluorescent signal accumulates on a streptavidin-modified single microsphere. The entire detection process is then analyzed and interpreted using machine vision and learning algorithms, allowing highly sensitive detection of S. typhimurium with a limit of detection of 40.5 CFU/mL and a linear range of 50–10^7 CFU/mL. The stability of phages across different pH values and temperatures makes them advantageous probes in the field of microbial detection. However, they only target bacteria that can serve as phage hosts, which limits their widespread application.

4.1. LAMP Combined with Fluorescence Sensors

Loop-mediated isothermal amplification (LAMP) is a nucleic acid amplification method known for its high specificity and simplicity compared to PCR. Some detection targets can be quantitatively analyzed by analyzing the amplification products. With the development of visualization technology, new fluorescence probes based on LAMP with fluorescent labels have been fabricated, and many have been reported for the detection of food-borne pathogens. For example, Lee et al. developed a rapid, sensitive, and visual detection method for E. coli based on CRISPR/Cas12a and LAMP technology. This method is capable of correcting false-negative results produced by LAMP and achieves a detection limit of 1.22 CFU/mL without pre-enrichment culture ( A).

4.2. Fluorescence Image Combined with Fluorescence Sensors

Fluorescence image detection is a fast, real-time analytical method. Depending on the fluorescent material, such probes can be divided into fluorescent nanoparticles and fluorescent dyes combined with a microscope, and related work on food-borne pathogen detection has been reported. For instance, Sajal et al.
developed a quantitative method for S. aureus detection using fluorescence images in peanut milk ( B). The fluorescence images were taken by a smartphone camera with a light-emitting diode as the excitation light source. Detection is based on aptamer-functionalized fluorescent magnetic nanoparticles that capture S. aureus cells, enabling quantitative analysis through fluorescence imaging. This fluorescence imaging technology for S. aureus detection offers many advantages: it is fast (10 min), easy to operate, highly sensitive (LOD = 10 CFU/mL), and highly selective.

4.3. Q-PCR Combined with Fluorescence Sensors

The polymerase chain reaction (PCR) is a technology that produces multiple copies of a target DNA. PCR-based monitoring of food-borne pathogens can be divided into two types: conventional PCR and qPCR. Conventional PCR detection is based on stained-gel electrophoresis and lacks high specificity. qPCR introduces fluorescently labeled probes or fluorescent chemistries during PCR amplification; from the change in fluorescence intensity over the amplification cycles, the DNA sequence in the sample can be quantified. With the development of detection technology, qPCR was developed as a new fluorescence-based detection approach that overcomes the low specificity of conventional PCR. Many studies on the detection of food-borne pathogens by qPCR have been reported in recent years. For example, Jun et al. developed a multiplex real-time qPCR technology combined with polymer-network carbon nanotubes to detect four bacteria ( P. aeruginosa , K. pneumoniae , A. baumannii , E. coli ). The four bacteria can be differentially identified in 30 min by this method. Pan et al. presented a fast analytical method to detect E. coli O157:H7 in food samples based on propidium monoazide combined with droplet PCR.
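The qPCR readout described above (fluorescence change across amplification cycles) is quantified in practice through a standard curve of cycle threshold (Ct) against log template amount. A toy sketch with invented numbers, not values from the cited studies:

```python
import numpy as np

# Hypothetical standard curve: serial dilutions of known template amount
log10_copies = np.array([2.0, 3.0, 4.0, 5.0, 6.0])
ct = np.array([30.1, 26.8, 23.5, 20.2, 16.9])     # measured Ct values

slope, intercept = np.polyfit(log10_copies, ct, 1)

# Amplification efficiency from the slope (100% corresponds to slope ~ -3.32)
efficiency = 10 ** (-1.0 / slope) - 1.0

# Quantify an unknown sample from its Ct
ct_unknown = 22.0
copies = 10 ** ((ct_unknown - intercept) / slope)
print(f"efficiency ~= {efficiency:.1%}, estimated copies ~= {copies:.0f}")
```

A slope near −3.3 Ct per decade, as in these synthetic numbers, corresponds to roughly 100% amplification efficiency; real assays validate the curve before quantifying unknowns.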
Propidium monoazide is a DNA amplification inhibitor that penetrates injured cells with compromised membrane integrity but cannot penetrate viable cells. qPCR combined with propidium monoazide completed E. coli detection in no more than 4.5 h with high sensitivity (LOD = 1 CFU/mL) and without false-positive interference. These successful examples of food-borne pathogen detection demonstrate qPCR's superiority in sensitivity and selectivity as a fluorescence-based probe.
In order to better cope with the challenges posed by food-borne pathogens, this review summarized the fluorescent materials and the different kinds of fluorescence sensors used in the detection of food-borne pathogens. We highlighted the optical properties of the different fluorescent materials. Additionally, to present an in-depth analysis of the utility of various fluorescence assays for food safety, their detection limits, assay times, and linear detection ranges were discussed. QDs have strong fluorescence emission efficiency, and their structure makes them easy to modify with biological molecules such as antibodies and aptamers. However, QDs are toxic, which greatly limits their applications. Compared to conventional quantum dots, CDs have the advantages of low toxicity and high aqueous stability. MOFs offer a high specific surface area, controllable pore structures, and significant thermal stability, but their stability in aqueous media is often poor. UCNPs have the advantages of low toxicity and unique luminescent properties and have been applied in multiplexed pathogen detection. Moreover, different fluorescence biosensors were reviewed according to their biosensing components, which show great potential in food-borne pathogen detection. Although nanomaterial-based fluorescent assays have advantages over traditional detection methods, they still have some limitations. (1) They have high technical requirements, demanding special equipment and trained personnel to operate. (2) They have a high cost: fluorescent nanomaterials and detection equipment are usually expensive, which limits their application scenarios and scope of adoption.
(3) Limitations and interferences can arise when fluorescent sensors are used with different food matrices, which may lead to false positives or false negatives. Based on the above summary, the following points should be considered. (1) With the development of miniaturized hardware, detection is no longer confined to procedures performed by professionals in the laboratory, and more portable detection devices can be applied. (2) Smart devices such as smartphones should be combined with fluorescence sensors to create new intelligent rapid-detection platforms. (3) Suitable methods and devices should be developed to meet the urgent need for efficient on-site testing of food samples and the environment. (4) Given the diversity and complexity of pathogen contamination in food matrices, the appropriate selection of biorecognition elements and nanomaterials for detecting multiple bacteria is becoming increasingly important. (5) The resistance of fluorescent nanomaterials to harsh environments should be enhanced to avoid the influence of environmental factors on fluorescence intensity and fluorescence lifetime.
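Point (2) of the outlook, smartphone-coupled fluorescence readout, usually reduces to extracting a background-corrected intensity from a region of interest in a camera frame, as in the Sajal et al. example earlier. A minimal sketch on synthetic image data (the ROI position and calibration constants are assumptions, not values from any cited study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic grayscale frame: noisy dark background with one fluorescent spot
frame = rng.normal(10.0, 2.0, size=(100, 100))
frame[40:60, 40:60] += 80.0                     # region containing labeled bacteria

roi = frame[40:60, 40:60]
background = np.median(frame)                   # robust background estimate
net_signal = roi.mean() - background

# Map through a pre-built calibration (signal = a * log10(CFU/mL) + b, assumed)
a, b = 20.0, -10.0
estimated_log_conc = (net_signal - b) / a
print(f"net signal ~= {net_signal:.1f}, log10(CFU/mL) ~= {estimated_log_conc:.2f}")
```

Using the median of the whole frame as the background keeps the estimate robust to the bright spot itself, since the spot occupies only a small fraction of the pixels.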
Dataset development of pre-formulation tests on fast disintegrating tablets (FDT): data aggregation

The pharmaceutical industry, as one of the largest industries in the world, seeks on the one hand to discover and develop new drugs and, on the other, to improve existing drug formulations with methods that meet the requirements of treatment and disease. Simplifying and streamlining the pre-formulation process has therefore become essential for pharmaceutical experts in this industry. Among the most popular solid dosage forms, which include capsules and tablets, tablets are the most frequently used owing to their ease of swallowing. Another significant advantage of tablets is their flexibility in addressing various disease conditions: changes in the composition of excipients yield tablets with different functions, for example immediate-release or modified-release tablets. According to the United States Pharmacopeia (USP) definition, immediate-release tablets are tablets that, when administered and exposed to gastrointestinal fluids, disintegrate and release their ingredients in less than 3 min. The disintegration time test is sufficient to evaluate this type of tablet formulation. The development of such tablets involves pre-formulation studies through trial and error, which are expensive, time-consuming, and laborious; moreover, these methods are a known source of environmental pollution. Executing these experiments has become a major challenge for the pharmaceutical industry. In the last decade, machine learning techniques have been increasingly used in research to predict formulations.
Machine learning techniques are superior to conventional statistical methods in that they learn from data and can automate processes, leading to improved development speed, optimized formulation, and significant cost savings. One technique that has gained considerable attention recently is deep learning, a subfield of machine learning that trains artificial neural networks to automatically learn and make complex predictions or decisions from data. Studies over the years have demonstrated that these algorithms yield better results than other machine learning methods in predicting the disintegration or dissolution time of tablets, drug solubility in water, and the detection of new medicines. As an example, in study , regression models were used to predict the correct drug formulation. The study introduced a deep neural network trained on two types of drug forms: oral fast disintegrating films (OFDF) and oral sustained release matrix tablets (SRMT). Additionally, the deep learning method was compared to six other machine learning algorithms. In study , deep neural networks (DNN) and artificial neural networks (ANN) were employed to design a quantitative model for predicting the disintegration time of oral fast disintegrating tablets manufactured by direct compression. In study , a recurrent neural network was utilized to predict molecular properties by examining the solubility of drugs in water based on their molecular structure. The initial step in developing a prediction model is data collection. In this case, due to the limited availability of an existing dataset, our study aimed to create a dataset by aggregating information from articles on fast-disintegrating tablet (FDT) formulations.
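The prediction task in the studies cited above maps formulation features to a disintegration time. The cited works use deep networks; as a hedged illustration of the feature-to-target mapping only, here is an ordinary least-squares baseline on synthetic data (feature names and coefficients are invented):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic formulations: [superdisintegrant %, hardness (kg), total weight (mg)]
X = rng.uniform([2.0, 3.0, 100.0], [8.0, 6.0, 400.0], size=(60, 3))

# Invented ground truth: more disintegrant -> faster, harder tablets -> slower
y = 40.0 - 3.0 * X[:, 0] + 4.0 * X[:, 1] + 0.02 * X[:, 2] + rng.normal(0, 1, 60)

Xb = np.column_stack([np.ones(len(X)), X])       # add intercept column
coef, *_ = np.linalg.lstsq(Xb, y, rcond=None)

pred = Xb @ coef
r2 = 1.0 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print(f"R^2 = {r2:.3f}, disintegrant coefficient = {coef[1]:.2f}")
```

A negative coefficient on the disintegrant column recovers the assumed relationship; a deep model would replace the linear map with stacked nonlinear layers but consume the same feature table.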
We believe this effort is necessary to meet the pharmaceutical industry's need to automate formulation processes, which requires machine learning techniques, including deep learning, to predict the disintegration time of FDT, an important specification in pre-formulation studies. Given the requirement for a comprehensive dataset, the primary objective of this study was to compile data and create a dataset of FDT formulations and their corresponding properties based on previous studies. Given the extensive nature of the pharmaceutical technologies field and the absence of a comprehensive dataset encompassing pharmaceutical formulations and their corresponding control test values, a key requirement for developing predictive models, we performed a systematic search across four databases. The tablet dosage form was selected based on its widespread usage, and within the tablet category, fast-disintegrating tablets were chosen. The evaluation of these tablets focused on their disintegration time, friability, and hardness, which are considered crucial parameters. A total of 1,503 articles were retrieved through the database search. During the initial review, which involved examining the articles' full texts to identify those analyzing formulations with the desired structural values and characteristics, 726 articles were identified. Among these, 193 articles were found to be duplicated across multiple databases. Subsequently, 533 articles proceeded to the next step, a detailed full-text assessment against the inclusion criteria for adding formulations to the dataset. As a result, 301 articles did not meet all the inclusion criteria and were excluded from the study. The summarized steps can be visualized in Fig. . After reviewing the remaining 232 articles, a total of 1,982 formulations were extracted. An overview of the dataset is provided in Table .
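The screening flow reported above is simple subtraction; a quick arithmetic check (counts from the text):

```python
retrieved = 1503                 # articles from the database search
relevant_after_screening = 726   # analyzed the desired values/characteristics
duplicates = 193                 # found in more than one database
excluded_on_criteria = 301       # failed at least one inclusion criterion

full_text_assessed = relevant_after_screening - duplicates   # 726 - 193 = 533
included = full_text_assessed - excluded_on_criteria         # 533 - 301 = 232
print(full_text_assessed, included)
```

The 232 articles recovered here match the number the text reports as finally reviewed for formulation extraction.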
The formulation information, including the name and content of the Active Pharmaceutical Ingredient (API), as well as other excipients, process details, and quality control properties, was recorded in the dataset. Each formulation in the final dataset contains the following features: API name, Dose, Amount of Excipients (each excipient as a separate column), Total Weight, Hardness, Friability, Thickness, Wetting Time, Drug Content, Disintegration Time, Content Uniformity, Water Absorption Ratio, Mixing Time, Diameter, Bulk Density, Tapped Density, Carr's Compressibility Index, Hausner Ratio, Angle of Repose, Tablet Porosity, Assay, Moisture Content, Dispersion Time, and Cumulative Drug Release. Achieving an optimal tablet formulation traditionally involves multiple rounds of trial and error, as indicated by existing research. However, applying deep learning techniques as part of Quality by Design (QbD) principles in the pharmaceutical industry requires a comprehensive database of relevant formulations, which was previously unavailable. In this study, we created such a dataset by aggregating data to enable advanced analytics aimed at identifying optimal formulations; to the best of our knowledge, this is the first such endeavour. The dataset contains valuable information on various formulations of fast-disintegrating tablets that can be utilized in other studies, and it supports analysis of the formulation steps. The methodology employed here can also be applied to develop datasets for other dosage forms, serving as a prerequisite for further research on modelling drug formulations. In future work, this dataset will be used to construct machine learning and deep learning models to predict the disintegration time of fast-disintegrating tablets.
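Several of the flow-property columns listed above are derived quantities. Assuming the standard pharmacopoeial definitions (the source does not state them), Carr's compressibility index and the Hausner ratio follow directly from bulk and tapped density:

```python
def carr_index(bulk_density: float, tapped_density: float) -> float:
    """Carr's compressibility index in % (standard definition)."""
    return 100.0 * (tapped_density - bulk_density) / tapped_density

def hausner_ratio(bulk_density: float, tapped_density: float) -> float:
    """Hausner ratio (standard definition); higher values mean poorer flow."""
    return tapped_density / bulk_density

# Example powder: bulk 0.45 g/mL, tapped 0.55 g/mL (invented values)
c = carr_index(0.45, 0.55)     # ~ 18.2 %
h = hausner_ratio(0.45, 0.55)  # ~ 1.22
```

For this example powder, a Carr index near 18% and a Hausner ratio near 1.22 would conventionally be read as fair flowability, so both columns can be recomputed and cross-checked against the reported bulk and tapped densities.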
Another notable finding from our study, as depicted in Fig. , is that a significant proportion of the articles were found in the Scopus and Google Scholar databases. By searching these two databases, we were able to access the majority of the articles included in our study, which highlights their value as sources of research literature.

Limitations

In selecting formulations from the articles, several limitations led to the exclusion of some formulations, or even entire articles, during data extraction:
• Some articles did not report the main features of interest specified in the inclusion criteria for this study.
• A large number of articles did not use direct compression as the method for material blending.
• Some articles reported dispersion time rather than disintegration time as the response variable; these formulations were excluded because of the different nature of these two response variables.
Establishing a standing patient advisory board in family practice research: A qualitative evaluation from patients' and researchers' perspectives

INTRODUCTION

Well-prepared patient and public involvement (PPI) is an integral part of high-quality research and an effective tool to prevent so-called 'research waste'. Integrating patients' and providers' perspectives at an early stage fosters the feasibility of research projects. Furthermore, PPI contributes to the development of patient-relevant care solutions within studies and increases the transferability of study results into primary care. Therefore, the establishment of formats and structures for stakeholder involvement, that is, the involvement of family practitioners, health care assistants and patients, is a significant feature of most family practice-based research networks (FPBRNs) in Germany. This development was boosted in 2020 when the Federal Ministry of Education and Research funded six regional and transregional FPBRNs encompassing 23 academic family medicine departments and a coordination office within the initiative DESAM-ForNet. The initiative aims to foster high-quality research in the outpatient setting by developing sustainable, reliable and scalable research structures comparable to those in the inpatient setting. Over time, FPBRNs will add evidence that reflects the tasks and needs of family practitioners, medical health assistants and their patient populations to the overall body of evidence on prevention, diagnostics and therapies. FPBRNs have started to develop qualification programmes for research practices, worked on solutions to gather patient data from research practices, conducted several (clinical) interventional pilot studies within the networks and are in close contact with family practices all over Germany.
To incorporate the patients' perspective in research within our FPBRN Frankfurt am Main (ForN), we decided to establish a patient advisory board (PAB) as an initial component of our network structures. In contrast to study-specific PABs, we aimed to establish a standing PAB that is located within our FPBRN and selectively involved in different studies. Furthermore, we aimed to include patients who represent the broad patient population of family practices, that is, of all ages, genders and social backgrounds, with and without pre-existing conditions. We chose the term 'patient advisory board' and addressed potential members as 'patients' because we wanted to focus on their role in family practice, namely as patients. With regard to the inclusion criteria, we could also have chosen the term 'citizens'. The diversity of conditions and experiences of PAB members is another distinct feature compared to other study-specific PPI approaches in healthcare, which often include patients with a similar medical condition. Accordingly, we aimed to include persons who contribute their individual everyday experiences of healthcare in family practice, in contrast to patient representatives from patient organizations focused on a specific condition. We did not actively reach out to caregivers, even though some patients may hold a double role. In this paper, we describe the establishment of a standing PAB within our FPBRN ForN, outline the methods and content of the PAB's research involvement and present PAB members' and researchers' perspectives on these processes.

MATERIALS AND METHODS

The reporting in this article follows the GRIPP2 Reporting Guideline.

2.1 PPI strategy and level of PPI

We aimed to establish a standing PAB as part of a sustainable and ready-to-use research structure and to create a relationship of two-way learning and mutual trust.
This is to be fostered by a coordinator as a stable contact person who organizes PAB meetings, is responsive to barriers and questions from PAB members and operates as a mediator between the FPBRN's different study teams and the PAB. The coordinator was trained and experienced in qualitative and participatory methods as well as workshop design and moderation. PAB members are involved via participatory workshop meetings (3–4 per year) or in one-on-one consultations, for which they are financially reimbursed. Predominantly, the level of involvement is defined as 'consultation', that is, 'asking members of the public for their views and use these views to inform decision making'.

2.2 Recruitment

We aimed to include patients who represent the broad patient population of family practices, that is, of all ages, genders and social backgrounds, with and without pre-existing conditions. Therefore, we had no inclusion criteria besides the ability to participate and communicate in PAB meetings. Furthermore, to maintain clarity of roles, we decided to exclude persons with a background in health care research. We used several multimodal recruitment strategies and recruited patients between August 2021 and April 2022. (1) We developed an information flyer with a prestamped response postcard that we handed over to 10 interested research practices for display in their waiting rooms. We also asked pharmacies to display our information flyer. (2) We talked to interested family practitioners about the PAB, handed over information material and asked them to approach patients they deemed interested individually. (3) We asked patient participants from a former study, in which an intervention was co-designed, if they would like to join the PAB. (4) We contacted the coordinator of the standardized patient programme at the Frankfurt University Hospital and asked him to approach standardized patients individually with our PAB information materials.
Standardized or simulated patients support medical education by acting like a patient with a certain disease. As they are used to communicating with medical staff, we hoped their barrier to involvement would be low, even though they have no formal medical knowledge and their input is based on personal experiences. (5) We developed a workshop for the citizen sciences programme at Goethe University Frankfurt. The programme's schedule of lectures and workshops open to the public is available in print and online. When patients contacted us, we asked them for a personal phone call. Within this call, we introduced the FPBRN and the Institute of General Practice, elaborated on their role and tasks as PAB members, informed them about the planned meeting sequence, provided room for questions and announced the date of the next onboarding workshop. After the phone call, we sent a short questionnaire that asked for contact information, gender, age, medical conditions and the preferred format for PAB meetings (digital or face-to-face), as well as consent to data processing.

2.3 Training

2.3.1 Onboarding workshop

We designed an onboarding workshop that included information on our FPBRN as well as the research topics covered at our institute, and introduced the stages of a research project together with examples of patient involvement at each stage. After each information input, we planned a short group discussion so that PAB members could get to know each other, that is, their expectations of the PAB, their experiences with family medicine and which aspects of family medicine research they found most interesting. We made clear that there is no obligation to share experiences and that they could select which parts they wanted to share with the group. Furthermore, we asked everyone to keep experiences shared within meetings confidential.
2.3.2 Technical introduction workshop and technical support

Each PAB member was offered a technical introduction workshop in which the functions of the video conference system were practised. Furthermore, one team member was available during each workshop to solve technical problems with the video conference system via phone.

2.3.3 On-the-job training

PAB members were informed about the topic and the attending researchers of each PAB workshop via an invitation email. We aimed to minimize the need to prepare in advance; therefore, we designed each meeting so that PAB members could participate meaningfully without preparation. To achieve this, attending researchers were asked to prepare a methods section that introduced the study design and methods of the study to be discussed in the following workshop, as well as basic information on the overall aim of the presented study. This on-the-job training was intended to enhance PAB members' knowledge of research methods step by step, with methods always presented in the context of the actual study and the workshop on that study. In this manner, we aimed to combine methodological training with study content and thus contextualize the PPI activity within the study setting and vice versa. This approach also facilitated researchers' on-the-job training in PPI through the development of PAB workshops together with the PPI coordinator. As we aimed to implement and expand PPI activities within the study teams of the FPBRN, we provided methodological counselling in PPI when necessary. Researchers with little experience in PPI could approach the coordinator with a topic they wished to have reflected on from the patients' perspective, and the coordinator worked with them to develop a feasible workshop design by reflecting together on questions such as: What is a realistic aim for a 2 h workshop, and how much content can be discussed within this time?
What is the most important question to be discussed? Which changes to the study are actually possible? Which background information do PAB members need to discuss the topic? How is this content best presented and prepared for a non-scientific audience?

2.3.4 Glossary

We started a glossary in the onboarding workshop and asked PAB members to write each unclear term into the chat. A member of the academic team explained the term immediately, and the term was added to a glossary that was updated after each meeting, emailed to participants and displayed in the secure PAB section of the FPBRN's website.

2.4 Evaluation

The literature on the evaluation of PPI is diverse. While some authors claim that we need to focus more strongly on PPI as a social interaction with regard to power relations, 'space to talk' and 'space to change', others stress that we need more information on the actual impact of PPI on research, that is, what really changed by involving patients and stakeholders. Most authors emphasize, however, that we need more information and more reporting on PPI activities altogether. In our evaluation of the PAB's activities, we addressed both PPI as a social interaction, from PAB members' and researchers' perspectives, and PPI's impact, from researchers' perspectives.

2.4.1 Evaluation from PAB members' perspectives

After each onboarding workshop and each PPI workshop, we asked PAB members to comment on the workshop via a short online feedback form containing three open questions on process and social interaction: (1) What did you like best today? (2) What did you miss today? (3) Is there anything else you want to share with us? The anonymous written answers were transferred to an Excel sheet and imported into MAXQDA 2018. We analyzed the answers grouped into feedback on the onboarding workshops and on the project-specific PAB meetings.
Using thematic analysis, we first took a deductive approach and grouped the data with regard to the three questions in the online feedback form. The data were then coded inductively: answers were coded multiple times when they included multiple aspects. Finally, the codes were grouped into themes. These themes are presented in the results section with exemplary quotations from PAB members. Marginal experiences are also mentioned in the results.

2.4.2 Evaluation from researchers' perspectives

To assess the social interaction within the PAB meetings, we asked researchers, similarly to PAB members, after each PAB meeting (1) what they liked best today and (2) what they felt was challenging. To assess PPI's impact, we further asked (3) with which aim they had involved the PAB, (4) whether they felt this involvement was beneficial for their research and what should be different next time to make it more beneficial, (5) which changes to the research were made due to the PAB meeting and (6) whether there was input from the PAB that was not included in the research and why. Written answers were imported into MAXQDA 2018 and analyzed using thematic analysis. First, we took a deductive approach and grouped the data with regard to the six questions of the feedback form. The data were then coded inductively: answers were coded multiple times when they included multiple aspects. Finally, the codes were grouped into themes. These themes are presented in the results section with exemplary quotations from researchers. Marginal experiences are also mentioned in the results.

PPI strategy and level of PPI

We aimed to establish a standing PAB as part of a sustainable and ready-to-use research structure and to create a relationship of two-way learning and mutual trust. This is to be fostered by a coordinator as a stable contact person who organizes PAB meetings, is responsive to barriers and questions from PAB members and operates as a mediator between the FPBRN's different study teams and the PAB.
The coordinator was trained and experienced in qualitative and participatory methods as well as in workshop design and moderation. PAB members are involved via participatory workshop meetings (3–4 per year) or in one-on-one consultations, for which they are financially reimbursed. Predominantly, the level of involvement is defined as ‘consultation’, that is, ‘asking members of the public for their views and use these views to inform decision making’.

Recruitment

We aimed to include patients representing the broad patient population of family practices, that is, of all ages, genders and social backgrounds, with and without preconditions. Therefore, we had no inclusion criteria besides the ability to participate and communicate in PAB meetings. Furthermore, to maintain clarity of roles, we decided to exclude persons with a background in health care research. We used several multimodal recruitment strategies and recruited patients between August 2021 and April 2022. (1) We developed an information flyer with a prestamped response postcard that we handed over to 10 interested research practices for display in their waiting rooms; we also asked pharmacies to display the flyer. (2) We talked to interested family practitioners about the PAB, handed over information material and asked them to individually approach patients they deemed interested. (3) We asked patient participants from a former study, in which an intervention was co-designed, whether they would like to join the PAB. (4) We contacted the coordinator of the standardized patient programme at Frankfurt University Hospital and asked him to approach standardized patients individually with our PAB information materials. Standardized or simulated patients support medical education by acting like a patient with a certain disease.
As they are used to communicating with medical staff, we hoped that their barrier to involvement would be low, even though they have no formalized medical knowledge and their input is based on personal experiences. (5) We developed a workshop for the citizen science programme at Goethe University Frankfurt; the programme's schedule of lectures and workshops open to the public is available in print and online.

When patients contacted us, we asked them for a personal phone call. In this call, we introduced the FPBRN and the Institute of General Practice, elaborated on their role and tasks as a PAB member, informed them about planned meeting sequences, provided room for questions and announced the date of the next planned onboarding workshop. After the phone call, we sent a short questionnaire asking for contact information, gender, age, medical conditions and their preferred format for PAB meetings (digital or face-to-face), as well as consent to data processing.

Training

2.3.1 Onboarding workshop

We designed an onboarding workshop that included information on our FPBRN as well as research topics covered at our institute and introduced the stages of a research project together with examples of patient involvement at each stage. After each information input, we planned a short group discussion so that PAB members could get to know each other, that is, their expectations of the PAB, their experiences with family medicine and which aspects of family medicine research they found most interesting. We made clear that there is no duty to share experiences and that they could select which parts they wanted to share with the group. Furthermore, we asked everyone to treat experiences shared within meetings as confidential.
RESULTS

3.1 PAB members

Today, the FPBRN's PAB has 11 members ranging from 17 to 70 years of age, with and without pre-existing conditions (see Table ). Only one patient preferred digital to face-to-face meetings at the recruitment stage. Nevertheless, the COVID-19 pandemic forced us to hold most meetings digitally. No PAB member resigned because of the predominantly digital format.

3.2 Recruitment strategies

The most successful strategy for recruiting patients as PAB members was for their family practitioner to inform them about the PAB individually. No patient was recruited via the display of flyers and information material in family practitioners' waiting rooms alone. Two patients contacted us because they had been informed about the PAB by a friend: a recruitment strategy we did not plan for in advance (Table ).
3.3 PPI workshops and PAB activities

From October 2021 to July 2023, we conducted two digital onboarding workshops for training and trained one PAB member individually. We conducted three digital and two in-person project-specific workshops in which the PAB gave input on research projects of the FPBRN. At these workshops, the coordinator of the PAB was present together with researchers from the project in question. Three PAB members gave feedback on two lay-language brochures with project results. We invited the PAB to the ‘Day of Family Medicine’ at our university hospital, and three PAB members joined us for lunch and the keynote lecture on ‘Patient Involvement in Family Medicine Research’. Furthermore, PAB members joined the anniversary celebration of our FPBRN, and two of them took part in a plenary discussion on ‘Research in the FPBRN as an interprofessional undertaking’ (Table ).

3.4 Evaluation from PAB members' perspectives

We analyzed 10 feedback forms on the two onboarding workshops and 30 feedback forms commenting on the five project-specific workshops.

3.4.1 Onboarding—Intelligible information and congenial atmosphere

Concerning the onboarding workshops, PAB members positively stressed the intelligibility of the information provided. Concerning content, they especially liked the presentation of PAB members' roles and tasks and the introduction of the FPBRN. The responsiveness of the researchers who moderated was also stressed: ‘It was a very comprehensible and informative orientation meeting. I am very happy to be able to participate. The coordinators chaired the meeting very well and with a lot of empathy’. PAB members liked that ‘everything was explained, in a friendly and patient manner’. They furthermore mentioned the ‘congenial open atmosphere’ and felt that they were a ‘good mixture’. Two participants wished for more time to get to know the other PAB members and for a comprehensive introductory round. One member wished to meet in person.
3.4.2 Project-specific workshops—Exchange of perspectives and exciting topics

In PAB members' feedback on the benefits of the project-specific workshops, ‘exchange’ was the predominant topic: PAB members stressed that they liked the exchange of ideas and perspectives with other PAB members, the ‘exciting and open discussions’ and the extra time to get to know each other. As with the onboarding workshops, PAB members liked the ‘intelligible presentation’ and ‘graphic explanations’. They also mentioned the content of the five project-specific meetings positively, liking the ‘interesting information’ and the ‘exciting, future-oriented topic’. One PAB member summarized: ‘It was highly informative. I liked the topic, the presentations and the exchange very much’.

Answers on what PAB members felt was lacking were heterogeneous. While most had no wishes, the wish for more time to answer questions and give input was articulated twice. Two persons wished for more information on how PAB members' feedback was incorporated into the research projects, and one person wished to learn how the project went on overall. Furthermore, in-person meetings were wished for twice, and one person wished for materials in advance to prepare for the meetings. The fourth and fifth meetings finally took place in person. The members present stressed their appreciation of the ‘personal and direct’ in-person discussions and felt that ‘meeting in-person helps us to move forward’.

3.5 Evaluation from researchers' perspectives

We included in the analysis 14 feedback sheets from researchers on the five project-specific workshops. Like PAB members, researchers very often underlined the open and lively discussions within the PAB: ‘[I liked best] that everyone was involved, experiences were shared openly and a dialogue evolved between board members and researchers’.
Mentioned challenges encompassed time management and appropriate communication: one researcher found it hard to interrupt because discussions were so lively and enthusiastic, while another found it challenging to ‘keep the flow of the conversation running’. Furthermore, the preparation of study results for a patient audience was mentioned as a difficult task; at the same time, this preparation was also seen as a benefit, because it helped researchers to reflect again on the projects' most important results while anticipating the patients' perspective.

All researchers felt the PAB meetings were helpful and productive. Two project-specific workshops discussed study results with patients. In these cases, concrete changes could not be named, but the PAB's input helped researchers to weigh their assumptions and research findings from patients' perspectives and to decide on future research: ‘The workshop underlined our findings from patients' perspectives; respectively, a certain topic was strengthened that patients felt was especially important’. In three other workshops, PAB members were involved in studies in progress, that is, the selection of indications for a systematic review proposal, checking a patient questionnaire for comprehensibility and relevance and giving feedback on a prototype of information material on hypertension. In these cases, researchers also felt that the PAB's input was beneficial and improved the research a lot, and it was easier for them to name concrete changes to the study based on PAB members' input. However, most researchers also highlighted obstacles in transferring the PAB's input into research. For example, one researcher mentioned that it might be challenging to decide which input to prioritize given the diverse and sometimes contradicting perspectives of the PAB members.
Furthermore, structural and methodological barriers were mentioned, such as the use of standardized items in a questionnaire, which can therefore hardly be changed, as well as the limited overall length of the questionnaire: ‘When it comes to validated items for the calculation of an index – there's very little room for adaptions. That's why we cannot implement some of the PAB's recommendations for methodological reasons’. Researchers also named time constraints and deadlines from funding agencies as barriers to fully integrating PAB members' feedback. In other cases, the processing of PAB members' feedback depended on cooperation partners and was therefore not predominantly in the hands of the attending researchers: ‘Naming concrete changes is difficult, because we do not solely decide about the implementation. Having said this, I believe that the PAB's stressing of personal communication between patients, health care assistants and family practitioners was important for the future course of the project and that the PAB affected this future course’.
However, most researchers also highlighted obstacles in transferring the PAB's input into research. For example, one researcher mentioned that it might be challenging to decide which input to prioritize given the diverse and sometimes contradictory perspectives of the PAB members. Furthermore, structural and methodological barriers were mentioned, such as using standardized items in a questionnaire that therefore can hardly be changed, as well as the limited overall length of the questionnaire: ‘When it comes to validated items for the calculation of an index – there's very little room for adaptions. That's why we cannot implement some of the PAB's recommendation for methodological reasons’. Researchers also named time constraints and deadlines from funding agencies as barriers to fully integrating the PAB members’ feedback. In other cases, the processing of the PAB members’ feedback depended on cooperation partners and was therefore not predominantly in the hands of the attending researchers: ‘Naming concrete changes is difficult, because we do not solely decide about the implementation. Having said this, I believe that the PAB's stressing of personal communication between patients, health care assistants and family practitioners was important for the future course of the project and that the PAB affected this future course’. DISCUSSION PAB members stressed the fruitful and open atmosphere, appreciated the changing topics of each meeting and liked the exchange of ideas and perspectives with one another and the researchers. The building of this relationship succeeded, even though most meetings took place in a digital setting, by planning for time to get to know each other and social interaction within each meeting. With the end of pandemic‐related restrictions of social contact, many PAB members strongly appreciated meeting in person. Others pointed out the increasing challenge of combining PAB activities with work duties when travelling to in‐person PAB meetings.
In the future, a mix of in‐person and digital meetings seems feasible. The most successful recruitment strategy was family practitioners inviting patients personally to join the PAB. Other successful recruitment strategies also involved personal interactions, while the sole display of flyers in family practices and pharmacies did not motivate any patients to join the PAB. This stresses the importance of trust and sustainable relationships in PPI, while it also raises the question of representation (see Section ). The preparation of research material for workshops with the PAB was seen as demanding by some researchers, while it paid off both for researchers—who reflected on the significance of their research for patients and the public—and for PAB members who appreciated the ‘intelligible presentation’ and ‘graphic explanations’ a lot. While all researchers felt that the PAB meetings played a crucial role in weighing findings and emphasizing certain aspects of their projects, some researchers could not name concrete changes that were based on the PAB meetings. This was partly due to the content of the meetings, that is, discussions of project results, but also to methodological and structural barriers to implementation such as standardized questionnaire items, deadlines from funding agencies or the need to come to terms with cooperating partners. These barriers relate to contemporary research structures that are in many cases highly formalized, competitive, involving multiple players and dependent on project‐based external funding. In these surroundings, the topic of providing ‘space to talk’ but also providing and being transparent with regard to ‘space to change’ is especially important. Researchers must communicate openly on research structures, but also on the choices they make and the reasons for these choices when it comes to actual changes made to research projects based on PPI. 
This is important to prevent ‘sham participation’, and because PAB members stressed the importance of being informed about the impact of their meetings and the progress of the research projects they discussed. Concerning authorship and acknowledgement of contributions to research, we initiated a discussion within the PAB on the importance of visibility by providing individual names and the possibility of protection by using a group identity. The PAB decided that they do not want their names to appear on the FPBRN's website or elsewhere. In publications, the PAB's contribution is honoured in the acknowledgements. With regard to the current level of involvement, that is, the PAB's counselling on research projects within single sessions, coauthorship has not been feasible so far, but this may change in the future. In case individual members decide to contribute to research‐associated events such as panel discussions, they are represented by name just like all other speakers. The PAB's decision on this topic is a matter of constant reconsideration by members. The COVID‐19 pandemic and the switch to digital formats might have prevented some patients from joining the meetings, which were predominantly digital during the pandemic. At first, we hesitated to start the PAB in an online‐only environment. Because of very positive experiences with digital PPI and encouraging evaluation results from patients in a study on multimedication, we decided to get started anyway. We implemented the supporting tools used in the study, such as technical introduction workshops and technical support throughout the meetings, and incorporated extra time for discussions and getting to know each other. None of the PAB members dropped out during the pandemic because of the digital format, but some might not have joined at all due to barriers related to software and hardware.
On the other hand, we know from other studies as well as feedback from PAB members that digital formats can also reduce barriers, as travel restrictions do not apply and participants can tailor their personal environment to suit their individual needs. At the end of the pandemic, most PAB members wished for a meeting in person and felt that ‘meeting in‐person helps us to move forward’. We will focus on the shift from online to in‐person meetings and how this may influence communication dynamics within the PAB. LIMITATIONS Even though we theoretically gave everyone interested and present in a family research practice the chance to join the PAB by displaying flyers in waiting rooms, our recruitment strategies might be selective. This might be especially true as most patients joined by personal invitation through their family practitioners, and we have no information on why family practitioners approached particular patients. This touches on the topic of representation, which is always an issue in PPI when it comes to a selected group of patients speaking for a larger group. We aimed to approach PAB members as patient experts on an equal footing and therefore decided not to collect extensive private, health‐related data from them. Therefore, we can only draw conclusions on the diversity of the PAB on the basis of age, gender and pre‐existing health condition (yes or no). Even though our PAB does represent a wide range of ages and health conditions, we cannot provide information on demographics like migration status or educational level. Also, our initial recruitment strategy was not based on either of these characteristics, but we aim to consider this in the future. Furthermore, we wish to stress that our PAB consists of persons who contribute their individual everyday experiences with healthcare in family practice, given that we excluded patient representatives from patient organizations.
By doing so, we aimed to prevent a specific condition from becoming the focus of our discussions, in which the family practice is always at the centre. Nevertheless, this focus on individual experiences also excludes the wide range of background knowledge and accumulated knowledge of different patient experiences that patient representatives may provide. Finally, the evaluation presented in this article is based on PAB members' and researchers' feedback on a number of individual PAB meetings. Even though we collected feedback data at several points in time, our evaluation data contain no information on PAB members' experiences with the overall PPI process within the FPBRN, that is, whether they had wished for more training, a different level of involvement, or another PPI format different from group workshops. In the future, we plan for an overarching evaluation that shall assess patients' overall experiences with the PAB. There are some standardized tools to assess patients' experiences with PPI as well as frameworks that will inspire our evaluation. Nevertheless, we aim to develop a guideline for qualitative interviews that addresses the specific tasks, processes and structures of the FPBRN and the PAB within this network to adjust the PAB and PPI activities accordingly. Concerning the researchers’ perspectives, our evaluation results are limited as well. First, similar to patients, researchers were surveyed at one point in time only, that is, 1–2 weeks after the workshop. Reflections, processes and changes to research that occurred after this period could not be assessed. Second, our evaluation is limited to those researchers within the FPBRN who had direct contact with the PAB within a workshop. Most probably, these researchers had a positive mindset and were open towards PPI.
An extended evaluation could survey all researchers of the FPBRN and assess their attitudes towards PPI in general as well as their knowledge and perception of the PAB to assess the structural and longitudinal changes that the PAB initiated. The evaluation results will then inform future directions of the PAB and of PPI activities within the FPBRN in general. CONCLUSION The establishment of a standing PAB in family practice research is feasible and productive both from patients' and researchers' perspectives. PABs should be considered an integral part of research infrastructure in family practice research and beyond, and their establishment should be fostered further. Jennifer Engler : Conceptualization; investigation; methodology; writing—review and editing; writing—original draft; project administration; formal analysis; resources; supervision; data curation; validation. Fabian Engler : Writing—review and editing; data curation; investigation. Meike Gerber : Writing—review and editing; investigation; data curation. Franziska Brosse : Writing—review and editing. Karen Voigt : Writing—review and editing; funding acquisition; supervision; project administration; resources. Karola Mergenthal : Supervision; resources; project administration; writing—review and editing; conceptualization. The authors declare no conflict of interest. We informed the local ethics committee of The University Hospital of Goethe University Frankfurt am Main about our intention to establish a patient advisory board (PAB) and to hold patient and public involvement workshops with PAB members. The ethics committee expressed no concerns and waived formal approval on the basis of the Medical Association's professional code of conduct in Hesse/Germany (§ 15 BO hess. Ärzte). All PAB members gave written informed consent to the processing of workshop results for academic purposes.
Comparative evaluation of the clinical effectiveness of chemomechanical (Papacarie) and conventional mechanical caries removal methods in treatment of carious primary molars: a randomized controlled clinical study | d8b69436-04d0-4be1-91f2-fc6c6fbae6d5 | 11847336 | Dentistry[mh] | Dental caries is recognized as the most significant oral public health issue. In children, it can affect nutrition, potentially influencing growth and early development . Untreated dental caries in children can cause persistent pain disrupting daily activities such as learning, playing and sleeping . This inadvertently affects not only the child’s oral health but most importantly overall health with a profound effect on the psychological, social and economic aspects of life; the cumulative effect being a decrease in the quality of life and well-being . Over the years, several methods have evolved in the prevention and treatment of dental caries. Traditional approach focused on treatment of the carious lesion alone and caused the patient to be caught in a ‘repeat restoration cycle’ which involved replacement of failed restorative material with a larger sized cavity preparation each time the restoration failed until the dental pulp becomes involved, which then requires endodontic treatment or extractions . The conventional caries removal method involves the use of rotary instruments, often associated with noise production that could induce fear in children. It is also often associated with thermal pressure and unnecessary destruction of sound tooth structure which can cause discomfort or pain in children . Research on caries management methods in the past decades has led to the evolution of a more acceptable approach and conservative model of caries management referred to as minimal intervention dentistry (MID). Minimal intervention dentistry is a more holistic approach, it involves caries prevention and treatment in the least invasive approach possible . 
Minimal invasive dentistry is a component of minimal intervention dentistry. This concept of minimal invasive dentistry is based on the selective removal of the infected dentine while preserving the affected dentine, which is remineralizable . It is a philosophy that focuses on the need for maximum tooth tissue conservation, justified by the fact that there is presently no single restorative material that can perfectly replace the natural tooth structure in the long term, and hence its preservation is of utmost importance . Chemomechanical caries removal is one of the developing treatment modalities in the field of minimally invasive dentistry. The general mechanism of action of a chemomechanical caries removal agent is the breakdown of the degraded collagen fibres in the infected dentine on application of the chemical agent, thus eliminating infected tissues, preserving healthy structures, avoiding pulp irritation and reducing patient discomfort . Papacarie is an enzyme-based chemomechanical agent introduced in 2003 by Bussadori et al. with the added benefits of antibacterial and anti-inflammatory effects, and without the aversive chlorine taste of previously available chemomechanical caries removal agents . The components of Papacarie are papain enzyme, chloramine, toluidine blue, salts, preservatives, thickener, stabilizers and deionized water . It is relatively easy to apply and there is no need for special instruments . It also enables proper bonding of restorative material to the tooth substance due to the presence of micro-irregularities on the dentine surface and the absence of a smear layer following caries removal with the use of Papacarie . Papacarie preserves the integrity of the dentine structure and is biocompatible with the oral tissue . Caries removal time has, however, been a concern with the use of Papacarie, as the results of some studies have shown increased caries excavation time compared with the conventional rotary method .
According to Chowdhry et al., the use of Papacarie gel in caries removal is more acceptable to children than the conventional caries removal method due to a decreased pain perception . Reduced pain associated with Papacarie use has been attributed to decreased dentine destruction during caries removal . Atraumatic restorative treatment has been the major minimally invasive treatment technique widely used and studied in Africa. There is a need to explore other options of minimally invasive techniques for caries management that are acceptable to the paediatric population, who are more prone to dental fear and anxiety. The need to investigate other non-aerosol-generating minimally invasive treatments is important, especially in this era of emerging infections, particularly SARS-CoV-2 infection, as advised by the Centers for Disease Control and Prevention. Paediatric dentists are among the identified high-risk groups, working with a unique population of children who are more often asymptomatic carriers. Therefore, it is crucial that efforts are targeted at reducing the risk of exposure while practicing non-aerosol-generating options of caries management in children . The use of Papacarie as a chemomechanical caries removal method is relatively new. Few studies have compared the efficacy and acceptability of Papacarie with the conventional method using a low-speed handpiece for caries removal . This study is the first of its kind in this population, so it will provide local data to enhance comparison for future studies. The aim of this study was to compare the clinical efficacy of Papacarie as a chemomechanical caries removal method with the mechanical method using rotary instruments in carious primary molars. The specific objectives were to compare the average time taken, pain perception and patient acceptability during caries removal in primary molars of 5–9-year-old children in both the Papacarie group and the conventional rotary group.
The study was carried out at the Paediatric Dental Clinic of Lagos University Teaching Hospital (LUTH). It was a randomized controlled clinical study with a split-mouth design, where each study participant was treated with both the conventional mechanical caries removal method (control group/group A) and the chemomechanical caries removal method (experimental group/group B). The study was designed, analysed and interpreted according to the Consolidated Standards of Reporting Trials (CONSORT) (Additional file 1). The study population consisted of children aged 5–9 years who presented at the Paediatric Dental Clinic of LUTH between May and December 2022 with carious lesions and fulfilled the inclusion criteria. These criteria included children aged 5–9 years with at least two non-pulpally involved carious lesions on the primary molar teeth confirmed by periapical radiographs, occlusal carious lesions involving less than two-thirds of the dentine, primary molars with ICDAS code 5, cooperative children and consent provided by a parent or guardian. Exclusion criteria were teeth with clinical or radiographic signs and symptoms of pulpal involvement, existing restorations and proximal carious lesions, as well as children with debilitating medical conditions and/or special needs. Each carious tooth was randomly assigned to one of the two treatment groups: the Papacarie group (experimental group) or the conventional rotary group (control group) by balloting. The sample size for this study was determined using the sample size formula for a split-mouth design from Pandis et al.: a sample size calculation for two paired means (outcomes as continuous data) with a 1:1 allocation ratio.
$$n = \frac{f(\alpha, \beta)\,\sigma^{2}}{(\mu_{1} - \mu_{0})^{2}}$$ where f(α, β) is a factor determined by the chosen significance level and power, σ² is the variance of the within-pair differences and (μ₁ − μ₀) is the mean difference to be detected. The minimum sample size calculated was 23 participants (46 primary molars). A total of 25 participants (with 50 primary molars) were recruited for the study. Ethical approval was obtained from the Health Research and Ethics Committee (HREC) of the Lagos University Teaching Hospital. HREC number: ADM/DCST/HREC/APP/4332. This randomized controlled trial was registered with the Pan African Clinical Trial Registry (PACTR); trial number PACTR202111738486539. Written informed consent forms detailing the study and its benefits were given to parents and guardians. Consent was obtained before participants were included into the study . Intra-examiner reliability for diagnosing dentine caries involving less than two-thirds of dentine was assessed, achieving a Cohen's kappa score of 0.9. Detection of dental caries was by examination of participants in the dental clinic based on the diagnostic criteria recommended by the ICDAS. Teeth with ICDAS code 5 were selected. Diagnosis of carious lesions was made after visual, tactile and radiographic assessment. The depth of occlusal carious lesions was assessed with periapical radiographs. For participants with more than two qualifying teeth, the two teeth used were randomly selected using a simple balloting method. Each qualifying tooth was written on a separate piece of paper, concealed and placed in a box by the principal investigator. The first dental assistant, who was blinded to the carious teeth identified for treatment, conducted the selection process.
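As an illustration, the paired-means sample-size formula above can be evaluated with only the Python standard library. The detectable difference and the standard deviation of within-pair differences used below are illustrative assumptions, not the study's actual planning parameters:

```python
import math
from statistics import NormalDist

def paired_sample_size(delta, sigma, alpha=0.05, power=0.80):
    """n = f(alpha, beta) * sigma^2 / delta^2 for a paired (split-mouth) design."""
    z = NormalDist()
    # f(alpha, beta) = (z_{1-alpha/2} + z_{1-beta})^2 for a two-sided test
    f = (z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)) ** 2
    return math.ceil(f * sigma ** 2 / delta ** 2)

# Illustrative inputs: detect a mean within-pair difference of 0.6 units
# when the SD of the differences is 1.0, at alpha = 0.05 and 80% power.
print(paired_sample_size(delta=0.6, sigma=1.0))  # 22 pairs
```

The study reported a minimum of 23 pairs from its own planning inputs; the values shown here are placeholders only.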
The first tooth picked was assigned to the conventional rotary caries removal method while the other was assigned to the chemomechanical method. To determine the order of treatment, the balloting was repeated and the first tooth selected was treated first. Clinical procedure STEP 1: Topical anaesthesia (lidocaine spray) was applied around the gingiva of the tooth to anesthetize the gingivae for painless placement of the rubber dam clamp. STEP 2: A rubber dam was placed for isolation. Participants were confirmed to be without pain before caries removal. STEP 3: Caries removal. For Group A (removal of carious dentine with the conventional mechanical method using rotary instruments): Caries was excavated using round burs (Midwest latched carbide bur 06) in a low-speed air-turbine handpiece. Complete caries excavation was confirmed using the visual and tactile method by the principal investigator. In the visual method, caries removal was completed when the dentine appeared shiny . In the tactile method, caries removal was completed when there was absence of the ‘tug back’ sensation or when the dental explorer did not stick to the dentine . The first dental assistant recorded the primary outcome (cavity preparation time) with the use of a stopwatch (PC-396 XIN JIE). The time from the beginning of use of the rotary instrument to the time the cavity was considered caries free was recorded. For Group B (removal of carious dentine with the chemomechanical method using Papacarie): Caries excavation was done according to the manufacturer’s instructions in the Papacarie pack. Papacarie (Duo®) was applied with the aid of an applicator tip to fill the cavity and left for 30 s. The softened dentine tissue was removed with the use of a spoon excavator in a pressure-less manner. This procedure was repeated as many times as necessary, until the gel appeared clear and without debris. Complete caries removal was confirmed using the visual and tactile method by the principal investigator .
The first dental assistant recorded the primary outcome (the cavity preparation time) with the use of a stopwatch. The time from the beginning of gel application until the cavity was considered caries free was recorded. STEP 4: Cavity restoration: The cavity was irrigated and dried. The cavity was restored with the use of chemically cured glass ionomer cement (Prevest Denpro Micron Superior GIC Type II). The cavity walls and floor were conditioned with the glass-ionomer liquid (polyacrylic acid) for 10 s using cotton pellets. Glass ionomer cement was mixed for 30 s using a powder/liquid ratio of 3:1. The mix was applied to the cavity using a carver (32 CHB3). The cavity was slightly overfilled and the material was pressed by applying light pressure with a gloved and petroleum jelly-coated finger. Excess material was removed with a carver (32 CHB3). The rubber dam was removed and the occlusion checked. The study participant was instructed not to eat or drink for one hour after restoration placement. Immediately after completion of the treatment of each tooth, a second dental assistant who was blinded to the caries removal method interviewed the study participant. The secondary outcome was assessed by interview on pain or discomfort (using the Wong-Baker faces pain rating scale, Fig. ) and patient acceptability was assessed with a 5-point Likert faces scale (Fig. ). There was a washout period of 1 week, so the second interview was done after the second caries removal method one week later. The scores from the 5-point Likert faces scale were added together for each caries removal method and compared in the same individual to determine the preferred method for that individual (Figs. – ). The Statistical Package for the Social Sciences (SPSS) version 25.0 (IBM Corporation, Armonk, NY, USA) was used for data analysis. The Shapiro–Wilk test was used to assess the normality distribution assumption.
Age was presented using mean and standard deviation, while time was presented using median and interquartile range because it was skewed. Frequencies and percentages were presented for categorical variables (gender, tooth type, pain). Association between categorical variables was assessed using Fisher's exact test. The independent t-test was used to compare means between the two groups (A and B), while the Mann–Whitney U test was used for median comparison between the two groups. The significance level was set at p ≤ 0.05 for all statistical tests at a 95% confidence interval. Charts were used for data presentation where appropriate. Demographic characteristics of participants A total of 25 children (with 50 primary molars) met the inclusion criteria and were recruited for the study. The chemomechanical method (Papacarie) was used for caries removal in 25 primary molars and the conventional method (mechanical) was used for caries removal in 25 primary molars. None of the participants was lost to follow up. The children were between ages 5–9 years. The mean age was 6.72 ± 1.2 years. Thirteen boys and twelve girls participated in this study, with a 1.1:1 male-to-female ratio. Table . Time for caries removal in the chemomechanical (Papacarie) group and conventional (rotary) group The median time taken for caries removal in the Papacarie group (111 s) was slightly lower than that of the conventional group (115 s); however, there was no statistically significant difference in the time for caries removal between the 2 groups ( p = 0.839). Figure . In the conventional group, the minimum caries removal time was 28 s and the maximum caries removal time was 292 s. In the Papacarie group, the minimum caries removal time was 61 s and the maximum caries removal time was 303 s. Figure .
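For reference, the Mann–Whitney U statistic behind the time comparison above can be sketched in pure Python with midrank handling of ties. The sample times below are made up for illustration and are not the study's recorded data:

```python
def mann_whitney_u(x, y):
    """Return (U1, U2) for samples x and y, using average ranks for ties."""
    values = list(x) + list(y)
    order = sorted(range(len(values)), key=lambda k: values[k])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over a run of tied values
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # midrank for the tied run
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    r1 = sum(ranks[: len(x)])              # rank sum of the first sample
    u1 = r1 - len(x) * (len(x) + 1) / 2
    return u1, len(x) * len(y) - u1

# Illustrative caries-removal times (seconds) for two small groups
papacarie = [61, 98, 111, 140, 200]
rotary = [28, 90, 115, 180, 292]
print(mann_whitney_u(papacarie, rotary))
```

A statistics package would additionally convert U into a p-value; the sketch only shows how the statistic itself is ranked and summed.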
Comparison of pain perception for complete caries removal in the Papacarie and conventional groups In the Papacarie group, 23 (92%) of the study participants reported that they had no pain (score 0) during caries removal, while only 2 (8%) of the study participants reported little pain (score 2). In the conventional group, 15 (60%) of the participants reported no pain (score 0), 5 (20%) reported little pain (score 2) and 5 (20%) reported a little more pain (score 4). The highest pain score reported in the conventional caries removal group was a score of 4, while the highest pain score was a score of 2 in the Papacarie group. Figure . In both groups, there were no reports of a score 6, score 8 or score 10. There was a statistically significant difference in the pain scores between the 2 groups; study participants who had caries removal by Papacarie reported lower pain scores ( p = 0.019) (Fig. ). Association between age, gender, time and pain in the Papacarie and conventional methods During caries removal with the Papacarie method, none of the older children (8- and 9-year-olds) had pain. There were reports of pain from 4 (57%) of the older children during the conventional method. Only 1 (8%) participant of the younger children (5- and 6-year-olds) reported pain in the Papacarie group, while 2 (17%) of the younger children reported pain in the conventional method. There was, however, no statistically significant difference in the association between pain and age (p = 0.104). One male and one female had pain during caries removal with the Papacarie method. Seven (58%) of the females had pain while only three (23%) males had pain with the conventional method. Table . There was, however, no statistically significant difference in the association between gender and pain (p = 0.069). Participants’ preference for the Papacarie and conventional methods From the three questions asked, a maximum score of 15 indicates good acceptance, while 3 is the minimum score, which indicates poor acceptance.
Only the question on “do you like the treatment?” had a statistically significant difference ( p = 0.047). The mean score for Papacarie method was 4.68 ± 0.5 and 4.24 ± 1.0 for the conventional method. All the participants (100%) indicated they liked the treatment for the Papacarie group and 20 of the 25 participants (80%) indicated they liked the conventional method. Figure . A higher total mean score of 14.00 ± 1.3 was observed for the Papacarie method indicating a better acceptance compared with a total mean score of 12.80 ± 2.9 for the conventional method. However, there was no statistically significant difference in the study participants’ preference. Table ( p = 0.072). About half of the participants, 11 (44%) preferred the Papacarie method. Less than one fifth of the participants; 4 (16%) preferred the conventional method. More than one-third 10 (40%) reported that they liked both methods (Fig. ).
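Fisher's exact test, used above for the categorical associations, can be computed from a 2×2 table via the hypergeometric distribution. A minimal standard-library sketch, checked against the classic "lady tasting tea" table rather than the study's raw counts:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact p-value for the 2x2 table [[a, b], [c, d]]."""
    n, row1, col1 = a + b + c + d, a + b, a + c
    denom = comb(n, col1)

    def prob(x):
        # hypergeometric probability of a table with x in the top-left cell
        return comb(row1, x) * comb(n - row1, col1 - x) / denom

    p_obs = prob(a)
    lo, hi = max(0, col1 - (n - row1)), min(row1, col1)
    # sum probabilities of all tables at least as extreme as the observed one
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs * (1 + 1e-9))

# Classic "lady tasting tea" table: two-sided p = 34/70 ≈ 0.486
print(fisher_exact_two_sided(3, 1, 1, 3))
```

This is the same two-sided definition (summing all tables whose probability does not exceed the observed table's) used by common statistics packages.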
The time from beginning of use of the rotary instrument to the time the cavity was considered caries free was recorded. For Group B (Removal of carious dentine with chemomechanical method using Papacarie): Caries excavation was done according to the manufacturer’s instruction in the Papacarie pack. Papacarie (Duo ® ) was applied with the aid of an applicator tip to fill the cavity and left for 30 s. The softened dentine tissue was removed with the use of the spoon excavator in a pressure-less manner. This procedure was repeated as many times as necessary, until the gel appeared clear and without debris. Complete caries removal was confirmed using the visual and tactile method by the principal investigator . The first dental assistant recorded primary outcome (the cavity preparation time) with the use of a stopwatch. The time from the beginning of gel application until the cavity was considered caries free was recorded. STEP 4: Cavity restoration: The Cavity was irrigated and dried. The Cavity was restored with the use of chemically cured glass ionomer cement (Prevest Denpro Micron Superior GIC Type II). The cavity walls and floor were conditioned by the glass-ionomer liquid (polyacrylic acid) for 10 s using cotton pellets, Glass ionomer cement was mixed for 30s using a powder/liquid ratio of 3/1. The mix was applied to the cavity using a carver (32 CHB3). The cavity was slightly over filled and the material was pressed by applying light pressure with a gloved and petroleum jelly-coated finger. Excess material was removed with a carver (32 CHB3). The rubber dam was removed and the occlusion checked. The study participant was instructed not to eat or drink for one hour after restoration placement. Immediately after completion of the treatment of each tooth, a second dental assistant who was blinded to the caries removal method interviewed the study participant. 
The secondary outcome was assessed by interview on pain or discomfort (using the Wong-Baker Faces Pain Rating Scale, Fig. ), and patient acceptability was assessed with a 5-point Likert scale of faces (Fig. ). There was a washout period of 1 week, so the second interview was done after the second caries removal method a week later. The scores from the 5-point Likert scale faces were added together for each caries removal method and compared within the same individual to determine that individual's preferred method (Figs. – ). The Statistical Package for the Social Sciences (SPSS) version 25.0 (IBM Corporation, Armonk, NY, USA) was used for data analysis. The Shapiro-Wilk test was used to assess the normality distribution assumption. Age was presented using mean and standard deviation, while time was presented using median and interquartile range because it was skewed. Frequencies and percentages were presented for categorical variables (gender, tooth type, pain). Association between categorical variables was assessed using Fisher's exact test. The independent t-test was used to compare means between the two groups (A and B), while the Mann-Whitney U test was used for median comparison between the two groups. The significance level was set at p ≤ 0.05 for all statistical tests, with a 95% confidence interval. Charts were used for data presentation where appropriate. A total of 25 children (with 50 primary molars) met the inclusion criteria and were recruited for the study. The chemomechanical method (Papacarie) was used for caries removal in 25 primary molars, and the conventional (mechanical) method was used for caries removal in 25 primary molars. None of the participants was lost to follow-up. The children were between 5 and 9 years of age. The mean age was 6.72 ± 1.2 years. Thirteen boys and twelve girls participated in this study, with a 1.1:1 male-to-female ratio (Table ).
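The Mann-Whitney U comparison of skewed times described above can be sketched in a few lines of pure Python. This is only an illustration: the two samples below are hypothetical values chosen to match the medians and ranges reported later in the Results, not the study's raw data, and the sketch is not the SPSS routine the authors used.

```python
import statistics

# Hypothetical caries-removal times in seconds (illustrative only; chosen to
# match the reported medians of 111 s and 115 s and the reported min/max).
papacarie = [61, 90, 111, 140, 303]
conventional = [28, 95, 115, 150, 292]

def mann_whitney_u(a, b):
    """U statistic for sample `a`: count pairs (x, y) with x > y,
    with ties contributing 0.5, as in the Mann-Whitney U test."""
    return sum(1.0 if x > y else 0.5 if x == y else 0.0
               for x in a for y in b)

u_papacarie = mann_whitney_u(papacarie, conventional)
u_conventional = mann_whitney_u(conventional, papacarie)

# The two U statistics always partition the len(a) * len(b) pairs.
assert u_papacarie + u_conventional == len(papacarie) * len(conventional)
```

A U close to half of `len(a) * len(b)` (here 12 and 13 out of 25) signals heavily overlapping groups, consistent with the non-significant time difference reported below.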
The median time taken for caries removal in the Papacarie group (111 s) was slightly lower than that of the conventional group (115 s); however, there was no statistically significant difference in the time for caries removal between the two groups ( p = 0.839; Figure ). In the conventional group, the minimum caries removal time was 28 s and the maximum was 292 s. In the Papacarie group, the minimum caries removal time was 61 s and the maximum was 303 s (Figure ). In the Papacarie group, 23 (92%) of the study participants reported no pain (score 0) during caries removal, while only 2 (8%) reported little pain (score 2). In the conventional group, 15 (60%) of the participants reported no pain (score 0), 5 (20%) reported little pain (score 2), and 5 (20%) reported a little more pain (score 4). The highest pain score reported was 4 in the conventional group and 2 in the Papacarie group (Figure ). In both groups, there were no reports of scores 6, 8, or 10. There was a statistically significant difference in the pain scores between the two groups; study participants who had caries removal by Papacarie reported lower pain scores ( p = 0.019) (Fig. ). During caries removal with the Papacarie method, none of the older children (8- and 9-year-olds) had pain. There were reports of pain from 4 (57%) of the older children during the conventional method. Only 1 (8%) of the younger children (5- and 6-year-olds) reported pain in the Papacarie group, while 2 (17%) of the younger children reported pain with the conventional method. There was, however, no statistically significant difference in the association between pain and age ( p = 0.104). One male and one female had pain during caries removal with the Papacarie method. Seven (58%) of the females had pain, while only three (23%) of the males had pain with the conventional method (Table ).
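A 2 × 2 association like the gender-by-pain table above is what Fisher's exact test evaluates. A minimal pure-Python sketch (not the SPSS implementation used in the study) enumerates all tables with the same margins and sums those no more probable than the observed one:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact p value for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    row1, col1 = a + b, a + c

    def p_table(x):
        # Hypergeometric probability of x in the top-left cell, margins fixed.
        return comb(row1, x) * comb(n - row1, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo = max(0, col1 - (n - row1))
    hi = min(row1, col1)
    # Sum every table whose probability is at most that of the observed table.
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs + 1e-12)

# Sanity check on a small symmetric table: [[3, 1], [1, 3]] gives p = 34/70.
assert abs(fisher_exact_two_sided(3, 1, 1, 3) - 34 / 70) < 1e-9
```

The sanity-check table is a textbook example, not the study's data; with the study's actual counts the function would reproduce the kind of p values quoted in the text.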
There was, however, no statistically significant difference in the association between gender and pain ( p = 0.069). For the three questions asked, a maximum score of 15 indicates good acceptance, while 3 is the minimum score, indicating poor acceptance. Only the question "Do you like the treatment?" had a statistically significant difference ( p = 0.047). The mean score was 4.68 ± 0.5 for the Papacarie method and 4.24 ± 1.0 for the conventional method. All the participants (100%) indicated they liked the treatment in the Papacarie group, and 20 of the 25 participants (80%) indicated they liked the conventional method (Figure ). A higher total mean score of 14.00 ± 1.3 was observed for the Papacarie method, indicating better acceptance compared with a total mean score of 12.80 ± 2.9 for the conventional method. However, there was no statistically significant difference in the study participants' preference ( p = 0.072; Table ). About half of the participants, 11 (44%), preferred the Papacarie method. Less than one-fifth of the participants, 4 (16%), preferred the conventional method. More than one-third, 10 (40%), reported that they liked both methods (Fig. ). The conservative model of caries management known as minimal intervention dentistry provides holistic management for individuals with dental caries . The use of Papacarie as a caries removal agent is one of the minimally invasive techniques in the treatment of dental caries; it removes the infected dentine but retains the remineralizable affected dentine . It is a particularly good alternative in paediatric dentistry because it eliminates noise and local anaesthesia compared with the conventional method of caries removal . Time is a very important factor in the dental treatment of the paediatric population, as it affects their cooperation with the treatment they receive . A decline in a child's cooperation is usually seen with a longer treatment time .
In this study, the average time taken for caries removal with the use of Papacarie was comparable to the average time reported by Santos et al. This result is similar to results found in a split-mouth design on 20 Brazilian children aged 5–8 years by Matsumoto et al. A longer average caries removal time with the use of Papacarie was found by Kochhar et al. and Bohari et al. on carious primary molars in 5–9-year-old Indian children . The reason for the shorter caries removal time in this study could be the shorter time of 30 s for each application. Papacarie Duo was used in this study and was left for 30 s on each application according to the manufacturer's instruction. It is a more recent form of Papacarie, introduced in 2011, with the advantages of improved durability, greater viscosity, and adequate storage at room temperature . This was in line with the studies of Santos et al. and Matsumoto et al., where shorter caries removal times were recorded . In contrast, the studies by Kochhar et al., Bohari et al., Alhumaid, and Khalek et al. with longer caries removal times did not use Papacarie Duo but used the older form of Papacarie, formulated in 2003, which required 60 s for each application . A reduced caries removal time with the Papacarie method in this study may also reflect that the recruited dentine caries involved less than two-thirds of the dentine (ICDAS code 5); thus, extensive carious lesions were excluded, and fewer applications were needed to completely remove the infected dentine. The average time taken for caries removal using the conventional method in the present study was similar to the average caries removal times with the conventional method reported by Jawa et al. and by Matsumoto et al. Hedge et al. found a shorter average caries removal time in primary molars in a group of Indian children . Pathivada et al.
also found a shorter average caries removal time, which may be because the study was carried out in an older age group of 8- to 15-year-olds . Kochhar et al. in a group of 5–9-year-old Indian children, Anegundi et al. in thirty 4–9-year-old Indian children, and Khalek et al. in a group of 50 Egyptian 4–8-year-old children all reported longer caries removal times with the conventional method . The reason for a longer caries removal time with the conventional method compared with the results of this study may be that, in this study, a new bur was used for each tooth, resulting in a shorter caries removal time. Comparing both methods in this study, the average time taken for caries removal in the chemomechanical group was slightly lower than that of the conventional group. However, there was no statistically significant difference in the time of caries removal between the two groups, which is consistent with the findings of Motta et al., Goyal et al., and Kotb et al., all split-mouth designs done on Brazilian, Indian, and Egyptian children, respectively . This may be attributed to the age bracket (5–9-year-old participants) and the cooperative abilities of the children recruited into the present study, resulting in less time needed to manage participants' behaviour during treatment. Other studies have shown the Papacarie method to have a longer treatment time compared with conventional caries removal methods . The longer duration of caries removal with the Papacarie method has been attributed to multiple applications of Papacarie gel . Procedural pain can lead to dental fear, which in turn could affect an individual's utilization of dental care for a lifetime . Researchers have thus developed minimally invasive techniques for the treatment of dental caries to reduce pain, thereby reducing patients' dental anxiety .
In the present study, fewer participants reported pain during caries removal with the Papacarie method than with the conventional method, and the difference was statistically significant. Only 2 of the 25 participants experienced very mild pain with the use of Papacarie, while 10 of the 25 participants experienced pain with the conventional method. This finding is similar to the results of Anegundi et al. in their study on thirty 4–9-year-old Indian children and Motta et al. in a group of twenty 4–7-year-old Brazilian children. Similarities may be due to an age range and gender distribution similar to the present study . Singh et al. assessed pain perception in forty 4–8-year-old Indian children and observed a statistically significantly higher mean pain score with the conventional method compared with the Papacarie method . The similarity with the present study may be due to the use of cooperative participants, a split-mouth design, and the same pain assessment scale (Wong-Baker Faces Pain Scale) used for assessing pain in the present study. Several studies that reported lower pain perception with the Papacarie method have suggested that this is because only infected dentine tissue, which lacks odontoblastic processes and is hence not sensitive to stimuli, is removed . The papain in Papacarie has debriding properties targeting only infected dentine, owing to the absence of alpha-1 anti-trypsin, thus causing dissolution of partially degraded collagen molecules with no effect on collagen fibres in healthy dentine . Studies have attributed increased pain perception with the conventional method to thermal damage, vibration, noise, and an increased risk of damage to healthy dentine, which could cause more dentine tubules to be opened, resulting in excitation of the nervous system and pain in response to stimuli .
A meta-analysis by Deng at al of 3 studies which all assessed the pain perception with the Wong-Baker Pain Scale revealed that the difference in the pain scores was statistically significant with lower pain scores in the Papacarie group suggesting lower pain perception with the Papacarie method . This study however did not find any association between pain and age/gender/time in both methods which was a similar report by Motta et al. This may be because only cooperative participants were recruited in this study so treatment time was not long thereby possibly resulting in less pain reported by the participants. In this study, all the participants reported they liked the treatment for the Papacarie method while most of the participants indicated they liked the conventional method. However, this report was statistically significant. Chowdhry et al. in their study on 30 children of similar age group (6–9 years) with this present study reported the acceptability of procedure using the visual analogue scale . Chowdhry et al. observed that all the participants who had the Papacarie method liked the method while none of the participants liked the conventional method . The findings of this study showed more participants preferred the Papacarie method which is similar to that reported by Anegundi et al. on thirty 4–9 year old Indian children where most of the participants preferred the Papacarie method . In the study by Goyal et al. with similar population with the present study; 25 children aged 5–9 years, almost all the participants preferred the Papacarie . Goyal et al. however suggested increased preference for Papacarie method could be due to reduced pain perception and anxiety which led to increased acceptability by the participants . This is however at variance with findings in the study by Almaz et al., a split-mouth design in twenty five 6–9 year old Turkish children who found that majority of the participants preferred the conventional method . 
The authors explained that their result could be due to significantly increased chair time, which reduced the children's compliance with the Papacarie method . Limitations The thickness of enamel and dentine varies among individuals, and this could have influenced the time of caries removal among the participants. Also, no two dentinal carious lesions can be exactly alike, even in the same individual; hence, varying depths of dentine caries may require repeated applications, which could affect the caries removal time with the use of Papacarie. In this study, a direct comparison was limited, as there were no similar studies locally. However, this is the first study of its kind in this population, and its preliminary findings will serve as a reference for future studies. Also, children were their own controls, thereby reducing bias. The results of this study indicate that the Papacarie method had a comparable caries removal time, lower pain perception, and better acceptability compared with the conventional caries removal method. This makes Papacarie a suitable alternative for caries removal, especially in the paediatric population. Below is the link to the electronic supplementary material. Supplementary Material 1
YouTube as a possible learning platform for patients and their family caregivers for gastrostomy tube feeding: A cross‐sectional study

In percutaneous endoscopic gastrostomy, a gastrostomy tube (G‐tube) is placed in the stomach for nutrition support. This surgical procedure is the most commonly used method in long‐term enteral nutrition because of its easy placement, short hospital stays, early initiation of nutrition support, cost‐effectiveness, and safety. However, gastrostomy and G‐tube feeding can cause various complications if adequate management is not provided by family caregivers at home. Various complications exist from minor to serious problems, such as tube dislodgement, buried bumper syndrome, and peritonitis. , Such risks underscore the necessity for family caregivers to receive precise education and training on managing gastrostomy care and G‐tube feeding to ensure the safety and well‐being of patients. , Given these complexities and the serious nature of potential risks, the decision to proceed with or forego a G‐tube placement becomes critically important and necessitates thorough understanding and careful consideration. It is essential for patients and their families to be well‐informed about both the benefits and risks associated with the procedure, enabling them to make educated decisions that best suit their medical and personal circumstances. Owing to outpatient‐oriented healthcare services and high medical costs, many patients and family caregivers are discharged from the hospital sooner without acquiring appropriate knowledge and care skills related to gastrostomy and home G‐tube feeding. Therefore, to compensate for the lack of knowledge and skills regarding gastrostomy care and G‐tube feeding, patients and their family caregivers research and acquire health information through the internet.
YouTube, the largest video‐sharing platform, is often used as a resource by patients and family caregivers for such education. YouTube allows users to easily upload, view, and share videos, enabling interaction by allowing users to rate videos with likes, dislikes, and comments. The COVID‐19 pandemic has led to several changes in all fields, such as politics, economy, culture, education, and healthcare systems. For gastrostomy care and G‐tube feeding, YouTube educational videos can provide a particularly well‐suited alternative to meet the information needs during and in the aftermath of the pandemic, if not provided in the hospital setting. However, with high internet use among patients and family caregivers, the quality of healthcare information provided via the internet is a concern for healthcare providers, , and there is a need to determine the availability of videos that address care and management skills. Considering the growing popularity and easy accessibility of YouTube, along with the information needs of patients and family caregivers, we analyzed videos on gastrostomy care and G‐tube feeding on YouTube to determine its quality as an educational resource for patients and family caregivers. Design A cross‐sectional study design was used to explore the educational quality of YouTube videos on gastrostomy care and G‐tube feeding accessed at a specific time point. Search strategy and data collection Chrome browser (Google) was used in “incognito mode” when browsing YouTube to ensure that no personal recommendations affected the search results. In addition, all searches were performed with the YouTube default “relevance” sorting. We did not apply any time or date filters because most viewers search YouTube without these filters and use the default search. We sought to replicate the search pattern that users most commonly use. 
The keywords "gastrostomy," "G‐tube," "enteral feeding," and "enteral nutrition" were used for searching videos on YouTube ( www.youtube.com ) on October 3, 2021. Two researchers evaluated 792 videos. Duplicates ( n = 194), videos in languages other than English ( n = 31), advertisements ( n = 51), Vlogs ( n = 16), and videos specifically targeted at healthcare providers ( n = 271) were excluded from the study. After applying the exclusion criteria, 229 videos were obtained (Figure ). All videos included in this study were posted between the years 2010 and 2021 on YouTube.
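The screening flow above reduces to simple arithmetic; a short sketch reproduces the reported counts (all numbers are taken directly from the study):

```python
# Screening counts as reported in the study.
retrieved = 792
excluded = {
    "duplicates": 194,
    "languages other than English": 31,
    "advertisements": 51,
    "vlogs": 16,
    "targeted at healthcare providers": 271,
}

# Videos remaining after applying all exclusion criteria.
included = retrieved - sum(excluded.values())  # 229, as reported
```

This kind of explicit tally is a quick way to verify that a PRISMA-style flow diagram is internally consistent.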
The target population in the YouTube videos was divided into three categories: children, adults, and universal. The G‐tube types were classified as low profile, high profile, and universal. Educators in the video were categorized as registered nurse (RN), advanced practice registered nurse (APRN), medical doctor (MD), registered dietitian nutritionist (RDN), patient or family caregiver, and unclassified. Uploading agencies were classified into six categories: independent contents creators, patient and caregiver support group, medical device company, hospital, homecare service agency, and academic agency. Educational quality of videos Two independent researchers (H.S.C. and H.L.) evaluated the educational quality of the YouTube videos using the global quality scale (GQS) and modified DISCERN quality scoring system. The modified DISCERN quality scoring system comprises five criteria with a score of 0 or 1 per criterion (0: no, and 1: yes), with a potential total score of 5 points. High scores indicate high‐quality educational material. GQS was developed as an assessment scale for internet resources. It is rated on a 5‐point Likert scale, with 1 and 5 representing poor (most information missing, not at all useful for patients and their family caregivers) and excellent (very useful for patients and their family caregivers) qualities. Using this scale, researchers evaluated the flow, ease of use, and quality of the videos. Video content Video content regarding gastrostomy care and G‐tube feeding was coded and grouped into categories based on previous studies. , Statistical analysis The Statistical Package for the Social Sciences version 23.0 package program (SPSS Inc) was used for data analysis. Mean and SD and numbers and percentages were calculated to describe the data. One‐way analysis of variance and Scheffe post hoc analysis were conducted to compare the GQS and modified DISCERN scores among videos, grouped by the uploading agencies. 
Additionally, focused temporal analysis of scores over 2‐year intervals and the results of a regional comparison of educational video quality between English‐speaking countries and non–English‐speaking or unidentified regions are presented in Figures and , respectively. Interrater reliability of scoring and theme coding The interrater reliability and degree of agreement of scoring and theme coding were assessed using Cohen κ coefficient. The κ value of 0.41–0.60 was considered an average agreement. In this study, Cohen κ values were 0.56–0.78. General features of the videos on gastrostomy care and G‐tube feeding on YouTube In this study, 59% of videos explained gastrostomy care and G‐tube feeding for children and 31.4% for adults. Among the countries of the uploading agencies, the United States accounted for the highest proportion at 66.4%, and for the G‐tube type, the low profile was 44.5%, whereas the high profile was 25.3%. Regarding the educators in the YouTube videos, most were unclassified (32.3%), followed by RNs or APRNs (25.3%) and patient or family caregivers (23.6%). The mean GQS and modified DISCERN scores for these videos were 3.31 ± 0.90 points and 2.63 ± 1.23 points, respectively (Table ).
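The interrater agreement check described above (Cohen κ) can be sketched in pure Python. The two rating lists below are hypothetical examples, not the study's data; they merely illustrate a κ value inside the reported 0.56–0.78 range.

```python
from collections import Counter

def cohen_kappa(rater1, rater2):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    # Expected agreement under independence; Counter returns 0 for missing keys,
    # so iterating over c1's keys is sufficient.
    expected = sum(c1[k] * c2[k] for k in c1) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical quality ratings from two raters (illustrative only).
r1 = ["high", "high", "low", "moderate", "high", "low"]
r2 = ["high", "moderate", "low", "moderate", "high", "low"]
kappa = cohen_kappa(r1, r2)
```

Here the raw agreement is 5/6, but κ corrects for the agreement expected by chance, giving 0.75, which is why κ is preferred over simple percent agreement for coding reliability.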
Figure shows trends in gastrostomy care and G‐tube feeding videos uploaded on YouTube. A total of 37.1% were uploaded from 2020 to 2021. Classification of video content on gastrostomy care and G‐tube feeding Table shows the YouTube video content on gastrostomy care and G‐tube feeding. The most frequently posted video content were as follows: empowering G‐tube feeding skills (45.76%), skincare and dressing for G‐tube site (18.25%), gastrostomy and G‐tube feeding knowledge (13.37%), maintaining daily life activities after initial gastrostomy (12.34%), and dealing with common problems and emergencies at home (10.28%). For the subthemes, cleaning and dressing a G‐tube was the highest (11.05%), followed by bolus method (9.77%) and replacing a balloon‐type G‐tube (9.51%). Comparison of the educational quality of the videos by uploading agency Classified by the GQS score, videos were divided into high quality (50.7%), moderate quality (31.4%), and low quality (17.9%) educationally (Table ). Among the uploading agencies in this study, the ones that uploaded most frequently were hospitals (32.8%), followed by independent contents creators (30.6%), homecare service agencies (19.2%), and academic agencies (10.0%). The uploading agencies with the highest proportion of high‐quality videos were homecare service agencies (72.7%) and hospitals (70.7%), and those with the highest proportion of low‐quality videos were independent contents creators (50.0%). The mean modified DISCERN scores for each uploading agency were as follows: homecare agencies 3.25 ± 0.81, hospitals 3.20 ± 0.94, academic agencies 3.13 ± 0.81, patient and caregiver support groups 2.63 ± 1.51, medical device companies 2.56 ± 0.53, and independent contents creators 1.47 ± 1.07. When using the modified DISCERN scores to compare video quality, we also found that videos uploaded by independent contents creators had significantly lower quality than other uploading agencies ( F = 30.479, P < 0.001). 
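The one-way ANOVA comparison of scores across uploading agencies described above can be illustrated with a hand-computed F statistic. The three score groups below are hypothetical values chosen only to make the arithmetic easy to follow; they are not the study's ratings.

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA over lists of scores."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    # Between-group and within-group sums of squares.
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical GQS ratings for three uploading agencies (illustrative only).
f_stat = one_way_anova_f([
    [4, 5, 4, 5],   # e.g. hospitals
    [1, 2, 2, 1],   # e.g. independent contents creators
    [4, 4, 5, 5],   # e.g. homecare service agencies
])
```

A large F, as in this contrived example, indicates that between-group variation dwarfs within-group variation, which is the pattern behind the significant agency differences reported above; a post hoc test such as Scheffe's would then locate which pairs differ.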
The analysis revealed significant differences in both GQS ( F = 3.956, P = 0.002) and modified DISCERN scores ( F = 3.332, P = 0.006) across the various time intervals. However, the post hoc analysis did not identify any significant differences between specific groups (Figure ). Videos from English‐speaking countries demonstrated significantly higher GQS ( t = 6.730, P < 0.001) and modified DISCERN scores ( t = 5.388, P < 0.001) compared with other countries (Figure ). Recently, YouTube has been identified as an effective source of online learning material. It is a promising step‐by‐step learning platform. Users can access videos for good‐quality informal learning on YouTube. , YouTube videos on gastrostomy care and G‐tube feeding have recently increased, likely reflecting the high demand for relevant information among patients receiving enteral nutrition support and their family caregivers, who had restricted access to healthcare institutions during the COVID‐19 pandemic.
Consequently, patients and family caregivers became increasingly familiar with informal learning platforms, such as YouTube. , Moreover, because of the inherent complexity of issues arising during the gastrostomy care and G‐tube feeding process, it is more practical to acquire knowledge incrementally by addressing problems as they arise. This could be a reason for the increasing trend of YouTube learning. By integrating YouTube and other similar platforms with advancements in digital healthcare, such as artificial intelligence and machine learning, healthcare providers can now personalize and precisely tailor educational experiences to meet the specific needs of patients and caregivers. RNs and APRNs were the most common educators of gastrostomy care and G‐tube feeding–related videos on YouTube, followed by patients or family caregivers, MDs, and RDNs. This diverse range of educators reflects the multidisciplinary approach essential for gastrostomy care, as RNs and APRNs typically oversee educational initiatives in acute hospital settings and perform care in long‐term facilities and homecare services. The involvement of MDs and RDNs holds equal significance, with MDs managing medical treatments and RDNs focusing on nutrition planning. Furthermore, most YouTube videos concentrate on managing G‐tube site care and G‐tube feeding at home, underscoring the importance of comprehensive care strategies that incorporate insights from various healthcare disciplines. The most frequently shared video content was practical care skills for family caregivers who perform gastrostomy care and G‐tube feeding. Kahveci and Akin's study showed that family caregivers who performed gastrostomy care and G‐tube feeding experienced difficulties and required a high level of practical education, such as G‐tube care, verification of G‐tube position, and care of the insertion site. 
After an initial gastrostomy, patients and family caregivers should receive sufficient education and training at a medical institution. However, because of high medical expenses and outpatient‐centered medical services, many family caregivers acquire only limited practical care skills. , To address these gaps, it is essential to implement policies ensuring ongoing education and regular assessment to enhance the competence and confidence of family caregivers. This could involve initiatives such as community‐based case management or patient navigation programs. Moreover, healthcare systems should actively engage patients and their family caregivers in the care process, offering continuous support and resources. These policy changes and support systems would not only enhance the quality of care but also empower patients and their caregivers, enabling them to play an active role in the healthcare journey. Patients and their family caregivers experience several difficulties during the decision‐making for gastrostomy, which may be delayed until a critical event forces G‐tube placement. In addition, psychosocial aspects, such as anxiety about the G‐tube and social isolation due to the loss of shared mealtime, can significantly impact patients' and their family caregivers' quality of life. Even the education provided by healthcare professionals often focuses on delivering knowledge about gastrostomy care and G‐tube feeding while offering insufficient psychosocial support or psychoeducation, leading to mutual misunderstanding and disputes between healthcare professionals and family caregivers. Addressing these challenges requires careful consideration of the cultural and socioeconomic factors that may influence the experiences and perceptions of patients and their family caregivers. Actively engaging patients and families in discussions and decision‐making processes and incorporating their needs and preferences are essential for developing more tailored and effective healthcare solutions.
By understanding and integrating these diverse perspectives, healthcare providers can create a more inclusive and supportive environment, ultimately improving care outcomes and enhancing patient and caregiver satisfaction. Furthermore, the number of gastrostomy care and G‐tube feeding YouTube videos covering decision‐making for gastrostomy and support for the psychosocial difficulties caused by gastrostomy care and G‐tube feeding was insufficient compared with videos on practical care skills. Therefore, patients with swallowing problems and their family caregivers require information on decision‐making, psychosocial support, and support with psychoeducational content on gastrostomy. Moreover, healthcare professionals need to explain to patients or family caregivers the limitations of the information in YouTube videos. In this study, most uploading agencies were classified as hospitals, independent content creators, and homecare services. Gastrostomy care and G‐tube feeding videos were uploaded by various people, including care providers and consumers. However, when evaluating video quality, the uploading agencies with the highest proportion of high‐quality videos were homecare agencies, hospitals, and academic agencies. Videos with the lowest educational quality were primarily uploaded by patients, family caregivers, and independent content creators. In addition, some videos uploaded by independent content creators conveyed incorrect information. Hence, many videos were inappropriate sources for patients and their family caregivers seeking information on gastrostomy care and G‐tube feeding. Thus, uploading agencies should be considered when healthcare professionals recommend YouTube videos to patients or family caregivers. Healthcare professionals, particularly those involved in gastrostomy care, should recognize the importance of guiding patients and caregivers in using reliable online resources for information.
Therefore, they may play a more active role in referencing and discussing the content of educational videos with the patients and their caregivers. Given the significant regional differences and inconsistent video quality improvements over time, it is crucial for platforms like YouTube to establish and enforce stricter guidelines and global standards for educational content. This will ensure all users, regardless of location, have access to accurate and beneficial health information, addressing the disparities highlighted in this study and enhancing the overall quality and reliability of online health education. Limitations and scope for future studies Although this study sheds light on the availability of educational gastrostomy care and enteral G‐tube feeding–related videos for patients and their family caregivers, some limitations should be noted. First, only English‐language videos were reviewed in this study. Therefore, the quality of all YouTube videos, including videos in other languages, could not be reviewed. Second, the selected search terms may have been different from what patients and family caregivers select when searching for gastrostomy care and G‐tube feeding–related videos on YouTube. The search terms we chose were presumably the most accurate for finding videos on gastrostomy and G‐tube feeding. Another limitation is that identifying the characteristics of viewers of the YouTube videos analyzed in this study was impossible. Interviews and surveys with patients and family caregivers are required to further investigate their use of YouTube and their attitudes and preferences for YouTube as a health information resource.
As one of the first points of reference, patients and their family caregivers search for health information on the internet, including YouTube. Visual materials and demonstration videos can help provide information on proper gastrostomy care and G‐tube feeding. Therefore, YouTube can be considered a supplemental resource for high‐quality gastrostomy care and G‐tube feeding videos. Half of the videos reviewed in our study were of high quality. However, video quality differed depending on the uploader. Therefore, healthcare providers should inform patients and their families about the importance of video uploaders on YouTube and offer guidance for video selection. It could be helpful for healthcare professionals to provide a curated list of reliable YouTube videos that patients planning to undergo gastrostomy, as well as their family caregivers, can view to meet their educational needs. Hyeon Sik Chu analyzed and interpreted the data and wrote the manuscript. Hanyi Lee contributed to the study conceptualization and design. All the authors have reviewed and approved the final version of the manuscript. None to declare. This study does not include human or animal participants.
Publicly available YouTube videos were evaluated in this study. Therefore, ethics committee approval was not required for this study. Supporting information. |
Surgical design and efficacy of surgery for restrictive strabismus caused by thyroid-associated ophthalmopathy | 17807706-daeb-4a4e-b599-fbdb16c83a2c | 11814393 | Surgical Procedures, Operative[mh] | Subjects and methods 1.1 Subjects The clinical data of 50 patients (68 eyes) with restrictive strabismus due to thyroid-associated ophthalmopathy (TAO), in whom the affected muscles were vertical extraocular muscles (superior rectus, inferior rectus) and who underwent recession, Y-splitting, or a modified procedure at the Eye Center of Xiangya Hospital, Central South University, between March 2017 and August 2023, were retrospectively analyzed; every patient had at least one follow-up record (≥6 weeks). Patients with a history of strabismus surgery or of head or orbital trauma, and patients with special types of restrictive strabismus such as Duane retraction syndrome or extraocular muscle fibrosis, were excluded. The study was approved by the Medical Ethics Committee of Xiangya Hospital, Central South University (approval No. 2024020132) and followed the principles of the Declaration of Helsinki; all patients were informed and gave written informed consent. 1.2 General examinations 1) Ophthalmic examinations: routine preoperative visual acuity, anterior segment, and fundus examinations were performed, and best corrected visual acuity was obtained by retinoscopy with an undilated pupil. The strabismus-specific examination measured the deviation mainly with the alternate prism cover test combined with the Krimsky method, with the compensatory head posture eliminated; other examinations included monocular and binocular ocular motility, the forced duction test, diplopia, compensatory head posture, and binocular visual function (synoptophore). 2) Thyroid function tests: three thyroid function indices and three antibodies, namely free thyroxine (FT4), free triiodothyronine (FT3), thyroid stimulating hormone (TSH), thyroid stimulating hormone receptor antibody (TRAb), anti-thyroglobulin antibody (TGAb), and thyroid peroxidase antibody (TPO-Ab). 3) Auxiliary examinations: orbital MRI (fat-suppressed T2-weighted imaging, T2WI) to help confirm affected extraocular muscles with obvious morphological changes. 1.3 Surgical design All patients were operated on under general anesthesia by the same experienced surgeon. 1) Recession (Figure A): a near-fornix (Parks) conjunctival incision was made to expose the target extraocular muscle; double-locking sutures of 6-0 absorbable material were preplaced, and the muscle stump was drawn back and fixed to the superficial sclera at the designed site posterior to the insertion. 2) Y-splitting (Figure B): through a Parks incision, the target muscle was exposed and split longitudinally from its insertion into two equal halves over about 15 mm; each half received a preplaced double-locking 6-0 absorbable suture; the muscle was disinserted, and the two split bellies were re-fixed in a Y shape, one insertion-width apart, to the superficial sclera at the two ends of the designed new insertion. 3) Modified procedures (Figures C and D): two modifications of the Y-splitting procedure were used, adjusted according to intraoperative forced duction testing and observation of the eye position: Y-splitting combined with asymmetric recession (the two Y-split bellies are recessed by unequal amounts) and Y-splitting combined with suspension (one or both Y-split bellies are suspended on sutures of a chosen length). 1.4 Postoperative follow-up and examinations Follow-up assessment included deviation, eye position, ocular motility, presence of diplopia, compensatory head posture, and binocular visual function. 1.5 Statistical analysis SPSS 26.0 was used for data analysis. Normally distributed quantitative data are expressed as mean ± standard deviation, and non-normally distributed data as median (first quartile, third quartile). Differences in vertical deviation among the three groups preoperatively, on postoperative day 1, and at the follow-up endpoint were analyzed with generalized estimating equations; diplopia and binocular visual function before surgery and at the follow-up endpoint were compared with the paired fourfold-table chi-square test. P < 0.05 was considered statistically significant.
Results 2.1 Baseline characteristics Fifty patients (68 eyes) with restrictive strabismus were included: 24 men and 26 women, aged 28-77 years, (52.9 ± 9.5) years at surgery. In all TAO patients, thyroid function had returned to normal and ocular symptoms had been stable for more than 6 months before surgery. Disease duration was 6 months to 30 years, with a median of 1.0 (0.8, 2.0) years; the median follow-up endpoint was 1.8 (1.5, 7.1) months. All affected muscles were vertical extraocular muscles (inferior rectus in 40 eyes, superior rectus in 28 eyes). The median spherical equivalent was 0.0 (−0.5, +1.1) diopters (D) in the right eye and 0.0 (−0.6, +0.8) D in the left eye. Of the 68 eyes, 27 (39.7%) underwent recession, 26 (38.2%) Y-splitting, and 15 (22.1%) a modified procedure. 2.2 Comparison of vertical deviation among the three groups preoperatively, on postoperative day 1, and at the follow-up endpoint Generalized estimating equation analysis showed a group-by-time interaction (χ2 group×time = 12.088, P < 0.05), so simple-effect analyses were performed. The simple effect of time showed that in all three groups the deviation was markedly smaller on postoperative day 1 and at the follow-up endpoint than preoperatively (P < 0.001): in the recession group the deviation decreased by 23.52 prism diopters (PD) on day 1 and by 23.33 PD at the endpoint; in the Y-splitting group by 27.50 and 28.58 PD, respectively; and in the modified group by 35.67 and 37.00 PD, respectively. In none of the groups did the deviation differ significantly between day 1 and the endpoint (P > 0.05; Table , Figure ). The simple effect of group showed that, preoperatively, the deviation in the modified group was 12.48 PD larger than in the recession group (P = 0.003) and 9.59 PD larger than in the Y-splitting group (P = 0.026), while the Y-splitting group was 2.89 PD larger than the recession group, a non-significant difference (P > 0.05). On postoperative day 1 and at the follow-up endpoint, all between-group differences (at most 2.35 PD) were non-significant (all P > 0.05; Table , Figure ). 2.3 Diplopia, binocular visual function, ocular motility, and compensatory head posture before surgery and at the follow-up endpoint Of the 50 patients, 45 (90.0%) had diplopia preoperatively; at the follow-up endpoint, diplopia in the primary and reading positions had resolved in 32 (71.1%) of them, a significant difference (χ2 = 42.04, P < 0.001). Synoptophore comparisons showed that none of the 50 patients had normal retinal correspondence preoperatively, and 27 (54.0%) had an abnormal retinal correspondence range exceeding 20° vertically; at the endpoint, 18 had regained normal retinal correspondence. Only 5 patients (10.0%) retained fusion preoperatively, whereas 28 (56.0%) had recovered fusion at the endpoint, with a larger fusion range than before surgery, a significant difference (χ2 = 23.93, P < 0.001). Only 2 patients (4.0%) retained stereopsis preoperatively, versus 5 (10.0%) at the endpoint. In all patients, ocular motility at the endpoint was markedly better than before surgery, and the compensatory head posture had resolved. Pre- and postoperative examples for the superior and inferior rectus are shown in the figures. 2.4 Relationship between rectus surgical dose and vertical deviation In the recession (27 eyes) and Y-splitting (26 eyes) groups, there was no clear linear relationship between surgical dose and vertical deviation; instead, the relationship showed stage-like characteristics.
Discussion Restrictive strabismus is a type of incomitant strabismus, usually accompanied by limited ocular motility, compensatory head posture, and diplopia, which severely affect patients' work and life . Restrictive strabismus requires surgical correction to align the eyes, improve ocular motility, restore binocular single vision in the primary and reading positions (about 10° of downgaze), and improve or eliminate abnormal compensatory head posture . In TAO restrictive strabismus, the extraocular muscles are affected in decreasing order of frequency: the inferior rectus more than the medial rectus, and the superior rectus more than the lateral rectus and obliques, readily causing vertical and horizontal strabismus ; clinically, vertical strabismus is markedly more common than horizontal strabismus, so this study enrolled patients with vertical restrictive strabismus (inferior rectus involved in 40 eyes, superior rectus in 28 eyes). It has been reported that strabismus surgery performed during the active phase of TAO carries a reoperation rate of up to 50% and a higher complication rate . Therefore, surgery should be timed for when ocular inflammation has improved, extraocular muscle edema has subsided, fibrosis-related symptoms are quiescent, and thyroid function and the deviation have been stable for at least six months .
The biomechanical rationale for weakening a muscle with Y-split recession was reported two decades ago. Haslwanter et al. regarded eye movement as rotation of the globe within the orbit, so the torque T exerted by an extraocular muscle on the globe (the product of the muscle force F and the lever arm r, the distance between the center of rotation and the tangent point where the muscle contacts the globe) must be considered. Y-split recession reduces the torque T more effectively than simple recession. Mravicic et al. held that after splitting a rectus for about 15 mm, no pull is generated in the radial direction, the torque reduction between the two arms of the Y is fairly constant, and the long-term effect is reliable. In our comparison of recession, Y-splitting, and modified procedures for TAO restrictive strabismus, all three rectus-weakening procedures relieved or improved, to different degrees, the restriction on ocular movement, and the Y-splitting and modified procedures achieved higher alignment rates and better outcomes.
The core cause of the ocular deviation in TAO restrictive strabismus is inflammatory edema, degeneration, and fibrosis of the affected muscle , which impair its contractility and, to varying degrees, its relaxation, producing obvious impairment of both gross and fine eye movements. However, these patients usually had normal visual function before TAO involved the extraocular muscles, and a long compensation period elapses from onset to obvious symptoms, during which the brain develops a progressively enlarging range of compensation for the gradually deviating eye; clinically, such patients often show an abnormal retinal correspondence range exceeding 20° vertically (27 patients in this study). Subtle compensatory head posture and over-compensation may therefore make conventional deviation measurements (alternate prism cover test, Maddox rod with cover, synoptophore) mutually inconsistent, with prism measurements in particular tending to be small. Our experience is that the alternate prism cover test should be combined with the Maddox rod and performed with the compensatory head posture eliminated as far as possible. As restrictive strabismus often involves monocular gaze limitation without alternating fixation, and ocular motility is restricted, only the deviation measured with the head held straight allows one to estimate how much of the restricted muscle's tension the surgery must release. The primary aim of surgery for restrictive strabismus is to remove or reduce the restrictive tension of the affected muscle, and repeated intraoperative forced duction testing and observation of the eye position are essential to secure the surgical effect. Likewise, postoperative outcomes should not be judged solely from deviation and synoptophore measurements.
Regarding the relationship between surgical dose and deviation in restrictive strabismus, Imburgia et al. held that the dose calculation cannot be equated with that for muscles of normal strength and should not rigidly follow the "millimeter rule" for muscles. We share this view and found no clear linear dose-response relationship. There is as yet no particularly precise way to measure head posture; we also found that in most patients the deviation was not proportional to the restriction and contractility of the muscle, and a neutral point is hard to define when fusion ranges are large. These many sources of imprecision mean that the surgery cannot be designed from the preoperative deviation alone, as in routine strabismus surgery; imaging should be used to assess fibrosis of the extraocular muscles, and forced duction testing and observation of the eye position should be repeated intraoperatively, with empirical adjustment. What we summarize is therefore a range of surgical dosing rather than a precise dose-effect relationship. We also found the pattern for the superior and inferior rectus to be essentially the same.
Our synoptophore comparisons showed that none of the 50 patients had normal retinal correspondence preoperatively, while 18 regained it at the follow-up endpoint; only 5 patients (10.0%) retained fusion preoperatively versus 28 (56.0%) at the endpoint, with an enlarged fusion range. Although synoptophore results varied considerably before and after surgery, binocular function can gradually return over time to normal or near-normal retinal correspondence. For patients with restrictive strabismus, therefore, subjective and objective deviation measurements should not be the sole standard; we recommend that treatment evaluation also attend to subjective perception and functional assessment, including but not limited to compensatory head posture, diplopia, ocular motility, and more precise binocular visual function testing, and may even incorporate the improvement in patients' daily life and visual quality.
During surgery for TAO restrictive strabismus, our team has encountered a very small number of partial muscle ruptures; these patients were not included in the study, and the rupture was managed with repositioned suturing or partial freeing of the stump according to the situation. Akbari et al. noted that restrictive strabismus usually involves large deviations and fibrotic muscles that are extremely tight and thin, with limited retrobulbar space, making surgery highly challenging and prone to accidents such as muscle slippage, rupture, and scleral perforation. Tacea et al. performed resection of the restricted muscle in eight patients, all with diplopia; five regained binocular single vision in the primary and reading positions, but this approach still has many problems, such as loss of muscle function with a tendency to overcorrection, adhesion of the muscle stump to the nearby Tenon's capsule or scleral surface leading to unstable results or aggravated restriction, and aggravated orbital inflammation, so it is usually not the procedure of first choice. Recession of the restricted muscle remains the mainstream procedure, but Velez et al. argued that large recession of a restricted muscle may create a "bridle effect" causing vertical torsion of the globe. Y-split recession of the restricted muscle can effectively prevent it from sliding over the globe surface: after the split, no radial pull is generated, and the torque reduction between the two arms is fairly constant . At postoperative follow-up we also found that a very few patients developed eyelid retraction after supramaximal recession of a vertical rectus; intraoperatively, the check ligaments and fascial connections between the vertical muscle and the eyelid should therefore be released as fully as possible to avoid postoperative changes in eyelid position . Choi et al. performed superior rectus Y-split recession in 12 New Zealand rabbits; after 6 weeks, the upper and lower arms of the Y-split muscle had adhered to the sclera in 11, and they concluded that Y-split recession inevitably injures the central part of the muscle, causing the central split surface to adhere to the surrounding scleral tissue. Other surgeons, however, reoperating on patients after Y-split recession, found the muscle covered by a smooth white layer of tissue without scar adhesion to any surrounding structure and in the same position as set at the end of the original operation. In our own explorations at reoperation after Y-split recession, we found no scar adhesion between the muscle and the globe beyond the suture points, and the Y configuration persisted. The discrepancy from the literature may reflect differences between rabbit and human responses of the extraocular muscles and surrounding orbital tissue after surgery ; nevertheless, blunt dissection should be used as far as possible when splitting the muscle, avoiding excessive injury to muscle fibers and vascular tissue, which effectively prevents scar adhesion between the muscle and periocular tissue.
In summary, recession, Y-splitting, and the modified procedures all relieve or lessen, to different degrees, the restriction of the affected muscle on ocular movement, effectively reduce the deviation, and eliminate diplopia. Different procedures should be chosen for different severities of TAO restrictive strabismus: recession is mostly used for mild restrictive strabismus, while Y-splitting and the modified procedures are used for moderate-to-severe cases. This study addressed only the vertical recti and did not include enough patients with horizontal rectus involvement; the long-term efficacy of these procedures for restrictive strabismus in other directions requires further study. Moreover, evaluation of surgical outcome in restrictive strabismus should not rely on the deviation alone, but should pay more attention to the motor function of the affected muscles and to visual quality.
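The paired fourfold-table chi-square comparison used in the study (pre- versus postoperative diplopia and binocular function) is a McNemar-type test on the discordant pairs. A minimal sketch, with hypothetical discordant counts rather than the study's actual tables:

```python
def mcnemar_chi2(b, c, correction=True):
    """McNemar chi-square statistic for a paired 2x2 table.

    b: pairs positive before and negative after (e.g. diplopia resolved)
    c: pairs negative before and positive after (e.g. diplopia appeared)
    correction: apply Yates-style continuity correction
    """
    num = (abs(b - c) - 1) ** 2 if correction else (b - c) ** 2
    return num / (b + c)

# hypothetical counts: 32 resolved, 2 newly appeared
print(round(mcnemar_chi2(32, 2, correction=False), 2))  # prints 26.47
```

The statistic is referred to a chi-square distribution with 1 degree of freedom; the study's reported χ2 values would depend on its actual paired counts.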
The long-term effect of job mobility on workers’ mental health: a propensity score analysis | 25824c68-f8b8-459c-8f5d-9bf20e2bb792 | 9175471 | Preventive Medicine[mh] | In Europe, between 28 and 33% of the working population has at least one non-communicable disease (NCD), such as diabetes, asthma or depression . NCDs are often the result of a combination of genetic, physiological, environmental and behavioural factors . According to WHO, “mental disorders” belong to the wide class of NCDs and include the broad range of mental and behavioural disorders covered in the F Chapter of the International Statistical Classification of Diseases, tenth revision (ICD-10), such as depression, bipolar affective disorder, schizophrenia and anxiety disorders . About 264 million people suffer from depression or anxiety worldwide, and the resulting loss in productivity costs the global economy US$ 1 trillion each year . Mental health is closely connected with work as well as with work mobility. Unemployment is associated with poor mental health and psychological distress, and it can have a harmful effect on general health, since it is associated with higher mortality rates, higher hospital admission rates and long-standing illness . Analogously, working in a hostile environment may lead to physical and mental health problems and the development of dependence on substances or alcohol, and may cause long-term sickness absence and loss of productivity . There is some literature suggesting that job mobility is associated with mental health. In order to give an insight into this relationship, we distinguished between external mobility, defined as changing employer, and internal mobility, defined as changing workplace within the same organization. From now on, we will use the term job mobility to refer to external job mobility. At the worker level, if the job change is voluntary, changing jobs may have positive effects and lead to improved well-being.
In fact, starting a new job is often perceived to improve career advancement and working conditions, increase salary , increase job satisfaction and reduce strain . The reasons for deciding to change jobs are usually related to job dissatisfaction, conflicts with supervisors and/or colleagues, high physical or emotional strain, a high degree of job insecurity, inadequate working conditions and limited growth opportunities . On the other hand, if the choice to change work does not depend on the worker, as in the case of dismissal or an expired employment contract, it is possible that the new job is worse and that well-being and satisfaction are therefore reduced. However, the effect of job mobility on health has not been sufficiently investigated. In the Stanford-Terman longitudinal study , higher mortality risk was found in a sample of males experiencing many changes between unrelated jobs, adjusted for education, physical health, anxiety and depression. Regarding cardiovascular outcomes, a Scottish study did not find any association with job mobility , while among Belgian workers a change in employment turned out to be a significant risk factor for being on medication for cardiovascular diseases . Regarding mental health, anxiety and depression were not associated with frequent job mobility in a longitudinal study of Swedish workers , while they were associated in a Danish workers population cohort . There is also evidence from the literature that patients with Major Depressive Disorder show impairment in cognitive domains such as memory, processing speed, and cognitive flexibility . Some recent literature suggests that job strain is a risk factor for mental health and has an important impact on the onset of depression . Regarding shift work, the desynchronization of circadian rhythms due to night or shift work impacts cognitive performance, and this impact tends to increase as shift-work duration increases, especially for males .
Finally, physical activity is recognized as a protective factor, not only for chronic physical illness but also for mental illness. Indeed, physical activity is a key factor in the prevention and management of mental health problems such as depression, stress and anxiety and is useful for mental well-being . The current study aims to assess the relationship between job mobility and mental health in a cohort of Belgian workers followed up for 27 years. Data are drawn from the official registers of the Belgian External Service for Prevention and Protection at Work IDEWE data warehouse, and the information regarding the use of medication for mental health was considered an objective indicator of mental health disorders. In order to accurately estimate the effect of job mobility on the onset of neuropsychological diseases, a quasi-experimental approach was applied using propensity score matching with time-dependent covariates. Population and study design We performed a retrospective longitudinal cohort study of all Belgian workers included in the IDEWE data warehouse, the largest central repository of data on Belgian employees. IDEWE maintains a database including data from the annual health surveillance of Belgian employees, recorded and encoded in an electronic format using international or national classification standards . Detailed information about data collection and the data warehouse has been described earlier . Periodic health checks in Belgium are mandatory for employees who are exposed to occupational hazards . In addition to medical data, personal and work characteristics are also registered and encoded during the medical examination. The data stored in electronic medical files are extracted, translated and loaded into a data warehouse for further analysis. Data collection and variables The dataset includes data on 11,246 employees with measurements in the period between 1993 and 2019, after removing subjects lacking sex information.
The open cohort, with participants entering and leaving the cohort at different times, was followed from the index prescription date, between January 1993 and December 2019. The outcome variable was the registry-based information about the use of neuropsychological drugs. It was coded as a binary variable, with the “No” category indicating that a subject in a particular year did not take any medication for neuropsychological diseases and “Yes” otherwise. Information on job mobility was provided by the organizations where the respondents were employed. In detail, job mobility was coded as “No” if the employee did not change employment or “Yes” if he/she changed employer in a particular year. The covariates included demographic characteristics (age, sex), physical and behavioural characteristics (BMI, smoking habits, physical activity), occupational factors (job mobility) and work-related risks (listed below). A subject was defined as physically active if they reported working out in line with the WHO recommendations, that is, 30 min of moderate physical activity at least 5 days a week, or at least 20 min a day of vigorous activity on at least 3 days a week, or performed a job or household chores requiring important physical effort . However, in the current study, the category related to job or household chores was excluded. Obesity was dichotomized, considering a cut-off value of 30 for BMI. The number of underweight (BMI < 18) subjects was negligible in our sample (< 0.1%). Among the work-related risks, the following binary variables were considered: noise, shift work, manual tasks, job strain, physical load.
Self-reported information about smoking habits, work strain, manual tasks, physical load and shift work was assessed during the medical examination through the following Yes/No questions: “Are you currently a smoker?”; “Are you currently perceiving work as a strain?”; “Have you been assigned manual tasks?”; “Are you currently perceiving physical load in your work?”; “Are you currently working shifts?”. Only the exposure to noise was measured in dB, with the information provided by the employer. Statistical analysis Continuous variables were described as mean and standard deviation (the latter is reported in brackets), median, and range. Categorical variables were analysed as counts and percentages. Quantitative variables were categorized into two classes assuming the median as the cut-off value. For all categorical variables, the “No” category was used as a reference. The Kuder-Richardson formula 20 was computed to measure inter-item consistency. A high value indicates a strong relationship among the items. In order to assess the relationship between covariates and neuropsychological drug use, the unmatched analysis was first performed through the Cox model, as implemented using the “survival” package in the R environment (version 3.5.3). Due to the longitudinal nature of the data and the presence of time-varying variables, the time-dependent data set was built up according to the time-interval format, and the “coxph” function was used to estimate the parameters . Statistically significant variables at univariable analysis were included in multivariable analysis. Afterwards, the same Cox model was applied after propensity score matching. A p-value less than 0.05 was considered statistically significant.
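The Kuder-Richardson formula 20 mentioned above is, for k dichotomous items, KR-20 = k/(k − 1) · (1 − Σ p_i q_i / σ²_total), where p_i is the proportion endorsing item i, q_i = 1 − p_i, and σ²_total is the variance of total scores. A self-contained sketch (the respondent matrix is invented for illustration; the paper's exact variance convention is not stated):

```python
def kr20(responses):
    """KR-20 inter-item consistency for dichotomous (0/1) items.

    responses: list of respondents, each a list of k binary item scores.
    Uses the population variance of the total scores.
    """
    n = len(responses)
    k = len(responses[0])
    # proportion endorsing each item, and the sum of item variances p*q
    p = [sum(r[j] for r in responses) / n for j in range(k)]
    pq = sum(pi * (1 - pi) for pi in p)
    totals = [sum(r) for r in responses]
    mean_total = sum(totals) / n
    var_total = sum((t - mean_total) ** 2 for t in totals) / n
    return (k / (k - 1)) * (1 - pq / var_total)

print(kr20([[1, 1, 0], [1, 0, 0], [1, 1, 1], [0, 0, 0]]))  # prints 0.75
```

Values near 1 indicate that the Yes/No exposure items behave consistently across respondents; values near 0 indicate little shared structure.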
Propensity score analysis A propensity score analysis was done to balance workers with experience of external mobility (treated group) and workers without this experience (untreated group), in order to adjust for systematic differences in covariates that are linked to the outcome . The propensity score approach is a quasi-experimental technique widely used in the field of observational studies to mimic the randomization of clinical trials . It is used to avoid bias and balance the distribution of the covariates at every time point . A major strength of propensity score analysis is that it resolves the imbalance in covariates between the treated and the untreated by discarding subjects that cannot be matched, which increases internal validity and improves the quality of observational research, at the cost of a reduced sample size. In the Cox model, work mobility was the outcome variable, age and sex were included as fixed covariates, while smoking habit, obesity, physical activity, shift work, noise, manual tasks, job strain and physical load were included as time-dependent variables. The sequential matching algorithm was performed for each risk set in chronological order, and optimal bipartite matching with the “optmatch” package was used . Cases were matched with controls at a 1:3 ratio using the hazard of being treated (in our case, the hazard of experiencing job mobility) at a certain time point for each subject. The selected controls were chosen based on a cumulative hazard similar to that of the treated subject in each risk set, and matched subjects were removed from the later risk sets. The balance diagnostic of the matched sample was assessed through standardized mean differences (SMD). Using the strong criterion suggested by Austin , we defined a covariate as balanced if SMD < 0.1. In order to assess the amount of unmeasured confounding that was not adjusted for through the propensity score method, we computed the E-value.
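The balance check described above compares each covariate between matched treated and control workers against the SMD < 0.1 criterion. A hedged sketch with toy data (the paper used R, and pooling conventions for the denominator can vary):

```python
import math

def smd(treated, control):
    """Absolute standardized mean difference with averaged sample variances."""
    mt = sum(treated) / len(treated)
    mc = sum(control) / len(control)
    vt = sum((x - mt) ** 2 for x in treated) / (len(treated) - 1)
    vc = sum((x - mc) ** 2 for x in control) / (len(control) - 1)
    return abs(mt - mc) / math.sqrt((vt + vc) / 2)

# toy binary covariate (e.g. smoking status): clearly imbalanced here
d = smd([1, 0, 1, 1], [0, 0, 1, 0])
print(d, d < 0.1)  # prints 1.0 False
```

For binary covariates coded 0/1, the group means are simply prevalences, so the same formula applies to both continuous and dichotomous variables.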
Whether the E-value is high or low is relative to the magnitude of other covariates’ effect in the study. As an example, if most of the effects have on average a hazard ratio between 1 and 1.5, an E-value equal to 2 is large but, if it is equal to 1.2, it is not. Therefore, the unmeasured confounding should have a relative risk ratio, with both the outcome and the treatment variable (job mobility), at least equal to the E-value to subvert the observed results . The E-value was computed using “EValue” R package . In order to assess the robustness of the association between treatment for neuropsychological disease and job mobility, a sensitivity analysis was made by omitting different matching variables with SMD greater than 0.05 in the unmatched sample. We performed a retrospective longitudinal cohort study of all Belgian worker included in the IDEWE data warehouse, the largest central repository of data on Belgian employees. IDEWE disposes of a database, including data from the annual health surveillance of Belgian employees, recorded and encoded in an electronic format using international or national classification standards . Detailed information about data collection and data warehouse has been described earlier . Periodic health checks in Belgium are mandatory for employees who are exposed to occupational hazards . In addition to medical data, personal and work characteristics are also registered and encoded during medical examination. The data stored in electronic medical files are extracted, translated and loaded into a data warehouse for further analysis. The dataset includes data on 11,246 employees with measurements in the period between 1993 and 2019, after removing subjects lacking of sex information. The open cohort, with participants entering and leaving the cohort at different times, was followed by the index prescription date from January 1993 to December 2019. 
The outcome variable was the registry-based information about the use of neuropsychological drugs. It was coded as a binary variable with the “No” category (indicating that a subject in a particular year did not take any medication for neuropsychological diseases) and “Yes”, otherwise. Information on job mobility was provided by the organizations where the respondents were employed. In detail, job mobility was coded equal to “No” if the employee did not change employment or “Yes” if he/she changed employer in a particular year. The covariates included demographic (age, sex), physical and behavioural characteristics (BMI, smoking habits, physical activity), occupational (job mobility) and work-related risks (listed below). A subject was defined physically active if reported working out in line with the WHO recommendations, that is, 30 min of moderate physical activity at least 5 days a week, or at least 20 min a day of vigorous activity for at least 3 days a week, or performed a job or household chores that require important physical effort . However, in the current study, the category related to job or household chores was excluded. Obesity was dichotomized, considering a cut-off value of 30 for BMI. Number of underweighted (BMI < 18) subjects were negligible in our sample (< 0.1%). Among the work-related risks, the following binary variables were considered: noise, shift-work, manual tasks, job strain, physical load. Self-reported information about smoking habits, work strain, manual tasks, physical load and shifts work was assessed during the medical examination through the following Yes/No questions: “Are you currently a smoker?”; “Are you currently perceiving work as a strain?”; “Have you been assigned manual tasks?”; “Are you currently perceiving physical load in your work?”; “Are you currently working shifts?”. Only the exposure to noise was measured in dB and the information provided by the employer. 
Continuous variables were described as mean and standard deviation (the latter reported in brackets), median, and range. Categorical variables were analysed as counts and percentages. Quantitative variables were categorized into two classes using the median as the cut-off value. For all categorical variables, the "No" category was used as the reference. The Kuder-Richardson formula 20 was computed to measure inter-item consistency; a high value indicates a strong relationship among the items. To assess the relationship between covariates and neuropsychological drug use, the unmatched analysis was first performed through the Cox model, as implemented in the "survival" package in the R environment (version 3.5.3). Due to the longitudinal nature of the data and the presence of time-varying variables, the time-dependent dataset was built according to the time-interval (counting-process) format, and the "coxph" function was used to estimate the parameters. Variables that were statistically significant at univariable analysis were included in the multivariable analysis. Afterwards, the same Cox model was applied after propensity score matching. A p-value less than 0.05 was considered statistically significant. A propensity score analysis was performed to balance workers with experience of external mobility (treated group) and workers without this experience (untreated group), adjusting for systematic differences in covariates that are linked to the outcome. The propensity score approach is a quasi-experimental technique widely used in the field of observational studies to mimic the randomization of clinical trials. It is used to avoid bias and balance the distribution of the covariates at every time point.
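The Kuder-Richardson formula 20 mentioned above has a closed form, KR-20 = k/(k-1) x (1 - sum(p_j q_j) / var(T)), where p_j is the proportion answering "Yes" on item j, q_j = 1 - p_j, and var(T) is the variance of subjects' total scores. A minimal sketch (we assume the population variance of totals; software that uses the sample variance will differ slightly):

```python
def kr20(items):
    """Kuder-Richardson formula 20 for binary (0/1) items.
    `items`: one inner list of item responses per subject.
    Undefined (division by zero) if all total scores are identical."""
    k = len(items[0])                 # number of items
    n = len(items)                    # number of subjects
    # proportion endorsing each item
    p = [sum(subj[j] for subj in items) / n for j in range(k)]
    pq = sum(pj * (1 - pj) for pj in p)
    totals = [sum(subj) for subj in items]
    mean_t = sum(totals) / n
    var_t = sum((t - mean_t) ** 2 for t in totals) / n  # population variance
    return (k / (k - 1)) * (1 - pq / var_t)
```

Perfectly consistent response patterns give KR-20 = 1, while unrelated items give a value near 0; the study's observed value of 0.84 indicates good consistency.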
A major strength of propensity score analysis is that it resolves the imbalance in covariates between the treated and untreated groups: subjects that cannot be matched are discarded, which increases internal validity and improves the quality of observational research, at the cost of a reduced sample size. In the Cox model, work mobility was the outcome variable, age and sex were included as fixed covariates, while smoking habit, obesity, physical activity, shift work, noise, manual tasks, job strain and physical load were included as time-dependent variables. The sequential matching algorithm was performed for each risk set in chronological order and the optimal bipartite matching from the "optmatch" package was used. Cases were matched with controls at a ratio of 1:3 using the hazard of being treated (in our case, the hazard of experiencing job mobility) at a certain time point for each subject. The selected controls were chosen based on a cumulative hazard similar to that of the treated subject in each risk set, and matched subjects were removed from later risk sets. The balance of the matched sample was assessed through standardized mean differences (SMD). Using the strong criterion suggested by Austin, we defined a covariate as balanced if SMD < 0.1. To assess the amount of unmeasured confounding that was not adjusted for through the propensity score method, we computed the E-value. Whether the E-value is high or low is relative to the magnitude of the other covariates' effects in the study. As an example, if most of the effects have on average a hazard ratio between 1 and 1.5, an E-value equal to 2 is large, but an E-value equal to 1.2 is not. Therefore, an unmeasured confounder would need a relative risk association with both the outcome and the treatment variable (job mobility) at least equal to the E-value to subvert the observed results. The E-value was computed using the "EValue" R package.
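For a risk ratio RR >= 1, the E-value has the closed form E = RR + sqrt(RR x (RR - 1)) (VanderWeele and Ding). The sketch below covers only this point-estimate case; the "EValue" R package used in the study additionally handles hazard ratios (via a rare-outcome or common-outcome conversion) and confidence limits, so it will not reproduce the study's reported values exactly from the HRs alone:

```python
import math

def evalue_rr(rr: float) -> float:
    """E-value for a risk ratio: the minimum strength of association an
    unmeasured confounder would need with both treatment and outcome to
    fully explain away the observed estimate."""
    if rr < 1:                 # protective estimates: invert above the null
        rr = 1 / rr
    return rr + math.sqrt(rr * (rr - 1))
```

A null estimate (RR = 1) gives E = 1 (any confounder could explain it), and the E-value grows faster than the risk ratio itself, e.g. RR = 2 gives E of about 3.41.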
In order to assess the robustness of the association between treatment for neuropsychological disease and job mobility, a sensitivity analysis was performed by omitting different matching variables with SMD greater than 0.05 in the unmatched sample. The median age of the sample at baseline was 38 years (IQR = 35-51). The unmatched sample included a total of 11,246 subjects, of whom 368 (3.3%) changed their job at baseline (Table ) and 922 (8.2%) left their employer during the follow-up (data not shown). Age, obesity, and manual tasks showed imbalance between workers with and without external mobility. In detail, at baseline, 75.8% of workers with job mobility were aged less than 38 years compared to 46.7% of those without job mobility. Similarly, 13% of workers with job mobility were obese compared to 17.2% of those who did not change job. Furthermore, 79.6% of workers with job mobility performed manual tasks compared to 73.2% of those without job mobility. After PS matching, the matched sample of 3,092 workers had better between-group balance for all considered characteristics, with SMD < 0.1 for all the covariates (Table ). More than half of the matched sample were male workers (60.3% in the job mobility group and 60.5% among subjects that did not change job), aged less than 38 years (70.4% in the job mobility group and 70.2% in the no job mobility group), non-smokers (73% in the job mobility group and 74.4% in the no job mobility group), of normal weight (81.5% among subjects who changed job and 82.1% among subjects who did not), and physically active (67.3% in the job mobility group and 70.2% in the group without job mobility).
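For binary covariates such as those tabulated above, the SMD used for balance diagnostics is the absolute difference in group proportions divided by the pooled standard deviation. A sketch (the proportions used below are illustrative, taken from the baseline age comparison reported in the text, not the study's exact computation):

```python
import math

def smd_binary(p_treated: float, p_control: float) -> float:
    """Standardised mean difference for a binary covariate, as used for
    balance diagnostics after matching (balanced if SMD < 0.1)."""
    pooled_sd = math.sqrt((p_treated * (1 - p_treated)
                           + p_control * (1 - p_control)) / 2)
    return abs(p_treated - p_control) / pooled_sd
```

With the baseline proportions of workers under 38 years (75.8% vs 46.7%) this gives an SMD well above the 0.1 threshold, whereas near-identical proportions, as seen after matching, fall under it.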
Furthermore, most of these workers were not exposed to shift work (83.6% of the job mobility group and 80% of the group without job mobility), noise (63.5% in the job mobility group and 58.7% in the no job mobility group), job strain (98.7% in the job mobility group and 97.8% in the no job mobility group) or physical load (88% in the job mobility group and 87.5% in the no job mobility group), but they usually performed manual tasks (75.5% in the job mobility group and 76.8% in the no job mobility group) (Table ). The Kuder-Richardson formula 20 was equal to 0.84, suggesting good inter-item consistency. In the unmatched sample, job mobility was found to be a significant risk factor for neuropsychological treatment (HR = 1.330, 95% CI = 1.135-1.559), adjusted for the covariates. Furthermore, all the other covariates showed a statistically significant association with neuropsychological treatment, except for obesity. In the matched sample, job mobility (HR = 2.065, 95% CI = 1.397-3.052, P-value < 0.001) was confirmed as statistically significant. Of the other covariates, only physical activity (HR = 0.493, 95% CI = 0.332-0.733, P-value < 0.001) and job strain (HR = 3.986, 95% CI = 1.593-9.971, P-value = 0.003) were statistically significant (Table ). The E-value of treatment for neuropsychological disease was equal to 2.86, and the lower CI limit was 1.99. Given that all but one of the other HRs were less than 1.4, this E-value can be judged as relatively large. An unmeasured confounder would need a relative risk association of at least 2.86 with both treatment for neuropsychological disease and job mobility to subvert the results. In the sensitivity analysis to further assess the robustness of the associations, we removed job strain (SMDu = 0.062), obesity (SMDu = 0.115) and manual tasks (SMDu = 0.151) from the matching variables.
The results after the sensitivity analysis remained consistent and statistically significant, with an HR of 2.012 (95% CI = 1.359-2.979, P-value < 0.001). Our study demonstrated the negative impact of external job mobility on the mental health of Belgian workers, as measured through the objective indicator of drug use for neuropsychological diseases. Our paper's contribution is noteworthy, since the literature concerning the relationship between job mobility and mental health is very limited, and most studies consider burnout, self-reported measures of job satisfaction and work conditions as health outcomes. To the best of our knowledge, only two studies analyse mental health as the outcome, and their findings are not consistent. Our results are consistent with the registry-based longitudinal Danish study of Hoougard, which found an adverse effect for both male and female workers. Conversely, another study found no association between job mobility and mental health, but its study target, made up of Swedish civil servants, was completely different from ours. To explain this important result of our study, we can hypothesise that health worsens as a consequence of the stress induced by external job mobility. The application of propensity score matching with time-dependent covariates, by balancing the distribution of the covariates, managed to mimic randomization. Subsequently, the sensitivity analysis assessed the robustness of the association between treatment for neuropsychological disease and job mobility. The most important advantage of this quasi-experimental approach was the assessment of pseudo-causal effects rather than simple associations. The current literature shows that the relationship between job mobility and health is bidirectional, depending on contextual characteristics of the work and social environment of the employee.
In labour markets with high unemployment and precarious temporary jobs, mobility is often involuntary and occurs between more unhealthy jobs. Upward and voluntary mobility, with beneficial effects on mental health, is more frequently observed among high-skilled and highly educated workers. In contrast, if people perceive a gap between their intention to move and the actual possibility of changing jobs, the effect on health may be negligible or negative. The relationship between job mobility and mental health is confounded by several context-related risk factors, as well as gender, age and level of education. The method we used made it possible to neutralize the effect of many confounders. According to psychological theory, any life change, whether perceived as positive or negative, can induce social readjustment and, consequently, a stress reaction, giving rise to somatic and mental disorders. Therefore, both voluntary and involuntary mobility can activate such a causal sequence and worsen mental health. Furthermore, both job control and reward at work are important stress conditions that may have long-term effects on workers' health. Thus, if a job change improves the balance between effort and reward, as happens in voluntary and vertical mobility, health status will improve. On the contrary, it will worsen, with the possibility of developing depression. Therefore, the worsening impact of job change on mental health found here can be ascribed to involuntary horizontal mobility. Concerning other results, our study found that job strain is a significant risk factor for mental health, while being physically active has a protective role.
The harmful role of job strain in the development of neuropsychological diseases is consistent with the results of a longitudinal study conducted on the Canadian population, where it was identified as the major risk factor for depressive episodes, and in a cohort of about one hundred full-time workers in Baltimore followed for three years. A multicohort study, together with several meta-analyses including longitudinal studies, showed a prospective association between increases in job strain and poorer mental health, in addition to coronary heart disease, stroke and diabetes. Furthermore, the evidence of the protective role of being physically active is consistent with an Italian survey and a systematic review, in which the authors demonstrated that aerobic exercise is associated with better psychological health. The main strengths of this paper are the availability of extensive longitudinal data from a twenty-seven-year follow-up study and the use of an objective measure of mental health. In fact, health status was not self-reported; rather, the use of neuropsychological drugs was retrieved from the IDEWE data warehouse. The third important strength of our study relies on the use of propensity score matching to create a quasi-experimental context. However, the efficacy of this approach could have been limited by the lack of other important information, such as work satisfaction, sickness absence, family life, supportive relationships with colleagues, economic security, educational level, access to social support, healthy behaviours, job control, and workplace characteristics. Furthermore, given the lack of information about the distinction between voluntary and forced job mobility and about the specific causes of job mobility, the occurrence of other unmeasured confounding in the analysis cannot be excluded.
Finally, the healthy-worker effect might have influenced the outcome due to the selection of workers who remained in the labour force and without mental impairment during the twenty-seven years of follow-up. This healthy-worker effect may have led to an underestimation of the effects of job mobility. In addition, employees who leave or change their job may drop out and be lost to follow-up because they are no longer enrolled with the same OSH provider (IDEWE). Moreover, the specific causes of job changes were not considered, so the occurrence of some confounding in the analysis cannot be excluded. Furthermore, self-reported information on smoking habits was potentially underreported. Some questions remain unanswered. For this reason, in future research we intend to design an ad-hoc study to detect the effect of job mobility in specific segments of the working population, such as manual vs non-manual and high- vs low-skilled workers, and to examine the effect of environmental or chemical exposures on the likelihood of job mobility. The main finding of our study was that external job mobility has an impact on mental health. Programs and policies are needed to overcome the negative impact of external job mobility on mental health. Specifically, policies to support workers undergoing voluntary job change should include flexible working hours, exercise, competitive salaries, incentivizing workers with rewards and positive reinforcement, and open communication with colleagues and supervisors. Alternatively, workers undergoing involuntary job change should be supported through welfare interventions, professional requalification, and return-to-work programmes. Therefore, it is desirable to promote policies at the micro (employer) and macro (government) level to limit the impact of job change on the mental health of workers.
Pan-Asian adapted ESMO Clinical Practice Guidelines for the diagnosis, treatment and follow-up of patients with early breast cancer | 550bd139-eabd-4d90-96ee-e18c5f77ecbc | 11145753 | Internal Medicine[mh] | In 2020, there were an estimated 2.3 million new cases of female breast cancer worldwide, , accounting for 11.7% of all new cancer cases. Among women worldwide it accounted for 24.5% of cancer diagnoses, and, with nearly 700 000 deaths (15.5% among women and 6.9% of all cancer deaths), was the single biggest cause of cancer death. , Male breast cancer is very rare, accounting for <1% of all cases of malignancies in men and <1% of all breast cancers worldwide. The incidence of breast cancer was lowest for the continent of Asia, , but with over a million new cases in 2020, it remained the most common cancer amongst Asian women, and accounted for 45.3% of all breast cancer cases worldwide. In 2020, breast cancer was the second largest cause of cancer death behind lung cancer in Asian women and accounted for over half of all breast cancer-related deaths worldwide. , , However, significant regional differences were observed with mainland China having the highest number of cases of breast cancer (416 371 cases and 117 174 deaths), accounting for 18.4% of global breast cancer cases in 2020 based on data from the GLOBOCAN cancer today database 2020, followed by Japan (92 024 cases and 17 081 deaths), South Korea (25 814 cases and 3009 deaths) and Singapore (3662 cases and 921 deaths), with additional registry data available for Japan and Singapore. , The corresponding age-standardised incidence rates (ASIRs) per 100 000 of the population were 39.1, 76.3, 64.2 and 77.9 for mainland China, Japan, South Korea and Singapore, respectively, with the highest ASIRs corresponding to those regions with the highest human development indices in terms of life expectancy, education and national income. 
The mortality-to-incidence (M/I) ratio, defined as the number of deaths that occur compared to the number of breast cancers diagnosed each year, across Asia was 0.32, the second highest behind Africa, and higher than the world's average of 0.28. Again, there were large regional variations in the M/I ratios between the different regions of Asia, with high-income countries such as Singapore, Japan and South Korea generally having higher incidences of breast cancer, due to rapid westernisation in terms of nutritional and lifestyle changes, and lower mortality rates, due to access to improved treatment and screening programmes. An important factor affecting mortality from breast cancer is stage at presentation, which tends to be lower in women from high-income countries or regions and higher in women from low- and low-to-middle-income countries (LMIC) or regions. For example, 63.4% of breast cancer diagnoses in the high-income regions of Asia were stage I and II compared with 33.6% and 43% in low- and LMICs, respectively. Notably, the age of presentation for women with breast cancer in Asia peaks ∼10 years earlier than for women from western countries. Also, estrogen receptor-positive (ER+) breast cancer is the most common subtype across Asia, ranging from 76% of breast cancer cases in Japan to 53% for women of Malay and Indian origin in Malaysia and Singapore. The prevalence of human epidermal growth factor receptor 2 (HER2)-positive status across Asia is more variable, and is lowest in Japanese (15%) and Indian (17%) women, and highest in Hong Kong Chinese (43%) and Indonesian (45%) women.
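The M/I ratio defined above is a simple quotient of annual deaths over new cases. A quick illustration using the 2020 GLOBOCAN figures cited earlier for the four regions with registry data:

```python
# (cases, deaths) for female breast cancer, GLOBOCAN 2020, as cited above
cases_2020 = {
    "China": (416_371, 117_174),
    "Japan": (92_024, 17_081),
    "South Korea": (25_814, 3_009),
    "Singapore": (3_662, 921),
}

# mortality-to-incidence ratio = deaths / cases
mi_ratio = {region: deaths / cases
            for region, (cases, deaths) in cases_2020.items()}
```

China's ratio comes out near the world average of 0.28, while South Korea's is the lowest of the four, consistent with the text's point that high-income regions combine high incidence with comparatively low mortality.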
The most recent European Society for Medical Oncology (ESMO) Clinical Practice Guidelines for the diagnosis, treatment and follow-up of patients with early breast cancer were submitted for publication in 2023, and a decision was taken by ESMO and the Korean Society of Medical Oncology (KSMO) that these latest ESMO guidelines should be adapted for the management and treatment of patients of Asian ethnicity. This manuscript summarises the Pan-Asian adapted guidelines developed and agreed at a hybrid virtual/face-to-face working meeting that took place in Seoul on 23 September 2023, hosted by KSMO. Each recommendation is accompanied by the level of evidence (LoE), grade of recommendation (GoR) ( , available at https://doi.org/10.1016/j.esmoop.2024.102974 ) and the percentage consensus reached. This Pan-Asian adaptation of the current ESMO Clinical Practice Guidelines was prepared in accordance with the principles of ESMO standard operating procedures ( https://www.esmo.org/Guidelines/ESMO-Guidelines-Methodology ) and was a KSMO–ESMO initiative endorsed by the Chinese Society of Clinical Oncology (CSCO), the Indonesian Society of Hematology and Medical Oncology (ISHMO), the Indian Society of Medical and Paediatric Oncology (ISMPO), the Japanese Society of Medical Oncology (JSMO), the Malaysian Oncological Society (MOS), the Philippine Society of Medical Oncology (PSMO), the Singapore Society of Oncology (SSO), the Taiwan Oncology Society (TOS) and the Thai Society of Clinical Oncology (TSCO). An international panel of experts was selected, comprising five experts from the KSMO, three from ESMO and two from each of the nine other oncological societies. Only two of the five expert members from the KSMO (JS and Y-HP) were allowed to vote on the recommendations, together with the experts from each of the nine other Asian oncology societies (n = 20).
All 20 Asian experts provided comments on the pre-meeting survey and one consensus response per society (see , available at https://doi.org/10.1016/j.esmoop.2024.102974 ). Only one voting member per Asian society was present at the hybrid/face-to-face meeting. None of the additional members of KSMO and none of the ESMO experts or additional representatives of ESMO were allowed to vote; they were present in an advisory role only (see , available at https://doi.org/10.1016/j.esmoop.2024.102974 ). All the Asian experts (n = 20) approved the revised recommendations.
Scientific adaptations of the ESMO recommendations
In the initial pre-meeting survey, the 20 voting Asian experts reported on the 'acceptability' of the 97 recommendations for the diagnosis, treatment and follow-up of patients with early breast cancer from the most recent ESMO Clinical Practice Guidelines ( , available at https://doi.org/10.1016/j.esmoop.2024.102974 ), in the eight categories outlined in the text below and in . A lack of agreement in the pre-meeting survey was established for 22 recommendations, 18 of which were discussed at the hybrid virtual/face-to-face working meeting in Seoul to adapt the recently published ESMO Clinical Practice Guidelines. 'Recommendation 4h' was also discussed because several of the Asian experts left comments in their responses to the survey. For each of 'recommendations 1f, 4b, 4d and 5i' there were discrepancies relating to their applicability in certain regions of Asia and not their 'scientific applicability'. As a result, these were not discussed at the hybrid virtual/face-to-face meeting. No new recommendations were added, but the original ESMO 'recommendation 6d' ( , available at https://doi.org/10.1016/j.esmoop.2024.102974 ) was relocated to become 'recommendation 3v' in .
The guideline recommendations outlined in the text below and in for the diagnosis, treatment and follow-up of Asian patients with early breast cancer have been agreed by the Pan-Asian panel of experts based exclusively on the available scientific evidence and their professional opinions. It is acknowledged that regional differences in availability of drugs, equipment and testing facilities, as well as reimbursement and access to treatment, may affect the implementation of certain of these recommendations. Where possible, the recommendations have been amended to take these regional differences into account.
1 Screening, diagnosis, pathology and molecular biology—recommendations 1a-m
The Pan-Asian panel of experts agreed with and accepted completely (100% consensus) the original ESMO recommendations, 'recommendations 1b-f, 1g-k and m' (see , available at https://doi.org/10.1016/j.esmoop.2024.102974 ) without change. In relation to 'recommendation 1a', based on data from the Korean Breast Cancer Society and the Korean Central Cancer Registry, the highest frequency of breast cancer in 2017 was observed in women 40-49 years of age, accounting for a third of all new cases. As mentioned previously in the 'Introduction', this is nearly 10 years earlier than that observed in Europe and America, suggesting that the ESMO-recommended age for mammography screening of 50-69 years is too late for Asian populations. This is supported by the breast screening guidelines for several regions of Asia, including Japan and Korea, which recommend breast cancer screening for women over the age of 40, while Taiwan and mainland China recommend breast cancer screening for all women with an average risk of breast cancer aged 45-69. Furthermore, a Korean population-based study reported a 31.98% net benefit, in terms of breast cancer mortality reduction from breast screening, in women aged 45-49 years.
Also, a net benefit of 22.42% was observed in women in the youngest, 40-44 years, age bracket. Taking into account the differences in the epidemiology of breast cancer observed across Asia and the benefit of breast cancer screening reported in the Korean study, the original ESMO ‘recommendation 1a’ ( , available at https://doi.org/10.1016/j.esmoop.2024.102974 ) was modified as per the bold text below and in ( 100% consensus ), to read as follows: 1a. Regular (every 2 years) mammography screening is recommended in women aged 45 -69 years [I, A]. Regular mammography may also be carried out in women aged 40- 44 and 70-74 years, where there is emerging evidence of benefit [I, B; consensus = 100% ]. For ‘recommendation 1l’, there was a great deal of discussion around the benefit of screening for programmed death-ligand 1 (PD-L1). This was particularly the case for therapeutic regimens that included immune checkpoint inhibitors (ICIs) in patients with early-stage triple-negative breast cancer (TNBC). However, the results of the phase III KEYNOTE-522 study in treatment-naïve patients with stage II/III TNBC found that the addition of pembrolizumab to a neoadjuvant chemotherapy (ChT) regimen improved pathological complete responses (pCR) and event-free survival (EFS) rates (hazard ratio [HR] = 0.63; 95% confidence interval [CI] = 0.48-0.829), independent of PD-L1 status. Furthermore, the phase III IMpassion031 study found the addition of atezolizumab to a neoadjuvant ChT regimen of nab-paclitaxel, doxorubicin and cyclophosphamide to improve pCR compared with ChT plus placebo, independent of PD-L1 status. Consequently, it was agreed that decisions regarding the inclusion of ICIs in treatment regimens were not likely to be affected by PD-L1 expression and as a result, the wording for ‘recommendation 1l’ remained unchanged with 100% consensus . 
2 Staging and risk assessment—recommendations 2a-e
The Pan-Asian panel of experts agreed with and accepted completely (100% consensus) the original ESMO recommendations, 'recommendations 2b-d', without change. For ESMO 'recommendation 2a', the reference text to be used for staging was discussed because in Korea the seventh, and not the eighth, edition of the TNM Classification of Malignant Tumours is the preferred edition. There was also some discussion regarding how practical the whole staging paradigm of the eighth edition was for clinical practice. However, in the guidelines of the College of American Pathologists, TNM is a part of staging. It was thus decided to retain the eighth edition in the recommendation but to shorten the recommendation, removing 'Union for International Cancer Control tumour–node–metastasis' from the original ESMO 'recommendation 2a', to read as the text below and in . 2a. Disease stage and final pathological assessment of surgical specimens should be made according to the World Health Organization classification of tumours and the eighth edition of the TNM staging system [V, A; consensus = 100%]. For ESMO 'recommendation 2e', several of the Pan-Asian panel of experts pointed out that, if available, positron emission tomography (PET)–computed tomography (CT) scanning is only used if conventional methods, such as CT or bone scan-based methods, have proven inconclusive. Thus, the wording for 'recommendation 2e' was modified as per the bold text below and in to read as follows: 2e. [18F]2-fluoro-2-deoxy-D-glucose (FDG)-positron emission tomography (PET)–CT scanning may be an option for high-risk patients and when conventional CT/bone scan methods are inconclusive [II, B; consensus = 100%]. A proposed algorithm for the diagnostic work-up and staging of early breast cancer is presented in , available at https://doi.org/10.1016/j.esmoop.2024.102974 .
3 General management principles—recommendations 3a-v
The Pan-Asian panel of experts agreed with and accepted completely (100% consensus) the original ESMO recommendations, 'recommendations 3a-c, e-h, j-k and n-u', without change. While there was consensus amongst the Pan-Asian panel of experts regarding ESMO 'recommendation 3d' that age should not be the primary determinant of treatment decisions, there was some discussion that for very young patients age could be an important factor in addition to biology. Long-term follow-up data from the SOFT and TEXT trials, in premenopausal women with estrogen/progesterone receptor-positive (ER/PgR+) early breast cancer, showed 5 years of exemestane and ovarian function suppression (OFS) to significantly improve the 12-year overall survival (OS) in women under 35 years of age (4.0%). Despite these data, it was generally agreed that cancer stage and biology should always be the primary determinants of treatment decisions, although age is an important factor for patients with hormone receptor-positive/HER2-negative (HR+/HER2−) breast cancer. Therefore 'recommendation 3d' remained unchanged (100% consensus). There was a great deal of discussion around ESMO 'recommendation 3i' regarding the benefits of breast-conserving surgery (BCS) plus radiotherapy (breast-conserving therapy [BCT]) over radical mastectomy, due to a discrepancy in the data from Italian and Dutch studies. However, findings reported by the Korean Breast Cancer Registry, which evaluated 45 770 patients with early breast cancer, found that the 10-year OS for those receiving BCT was better than for those receiving radical mastectomy (HR = 1.541; 95% CI = 1.392-1.707; P < 0.001). The breast cancer-specific survival rate was also better for the BCT cohort (HR = 1.541; 95% CI = 1.183-1.668; P < 0.001).
There was further discussion regarding women carrying a germline BRCA pathogenic variant (BRCA-positive), for whom mastectomy is frequently the preferred option in many regions of Asia. In a Chinese study investigating BCT in women with BRCA-positive breast cancer, the 5-year cumulative recurrence-free survival (RFS) was comparable for patients receiving BCT (HR = 0.95; 95% CI = 0.89-1.00) and those receiving mastectomy (HR = 0.93; 95% CI = 0.85-1.00), after adjustment for clinicopathological characteristics and systemic treatment. Within the BRCA-positive cohort there was no significant difference in disease-free survival (DFS) (HR = 1.17; 95% CI = 0.57-2.39; P = 0.68) or survival (HR = 1.44; 95% CI = 0.22-9.44; P = 0.70) for patients receiving BCT compared with those receiving mastectomy. These results are in line with a meta-analysis comparing BCT with mastectomy in BRCA-positive women, which concluded that survival outcomes are comparable between the two treatment options. It was therefore agreed that there is no strict clinical need for mastectomy with reconstruction, but it may still be the preferred treatment in regions such as the Philippines and Indonesia, where radiotherapy (RT) is not widely available in all hospitals and patients may not be willing or able to afford to travel to distant RT facilities. Also, in many regions of Asia, tumours are typically T2 and T3 at diagnosis, which it was felt may limit the relevance of findings from clinical trials in which tumours are typically smaller. ESMO 'recommendation 3i' was agreed, however, with the wording modified as per the bold text below and in to read as follows: 3i. BCS with post-operative RT is the recommended local treatment option for the majority of patients with early breast cancer (when compatible with patient preference and available resources) [I, A; consensus = 100%].
While there was consensus for ESMO 'recommendations 3l and 3m', it was highlighted that across Asia there is a wide variation in stage of presentation. Less-developed regions are more likely to have patients presenting with later-stage breast cancer than more-developed regions. For example, more than half of patients present with stage III or IV breast cancer in India compared with 76% presenting with stage I or II disease in South Korea. For those regions where advanced disease is more common, the relevance of ESMO 'recommendations 3l and 3m' ( , available at https://doi.org/10.1016/j.esmoop.2024.102974 ) was questioned. Regarding 'recommendation 3l', the long-term follow-up of the phase III IBCSG 23-01 randomised trial in patients with sentinel lymph node (SLN) micrometastases found the DFS at 10 years was 76.8% (95% CI = 72.5-81.0) for patients who did not have axillary lymph node dissection (ALND) versus 74.9% (95% CI = 70.5-79.3) for patients who underwent ALND (HR = 0.85; 95% CI = 0.65-1.11; log-rank P = 0.24; P = 0.0024 for non-inferiority). It was thus agreed that further axillary surgery was not required in this group of patients, and the panel of Pan-Asian experts agreed with 'recommendation 3l', with a minor modification, removing the word 'eventually', to read as below and in with 100% consensus: 3l. In the absence of prior primary systemic treatment (PST), patients with micrometastatic spread and those with limited SLN involvement (1-2 affected SLNs) in cN0 following BCS with subsequent whole-breast RT (WBRT) including the lower part of the axilla, and adjuvant systemic treatment, do not need further axillary surgery [II, A; consensus = 100%]. The Pan-Asian panel of experts agreed that routine ALND was not required for patients with breast cancer who, following SLN biopsy (SLNB), were found to have metastases to 1 or 2 SLNs. Thus ESMO 'recommendation 3m' was agreed with the minor modifications shown in bold below and in : 3m.
ALND following positive SLNB with <3 involved SLNs is generally recommended only in the case of suspected high axillary disease burden, or with impact on further adjuvant systemic treatment decisions [II, A; consensus = 100%]. There was a robust discussion around ESMO ‘recommendation 3v’ (originally recommendation 6d in , available at https://doi.org/10.1016/j.esmoop.2024.102974 ) and the administration of granulocyte colony-stimulating factor (G-CSF) with dose-dense schedules of ChT to reduce post-ChT febrile neutropenia. In a meta-analysis by the Early Breast Cancer Trialists’ Collaborative Group (EBCTCG), dose-dense ChT was found to provide a benefit over standard-schedule ChT for disease recurrence (10-year gain = 3.4%; 95% CI = 2.2% to 4.5%; log-rank 2p < 0.0001), breast cancer mortality (10-year gain = 2.4%; 95% CI = 1.3% to 3.4%; log-rank 2p < 0.0001) and all-cause mortality (10-year gain = 2.7%; 95% CI = 1.6% to 3.8%; log-rank 2p < 0.0001). Similar results were found with subgroup analyses based on ER and PgR status, HER2 status, grade, Ki-67 status and histological type. Furthermore, it was found that primary prophylaxis with G-CSF, mandated in all 2-weekly dose-dense adjuvant ChT schedules, led to lower levels of grade 3-4 neutropenia and neutropenic sepsis than in control arms. The benefits of prophylactic use of G-CSFs were also reported in a retrospective Japanese study investigating the use of G-CSF or pegfilgrastim (the pegylated form of the G-CSF analogue filgrastim) with perioperative ChT in patients with early breast cancer over a 10-year period from January 2010 to October 2020. It was noted that febrile neutropenia-related hospitalisations decreased in the second half of the study period despite the use of escalated regimens and that prophylactic pegfilgrastim likely contributed to this reduction [odds ratio (OR) of 0.879; 95% CI = 0.778-0.993; P = 0.0384].
Furthermore, a meta-analysis of the primary use of prophylactic G-CSF in trials using a docetaxel plus cyclophosphamide regimen found the risk of febrile neutropenia was reduced by 92.3% with prophylactic G-CSF (pooled OR = 0.077; 95% CI = 0.013-0.460; P = 0.005). However, despite these results, there is still some question over the benefits of G-CSF in ICI-containing ChT regimens, and not all regions of Asia use dose-dense schedules for all subtypes of early breast cancer, for example node-negative disease. Thus, as a result of these discrepancies and the uncertainty over the benefits of G-CSF use with all ChT regimens, the GoR for ‘recommendation 3v’ was downgraded from ‘A’ to ‘B’ with 100% consensus, as is shown in bold below and in : 3v. The use of dose-dense schedules of ChT, with granulocyte colony-stimulating factor (G-CSF) support, should be considered given their documented benefit over non-dose-dense schedules [I, B; consensus = 100%]. presents a proposed algorithm for the treatment of early breast cancer and presents a proposed algorithm for the management of axillary lymph node involvement. 4 Management of ER-positive/HER2-negative early breast cancer—recommendations 4a-l The Pan-Asian panel of experts agreed with and accepted completely (100% consensus) the original ESMO recommendations, ‘recommendations 4a, b, d-f.1 and i-l’, without change. For ESMO ‘recommendation 4c’, the routine use of gene expression assays for guiding decisions on adjuvant ChT was questioned because, while the data from the West German Study Group Plan B trial demonstrated the potential for such assays in patient stratification, they are not routinely used or widely accessible throughout Asia. Similar concerns were raised regarding the accessibility and routine use of endocrine response assessment.
Therefore, while the Pan-Asian panel of experts agreed about the science of both gene expression assays and endocrine response assessment, they downgraded the GoR from ‘A’ to ‘B’ and modified the wording, changing the word ‘can’ to ‘may’ as shown in bold below and in , as follows: 4c. In cases of uncertainty about indications for adjuvant ChT (after consideration of all clinical and pathological factors), gene expression assays and/or endocrine response assessments may be used to guide decisions on adjuvant ChT [I, B; consensus = 100%]. There was a great deal of discussion around ESMO ‘recommendation 4g’ and the use of bisphosphonates in the early breast cancer setting. In the phase III AZURE trial the use of the bisphosphonate zoledronic acid did not improve either the 7-year OS (adjusted HR = 0.93; 95% CI = 0.81-1.08; P = 0.37) or the invasive disease-free survival (iDFS) (HR = 0.93; 95% CI = 0.82-1.05; P = 0.22) rate compared to the control group for premenopausal and perimenopausal women, independent of ER status, tumour stage and lymph node involvement. Preclinical evidence suggests that the lack of efficacy of bisphosphonates in these women may be, at least in part, due to the levels of estrogens, and the Pan-Asian panel of experts therefore agreed that there was no benefit in treating premenopausal women with bisphosphonates, which could be detrimental for younger patients with reduced bone density. In the EBCTCG meta-analysis of randomised trials investigating adjuvant bisphosphonate treatment in early breast cancer, it was found that for postmenopausal women, there was a significant reduction in disease recurrence (first-event rate ratio [RR] = 0.86; 95% CI = 0.78-0.94; 2p = 0.002), distant recurrence (RR = 0.82; 95% CI = 0.74-0.92; 2p = 0.0003), bone recurrence (RR = 0.72; 95% CI = 0.60-0.86; 2p = 0.0002) and breast cancer mortality (RR = 0.82; 95% CI = 0.73-0.93; 2p = 0.002).
However, there is no specific evidence of the effect that adjuvant bisphosphonate treatment has on disease recurrence in postmenopausal Asian women with early breast cancer and, while there was consensus that bisphosphonates should be used for treating postmenopausal women with treatment-related bone loss, it was suggested that bisphosphonates are not routinely used to prevent disease recurrence in Asia. As a result, the GoR for the use of bisphosphonates in patients at high risk of relapse was downgraded from ‘A’ to ‘B’ in ‘recommendation 4g’ as per the bold text below and in : 4g. Bisphosphonates are recommended in women without ovarian function (postmenopausal or undergoing OFS), especially if at high risk of relapse [I, B; consensus = 100%] or treatment-related bone loss [I, A; consensus = 100%]. For ESMO ‘recommendation 4h’ there was some discussion about whether the cyclin-dependent kinase 4/6 (CDK4/6) inhibitor ribociclib should also be incorporated into the recommendation, based on the exciting interim data from the phase III NATALEE trial in patients with HR+/HER2− early breast cancer, which evaluated adjuvant ribociclib with endocrine therapy versus endocrine therapy alone and showed the 3-year iDFS to be significantly longer in the combination group (90.4%) than with endocrine therapy alone (87.1%; P = 0.0014). However, because ribociclib has, at present, not been given approval for use in early breast cancer by either the US Food and Drug Administration (FDA) or European Medicines Agency (EMA), the wording for ‘recommendation 4h’ remained unchanged (100% consensus). Recently reported results from a preplanned OS interim analysis of high-risk early breast cancer patients randomised to receive endocrine therapy for at least 5 years plus or minus the CDK4/6 inhibitor abemaciclib for 2 years showed the benefit of abemaciclib in terms of iDFS and distant RFS, with HRs of 0.68 (95% CI = 0.60-0.77) and 0.675 (95% CI = 0.59-0.77), respectively.
These data suggest that the addition of abemaciclib to endocrine therapy reduces the risk of a patient developing invasive disease and distant disease recurrence beyond the pivotal 5-year mark in the adjuvant setting. Follow-up of OS is ongoing. A proposed algorithm for treatment of HR+/HER2− early breast cancer is presented in . 5 Management of HER2-positive early breast cancer—recommendations 5a-i The Pan-Asian panel of experts agreed with and accepted completely (100% consensus) the original ESMO recommendations, ‘recommendations 5a-g and i’, without change. For ESMO ‘recommendation 5h’ the benefit of the addition of pertuzumab to trastuzumab for the adjuvant treatment of patients with HER2-positive breast cancer was discussed based on the findings of the phase III APHINITY trial, where the OS benefit at both the 6-year (HR = 0.85; 95% CI = 0.67-1.07; P = 0.17) and 8-year (HR = 0.83; 95% CI = 0.68-1.02; P = 0.078) follow-up failed to reach statistical significance. There was, however, a consistent improvement in iDFS, where 88.4% of patients in the pertuzumab group versus 85.8% of patients in the placebo group were event-free at the 8-year follow-up, which corresponded to an absolute benefit of 2.6% (95% CI for the difference = 0.7-4.6). Subgroup analysis of iDFS data based on node status revealed that patients with node-positive HER2-positive breast cancer receiving pertuzumab had a 4.53% difference in EFS at the 6-year follow-up (95% CI = 1.92-7.14) compared to those receiving placebo, and there was no clear benefit seen in the node-negative patients (0.07% difference in iDFS event-free survival; 95% CI = −2.02 to 2.17). Analysis by HR status revealed that there was a benefit for addition of pertuzumab in both the HR+ (2.47% difference in iDFS event-free rate; 95% CI for the difference = −0.66 to 5.60) and HR− (3.0% difference in iDFS event-free rate; 95% CI for the difference = 0.76 to 5.23) subgroups.
Further stratification of the iDFS data revealed that while patients in the node-positive subgroup benefited from pertuzumab irrespective of whether they were HR+ (4.81% iDFS EFS; 95% CI = 1.59% to 8.03%) or HR− (4.10% iDFS EFS; 95% CI = −0.34% to 8.55%), there was no clear benefit for the node-negative subgroups (for the node-negative HR+ subgroup, iDFS EFS = 0.14%; 95% CI = −2.47% to 2.74%; and for the node-negative HR− subgroup, iDFS EFS = −0.05%; 95% CI = −3.85% to 3.47%). Thus, based on these results, the Pan-Asian panel of experts agreed with ESMO ‘recommendation 5h’ ( , available at https://doi.org/10.1016/j.esmoop.2024.102974 ) without modification with 100% consensus. presents an algorithm for the treatment of HER2-positive early breast cancer. 6 Management of TNBC—recommendations 6a-j.2 The Pan-Asian panel of experts agreed with and accepted completely (100% consensus) the original ESMO recommendations, ‘recommendations 6a-e and g-i, j and j.1’ without change. Experts from three of the Asian medical societies disagreed with ESMO ‘recommendation 6f’ because it was felt that the benefit of adjuvant pembrolizumab for early TNBC is unclear, particularly with respect to pCR status. However, in the randomised phase III KEYNOTE-522 trial investigating the addition of pembrolizumab to neoadjuvant ChT in patients with early TNBC, the 5-year EFS was 81.3% (95% CI = 78.4% to 83.9%) in the pembrolizumab group compared with 72.3% (95% CI = 67.5% to 76.5%) in the placebo group. The distant disease progression-free or distant RFS rates at 5 years were 84.4% for patients receiving pembrolizumab and 76.8% for patients receiving placebo (HR = 0.64; 95% CI = 0.49-0.84). Recently presented data from a prespecified, non-randomised, exploratory analysis reported 5-year EFS rates for the pembrolizumab and placebo groups of 92.2% versus 88.2% for patients with a pCR, and 62.6% versus 52.3% for patients without a pCR.
Thus, it was agreed that the original ESMO ‘recommendation 6f’, which read: 6f. Pembrolizumab should be administered every 3 weeks throughout the neoadjuvant phase [I, A] and for nine 3-week cycles during the adjuvant phase, regardless of pCR status or administration of RT [I, A; ESMO-MCBS v1.1 score: A] should be modified to remove ‘or administration of RT’, which was felt to be unnecessary, although RT can be given with this combination, as shown below and in : 6f. Pembrolizumab should be administered every 3 weeks throughout the neoadjuvant phase [I, A] and for nine 3-week cycles during the adjuvant phase, regardless of pCR status [I, A; ESMO-MCBS v1.1 score: A; consensus = 100%]. The observation that poly (ADP-ribose) polymerase (PARP) inhibitors upregulate PD-L1 in breast cancer cells and synergise with ICIs in a syngeneic breast cancer tumour model provides a strong rationale for the combination of olaparib with ICIs in early TNBC. However, for ESMO ‘recommendation 6i.1’, concern was raised by members of the Pan-Asian panel of experts regarding the safety of the combination of the PARP inhibitor, olaparib, with ICIs. At present, there are no data for olaparib plus ICIs in early TNBC, but it is anticipated that the randomised phase II KEYLYNK-009 study comparing the efficacy of adjuvant olaparib plus pembrolizumab with ChT plus pembrolizumab following induction with first-line ChT in patients with locally recurrent inoperable TNBC will provide important data. Data regarding the safety of olaparib plus ICIs can be found in the phase Ib/II KEYNOTE-365 study of pembrolizumab plus olaparib in patients with metastatic castration-resistant prostate cancer, where it was reported that the treatment-related adverse events (TRAEs) for the combination were consistent with those of either agent alone.
Thus, the panel of experts agreed with ESMO ‘recommendation 6i.1’ but felt the recommendation needed more clarity regarding the recommended use of olaparib plus ICIs. ESMO ‘recommendation 6i.1’, which read: 6i.1 The combination of ICIs and olaparib may be considered on an individual basis [V, C] was amended to read as below and in , with the changes shown in bold (100% consensus): 6i.1. In patients with germline BRCA mutations with residual disease after ICI-containing neoadjuvant therapy, the concurrent adjuvant use of ICIs and olaparib may be considered on an individual basis [V, C; consensus = 100%]. As with ‘recommendation 6i.1’, there were some concerns about ESMO ‘recommendation 6j.2’ regarding safety. There were also doubts regarding the efficacy of the combination of pembrolizumab with capecitabine. The addition of adjuvant capecitabine after neoadjuvant ChT treatment was assessed in the Japanese/Korean CREATE-X study where, compared with the ChT-alone group, the addition of capecitabine was found to improve both DFS (69.8% versus 56.1%; HR for recurrence, second cancer or death = 0.58; 95% CI = 0.39-0.87) and the OS rate (78.8% versus 70.3%; HR for death = 0.52; 95% CI = 0.30-0.90) for patients with TNBC. The efficacy reported in the CREATE-X study was consistent with findings from a meta-analysis which found addition of capecitabine to ChT improved DFS (HR = 0.818; 95% CI = 0.713-0.938; P = 0.004) and OS (HR = 0.778; 95% CI = 0.657-0.921; P = 0.004) in the TNBC subgroup. In addition, in a phase III trial conducted by the South China Breast Cancer Group, 1-year low-dose capecitabine maintenance therapy was found to significantly improve the 5-year DFS compared to the observation group (82.8% versus 73.0%; HR for risk of recurrence or death = 0.64; 95% CI = 0.42-0.95; P = 0.03), and there was also a numerical improvement in the 5-year OS, but it was not significant (85.5% versus 81.3%; HR = 0.75; 95% CI = 0.47-1.19; P = 0.22).
Most toxicities from the combination of pembrolizumab and capecitabine in a phase II study in pretreated triple-negative and HR+/HER2− endocrine-refractory metastatic breast cancer were found to be low-grade and consistent with capecitabine monotherapy, including elevated liver tests, skin rash, fatigue, hand–foot syndrome and cytopenias. In this biomarker-unselected cohort, there was no improvement for the combination of pembrolizumab plus capecitabine [12-month progression-free survival (PFS) = 20.7%; 95% CI = 8.4% to 36.7%; 12-month OS = 63%; 95% CI = 43.2% to 77.6%] over historical data, but in a small phase Ib study of 14 patients investigating the early treatment of metastatic TNBC, the combination of pembrolizumab plus capecitabine showed superior response rates [overall response rate (ORR) = 43%] compared with pembrolizumab plus paclitaxel (ORR = 25%). Thus, while at present there are no data for the efficacy of ICIs plus capecitabine in the adjuvant setting for early TNBC, the panel agreed that ESMO ‘recommendation 6j.2’ should be modified to provide clarity over when the combination could be considered, to read as per the bold text below and in (100% consensus): 6j.2. In patients with residual disease after ICI-containing neoadjuvant therapy, the concurrent adjuvant use of ICI and capecitabine can be considered on an individual basis [V, C; consensus = 100%]. A proposed algorithm for the management of triple-negative early breast cancer is presented in . 7 Management of special situations—recommendations 7a-i The Pan-Asian panel of experts agreed with and accepted completely (100% consensus) the original ESMO recommendations, ‘recommendations 7a-h’ without change. For ESMO ‘recommendation 7i’, the survival benefit and safety of tamoxifen and aromatase inhibitors (AIs) following mastectomy for ductal carcinoma in situ (DCIS) in high-risk patients was discussed.
The benefit of AIs for breast cancer prevention was demonstrated in the international phase III IBIS-II trial comparing anastrozole with placebo in postmenopausal women at increased risk of developing breast cancer where, at 10 years, a 49% reduction in breast cancer was observed (HR = 0.51; 95% CI = 0.39-0.66; P < 0.0001). In this study, there were no significant differences in the major AEs, except for a 28% reduction in the incidence of cancer outside the breast with anastrozole. In the 9-year follow-up of the phase III NSABP B-35 study of patients with DCIS undergoing lumpectomy plus radiotherapy, there was no significant DFS benefit for anastrozole compared with tamoxifen (HR = 0.89; 95% CI = 0.75-1.07; P = 0.21), but patients in the anastrozole group had a superior breast cancer-free interval compared with the tamoxifen group (84.7% versus 83.1%; HR = 0.73; 95% CI = 0.56-0.96; P = 0.023), particularly in patients who had invasive disease (HR = 0.62; 95% CI = 0.42-0.90; P = 0.0123). Patients in the anastrozole group also had a reduced incidence of contralateral breast cancer (HR = 0.64; 95% CI = 0.43-0.96; P = 0.0322) and again, this benefit over tamoxifen was more pronounced in those patients with invasive disease (HR = 0.52; 95% CI = 0.31-0.88; P = 0.0148). The only notable difference between the two groups in terms of AEs was thrombosis or embolism, a known side-effect of tamoxifen (2.7% versus 0.8% in the anastrozole group). Thus, based on these results, the Pan-Asian panel of experts agreed with ESMO ‘recommendation 7i’ ( , available at https://doi.org/10.1016/j.esmoop.2024.102974 ) without modification with 100% consensus. 8 Follow-up, long-term implications and survivorship—recommendations 8a-m The Pan-Asian panel of experts agreed with and accepted completely (100% consensus) the original ESMO recommendations, ‘recommendations 8a-c, e, g and i-m’ without change.
It was felt that there was a discrepancy between real-world practice for testing asymptomatic patients in Asia and ESMO ‘recommendation 8d’. Results from two Canadian retrospective chart reviews revealed the low diagnostic value of routine staging investigations, such as CT scans and bone scans, in asymptomatic early breast cancer patients. These were also the findings of two prospective trials comparing patients who received frequent laboratory tests, bone scans and chest roentgenography. Such findings, as well as studies demonstrating the use of unnecessary tests and screening, have led to many professional bodies publishing lists of tests and procedures that are unlikely to be of benefit to the patient. While it was agreed that overtesting can lead to overtreatment, there is a potential benefit for such tests in high-risk patients. Thus, ESMO ‘recommendation 8d’, which reads: 8d. In asymptomatic patients, laboratory tests (e.g. blood counts, routine chemistry, tumour marker assessment) or other imaging are not recommended [I, D] was modified as per the bold text below and in , with a revision in the GoR, to read as follows: 8d. In asymptomatic patients, laboratory tests (e.g. blood counts, routine chemistry, tumour marker assessment) or other non-breast imaging for detection of relapse are not recommended [I, D] but may be considered on an individual basis [V, C; consensus = 100%]. Tamoxifen is associated with an increased risk of endometrial cancer in postmenopausal women, and the American College of Obstetricians and Gynecologists recommends that postmenopausal women taking tamoxifen should be closely monitored for symptoms of endometrial hyperplasia and cancer. However, it was felt that postmenopausal and higher-risk women would be treated with AIs and that endometrial hyperplasia can be misleading without vaginal bleeding.
It was also agreed, based on the study by Love and colleagues, that there was no evidence for the use of transvaginal ultrasound (US) for gynaecological examination in women taking tamoxifen. Thus, ESMO ‘recommendation 8h’ was modified, and the GoR was downgraded from: 8h. For patients on tamoxifen, an annual gynaecological examination is recommended [V, B]; however, routine transvaginal US is not recommended [V, D] to read as per the bold text below, and in (100% consensus): 8h. For patients on tamoxifen, an annual gynaecological examination may be considered [V, C; consensus = 100%]; however, routine transvaginal US is not recommended [V, D]. presents a proposed algorithm for adjuvant endocrine therapy in HR+ early breast cancer. B Applicability of the recommendations Following the hybrid virtual/face-to-face meeting in Seoul, the Pan-Asian panel of experts agreed and accepted completely (100% consensus) the revised ESMO recommendations for the diagnosis, treatment and follow-up of early breast cancer in patients of Asian ethnicity. However, the applicability of each of the guideline recommendations is impacted by the individual drug and testing approvals and reimbursement policies for each region. The drug and treatment availability for the regions represented by the 10 participating Asian oncological societies is summarised in , available at https://doi.org/10.1016/j.esmoop.2024.102974 , and individually for each region in , available at https://doi.org/10.1016/j.esmoop.2024.102974 . Throughout Asia, most health care provision relies on both public and private insurance. In poorer regions, public funding is more limited than in richer regions, and patients are more likely to pay ‘out of pocket’ for both biomarker-related diagnostic tests and drugs.
, available at https://doi.org/10.1016/j.esmoop.2024.102974 , provides an overview of the availability of biomarker-related tests and drugs for the diagnosis and treatment of early breast cancer, revealing that the majority are approved in most regions of Asia. In terms of biomarker-related diagnostic tests, immunohistochemistry (IHC), with the frequent exception of PD-L1, is, to some extent, covered by public health care provision in all regions of Asia, whereas genetic testing and next-generation sequencing (NGS)-based assays do not tend to be reimbursed. However, in regions where there is a disparity in the provision of oncology services, for example, in India, standardised laboratories for the provision of diagnostic tests are only located in first- and second-tier cities. With the exceptions of neratinib (which is not approved for the treatment of early breast cancer in Indonesia, Japan, the Philippines and Thailand) and ribociclib (which is not approved for the treatment of early breast cancer in Japan and Korea), drugs for the treatment of early breast cancer have been approved across all regions of Asia, although there may be differences in the indications they are approved for (e.g. trastuzumab is approved solely for metastatic disease in Indonesia, whereas in Taiwan approval is for LN+2 disease). Although many drugs for the treatment of early breast cancer are approved across Asia, a major limitation to their provision by the public sectors of the different regions is affordability. CSCO In mainland China (China), the health care system is covered by social insurance for 80% of the population, while 10% of the population have private insurance.
Biomarker-related diagnostic tests, including IHC assessment of ER, progesterone receptor (PgR), Ki67 and HER2, as well as HER2 in situ hybridisation, are covered by insurance, meaning that the 10% of patients without insurance will be out of pocket for these tests ( , available at https://doi.org/10.1016/j.esmoop.2024.102974 ). There is, however, no reimbursement for PD-L1 IHC, germline or somatic mutation analysis or gene expression risk signature assays. Those without insurance are the only patients likely to be out of pocket for trastuzumab, trastuzumab emtansine (T-DM1) and neratinib, but there is no reimbursement in China for drugs such as abemaciclib, ribociclib, olaparib, pertuzumab and pembrolizumab ( , available at https://doi.org/10.1016/j.esmoop.2024.102974 ). In China, the pan-HER receptor tyrosine kinase inhibitor pyrotinib is approved for the neoadjuvant treatment of early breast cancer. It is estimated that it takes around 1 year for drugs to be approved in China after they have received FDA or EMA approval, and it can take a further 3 months for new drugs to become available. The biggest limiting factor around accessing new treatments is whether they are covered by insurance, while the availability of new biomarker-related diagnostic tests in hospitals is the greatest limitation on access for patients. ISHMO The health care system in Indonesia is weak, with limited financial capacity and resources. The situation is further aggravated by a lack of awareness among patients and health care providers. National insurance covers the cost of IHC for ER, PgR, HER2 and Ki-67 but does not cover PD-L1 IHC, HER2 in situ hybridisation or gene expression assays ( , available at https://doi.org/10.1016/j.esmoop.2024.102974 ). Sequencing for germline or somatic BRCA1/2 mutations is also not covered and, in Indonesia, NGS is only applied for BRCA1/2 mutations.
While most drugs used for the treatment of early breast cancer are available in Indonesia, their prices make them unaffordable for national insurance and, depending on the drugs, private insurance and employers/social insurance may not cover the cost ( , available at https://doi.org/10.1016/j.esmoop.2024.102974 ). For example, trastuzumab is only covered by national insurance for metastatic breast cancer, but for the estimated 20% of the population with private insurance, the cost of trastuzumab is covered for early breast cancer. The bureaucracy of the Indonesian Food and Drug Authority (BPOM) is one of the biggest factors limiting access to new treatments and new biomarker-related diagnostic tests. The average time for approval following EMA/FDA approval is roughly 2 years, and it can take, on average, a further 2 years for new drugs to become available for use in Indonesia following national approval. ISMPO In India, both private and public health care systems exist, and it is estimated that 60% of health expenditure in India is private, including through private insurance, which is taken out by <20% of the population, and out-of-pocket expenses. The public health system has various government schemes which cover up to 40% of total health expenditure. With 30% to 40% of the population covered by employers/social insurance schemes, 40% to 50% of patients will be out of pocket for biomarker assays and drugs. In terms of biomarker tests, IHC for ER, PgR, Ki67, PD-L1 and HER2 expression, as well as HER2 in situ hybridisation, are fully reimbursed, whereas gene expression assays and genetic testing, including somatic and germline testing for BRCA1/2 mutations, are not ( , available at https://doi.org/10.1016/j.esmoop.2024.102974 ). One of the main challenges for provision of those assays that are reimbursed is that standardised labs are only located in first- and second-tier cities in India.
Most drugs for treating early breast cancer have been given approval in India, with full reimbursement available for those who are covered by insurance. In India, it can take between 1 and 5 years for drugs to be approved following EMA or FDA approval. The length of time to approval is affected by the complexity of the drug and the presence of the pharmaceutical company in India. Once approval has been given, it can take several months to a year for new drugs to become available due to factors such as manufacturing, distribution and reimbursement. Furthermore, access to new treatments and biomarker-related diagnostic tests is affected by cost, health inequities and infrastructure, as well as insurance, geographical location and cultural factors. A lack of knowledge and awareness among health care practitioners in smaller towns in India greatly affects the prescription of diagnostic tests. JSMO The Japanese health care system relies on a combination of public and private providers and emphasises preventive care, leading to one of the highest life expectancies and lowest infant mortality rates in the world. All citizens are required to have health insurance, either through their employers or the government, and ∼40% of patients have private insurance to cover cancer treatment in addition to universal health care insurance. As a result of this system, very few patients pay entirely out of pocket but typically will pay a portion (0% to 30%) of costs. Most diagnostic tests for breast cancer are available in Japan, although the only gene expression risk signature assay that currently has approval and is reimbursed is the Oncotype Dx assay, which patients are expected to pay for upfront before receiving a reimbursement of 70% or more of the cost ( , available at https://doi.org/10.1016/j.esmoop.2024.102974 ). NGS assays for somatic mutations and IHC for PD-L1 are only indicated for patients with metastatic disease.
At present, ribociclib and neratinib are not approved in Japan for the treatment of early breast cancer, but the oral fluoropyrimidine S-1, which comprises a combination of tegafur, gimeracil and oteracil potassium, has approval for the adjuvant treatment of high- and intermediate-risk HR+ HER2− early breast cancer ( , available at https://doi.org/10.1016/j.esmoop.2024.102974 ). Regulatory approval of diagnostic tests by the Pharmaceuticals and Medical Devices Agency (PMDA) in Japan can be a rigorous and time-consuming process in which manufacturers must demonstrate the safety and efficacy of these tests. Access to new treatments and the specific timeline for a new drug’s availability in Japan can vary widely depending on the drug’s complexity, market demand and various regulatory and commercial considerations. In general, new drugs may be reimbursed <6 months after approval by the PMDA. KSMO In Korea, coverage of health care costs is provided to all Korean citizens, including foreigners who have lived in Korea for >6 months, by the National Health Insurance (NHI) system. However, in addition to the NHI coverage, patients with private insurance can pay a part of their health care costs, including those for non-reimbursed, expensive new drugs, based on their insurance policy. Typically, only 10% of patients in Korea pay in full (out of pocket) for their treatment, with 15% covered by private insurance and the remaining 75% of patients covered by employers’ or social insurance. Cancer patients are categorised as having ‘serious disease’, with 95% of costs covered for most biomarker-related diagnostic tests, including IHC for ER, PgR and Ki67 as well as HER2 in situ hybridisation and BRCA1/2 mutation analysis by Sanger sequencing.
For NGS-based sequencing, there is partial reimbursement, with patients with stage I-II disease paying 90% and patients with stage III disease paying 80% of costs, and there is no reimbursement for gene expression risk signature assays ( , available at https://doi.org/10.1016/j.esmoop.2024.102974 ). Both trastuzumab and T-DM1 are covered by NHI, meaning most patients will not be ‘out of pocket’, whereas for abemaciclib, olaparib, neratinib and pembrolizumab, which are approved for the treatment of early breast cancer, there is no reimbursement. This is also the case for pertuzumab in the adjuvant setting, although 70% of the cost will be reimbursed for neoadjuvant pertuzumab ( , available at https://doi.org/10.1016/j.esmoop.2024.102974 ). With the emergence of many expensive drugs, the limited resources of the NHI budget are becoming a major issue, and the biggest limiting factor to accessing new treatments is reimbursement, with the requirement for more self-payment. This is because Korea has been categorised as a developed region, resulting in the costs of drugs being set at a much higher level than they were previously. In relation to diagnostic tests, the companion diagnostics associated with newer drugs require specific machines which are not available in the pathology labs of all hospitals. There is also a need for greater standardisation of certain diagnostic tests across the different treatment centres and laboratories throughout Korea.

MOS

In Malaysia there is a dual health care system consisting of a limited but fully funded health care system provided by the Ministry of Health (MOH) Hospitals and University Hospitals, which is available for everyone, and a private health care system which provides services to patients who are insured or willing to pay, with no reimbursement from the government.
While certain innovator drugs are listed in the MOH formulary for the respective indications, their prescriptions are subject to very strict MOH criteria and the annual budget allocations. For example, trastuzumab is only indicated for stage II-III early breast cancer and prescribed for up to a maximum of nine cycles, while ribociclib use in metastatic HR+ HER2− cancer is restricted to the first-line setting only and available for a limited number of patients per year. There is, however, a shortage of oncology specialists and an imbalance in the distribution of oncology facilities across Malaysia. Approximately 65% of the population of Malaysia, including members of the civil service and those without health care insurance, receive treatment subsidised by the MOH, but patients treated at government facilities have the option to access private centres for diagnostic tests that are not covered by the MOH health care system. The same is also true for drugs that are not covered by the MOH, where patients can purchase them for treatment at an MOH hospital. Diagnostic tests that are available free of charge through the MOH include IHC for ER, PgR and HER2, as well as HER2 FISH ( , available at https://doi.org/10.1016/j.esmoop.2024.102974 ), although turnaround time may be long. Germline testing for BRCA1/2, NGS-based assays and IHC for PD-L1 are not available through the MOH, meaning that patients either need insurance to cover the costs or they will be out of pocket ( , available at https://doi.org/10.1016/j.esmoop.2024.102974 ). It takes ∼1 year for a drug that has received FDA approval to be approved by the MOH drug bureau, although when drugs are approved by either the FDA or EMA, they can be obtained immediately via a special import licence allowed by the MOH.

PSMO

The health care system in the Philippines is primarily a mix of public and private health care providers.
It consists of government-run hospitals, local health units and an extensive network of private health care facilities which collectively strive to provide health care services to Filipinos. Social insurance (PhilHealth) costs 110 USD per person and 95% of the population use it. However, it is barely enough to cover anticancer medicines. In the Philippines, ∼20% of patients with early breast cancer will receive reimbursement for biomarker-related diagnostic tests, including IHC for ER, PgR and HER2 expression, which are available through government hospitals only and not reimbursed for private patients. IHC for PD-L1 expression is available through patient programmes and is not reimbursed, nor is HER2 in situ hybridisation, which is only available to 60% of patients. Sanger sequencing for BRCA1/2 mutations is available at a 50% reduced cost through an existing patient programme, while NGS for somatic mutations is only accessible to half of patients, with no reimbursement ( , available at https://doi.org/10.1016/j.esmoop.2024.102974 ). Most drugs are available through patient access programmes although there is no reimbursement, with the exception of trastuzumab, for which half of the cost is reimbursed through patient access programmes. Thanks to the 2018 Philippine National Cancer Control Act, any drugs that are given approval in other countries will be streamlined for approval in the Philippines, and it takes, on average, between 4 and 12 months for new drugs to become available. Cost and affordability are the biggest factors for accessing new drugs and biomarker-related tests. There is also limited access to new biomarker-related diagnostic tests and tools, which are only available in specialised centres.

SSO

The health care system in Singapore is funded by both public and private insurance.
The public system is funded through individual enforced savings (MediSave) and national health insurance, which consists of three tiers: basic [MediShield Life (MSHL)], the Integrated Shield Plan (ISP; a tie-up with private insurance) and the Enhanced Integrated Shield Plan (EISP; a tie-up with private insurance plus riders). It is estimated that over half of Singapore citizens are covered by an ISP. All IHC assays and selected FISH panels for early breast cancer diagnostics are entirely covered by the health care system, whereas genetic and gene expression profiling, including germline and somatic mutation screening, are not reimbursed ( , available at https://doi.org/10.1016/j.esmoop.2024.102974 ). In 2022 the Cancer Drug List (CDL) was created; it is updated monthly and lists the drugs deemed cost-effective according to accepted health technology assessment methods. Drugs on the CDL are covered by MSHL and ISP, whereas drugs not on the CDL can be covered by the EISP. It is estimated that 90% of cancer drugs in common usage are on the CDL, with all drug costs for early breast cancer covered by the health care system in Singapore. Time to approval for new drugs to treat early breast cancer is typically <6 months from the time of EMA or FDA approval, and they become available within about a month following approval. The biggest limiting factor for the health care system in Singapore is the provision of genetic and transcriptional assays; at present, whether these should be covered by national health insurance is under assessment.

TOS

In Taiwan nearly 100% of the population are covered by National Health Insurance (NHI). The monthly out-of-pocket payments for NHI are relatively low, although the financial coverage for reimbursement by NHI in Taiwan is basically ‘all-or-none’ ( , available at https://doi.org/10.1016/j.esmoop.2024.102974 ).
The financial burden is huge and expected to increase further in the era of immuno-oncology and precision medicine. Therefore, despite approval by the Taiwan FDA, which is largely a scientific evaluation based on the design and results of the individual pivotal trials, reimbursement is based on cost-effectiveness, the availability of other medications for the same indication and future budget burden. Sequencing and NGS-based assays are not reimbursed but, with the exception of PD-L1, IHC-based diagnostic tests for early breast cancer are ( , available at https://doi.org/10.1016/j.esmoop.2024.102974 ). Targeted therapies for treating early breast cancer are currently not reimbursed in Taiwan except for trastuzumab and biosimilars. With no co-payment system, the biggest limiting factor with regard to accessing newer therapies and diagnostic tests in Taiwan is the necessity for patient out-of-pocket payment.

TSCO

Thailand has three national health insurance schemes [Civil Servant Medical Benefit Scheme (CSMBS), Social Security Scheme (SSS) and Universal Coverage Scheme (UCS)], with beneficiaries from different sectors. All three Thai schemes allow the use of drugs in the national list of essential medicines, with expanded benefits for individuals covered by the CSMBS. Basic drug accessibility is afforded by the other two Thai schemes. In terms of biomarker-related diagnostic tests for early breast cancer, IHC for ER, PgR, Ki-67 and HER2, but not PD-L1, is covered ( , available at https://doi.org/10.1016/j.esmoop.2024.102974 ). Those patients covered by the SSS (∼20% of the population) are not reimbursed for germline BRCA1/2 mutation analysis, and there is no reimbursement at all for NGS or gene expression assays. It is estimated that <1% of the population will be out of pocket for drug costs.
It takes ∼2 years for a new drug to be approved in Thailand once it has been approved by the EMA or FDA, and between 6 and 8 months for new indications of previously approved drugs. Once approval has been given for drugs in Thailand, it can take 3-6 months for them to become available due to supply management and hospital listings, but this will be for use without reimbursement. It can take years for a drug that has been approved to be added to the list of indications that are reimbursed. This is especially the case for high-cost drugs. The biggest limiting factors for accessing new treatments and diagnostic tests are financial, including reimbursement issues. Another limiting factor for diagnostic tests in Thailand is the turnaround time.

Scientific adaptations of the ESMO recommendations

In the initial pre-meeting survey, the 20 voting Asian experts reported on the ‘acceptability’ of the 97 recommendations for the diagnosis, treatment and follow-up of patients with early breast cancer from the most recent ESMO Clinical Practice Guidelines ( , available at https://doi.org/10.1016/j.esmoop.2024.102974 ), in the eight categories outlined in the text below and in . A lack of agreement in the pre-meeting survey was established for 22 recommendations, 18 of which were discussed at the hybrid virtual/face-to-face working meeting in Seoul to adapt the recently published ESMO Clinical Practice Guidelines. ‘Recommendation 4h’ was also discussed because several of the Asian experts left comments in their responses to the survey. For each of ‘recommendations 1f, 4b, 4d and 5i’ there were discrepancies relating to their applicability in certain regions of Asia and not their ‘scientific applicability’. As a result, these were not discussed at the hybrid virtual/face-to-face meeting. No new recommendations were added, but the original ESMO ‘recommendation 6d’ ( , available at https://doi.org/10.1016/j.esmoop.2024.102974 ) was relocated to become ‘recommendation 3v’ in .
The guideline recommendations outlined in the text below and in for the diagnosis, treatment and follow-up of Asian patients with early breast cancer have been agreed by the Pan-Asian panel of experts based exclusively on the available scientific evidence and their professional opinions. It is acknowledged that regional differences in the availability of drugs, equipment and testing facilities, as well as in reimbursement and access to treatment, may affect the implementation of certain of these recommendations. Where possible, the recommendations have been amended to take these regional differences into account.

1 Screening, diagnosis, pathology and molecular biology—recommendations 1a-m

The Pan-Asian panel of experts agreed with and accepted completely (100% consensus) the original ESMO recommendations, ‘recommendations 1b-f, 1g-k and m’ (see , available at https://doi.org/10.1016/j.esmoop.2024.102974 ), without change. In relation to ‘recommendation 1a’, based on data from the Korean Breast Cancer Society and the Korean Central Cancer Registry, the highest frequency of breast cancer in 2017 was observed in women 40-49 years of age, accounting for a third of all new cases. As mentioned previously in the ‘Introduction’, this is nearly 10 years earlier than that observed in Europe and America, suggesting that the ESMO-recommended age for mammography screening of 50-69 years is too late for Asian populations. This is supported by the breast screening guidelines for several regions of Asia, including Japan and Korea, which recommend breast cancer screening for women over the age of 40, while Taiwan and mainland China recommend breast cancer screening for all women with an average risk of breast cancer aged 45-69. Furthermore, a Korean population-based study reported a 31.98% net benefit, in terms of breast cancer mortality reduction from breast screening, in women aged 45-49 years.
Also, a net benefit of 22.42% was observed in women in the youngest, 40-44 years, age bracket. Taking into account the differences in the epidemiology of breast cancer observed across Asia and the benefit of breast cancer screening reported in the Korean study, the original ESMO ‘recommendation 1a’ ( , available at https://doi.org/10.1016/j.esmoop.2024.102974 ) was modified as per the bold text below and in (100% consensus), to read as follows:

1a. Regular (every 2 years) mammography screening is recommended in women aged 45-69 years [I, A]. Regular mammography may also be carried out in women aged 40-44 and 70-74 years, where there is emerging evidence of benefit [I, B; consensus = 100%].

For ‘recommendation 1l’, there was a great deal of discussion around the benefit of screening for programmed death-ligand 1 (PD-L1). This was particularly the case for therapeutic regimens that included immune checkpoint inhibitors (ICIs) in patients with early-stage triple-negative breast cancer (TNBC). However, the phase III KEYNOTE-522 study in treatment-naïve patients with stage II/III TNBC found that the addition of pembrolizumab to a neoadjuvant chemotherapy (ChT) regimen improved pathological complete response (pCR) and event-free survival (EFS) rates (hazard ratio [HR] = 0.63; 95% confidence interval [CI] = 0.48-0.829), independent of PD-L1 status. Furthermore, the phase III IMpassion031 study found that the addition of atezolizumab to a neoadjuvant ChT regimen of nab-paclitaxel, doxorubicin and cyclophosphamide improved pCR compared with ChT plus placebo, independent of PD-L1 status. Consequently, it was agreed that decisions regarding the inclusion of ICIs in treatment regimens were not likely to be affected by PD-L1 expression and, as a result, the wording for ‘recommendation 1l’ remained unchanged with 100% consensus.
2 Staging and risk assessment—recommendations 2a-e

The Pan-Asian panel of experts agreed with and accepted completely (100% consensus) the original ESMO recommendations, ‘recommendations 2b-d’, without change. For ESMO ‘recommendation 2a’, the reference text to be used for staging was discussed because in Korea the seventh, and not the eighth, edition of the TNM Classification of Malignant Tumours is the preferred edition. There was also some discussion regarding how practical the whole staging paradigm of the eighth edition was for clinical practice. However, in the guidelines of the College of American Pathologists, TNM is a part of staging. It was thus decided to leave the eighth edition in the recommendation but to shorten the recommendation, removing ‘Union for International Cancer Control tumour–node–metastasis’ from the original ESMO ‘recommendation 2a’, to read as the text below and in .

2a. Disease stage and final pathological assessment of surgical specimens should be made according to the World Health Organization classification of tumours and the eighth edition of the TNM staging system [V, A; consensus = 100%].

For ESMO ‘recommendation 2e’, several of the Pan-Asian panel of experts pointed out that, if available, positron emission tomography (PET)–computed tomography (CT) scanning is only used if conventional methods, such as CT or bone scan-based methods, have proven inconclusive. Thus, the wording for ‘recommendation 2e’ was modified as per the bold text below and in to read as follows:

2e. [18F]2-fluoro-2-deoxy-D-glucose (FDG)-positron emission tomography (PET)–CT scanning may be an option for high-risk patients and when conventional CT/bone scan methods are inconclusive [II, B; consensus = 100%].

A proposed algorithm for the diagnostic work-up and staging of early breast cancer is presented in , available at https://doi.org/10.1016/j.esmoop.2024.102974 .
3 General management principles—recommendations 3a-v

The Pan-Asian panel of experts agreed with and accepted completely (100% consensus) the original ESMO recommendations, ‘recommendations 3a-c, e-h, j-k and n-u’, without change. While there was consensus amongst the Pan-Asian panel of experts regarding ESMO ‘recommendation 3d’, that age should not be the primary determinant of treatment decisions, there was some discussion that for very young patients age could be an important factor in addition to biology. Long-term follow-up data from the SOFT and TEXT trials, in premenopausal women with estrogen/progesterone receptor-positive (ER/PgR+) early breast cancer, showed 5 years of exemestane and ovarian function suppression (OFS) to significantly improve the 12-year overall survival (OS) in women under 35 years of age (by 4.0%). Despite these data, it was generally agreed that cancer stage and biology should always be the primary determinants of treatment decisions, although age is an important factor for patients with hormone receptor-positive/HER2-negative (HR+/HER2−) breast cancer. Therefore ‘recommendation 3d’ remained unchanged (100% consensus). There was a great deal of discussion around ESMO ‘recommendation 3i’ regarding the benefits of breast-conserving surgery (BCS) plus radiotherapy (breast-conserving therapy [BCT]) over radical mastectomy, due to a discrepancy in the data from Italian and Dutch studies. However, findings reported by the Korean Breast Cancer Registry, which evaluated 45 770 patients with early breast cancer, showed that the 10-year OS for those receiving BCT was better than for those receiving radical mastectomy (HR = 1.541; 95% CI = 1.392-1.707; P < 0.001). The breast cancer-specific survival rate was also better for the BCT cohort (HR = 1.541; 95% CI = 1.183-1.668; P < 0.001).
There was further discussion regarding women carrying a germline BRCA pathogenic variant (BRCA-positive), for whom mastectomy is frequently the preferred option in many regions of Asia. In a Chinese study investigating BCT in women with BRCA-positive breast cancer, the 5-year cumulative recurrence-free survival (RFS) was comparable for patients receiving BCT (HR = 0.95; 95% CI = 0.89-1.00) and those receiving mastectomy (HR = 0.93; 95% CI = 0.85-1.00), after adjustment for clinicopathological characteristics and systemic treatment. Within the BRCA-positive cohort there was no significant difference in disease-free survival (DFS) (HR = 1.17; 95% CI = 0.57-2.39; P = 0.68) or survival (HR = 1.44; 95% CI = 0.22-9.44; P = 0.70) for patients receiving BCT compared with those receiving mastectomy. These results are in line with a meta-analysis comparing BCT with mastectomy in BRCA-positive women, which concluded that survival outcomes are comparable between the two treatment options. It was therefore agreed that there is no clinical need for mastectomy with reconstruction, although it may still be the preferred treatment in regions such as the Philippines and Indonesia, where radiotherapy (RT) is not widely available in all hospitals and patients may not be willing or able to afford to travel to distant RT facilities. Also, in many regions of Asia, tumours are typically T2 and T3 at diagnosis, which it was felt may affect the relevance of findings from clinical trials, where tumours are typically smaller. ESMO ‘recommendation 3i’ was agreed, however, with the wording modified as per the bold text below and in to read as follows:

3i. BCS with post-operative RT is the recommended local treatment option for the majority of patients with early breast cancer (when compatible with patient preference and available resources) [I, A; consensus = 100%].
While there was consensus for ESMO ‘recommendations 3l and 3m’, it was highlighted that across Asia there is wide variation in stage at presentation. Less-developed regions are more likely to have patients presenting with later-stage breast cancer than more-developed regions. For example, more than half of patients present with stage III or IV breast cancer in India, compared with 76% presenting with stage I or II disease in South Korea. For those regions where advanced disease is more common, the relevance of ESMO ‘recommendations 3l and 3m’ ( , available at https://doi.org/10.1016/j.esmoop.2024.102974 ) was questioned. Regarding ‘recommendation 3l’, the long-term follow-up of the phase III IBCSG 23-01 randomised trial in patients with sentinel lymph node (SLN) micrometastases found that the DFS at 10 years was 76.8% (95% CI = 72.5-81.0) for patients who did not have axillary lymph node dissection (ALND) versus 74.9% (95% CI = 70.5-79.3) for patients who underwent ALND (HR = 0.85; 95% CI = 0.65-1.11; log-rank P = 0.24; P = 0.0024 for non-inferiority). It was thus agreed that further axillary surgery was not required in this group of patients, and the panel of Pan-Asian experts agreed with ‘recommendation 3l’, with a minor modification removing the word ‘eventually’, to read as below and in with 100% consensus:

3l. In the absence of prior primary systemic treatment (PST), patients with micrometastatic spread and those with limited SLN involvement (1-2 affected SLNs) in cN0 following BCS with subsequent whole-breast RT (WBRT) including the lower part of the axilla, and adjuvant systemic treatment, do not need further axillary surgery [II, A; consensus = 100%].

The Pan-Asian panel of experts agreed that routine ALND was not required for patients with breast cancer who, following SLN biopsy (SLNB), were found to have metastases to 1 or 2 SLNs. Thus ESMO ‘recommendation 3m’ was agreed with the minor modifications shown in bold below and in :

3m. ALND following positive SLNB with <3 involved SLNs is generally recommended only in the case of suspected high axillary disease burden, or with impact on further adjuvant systemic treatment decisions [II, A; consensus = 100%].

There was a robust discussion around ESMO ‘recommendation 3v’ (originally ‘recommendation 6d’ in , available at https://doi.org/10.1016/j.esmoop.2024.102974 ) and the administration of granulocyte colony-stimulating factor (G-CSF) with dose-dense schedules of ChT to reduce post-ChT febrile neutropenia. In a meta-analysis by the Early Breast Cancer Trialists’ Collaborative Group (EBCTCG), dose-dense ChT was found to provide a benefit over standard-schedule ChT for disease recurrence (10-year gain = 3.4%; 95% CI = 2.2% to 4.5%; log-rank 2P < 0.0001), breast cancer mortality (10-year gain = 2.4%; 95% CI = 1.3% to 3.4%; log-rank 2P < 0.0001) and all-cause mortality (10-year gain = 2.7%; 95% CI = 1.6% to 3.8%; log-rank 2P < 0.0001). Similar results were found in subgroup analyses based on ER and PgR status, HER2 status, grade, Ki-67 status and histological type. Furthermore, it was found that primary prophylaxis with G-CSF, mandated in all 2-weekly dose-dense adjuvant ChT schedules, led to lower levels of grade 3-4 neutropenia and neutropenic sepsis than in control arms. The benefits of prophylactic use of G-CSFs were also reported in a retrospective Japanese study investigating the use of G-CSF or pegfilgrastim (the pegylated form of the G-CSF analogue filgrastim) with perioperative ChT in patients with early breast cancer over a 10-year period from January 2010 to October 2020. It was noted that febrile neutropenia-related hospitalisations decreased in the second half of the study period despite the use of escalated regimens and that prophylactic pegfilgrastim likely contributed to this reduction [odds ratio (OR) = 0.879; 95% CI = 0.778-0.993; P = 0.0384].
Furthermore, a meta-analysis of the primary use of prophylactic G-CSF in trials using a docetaxel plus cyclophosphamide regimen found that the risk of febrile neutropenia was reduced by 92.3% with prophylactic G-CSF (pooled OR = 0.077; 95% CI = 0.013-0.460; P = 0.005). However, despite these results, there is still some question over the benefits of G-CSF in ICI-containing ChT regimens, and not all regions of Asia use dose-dense schedules for all subtypes of early breast cancer, for example node-negative disease. Thus, as a result of these discrepancies and the uncertainty over the benefits of G-CSF use with all ChT regimens, the GoR for ‘recommendation 3v’ was downgraded from ‘A’ to ‘B’ with 100% consensus, as shown in bold below and in :

3v. The use of dose-dense schedules of ChT, with granulocyte colony-stimulating factor (G-CSF) support, should be considered given their documented benefit over non-dose-dense schedules [I, B; consensus = 100%].

presents a proposed algorithm for the treatment of early breast cancer and presents a proposed algorithm for the management of axillary lymph node involvement.

4 Management of ER-positive/HER2-negative early breast cancer—recommendations 4a-l

The Pan-Asian panel of experts agreed with and accepted completely (100% consensus) the original ESMO recommendations, ‘recommendations 4a, b, d-f.1 and i-l’, without change. For ESMO ‘recommendation 4c’, the routine use of gene expression assays for guiding decisions on adjuvant ChT was questioned because, while the data of the West German Study Group Plan B trial demonstrated the potential for such assays in patient stratification, they are not routinely used or widely accessible throughout Asia. Similar concerns were raised regarding the accessibility and routine use of endocrine response assessment.
Therefore, while the Pan-Asian panel of experts agreed about the science of both gene expression assays and endocrine response assessment, they downgraded the GoR from ‘A’ to ‘B’ and modified the wording, changing the word ‘can’ to ‘may’, as shown in bold below and in , as follows:

4c. In cases of uncertainty about indications for adjuvant ChT (after consideration of all clinical and pathological factors), gene expression assays and/or endocrine response assessments may be used to guide decisions on adjuvant ChT [I, B; consensus = 100%].

There was a great deal of discussion around ESMO ‘recommendation 4g’ and the use of bisphosphonates in the early breast cancer setting. In the phase III AZURE trial, the use of the bisphosphonate zoledronic acid did not improve either the 7-year OS (adjusted HR = 0.93; 95% CI = 0.81-1.08; P = 0.37) or the invasive disease-free survival (iDFS) (HR = 0.93; 95% CI = 0.82-1.05; P = 0.22) rate compared with the control group for premenopausal and perimenopausal women, independent of ER status, tumour stage and lymph node involvement. Preclinical evidence suggests that the lack of efficacy of bisphosphonates in these women may be, at least in part, due to the levels of estrogens, and the Pan-Asian panel of experts therefore agreed that there was no benefit in treating premenopausal women with bisphosphonates, which could be detrimental for younger patients with reduced bone density. In the EBCTCG meta-analysis of randomised trials investigating adjuvant bisphosphonate treatment in early breast cancer, it was found that for postmenopausal women there was a significant reduction in disease recurrence (first-event rate ratio [RR] = 0.86; 95% CI = 0.78-0.94; 2p = 0.002), distant recurrence (RR = 0.82; 95% CI = 0.74-0.92; 2p = 0.0003), bone recurrence (RR = 0.72; 95% CI = 0.60-0.86; 2p = 0.0002) and breast cancer mortality (RR = 0.82; 95% CI = 0.73-0.93; 2p = 0.002).
However, there is no specific evidence of the effect that adjuvant bisphosphonate treatment has on disease recurrence in postmenopausal Asian women with early breast cancer and, while there was consensus that bisphosphonates should be used for treating postmenopausal women with treatment-related bone loss, it was suggested that bisphosphonates are not routinely used to prevent disease recurrence in Asia. As a result, the GoR for the use of bisphosphonates in patients at high risk of relapse was downgraded from ‘A’ to ‘B’ in ‘recommendation 4g’, as per the bold text below and in :

4g. Bisphosphonates are recommended in women without ovarian function (postmenopausal or undergoing OFS), especially if at high risk of relapse [I, B; consensus = 100%] or treatment-related bone loss [I, A; consensus = 100%].

For ESMO ‘recommendation 4h’ there was some discussion about whether the cyclin-dependent kinase 4/6 (CDK4/6) inhibitor ribociclib should also be incorporated into the recommendation, based on the exciting interim data from the phase III NATALEE trial in patients with HR+/HER2− early breast cancer, which evaluated adjuvant ribociclib with endocrine therapy versus endocrine therapy alone and showed the 3-year iDFS to be significantly longer in the combination group (90.4%) than with endocrine therapy alone (87.1%; P = 0.0014). However, because ribociclib has, at present, not been given approval for use in early breast cancer by either the US Food and Drug Administration (FDA) or the European Medicines Agency (EMA), the wording for ‘recommendation 4h’ remained unchanged (100% consensus). Recently reported results from a preplanned OS interim analysis of high-risk early breast cancer patients randomised to receive endocrine therapy for at least 5 years with or without the CDK4/6 inhibitor abemaciclib for 2 years showed the benefit of abemaciclib in terms of iDFS and distant RFS, with HRs of 0.68 (95% CI = 0.60-0.77) and 0.675 (95% CI = 0.59-0.77), respectively.
These data suggest that the addition of abemaciclib to endocrine therapy reduces the risk of a patient developing invasive disease and distant disease recurrence beyond the pivotal 5-year mark in the adjuvant setting. Follow-up of OS is ongoing. A proposed algorithm for the treatment of HR+/HER2− early breast cancer is presented in .

5 Management of HER2-positive early breast cancer—recommendations 5a-i

The Pan-Asian panel of experts agreed with and accepted completely (100% consensus) the original ESMO recommendations, ‘recommendations 5a-g and i’, without change. For ESMO ‘recommendation 5h’, the benefit of the addition of pertuzumab to trastuzumab for the adjuvant treatment of patients with HER2-positive breast cancer was discussed based on the findings of the phase III APHINITY trial, where the OS benefit at both the 6-year (HR = 0.85; 95% CI = 0.67-1.07; P = 0.17) and 8-year (HR = 0.83; 95% CI = 0.68-1.02; P = 0.078) follow-up failed to reach statistical significance. There was, however, a consistent improvement in iDFS, with 88.4% of patients in the pertuzumab group versus 85.8% of patients in the placebo group event-free at the 8-year follow-up, corresponding to an absolute benefit of 2.6% (95% CI for the difference = 0.7-4.6). Subgroup analysis of iDFS data based on node status revealed that patients with node-positive HER2-positive breast cancer receiving pertuzumab had a 4.53% difference in EFS at the 6-year follow-up (95% CI = 1.92-7.14) compared with those receiving placebo, whereas there was no clear benefit in node-negative patients (0.07% difference in iDFS event-free survival; 95% CI = −2.02-2.17). Analysis by HR status revealed a benefit for the addition of pertuzumab in both the HR+ (2.47% difference in iDFS event-free rate; 95% CI for the difference = −0.66-5.60) and HR− (3.0% difference in iDFS event-free rate; 95% CI for the difference = 0.76-5.23) subgroups.
Further stratification of the iDFS data revealed that while patients in the node-positive subgroup benefited from pertuzumab irrespective of whether they were HR+ (4.81% iDFS EFS; 95% CI = 1.59% to 8.03%) or HR− (4.10% iDFS EFS; 95% CI = −0.34% to 8.55%), there was no clear benefit for the node-negative subgroups (for the node-negative HR+ subgroup, iDFS EFS = 0.14%; 95% CI = −2.47% to 2.74%; and for the node-negative HR− subgroup, iDFS EFS = −0.05%; 95% CI = −3.85% to 3.47%). Thus, based on these results, the Pan-Asian panel of experts agreed with ESMO 'recommendation 5h' ( , available at https://doi.org/10.1016/j.esmoop.2024.102974 ) without modification with 100% consensus. presents an algorithm for the treatment of HER2-positive early breast cancer.

6 Management of TNBC—recommendations 6a-j.2

The Pan-Asian panel of experts agreed with and accepted completely (100% consensus) the original ESMO recommendations, 'recommendations 6a-e, g-i, j and j.1', without change.

Experts from three of the Asian medical societies disagreed with ESMO 'recommendation 6f' because it was felt that the benefit of adjuvant pembrolizumab for early TNBC is unclear, particularly with respect to pCR status. However, in the randomised phase III KEYNOTE-522 trial investigating the addition of pembrolizumab to neoadjuvant ChT in patients with early TNBC, the 5-year EFS was 81.3% (95% CI = 78.4% to 83.9%) in the pembrolizumab group compared with 72.3% (95% CI = 67.5% to 76.5%) in the placebo group. The distant disease progression- or distant RFS rates at 5 years were 84.4% for patients receiving pembrolizumab and 76.8% for patients receiving placebo (HR = 0.64; 95% CI = 0.49-0.84). Recently presented data from a prespecified, non-randomised, exploratory analysis reported 5-year EFS rates for the pembrolizumab and placebo groups of 92.2% versus 88.2% for patients with a pCR, and 62.6% versus 52.3% for patients without a pCR.
Thus, it was agreed that the original ESMO 'recommendation 6f', which read:

6f. Pembrolizumab should be administered every 3 weeks throughout the neoadjuvant phase [I, A] and for nine 3-week cycles during the adjuvant phase, regardless of pCR status or administration of RT [I, A; ESMO-MCBS v1.1 score: A]

should be modified to remove 'or administration of RT', which it was felt was unnecessary, although RT can be given with this combination, as shown below and in :

6f. Pembrolizumab should be administered every 3 weeks throughout the neoadjuvant phase [I, A] and for nine 3-week cycles during the adjuvant phase, regardless of pCR status [I, A; ESMO-MCBS v1.1 score: A; consensus = 100%].

The observation that poly(ADP-ribose) polymerase (PARP) inhibitors upregulate PD-L1 in breast cancer cells and synergise with ICIs in a syngeneic breast cancer tumour model provides a strong rationale for the combination of olaparib with ICIs in early TNBC. However, for ESMO 'recommendation 6i.1', concern was raised by members of the Pan-Asian panel of experts regarding the safety of combining the PARP inhibitor olaparib with ICIs. At present, there are no data for olaparib plus ICIs in early TNBC, but it is anticipated that the randomised phase II KEYLYNK-009 study, comparing the efficacy of adjuvant olaparib plus pembrolizumab with ChT plus pembrolizumab following induction with first-line ChT in patients with locally recurrent inoperable TNBC, will provide important data. Data regarding the safety of olaparib plus ICIs can be found in the phase Ib/II KEYNOTE-365 study of pembrolizumab plus olaparib in patients with metastatic castration-resistant prostate cancer, where it was reported that the treatment-related adverse events (TRAEs) for the combination were consistent with either agent alone.
Thus, the panel of experts agreed with ESMO 'recommendation 6i.1' but felt the recommendation needed more clarity regarding the recommended use of olaparib plus ICIs, and ESMO 'recommendation 6i.1', which read:

6i.1. The combination of ICIs and olaparib may be considered on an individual basis [V, C]

was amended to read as below and in , with the changes shown in bold (100% consensus):

6i.1. In patients with germline BRCA mutations with residual disease after ICI-containing neoadjuvant therapy, the concurrent adjuvant use of ICIs and olaparib may be considered on an individual basis [V, C; consensus = 100%].

As with 'recommendation 6i.1', there were some concerns about ESMO 'recommendation 6j.2' regarding safety. There were also doubts regarding the efficacy of the combination of pembrolizumab with capecitabine. The addition of adjuvant capecitabine after neoadjuvant ChT treatment was assessed in the Japanese/Korean CREATE-X study where, compared with the ChT-alone group, the addition of capecitabine was found to improve both DFS (69.8% versus 56.1%; HR for recurrence, second cancer or death = 0.58; 95% CI = 0.39-0.87) and the OS rate (78.8% versus 70.3%; HR for death = 0.52; 95% CI = 0.30-0.90) for patients with TNBC. The efficacy reported in the CREATE-X study was consistent with findings from a meta-analysis which found the addition of capecitabine to ChT improved DFS (HR = 0.818; 95% CI = 0.713-0.938; P = 0.004) and OS (HR = 0.778; 95% CI = 0.657-0.921; P = 0.004) in the TNBC subgroup. In addition, in a phase III trial conducted by the South China Breast Cancer Group, 1-year low-dose capecitabine maintenance therapy was found to significantly improve the 5-year DFS compared with the observation group (82.8% versus 73.0%; HR for risk of recurrence or death = 0.64; 95% CI = 0.42-0.95; P = 0.03); there was also a numerical improvement in the 5-year OS, but it was not significant (85.5% versus 81.3%; HR = 0.75; 95% CI = 0.47-1.19; P = 0.22).
Most toxicities from the combination of pembrolizumab and capecitabine in a phase II study in pretreated triple-negative and HR+/HER2− endocrine-refractory metastatic breast cancer were found to be low-grade and consistent with capecitabine monotherapy, including elevated liver tests, skin rash, fatigue, hand–foot syndrome and cytopenias. In this biomarker-unselected cohort, there was no improvement for the combination of pembrolizumab plus capecitabine [12-month progression-free survival (PFS) = 20.7%; 95% CI = 8.4% to 36.7%; 12-month OS = 63%; 95% CI = 43.2% to 77.6%] over historical data, but in a small phase Ib study of 14 patients investigating the early treatment of metastatic TNBC, the combination of pembrolizumab plus capecitabine showed superior response rates [overall response rate (ORR) = 43%] compared with pembrolizumab plus paclitaxel (ORR = 25%). Thus, while at present there are no data for the efficacy of ICIs plus capecitabine in the adjuvant setting for early TNBC, the panel agreed that ESMO 'recommendation 6j.2' should be modified to provide clarity over when the combination could be considered, to read as per the bold text below and in (100% consensus):

6j.2. In patients with residual disease after ICI-containing neoadjuvant therapy, the concurrent adjuvant use of ICI and capecitabine can be considered on an individual basis [V, C; consensus = 100%].

A proposed algorithm for the management of triple-negative early breast cancer is presented in .

7 Management of special situations—recommendations 7a-i

The Pan-Asian panel of experts agreed with and accepted completely (100% consensus) the original ESMO recommendations, 'recommendations 7a-h', without change.

For ESMO 'recommendation 7i', the survival benefit and safety of tamoxifen and aromatase inhibitors (AIs) following mastectomy for ductal carcinoma in situ (DCIS) in high-risk patients was discussed.
The benefit of AIs for breast cancer prevention was demonstrated in the international phase III IBIS-II trial comparing anastrozole with placebo in postmenopausal women at increased risk of developing breast cancer where, at 10 years, a 49% reduction in breast cancer was observed (HR = 0.51; 95% CI = 0.39-0.66; P < 0.0001). In this study, there were no significant differences in the major adverse events (AEs), except for a 28% reduction in the incidence of cancer outside the breast with anastrozole. In the 9-year follow-up of the phase III NSABP B-35 study of patients with DCIS undergoing lumpectomy plus radiotherapy, there was no significant DFS benefit for anastrozole compared with tamoxifen (HR = 0.89; 95% CI = 0.75-1.07; P = 0.21), but patients in the anastrozole group had a superior breast cancer-free interval compared with the tamoxifen group (84.7% versus 83.1%; HR = 0.73; 95% CI = 0.56-0.96; P = 0.023), particularly in patients who had invasive disease (HR = 0.62; 95% CI = 0.42-0.90; P = 0.0123). Patients in the anastrozole group also had a reduced incidence of contralateral breast cancer (HR = 0.64; 95% CI = 0.43-0.96; P = 0.0322) and again, this benefit over tamoxifen was more pronounced in those patients with invasive disease (HR = 0.52; 95% CI = 0.31-0.88; P = 0.0148). The only notable difference between the two groups in terms of AEs was thrombosis or embolism, a known side-effect of tamoxifen (2.7% versus 0.8% for the anastrozole group). Thus, based on these results, the Pan-Asian panel of experts agreed with ESMO 'recommendation 7i' ( , available at https://doi.org/10.1016/j.esmoop.2024.102974 ) without modification with 100% consensus.

8 Follow-up, long-term implications and survivorship—recommendations 8a-m

The Pan-Asian panel of experts agreed with and accepted completely (100% consensus) the original ESMO recommendations, 'recommendations 8a-c, e, g and i-m', without change.
It was felt that there was a discrepancy between real-world practice for testing asymptomatic patients in Asia and ESMO 'recommendation 8d'. Results from two Canadian retrospective chart reviews revealed the low diagnostic value of routine staging investigations, such as CT scans and bone scans, in asymptomatic early breast cancer patients. These were also the findings of two prospective trials comparing patients who received frequent laboratory tests, bone scans and chest roentgenography. Such findings, as well as studies demonstrating the use of unnecessary tests and screening, have led to many professional bodies publishing lists of tests and procedures that are unlikely to be of benefit to the patient. While it was agreed that over-testing can lead to overtreatment, there is a potential benefit for such tests in high-risk patients. Thus, ESMO 'recommendation 8d', which reads:

8d. In asymptomatic patients, laboratory tests (e.g. blood counts, routine chemistry, tumour marker assessment) or other imaging are not recommended [I, D]

was modified as per the bold text below and in , with a revision in the GoR, to read as follows:

8d. In asymptomatic patients, laboratory tests (e.g. blood counts, routine chemistry, tumour marker assessment) or other non-breast imaging for detection of relapse are not recommended [I, D] but may be considered on an individual basis [V, C; consensus = 100%].

Tamoxifen is associated with an increased risk of endometrial cancer in postmenopausal women, and the American College of Obstetricians and Gynecologists recommends that postmenopausal women taking tamoxifen should be closely monitored for symptoms of endometrial hyperplasia and cancer. However, it was felt that postmenopausal and higher-risk women would be treated with AIs and that endometrial hyperplasia can be misleading without vaginal bleeding.
It was also agreed, based on the study by Love and colleagues, that there was no evidence for the use of transvaginal ultrasound (US) for gynaecological examination in women taking tamoxifen. Thus, ESMO 'recommendation 8h' was modified, and the GoR was downgraded from:

8h. For patients on tamoxifen, an annual gynaecological examination is recommended [V, B]; however, routine transvaginal US is not recommended [V, D]

to read as per the bold text below, and in (100% consensus):

8h. For patients on tamoxifen, an annual gynaecological examination may be considered [V, C; consensus = 100%]; however, routine transvaginal US is not recommended [V, D].

presents a proposed algorithm for the adjuvant endocrine therapy in HR+ early breast cancer.

Screening, diagnosis, pathology and molecular biology—recommendations 1a-m

The Pan-Asian panel of experts agreed with and accepted completely (100% consensus) the original ESMO recommendations, 'recommendations 1b-f, 1g-k and m' (see , available at https://doi.org/10.1016/j.esmoop.2024.102974 ), without change.

In relation to 'recommendation 1a', based on data from the Korean Breast Cancer Society and the Korean Central Cancer Registry, the highest frequency of breast cancer in 2017 was observed in women 40-49 years of age, accounting for a third of all new cases. As mentioned previously in the 'Introduction', this is nearly 10 years earlier than that observed in Europe and America, suggesting that the ESMO-recommended age for mammography screening of 50-69 years is too late for Asian populations. This is supported by the breast screening guidelines for several regions of Asia, including Japan and Korea, which recommend breast cancer screening for women over the age of 40, while Taiwan and mainland China recommend breast cancer screening for all women with an average risk of breast cancer aged 45-69.
Furthermore, a Korean population-based study reported a 31.98% net benefit, in terms of breast cancer mortality reduction from breast screening, in women aged 45-49 years. A net benefit of 22.42% was also observed in women in the youngest age bracket (40-44 years). Taking into account the differences in the epidemiology of breast cancer observed across Asia and the benefit of breast cancer screening reported in the Korean study, the original ESMO 'recommendation 1a' ( , available at https://doi.org/10.1016/j.esmoop.2024.102974 ) was modified as per the bold text below and in (100% consensus), to read as follows:

1a. Regular (every 2 years) mammography screening is recommended in women aged 45-69 years [I, A]. Regular mammography may also be carried out in women aged 40-44 and 70-74 years, where there is emerging evidence of benefit [I, B; consensus = 100%].

For 'recommendation 1l', there was a great deal of discussion around the benefit of testing for programmed death-ligand 1 (PD-L1). This was particularly the case for therapeutic regimens that included immune checkpoint inhibitors (ICIs) in patients with early-stage triple-negative breast cancer (TNBC). However, the results of the phase III KEYNOTE-522 study in treatment-naïve patients with stage II/III TNBC found that the addition of pembrolizumab to a neoadjuvant chemotherapy (ChT) regimen improved pathological complete response (pCR) and event-free survival (EFS) rates (hazard ratio [HR] = 0.63; 95% confidence interval [CI] = 0.48-0.829), independent of PD-L1 status. Furthermore, the phase III IMpassion031 study found the addition of atezolizumab to a neoadjuvant ChT regimen of nab-paclitaxel, doxorubicin and cyclophosphamide to improve pCR compared with ChT plus placebo, independent of PD-L1 status.
Consequently, it was agreed that decisions regarding the inclusion of ICIs in treatment regimens were not likely to be affected by PD-L1 expression and, as a result, the wording for 'recommendation 1l' remained unchanged with 100% consensus.

Staging and risk assessment—recommendations 2a-e

The Pan-Asian panel of experts agreed with and accepted completely (100% consensus) the original ESMO recommendations, 'recommendations 2b-d', without change.

For ESMO 'recommendation 2a', the reference text to be used for staging was discussed because in Korea the seventh, and not the eighth, edition of the TNM Classification of Malignant Tumours is the preferred edition. There was also some discussion regarding how practical the whole staging paradigm of the eighth edition was for clinical practice. However, in the guidelines of the College of American Pathologists, TNM is a part of staging. It was thus decided to retain the eighth edition in the recommendation but to shorten the recommendation, removing 'Union for International Cancer Control tumour–node–metastasis' from the original ESMO 'recommendation 2a', to read as the text below and in :

2a. Disease stage and final pathological assessment of surgical specimens should be made according to the World Health Organization classification of tumours and the eighth edition of the TNM staging system [V, A; consensus = 100%].

For ESMO 'recommendation 2e', several of the Pan-Asian panel of experts pointed out that, if available, positron emission tomography (PET)–computed tomography (CT) scanning is only used if conventional methods, such as CT or bone scan-based methods, have proven inconclusive. Thus, the wording for 'recommendation 2e' was modified as per the bold text below and in to read as follows:

2e. [18F]2-fluoro-2-deoxy-D-glucose (FDG)-positron emission tomography (PET)–CT scanning may be an option for high-risk patients and when conventional CT/bone scan methods are inconclusive [II, B; consensus = 100%].
A proposed algorithm for the diagnostic work-up and staging of early breast cancer is presented in , available at https://doi.org/10.1016/j.esmoop.2024.102974 .

General management principles—recommendations 3a-v

The Pan-Asian panel of experts agreed with and accepted completely (100% consensus) the original ESMO recommendations, 'recommendations 3a-c, e-h, j-k and n-u', without change.

While there was consensus amongst the Pan-Asian panel of experts regarding ESMO 'recommendation 3d' that age should not be the primary determinant of treatment decisions, there was some discussion that for very young patients age could be an important factor in addition to biology. Long-term follow-up data from the SOFT and TEXT trials, in premenopausal women with estrogen/progesterone receptor-positive (ER/PgR+) early breast cancer, showed 5 years of exemestane and ovarian function suppression (OFS) to significantly improve the 12-year overall survival (OS) in women under 35 years of age (a 4.0% absolute improvement). Despite these data, it was generally agreed that cancer stage and biology should always be the primary determinants of treatment decisions, although age is an important factor for patients with hormone receptor-positive/HER2-negative (HR+/HER2−) breast cancer. Therefore, 'recommendation 3d' remained unchanged (100% consensus).

There was a great deal of discussion around ESMO 'recommendation 3i' regarding the benefits of breast-conserving surgery (BCS) plus radiotherapy (breast-conserving therapy [BCT]) over radical mastectomy, due to a discrepancy in the data from Italian and Dutch studies. However, findings reported by the Korean Breast Cancer Registry, which evaluated 45 770 patients with early breast cancer, showed that the 10-year OS for those receiving BCT was better than for those receiving radical mastectomy (HR = 1.541; 95% CI = 1.392-1.707; P < 0.001). The breast cancer-specific survival rate was also better for the BCT cohort (HR = 1.541; 95% CI = 1.183-1.668; P < 0.001).
There was further discussion regarding women carrying a germline BRCA pathogenic variant (BRCA-positive), for whom mastectomy is frequently the preferred option in many regions of Asia. In a Chinese study investigating BCT in women with BRCA-positive breast cancer, the 5-year cumulative recurrence-free survival (RFS) was comparable for patients receiving BCT (HR = 0.95; 95% CI = 0.89-1.00) and those receiving mastectomy (HR = 0.93; 95% CI = 0.85-1.00), after adjustment for clinicopathological characteristics and systemic treatment. Within the BRCA-positive cohort, there was no significant difference in disease-free survival (DFS) (HR = 1.17; 95% CI = 0.57-2.39; P = 0.68) or survival (HR = 1.44; 95% CI = 0.22-9.44; P = 0.70) for patients receiving BCT compared with those receiving mastectomy. These results are in line with a meta-analysis comparing BCT with mastectomy in BRCA-positive women, which concluded that survival outcomes are comparable between the two treatment options. It was therefore agreed that there is no clinical need for mastectomy with reconstruction, although it may still be the preferred treatment in regions such as the Philippines and Indonesia, where radiotherapy (RT) is not widely available in all hospitals and patients may not be willing or able to afford to travel to distant RT facilities. Also, in many regions of Asia, tumours are typically T2 and T3 at diagnosis, which it was felt may limit the relevance of findings from clinical trials, where tumours are typically smaller. ESMO 'recommendation 3i' was nevertheless agreed, with the wording modified as per the bold text below and in to read as follows:

3i. BCS with post-operative RT is the recommended local treatment option for the majority of patients with early breast cancer (when compatible with patient preference and available resources) [I, A; consensus = 100%].
While there was consensus for ESMO 'recommendations 3l and 3m', it was highlighted that across Asia there is a wide variation in stage at presentation. Less-developed regions are more likely to have patients presenting with later-stage breast cancer than more-developed regions. For example, more than half of patients present with stage III or IV breast cancer in India, compared with 76% presenting with stage I or II disease in South Korea. For those regions where advanced disease is more common, the relevance of ESMO 'recommendations 3l and 3m' ( , available at https://doi.org/10.1016/j.esmoop.2024.102974 ) was questioned. Regarding 'recommendation 3l', the long-term follow-up of the phase III IBCSG 23-01 randomised trial in patients with sentinel lymph node (SLN) micrometastases found the DFS at 10 years was 76.8% (95% CI = 72.5-81.0) for patients who did not have axillary lymph node dissection (ALND) versus 74.9% (95% CI = 70.5-79.3) for patients who underwent ALND (HR = 0.85; 95% CI = 0.65-1.11; log-rank P = 0.24; P = 0.0024 for non-inferiority). It was thus agreed that further axillary surgery was not required in this group of patients, and the panel of Pan-Asian experts agreed with 'recommendation 3l', with a minor modification removing the word 'eventually', to read as below and in with 100% consensus:

3l. In the absence of prior primary systemic treatment (PST), patients with micrometastatic spread and those with limited SLN involvement (1-2 affected SLNs) in cN0 following BCS with subsequent whole-breast RT (WBRT) including the lower part of the axilla, and adjuvant systemic treatment, do not need further axillary surgery [II, A; consensus = 100%].

The Pan-Asian panel of experts agreed that routine ALND was not required for patients with breast cancer who, following SLN biopsy (SLNB), were found to have metastases to 1 or 2 SLNs. Thus, ESMO 'recommendation 3m' was agreed with the minor modifications shown in bold below and in : 3m.
ALND following positive SLNB with <3 involved SLNs is generally recommended only in the case of suspected high axillary disease burden, or with impact on further adjuvant systemic treatment decisions [II, A; consensus = 100%].

There was a robust discussion around ESMO 'recommendation 3v' (originally recommendation 6d in , available at https://doi.org/10.1016/j.esmoop.2024.102974 ) and the administration of granulocyte colony-stimulating factor (G-CSF) with dose-dense schedules of ChT to reduce post-ChT febrile neutropenia. In a meta-analysis by the Early Breast Cancer Trialists' Collaborative Group (EBCTCG), dose-dense ChT was found to provide a benefit over standard-schedule ChT for disease recurrence (10-year gain = 3.4%; 95% CI = 2.2% to 4.5%; log-rank 2p < 0.0001), breast cancer mortality (10-year gain = 2.4%; 95% CI = 1.3% to 3.4%; log-rank 2p < 0.0001) and all-cause mortality (10-year gain = 2.7%; 95% CI = 1.6% to 3.8%; log-rank 2p < 0.0001). Similar results were found with subgroup analyses based on ER and PgR status, HER2 status, grade, Ki-67 status and histological type. Furthermore, it was found that primary prophylaxis with G-CSF, mandated in all 2-weekly dose-dense adjuvant ChT schedules, led to lower levels of grade 3-4 neutropenia and neutropenic sepsis than in control arms. The benefits of prophylactic use of G-CSFs were also reported in a retrospective Japanese study investigating the use of G-CSF or pegfilgrastim (the pegylated form of the G-CSF analogue filgrastim) with perioperative ChT in patients with early breast cancer over a 10-year period from January 2010 to October 2020. It was noted that febrile neutropenia-related hospitalisations decreased in the second half of the study period despite the use of escalated regimens, and that prophylactic pegfilgrastim likely contributed to this reduction [odds ratio (OR) = 0.879; 95% CI = 0.778-0.993; P = 0.0384].
Furthermore, a meta-analysis of the primary use of prophylactic G-CSF in trials using a docetaxel plus cyclophosphamide regimen found the risk of febrile neutropenia was reduced by 92.3% with prophylactic G-CSF (pooled OR = 0.077; 95% CI = 0.013-0.460; P = 0.005). However, despite these results, there is still some question over the benefits of G-CSF in ICI-containing ChT regimens, and not all regions of Asia use dose-dense schedules for all subtypes of early breast cancer, for example, node-negative disease. Thus, as a result of these discrepancies and the uncertainty over the benefits of G-CSF use with all ChT regimens, the GoR for 'recommendation 3v' was downgraded from 'A' to 'B' with 100% consensus, as shown in bold below and in :

3v. The use of dose-dense schedules of ChT, with granulocyte colony-stimulating factor (G-CSF) support, should be considered given their documented benefit over non-dose-dense schedules [I, B; consensus = 100%].

presents a proposed algorithm for the treatment of early breast cancer and presents a proposed algorithm for the management of axillary lymph node involvement.

Management of ER-positive/HER2-negative early breast cancer—recommendations 4a-l

The Pan-Asian panel of experts agreed with and accepted completely (100% consensus) the original ESMO recommendations, 'recommendations 4a, b, d-f.1 and i-l', without change.

For ESMO 'recommendation 4c', the routine use of gene expression assays for guiding decisions on adjuvant ChT was questioned because, while the data of the West German Study Group Plan B trial demonstrated the potential for such assays in patient stratification, they are not routinely used or widely accessible throughout Asia. Similar concerns were raised regarding the accessibility and routine use of endocrine response assessment.
Therefore, while the Pan-Asian panel of experts agreed about the science of both gene expression assays and endocrine response assessment, they downgraded the GoR from 'A' to 'B' and modified the wording, changing the word 'can' to 'may', as shown in bold below and in , as follows:

4c. In cases of uncertainty about indications for adjuvant ChT (after consideration of all clinical and pathological factors), gene expression assays and/or endocrine response assessments may be used to guide decisions on adjuvant ChT [I, B; consensus = 100%].

There was a great deal of discussion around ESMO 'recommendation 4g' and the use of bisphosphonates in the early breast cancer setting. In the phase III AZURE trial, the use of the bisphosphonate zoledronic acid improved neither the 7-year OS (adjusted HR = 0.93; 95% CI = 0.81-1.08; P = 0.37) nor the invasive disease-free survival (iDFS) rate (HR = 0.93; 95% CI = 0.82-1.05; P = 0.22) compared with the control group for premenopausal and perimenopausal women, independent of ER status, tumour stage and lymph node involvement. Preclinical evidence suggests that the lack of efficacy of bisphosphonates in these women may be, at least in part, due to the levels of estrogens, and the Pan-Asian panel of experts therefore agreed that there was no benefit in treating premenopausal women with bisphosphonates, which could be detrimental for younger patients with reduced bone density. In the EBCTCG meta-analysis of randomised trials investigating adjuvant bisphosphonate treatment in early breast cancer, it was found that for postmenopausal women there was a significant reduction in disease recurrence (first-event rate ratio [RR] = 0.86; 95% CI = 0.78-0.94; 2p = 0.002), distant recurrence (RR = 0.82; 95% CI = 0.74-0.92; 2p = 0.0003), bone recurrence (RR = 0.72; 95% CI = 0.60-0.86; 2p = 0.0002) and breast cancer mortality (RR = 0.82; 95% CI = 0.73-0.93; 2p = 0.002).
However, there is no specific evidence of the effect that adjuvant bisphosphonate treatment has on disease recurrence in postmenopausal Asian women with early breast cancer and, while there was consensus that the use of bisphosphonates should be used for treating postmenopausal women with treatment-related bone loss, it was suggested that bisphosphonates are not routinely used to stop disease recurrence in Asia. As a result, the GoR for the use of bisphosphonates in patients at high risk of relapse was downgraded from ‘A’ to ‘B’ in ‘recommendation 4g’ as per the bold text below and in : 4g. Bisphosphonates are recommended in women without ovarian function (postmenopausal or undergoing OFS), especially if at high risk of relapse [I, B; consensus = 100% ] or treatment-related bone loss [I, A; consensus = 100%]. For ESMO ‘recommendation 4h’ there was some discussion about whether the cyclin-dependent kinase 4/6 (CDK4/6) inhibitor ribociclib should also be incorporated into the recommendation based on the exciting interim data from the phase III NATALEE trial in patients with HR+/HER2− early breast cancer which evaluated adjuvant ribociclib with endocrine therapy versus endocrine therapy alone which showed the 3-year iDFS to be significantly longer in the combination group (90.4%) compared with endocrine therapy alone (87.1%; P = 0.0014). However, because ribociclib has, at present, not been given approval for use in early breast cancer by either the US Food and Drug Administration (FDA) or European Medicines Agency (EMA), the wording for ‘recommendation 4h’ remained unchanged ( 100% consensus ). Recently reported results from a preplanned OS interim analysis of high-risk early breast cancer patients randomised to receive endocrine therapy for at least 5 years plus or minus the CDK4/6 inhibitor abemaciclib for 2 years showed the benefit of abemaciclib in terms of iDFS and distant RFS with HRs of 0.68 (95% CI = 0.60-0.77) and 0.675 (95% CI = 0.59-0.77), respectively. 
These data suggest that the addition of abemaciclib to endocrine therapy reduces the risk of a patient developing invasive disease and distant disease recurrence beyond the pivotal 5-year mark in the adjuvant setting. Follow-up of OS is ongoing. A proposed algorithm for treatment of HR+/HER2− early breast cancer is presented in . Management of HER2-positive early breast cancer—recommendations 5a-i The Pan-Asian panel of experts agreed with and accepted completely (100% consensus) the original ESMO recommendations, ‘recommendations 5a-g and i’, without change . For ESMO ‘recommendation 5h’ the benefit of the addition of pertuzumab to trastuzumab for the adjuvant treatment of patients with HER2-positive breast cancer was discussed based on the findings of the phase III APHINITY trial, where the OS benefit at both the 6-year (HR = 0.85; 95% CI = 0.67-1.07; P = 0.17) and 8-year (HR = 0.83; 95% CI = 0.68-1.02; P = 0.078) follow-up failed to reach statistical significance. , There was, however, a consistent improvement in iDFS where 88.4% of patients in the pertuzumab group versus 85.8% of patients in the placebo group were event-free at the 8-year follow-up, which corresponded to an absolute benefit of 2.6% (95% CI for the difference = 0.7-4.6). Subgroup analysis of iDFS data based on node status revealed that patients receiving pertuzumab with node-positive HER2-positive breast cancer had a 4.53% difference in EFS at the 6-year follow-up (95% CI = 1.92-7.14) compared to those receiving placebo, and there was no clear benefit seen in the node-negative patients (0.07% difference in iDFS event-free survival; 95% CI = −2.02-2.17). Analysis by HR status revealed that there was a benefit for addition of pertuzumab in both the HR+ (2.47% difference in iDFS event-free rate; 95% CI for the difference = −0.66-5.60) and HR− (3.0% difference in iDFS event-free rate; 95% CI for the difference = 0.76-5.23) subgroups. 
Further stratification of the iDFS data revealed that while patients in the node-positive subgroup benefited from pertuzumab irrespective of whether they were HR+ (4.81% iDFS EFS; 95% CI = 1.59% to 8.03%) or HR− (4.10% iDFS EFS; 95% CI = −0.34% to 8.55%), there was no clear benefit for the node-negative subgroups (for the node-negative HR+ subgroup, iDFS EFS = 0.14%; 95% CI = −2.47% to 2.74%; and for the node-negative HR− subgroup, iDFS EFS = −0.05%; 95% CI = −3.85% to 3.47%). Thus, based on these results, the Pan-Asian panel of experts agreed with ESMO ‘recommendation 5h’ ( , available at https://doi.org/10.1016/j.esmoop.2024.102974 ) without modification with 100% consensus . presents an algorithm for the treatment of HER2-positive early breast cancer.

Management of TNBC—recommendations 6a-j.2

The Pan-Asian panel of experts agreed with and accepted completely (100% consensus) the original ESMO recommendations, ‘recommendations 6a-e and g-i, j and j.1’ without change . Experts from three of the Asian medical societies disagreed with ESMO ‘recommendation 6f’ because it was felt that the benefit of adjuvant pembrolizumab for early TNBC is unclear, particularly with respect to pCR status. However, in the randomised phase III KEYNOTE-522 trial investigating the addition of pembrolizumab to neoadjuvant ChT in patients with early TNBC, the 5-year EFS was 81.3% (95% CI = 78.4% to 83.9%) in the pembrolizumab group compared with 72.3% (95% CI = 67.5% to 76.5%) in the placebo group. The distant disease progression-free or distant RFS rates at 5 years were 84.4% for patients receiving pembrolizumab and 76.8% for patients receiving placebo (HR = 0.64; 95% CI = 0.49-0.84). Recently presented data from a prespecified, non-randomised, exploratory analysis reported 5-year EFS rates for the pembrolizumab and placebo groups of 92.2% versus 88.2% for patients with a pCR, and 62.6% versus 52.3% for patients without a pCR.
Thus, it was agreed that the original ESMO ‘recommendation 6f’, which read:

6f. Pembrolizumab should be administered every 3 weeks throughout the neoadjuvant phase [I, A] and for nine 3-week cycles during the adjuvant phase, regardless of pCR status or administration of RT [I, A; ESMO-MCBS v1.1 score: A]

should be modified to remove ‘or administration of RT’, which it was felt was unnecessary, although RT can be given with this combination, as shown below and in :

6f. Pembrolizumab should be administered every 3 weeks throughout the neoadjuvant phase [I, A] and for nine 3-week cycles during the adjuvant phase, regardless of pCR status [I, A; ESMO-MCBS v1.1 score: A; consensus = 100%].

The observation that poly(ADP-ribose) polymerase (PARP) inhibitors upregulate PD-L1 in breast cancer cells and synergise with ICIs in a syngeneic breast cancer tumour model provides a strong rationale for the combination of olaparib with ICIs in early TNBC. However, for ESMO ‘recommendation 6i.1’, concern was raised by members of the Pan-Asian panel of experts regarding the safety of the combination of the PARP inhibitor, olaparib, with ICIs. At present, there are no data for olaparib plus ICIs in early TNBC but it is anticipated that the randomised phase II KEYLYNK-009 study comparing the efficacy of adjuvant olaparib plus pembrolizumab with ChT plus pembrolizumab following induction with first-line ChT in patients with locally recurrent inoperable TNBC will provide important data. Data regarding the safety of olaparib plus ICIs can be found in the phase Ib/II KEYNOTE-365 study of pembrolizumab plus olaparib in patients with metastatic castration-resistant prostate cancer, where it was reported that the treatment-related adverse events (TRAEs) for the combination were consistent with either agent alone.
Thus, the panel of experts agreed with ESMO ‘recommendation 6i.1’ but felt the recommendation needed more clarity regarding the recommended use of olaparib plus ICIs. ESMO ‘recommendation 6i.1’, which read:

6i.1 The combination of ICIs and olaparib may be considered on an individual basis [V, C]

was amended to read as below and in , with the changes shown in bold (100% consensus):

6i.1. In patients with germline BRCA mutations with residual disease after ICI-containing neoadjuvant therapy, the concurrent adjuvant use of ICIs and olaparib may be considered on an individual basis [V, C; consensus = 100%].

As with ‘recommendation 6i.1’, there were some concerns about ESMO ‘recommendation 6j.2’ regarding safety. There were also doubts regarding the efficacy of the combination of pembrolizumab with capecitabine. The addition of adjuvant capecitabine after neoadjuvant ChT treatment was assessed in the Japanese/Korean CREATE-X study where, compared with the ChT-alone group, the addition of capecitabine was found to improve both DFS (69.8% versus 56.1%; HR for recurrence, second cancer or death = 0.58; 95% CI = 0.39-0.87) and the OS rate (78.8% versus 70.3%; HR for death = 0.52; 95% CI = 0.30-0.90) for patients with TNBC. The efficacy reported in the CREATE-X study was consistent with findings from a meta-analysis which found addition of capecitabine to ChT improved DFS (HR = 0.818; 95% CI = 0.713-0.938; P = 0.004) and OS (HR = 0.778; 95% CI = 0.657-0.921; P = 0.004) in the TNBC subgroup. In addition, in a phase III trial conducted by the South China Breast Cancer Group, 1-year low-dose capecitabine maintenance therapy was found to significantly improve the 5-year DFS compared to the observation group (82.8% versus 73.0%; HR for risk of recurrence or death = 0.64; 95% CI = 0.42-0.95; P = 0.03), and there was also a numerical improvement in the 5-year OS but it was not significant (85.5% versus 81.3%; HR = 0.75; 95% CI = 0.47-1.19; P = 0.22).
Most toxicities from the combination of pembrolizumab and capecitabine in a phase II study in pretreated triple-negative and HR+/HER2− endocrine-refractory metastatic breast cancer were found to be low-grade and consistent with capecitabine monotherapy, including elevated liver tests, skin rash, fatigue, hand–foot syndrome and cytopenias. In this biomarker-unselected cohort, there was no improvement for the combination of pembrolizumab plus capecitabine [12-month progression-free survival (PFS) = 20.7%; 95% CI = 8.4% to 36.7%; 12-month OS = 63%; 95% CI = 43.2% to 77.6%] over historical data, but in a small phase Ib study consisting of 14 patients that investigated the early treatment of metastatic TNBC, the combination of pembrolizumab plus capecitabine showed superior response rates [overall response rate (ORR) = 43%] compared with pembrolizumab plus paclitaxel (ORR = 25%). Thus, while at present there are no data for the efficacy of ICIs plus capecitabine in the adjuvant setting for early TNBC, the panel agreed that ESMO ‘recommendation 6j.2’ should be modified to provide clarity over when the combination could be considered, to read as per the bold text below and in (100% consensus):

6j.2. In patients with residual disease after ICI-containing neoadjuvant therapy, the concurrent adjuvant use of ICI and capecitabine can be considered on an individual basis [V, C; consensus = 100%].

A proposed algorithm for the management of triple-negative early breast cancer is presented in .

Management of special situations—recommendations 7a-i

The Pan-Asian panel of experts agreed with and accepted completely (100% consensus) the original ESMO recommendations, ‘recommendations 7a-h’ without change . For ESMO ‘recommendation 7i’, the survival benefit and safety of tamoxifen and aromatase inhibitors (AIs) following mastectomy for ductal carcinoma in situ (DCIS) in high-risk patients was discussed.
The benefit of AIs for breast cancer prevention was demonstrated in the international phase III IBIS-II trial comparing anastrozole with placebo in postmenopausal women at increased risk of developing breast cancer where, at 10 years, a 49% reduction in breast cancer was observed (HR = 0.51; 95% CI = 0.39-0.66; P < 0.0001). In this study, there were no significant differences in the major AEs, except for a 28% reduction in the incidence of cancer outside the breast with anastrozole. In the 9-year follow-up of the phase III NSABP B-35 study of patients with DCIS undergoing lumpectomy plus radiotherapy, there was no significant DFS benefit for anastrozole compared with tamoxifen (HR = 0.89; 95% CI = 0.75-1.07; P = 0.21), but patients in the anastrozole group had a superior breast cancer-free interval compared with the tamoxifen group (84.7% versus 83.1%; HR = 0.73; 95% CI = 0.56-0.96; P = 0.023), particularly in patients who had invasive disease (HR = 0.62; 95% CI = 0.42-0.90; P = 0.0123). Patients in the anastrozole group also had a reduced incidence of contralateral breast cancer (HR = 0.64; 95% CI = 0.43-0.96; P = 0.0322) and, again, this benefit over tamoxifen was more pronounced in those patients with invasive disease (HR = 0.52; 95% CI = 0.31-0.88; P = 0.0148). The only notable difference between the two groups in terms of AEs was thrombosis or embolism, which is a known side-effect of tamoxifen (2.7% versus 0.8% for the anastrozole group). Thus, based on these results, the Pan-Asian panel of experts agreed with ESMO ‘recommendation 7i’ ( , available at https://doi.org/10.1016/j.esmoop.2024.102974 ) without modification with 100% consensus .

Follow-up, long-term implications and survivorship—recommendations 8a-m

The Pan-Asian panel of experts agreed with and accepted completely (100% consensus) the original ESMO recommendations, ‘recommendations 8a-c, e, g and i-m’ without change .
It was felt that there was a discrepancy between the real-world practice for testing asymptomatic patients in Asia and ESMO ‘recommendation 8d’. Results from two Canadian retrospective chart reviews revealed the low diagnostic value of routine staging investigations, such as CT scans and bone scans, in asymptomatic early breast cancer patients. These were also the findings of two prospective trials in which patients received frequent laboratory tests, bone scans and chest roentgenography. Such findings, as well as studies demonstrating the use of unnecessary tests and screening, have led to many professional bodies publishing lists of tests and procedures that are unlikely to be of benefit to the patient. While it was agreed that overtesting can lead to overtreatment, there is a potential benefit for such tests in high-risk patients. Thus, ESMO ‘recommendation 8d’, which reads:

8d. In asymptomatic patients, laboratory tests (e.g. blood counts, routine chemistry, tumour marker assessment) or other imaging are not recommended [I, D]

was modified as per the bold text below and , with a revision in the GoR, to read as follows:

8d. In asymptomatic patients, laboratory tests (e.g. blood counts, routine chemistry, tumour marker assessment) or other non-breast imaging for detection of relapse are not recommended [I, D] but may be considered on an individual basis [V, C; consensus = 100%].

Tamoxifen is associated with an increased risk of endometrial cancer in postmenopausal women and the American College of Obstetricians and Gynecologists recommends that postmenopausal women taking tamoxifen should be closely monitored for symptoms of endometrial hyperplasia and cancer. However, it was felt that postmenopausal and higher-risk women would be treated with AIs and that endometrial hyperplasia can be misleading without vaginal bleeding.
It was also agreed, based on the study by Love and colleagues, that there was no evidence for the use of transvaginal ultrasound (US) for gynaecological examination in women taking tamoxifen. Thus, ESMO ‘recommendation 8h’ was modified, and the GoR was downgraded, from:

8h. For patients on tamoxifen, an annual gynaecological examination is recommended [V, B]; however, routine transvaginal US is not recommended [V, D]

to read as per the bold text below, and in (100% consensus):

8h. For patients on tamoxifen, an annual gynaecological examination may be considered [V, C; consensus = 100%]; however, routine transvaginal US is not recommended [V, D].

presents a proposed algorithm for adjuvant endocrine therapy in HR+ early breast cancer.

Applicability of the recommendations

Following the hybrid virtual/face-to-face meeting in Seoul, the Pan-Asian panel of experts agreed and accepted completely (100% consensus) the revised ESMO recommendations for the diagnosis, treatment and follow-up of early breast cancer in patients of Asian ethnicity . However, the applicability of each of the guideline recommendations is impacted by the individual drug and testing approvals and reimbursement policies for each region. The drug and treatment availability for the regions represented by the 10 participating Asian oncological societies is summarised in , available at https://doi.org/10.1016/j.esmoop.2024.102974 , and individually for each region in , available at https://doi.org/10.1016/j.esmoop.2024.102974 . Throughout Asia, most health care provision relies on both public and private insurance. In poorer regions public funding is more limited than in richer regions and patients are more likely to pay ‘out of pocket’ for both biomarker-related diagnostic tests and drugs.
, available at https://doi.org/10.1016/j.esmoop.2024.102974 , provides an overview of the availability of biomarker-related tests and drugs for the diagnosis and treatment of early breast cancer, revealing that the majority are approved in most regions of Asia. In terms of biomarker-related diagnostic tests, immunohistochemistry (IHC) assays, with the frequent exception of PD-L1, are, to some extent, covered by public health care provision in all regions of Asia, whereas genetic testing and next-generation sequencing (NGS)-based assays do not tend to be reimbursed. However, in regions where there is a disparity with the provision of oncology services, for example, in India, standardised laboratories for the provision of diagnostic tests are only located in the first- and second-tier cities. With the exceptions of neratinib (which is not approved for the treatment of early breast cancer in Indonesia, Japan, the Philippines and Thailand) and ribociclib (which is not approved for the treatment of early breast cancer in Japan and Korea), drugs for the treatment of early breast cancer have been approved across all regions of Asia, although there may be differences in the indications they are approved for (e.g. trastuzumab is approved solely for metastatic disease in Indonesia, whereas in Taiwan approval is for LN+2 disease). Although many drugs for the treatment of early breast cancer are approved across Asia, a major limitation to their provision by the public sectors of the different regions is affordability. In mainland China (China), the health care system is covered by social insurance for 80% of the population while 10% of the population have private insurance.
Biomarker-related diagnostic tests, including IHC assessment of ER, progesterone receptor (PgR), Ki67 and HER2, as well as HER2 in situ hybridisation, are covered by insurance, meaning that the 10% of patients without insurance will be out of pocket for these tests ( , available at https://doi.org/10.1016/j.esmoop.2024.102974 ). There is, however, no reimbursement for PD-L1 IHC, germline or somatic mutation analysis or gene expression risk signature assays. Those without insurance are the only patients likely to be out of pocket for trastuzumab, trastuzumab emtansine (T-DM1) and neratinib, but there is no reimbursement in China for drugs such as abemaciclib, ribociclib, olaparib, pertuzumab and pembrolizumab ( , available at https://doi.org/10.1016/j.esmoop.2024.102974 ). In China, the pan-HER receptor tyrosine kinase inhibitor pyrotinib is approved for the neoadjuvant treatment of early breast cancer. It is estimated that it takes around 1 year for drugs to be approved in China after they have received FDA or EMA approval, and it can take a further 3 months for new drugs to become available. The biggest limiting factor around accessing new treatments is whether they are covered by insurance, and the availability of new biomarker-related diagnostic tests in hospitals is the greatest limitation on access for patients. The health care system in Indonesia is weak, with limited funding and resources. This is further aggravated by a lack of awareness among patients and health care providers. National insurance covers the cost of IHC for ER, PgR, HER2 and Ki-67 but does not cover PD-L1 IHC, HER2 in situ hybridisation or gene expression assays ( , available at https://doi.org/10.1016/j.esmoop.2024.102974 ). Sequencing for germline or somatic BRCA1/2 mutations is also not covered and, in Indonesia, NGS is only applied for BRCA1/2 mutations.
While most drugs used for the treatment of early breast cancer are available in Indonesia, their prices make them unaffordable for national insurance and, depending on the drugs, private insurance and employers/social insurance may not cover the cost ( , available at https://doi.org/10.1016/j.esmoop.2024.102974 ). For example, trastuzumab is only covered by national insurance for metastatic breast cancer but, for the estimated 20% of the population with private insurance, the cost of trastuzumab is covered for early breast cancer. The bureaucracy of the Indonesian Food and Drug Authority (BPOM) is one of the biggest factors limiting access to new treatments and new biomarker-related diagnostic tests. The average time for approval following EMA/FDA approval is roughly 2 years and it can take, on average, a further 2 years for new drugs to become available for use in Indonesia following national approval. In India, both private and public health care systems exist and it is estimated that 60% of health expenditure in India is private, comprising private insurance, which is taken out by <20% of the population, and out-of-pocket expenses. The public health system has various government schemes which cover up to 40% of total health expenditure. With 30% to 40% of the population covered by employers/social insurance schemes, 40% to 50% of patients will be out of pocket for biomarker assays and drugs. In terms of biomarker tests, IHC for ER, PgR, Ki67, PD-L1 and HER2 expression, as well as HER2 in situ hybridisation, are fully reimbursed, whereas gene expression assays and genetic testing, including somatic and germline testing for BRCA1/2 mutations, are not ( , available at https://doi.org/10.1016/j.esmoop.2024.102974 ). One of the main challenges for provision of those assays that are reimbursed is that standardised labs are only located in first- and second-tier cities in India.
Most drugs for treating early breast cancer have been given approval in India, with full reimbursement available for those who are covered by insurance. In India, it can take between 1 and 5 years for drugs to be approved following EMA or FDA approval. The length of time to approval is affected by the complexity of the drug and the presence of the pharmaceutical company in India. Once approval has been given, it can take several months to a year for new drugs to become available due to factors such as manufacturing, distribution and reimbursement. Furthermore, access to new treatments and biomarker-related diagnostic tests is affected by cost, health inequities and infrastructure, as well as insurance, geographical location and cultural factors. A lack of knowledge and awareness by health care practitioners in smaller towns in India greatly affects the prescription of diagnostic tests. The Japanese health care system relies on a combination of public and private providers and emphasises preventive care, leading to one of the highest life expectancies and lowest infant mortality rates in the world. All citizens are required to have health insurance, either through their employers or the government, and ∼40% of patients have private insurance to cover cancer treatment in addition to universal health care insurance. As a result of this system, very few patients pay entirely out of pocket but typically will pay a portion (0% to 30%) of costs. Most diagnostic tests for breast cancer are available in Japan, although the only gene expression risk signature assay that currently has approval and is reimbursed is the Oncotype Dx assay, which patients are expected to pay for upfront before receiving a reimbursement of 70% or more of the cost ( , available at https://doi.org/10.1016/j.esmoop.2024.102974 ). NGS assays for somatic mutations and IHC for PD-L1 are only indicated for patients with metastatic disease.
At present, ribociclib and neratinib are not approved in Japan for the treatment of early breast cancer but the oral fluoropyrimidine S-1, which comprises a combination of tegafur, gimeracil and oteracil potassium, has approval for the adjuvant treatment of high- and intermediate-risk HR+ HER2− early breast cancer ( , available at https://doi.org/10.1016/j.esmoop.2024.102974 ). Regulatory approval of diagnostic tests by the Pharmaceuticals and Medical Devices Agency (PMDA) in Japan can be a rigorous and time-consuming process in which manufacturers must demonstrate the safety and efficacy of these diagnostic tests. Access to new treatments and the specific timeline for a new drug’s availability in Japan can vary widely depending on the drug’s complexity, market demand and various regulatory and commercial considerations. In general, new drugs may be reimbursed <6 months after approval by the PMDA. In Korea, coverage of health care costs is provided to all Korean citizens, including foreigners who have lived in Korea for >6 months, by the National Health Insurance (NHI) system. However, in addition to the NHI coverage, patients with private insurance can have part of their health care costs covered, including those for non-reimbursed, expensive new drugs, depending on their insurance policy. Typically, only 10% of patients in Korea pay in full (out of pocket) for their treatment, with 15% covered by private insurance and the remaining 75% of patients covered by employers’ or social insurance. Cancer patients are categorised as having ‘serious disease’, with 95% of costs covered for most biomarker-related diagnostic tests, including IHC for ER, PgR and Ki67 as well as HER2 in situ hybridisation and BRCA1/2 mutation analysis by Sanger sequencing.
For NGS-based sequencing, there is partial reimbursement, with patients with stage I-II disease paying 90% and patients with stage III disease paying 80% of costs, and there is no reimbursement for gene expression risk signature assays ( , available at https://doi.org/10.1016/j.esmoop.2024.102974 ). Both trastuzumab and T-DM1 are covered by NHI, meaning most patients will not be ‘out of pocket’, whereas for abemaciclib, olaparib, neratinib and pembrolizumab, which are approved for the treatment of early breast cancer, there is no reimbursement. This is also the case for pertuzumab in the adjuvant setting, although 70% of the cost will be reimbursed for neoadjuvant pertuzumab ( , available at https://doi.org/10.1016/j.esmoop.2024.102974 ). With the emergence of many expensive drugs, the limited resources of the NHI budget are becoming a major issue and the biggest limiting factor to accessing new treatments is reimbursement, with the requirement for more self-payment. This is because Korea has been categorised as a developed region, resulting in the costs of drugs being set at a much higher level than previously. In relation to diagnostic tests, the companion diagnostics associated with newer drugs require specific machines which are not available in the pathology labs of all hospitals. There is also a need for greater standardisation of certain diagnostic tests across the different treatment centres and laboratories throughout Korea. In Malaysia there is a dual health care system consisting of a limited but fully funded health care system provided by the Ministry of Health (MOH) Hospitals and University Hospitals, which is available for everyone, and a private health care system which provides services to patients who are insured or willing to pay, with no reimbursement from the government.
While certain innovator drugs are listed in the MOH formulary for the respective indications, their prescriptions are subject to very strict MOH criteria and the annual budget allocations. For example, trastuzumab is only indicated for stage II-III early breast cancer and prescribed for up to a maximum of nine cycles, while ribociclib use in metastatic HR+ HER2− cancer is restricted to the first-line setting only and available for a limited number of patients per year. There is, however, a shortage of oncology specialists and an imbalance in the distribution of oncology facilities across Malaysia. Approximately 65% of the population of Malaysia, including members of the civil service and those without health care insurance, receive treatment subsidised by the MOH, but patients treated at government facilities have the option to access private centres for diagnostic tests that are not covered by the MOH health care system. The same is also true for drugs that are not covered by the MOH, where patients can purchase them for treatment at an MOH hospital. Diagnostic tests that are available free of charge through the MOH include IHC for ER, PgR and HER2, as well as HER2 FISH ( , available at https://doi.org/10.1016/j.esmoop.2024.102974 ), although turnaround time may be long. Germline testing for BRCA1/2, NGS-based assays and IHC for PD-L1 are not available through the MOH, meaning that patients either need insurance to cover the costs or they will be out of pocket ( , available at https://doi.org/10.1016/j.esmoop.2024.102974 ). It takes ∼1 year for a drug that has received FDA approval to be approved by the MOH drug bureau, although when drugs are approved by either the FDA or EMA, they can be obtained immediately via a special import licence allowed by the MOH. The health care system in the Philippines is primarily a mix of public and private health care providers.
It consists of government-run hospitals, local health units and an extensive network of private health care facilities which collectively strive to provide health care services to Filipinos. Social insurance (PhilHealth) costs 110 USD per person and 95% of the population use it. However, it is barely enough to cover anticancer medicines. In the Philippines, ∼20% of patients with early breast cancer will receive reimbursement for biomarker-related diagnostic tests, including IHC for ER, PgR and HER2 expression, which are available through government hospitals only and not reimbursed for private patients. IHC for PD-L1 expression is available through patient programmes and is not reimbursed, nor is HER2 in situ hybridisation, which is only available to 60% of patients. Sanger sequencing for BRCA1/2 mutations is available at a 50% reduced cost through an existing patient programme, while NGS for somatic mutations is only accessible to half of patients, with no reimbursement ( , available at https://doi.org/10.1016/j.esmoop.2024.102974 ). Most drugs are available through patient access programmes although there is no reimbursement, with the exception of trastuzumab, for which half of the cost is reimbursed through patient access programmes. Thanks to the 2018 Philippine National Cancer Control Act, any drugs that are given approval in other countries will be streamlined for approval in the Philippines and it takes, on average, between 4 and 12 months for new drugs to become available. Cost and affordability are the biggest factors for accessing new drugs and biomarker-related tests. There is also limited access to new biomarker-related diagnostic tests and tools, which are only available in specialised centres. The health care system in Singapore is funded by both public and private insurance.
The public system is funded through individual enforced savings (MediSave) and national health insurance which consists of three tiers: basic [MediShield Life (MSHL)], the Integrated Shield Plan (ISP; a tie-up with private insurance) and the Enhanced Integrated Shield Plan (EISP; a tie-up with private insurance + riders). It is estimated that over half of Singapore citizens are covered by ISP. All IHC assays and selected FISH panels for early breast cancer diagnostics are entirely covered by the health care system, whereas genetic and gene expression profiling, including germline and somatic mutation screening, are not reimbursed ( , available at https://doi.org/10.1016/j.esmoop.2024.102974 ). In 2022 the Cancer Drug List (CDL) was created; it is updated monthly and lists the drugs deemed cost-effective according to accepted health technology assessment methods. Drugs on the CDL are covered by MSHL and ISP, whereas drugs not on the CDL can be covered by the EISP. It is estimated that 90% of cancer drugs in common usage are on the CDL, with all drug costs for early breast cancer covered by the health care system in Singapore. Time to approval for new drugs to treat early breast cancer is typically <6 months from the time of EMA or FDA approval and they become available within about a month following approval. The biggest limiting factor for the health care system in Singapore is the provision of genetic and transcriptional assays and, at present, it is being assessed whether they should be covered by national health insurance. In Taiwan, nearly 100% of the population are covered by National Health Insurance (NHI). The monthly out-of-pocket payments for NHI are relatively low, although the financial coverage for reimbursement by NHI in Taiwan is basically ‘all-or-none’ ( , available at https://doi.org/10.1016/j.esmoop.2024.102974 ). The financial burden is huge and expected to increase further in the era of immuno-oncology and precision medicine.
Therefore, despite approval by the Taiwan FDA, which is largely a scientific evaluation based on the design and results of the individual pivotal trials, reimbursement is based on cost-effectiveness, the availability of other medications for the same indication and future budget burden. Sequencing and NGS-based assays are not reimbursed but, with the exception of PD-L1, IHC-based diagnostic tests for early breast cancer are ( , available at https://doi.org/10.1016/j.esmoop.2024.102974 ). Targeted therapies for treating early breast cancer are currently not reimbursed in Taiwan except for trastuzumab and biosimilars. With no co-payment system, the biggest limiting factor with regard to accessing the newer treatment therapies and diagnostic tests in Taiwan is the necessity for patient out-of-pocket payment. Thailand has three national health insurance schemes [Civil Servant Medical Benefit Scheme (CSMBS), Social Security Scheme (SSS) and Universal Coverage Scheme (UCS)], with beneficiaries from different sectors. All three Thai schemes allow the use of drugs in the national list of essential medicines, with expanded benefits for individuals covered by the CSMBS. Basic drug accessibility is afforded by the two other Thai schemes. In terms of biomarker-related diagnostic tests for early breast cancer, IHC for ER, PgR, Ki-67 and HER2, but not PD-L1, are covered ( , available at https://doi.org/10.1016/j.esmoop.2024.102974 ). Those patients covered by the SSS (∼20% of the population) are not reimbursed for germline BRCA1/2 mutation analysis and there is no reimbursement at all for NGS or gene expression assays. It is estimated that <1% of the population will be out of pocket for drug costs. It takes ∼2 years for a new drug to be approved in Thailand once it has been approved by the EMA or FDA and between 6 and 8 months for new indications of previously approved drugs.
Once approval has been given for drugs in Thailand, it can take 3-6 months for them to become available due to supply management and hospital listings, but this will be for use without reimbursement. It can take years for a drug that has been approved to be added to the list of indications that are reimbursed. This is especially the case for high-cost drugs. The biggest limiting factors for accessing new treatments and diagnostic tests are financial, including reimbursement issues. Another limiting factor for diagnostic tests in Thailand is the turn-around time. The results of the voting by the Asian experts both before and after the hybrid virtual/face-to-face meeting in Seoul showed >85% concordance with the ESMO recommendations for the diagnosis, treatment and follow-up of patients with early breast cancer ( , available at https://doi.org/10.1016/j.esmoop.2024.102974 ). Following the ‘face-to-face’ discussions, revisions were made to the wording of ‘recommendations 1a, 2e, 3i, 3l, 3m, 4c, 6f, 6i.1, 6j.2, 8d and 8h’, and for ‘recommendations 3v, 4c, 4g, 8d and 8h’ the GoR was downgraded at least for part of the recommendation, resulting in a 100% consensus being achieved in terms of acceptability for all the recommendations listed in . After the consensus meeting, revisions to the wording of ‘recommendations 1e, 1g, 1i, 1m, 5b, 5c, 6c and 7d’ were made to make them consistent with the revisions requested by the reviewers of the original ESMO guidelines. These recommendations therefore constitute the consensus clinical practice guidelines for the diagnosis, treatment and follow-up of patients with early breast cancer in Asia.
The variations between the different regions in the availability to patients of diagnostic testing and drugs, and therefore in treatment possibilities, reflect the differences in the organisation of their healthcare systems and their reimbursement strategies, and will have a significant impact on the implementation of the scientific recommendations in certain regions of Asia. It is therefore anticipated that these guidelines may be used to guide policy initiatives to improve access for all patients with early breast cancer, across the different regions of Asia, to state-of-the-art cancer care, including enrolment in clinical trials, while recognising the constraints imposed by the heterogeneous socioeconomic situations of the different countries and regions of Asia.
Evaluation of compliance with the "do not do" recommendations of the Spanish Society of Preventive Medicine and Public Health (Sociedad Española de Medicina Preventiva y Salud Pública)
To date, almost all scientific knowledge has been directed at analysing and evaluating healthcare interventions that should be performed on patients. However, there is evidence that certain diagnostic and therapeutic practices and patterns of care are inefficient, unsafe or unnecessary and add no relevant value for the patient. Reducing these practices is an efficiency measure, and efficient clinical decision-making is an ethical commitment reflected in various professional codes. Over the last decade, scientific and professional societies have taken an interest in improving healthcare and have developed several projects to this end. In 2009, the institutional initiatives "Choosing Wisely" of the National Physicians Alliance, through the American Board of Internal Medicine, and "Less is more" of the American Medical Association were created, under which scientific societies propose their five main "do not do" recommendations, facilitating shared decision-making in clinical practice and promoting efficiency. Similarly, and in parallel, since 2007 the National Institute for Clinical Excellence (NICE) has identified clinical practices that it recommends against ("Do not do") because they provide no benefit, their risk-benefit balance is unclear or there is insufficient evidence. In 2013, the project "Compromiso por la Calidad de las Sociedades Científicas en España" (Commitment to Quality of the Scientific Societies in Spain), also known as "No hacer" ("Do not do"), emerged in Spain as an initiative of the Spanish Society of Internal Medicine.
The Ministry of Health, Consumer Affairs and Social Welfare launched the project with the aim of agreeing a set of evidence-based "do not do" recommendations with the various scientific societies. Each society appointed a panel of experts, which then selected its five recommendations using the Delphi method. Beyond reducing unnecessary medical interventions and iatrogenic harm, the project seeks to reduce variability, promote safety in clinical practice and disseminate the recommendations to guide decision-making. In 2018, the Spanish Society of Preventive Medicine, Public Health and Hygiene (SEMPSPH), which had joined the project, presented the following five recommendations:
- "Do not remove hair systematically to reduce the risk of surgical site infection." If hair removal is necessary, use suitable clippers (electric razors, hair clippers, chemical depilation).
- "Do not continue antibiotics for more than 24-48 hours in hospitalised patients", unless there is clear evidence of infection.
- "Testing for Clostridium difficile toxin is not recommended in asymptomatic patients."
- "Do not routinely use nasal decontamination with topical antimicrobial agents aimed at eliminating Staphylococcus aureus to reduce the risk of surgical site infection, whether for cardiac or orthopaedic procedures."
- "Routine replacement of peripheral venous catheters every 72-96 hours is not recommended."
The objective of this study was to evaluate compliance with the five "do not do" recommendations proposed by the SEMPSPH among patients treated at the Hospital Universitario de La Princesa in Madrid. A quality-of-care evaluation study was designed following Palmer's methodology, the PDCA evaluation cycle, which comprises four phases:
- Planning, or definition of objectives (Plan).
- Study design and data collection (Do).
- Data analysis (Check).
- Implementation of corrective or improvement measures (Act).
First, the dimension of quality to be evaluated was defined, namely scientific-technical quality. Next, the criteria, the measuring instrument used to evaluate quality, were drawn up. The five recommendations proposed by the SEMPSPH for the "do not do" project were established as the criteria. These recommendations had been developed in stages by a panel of 25 experts appointed by the scientific society, using the Delphi method. The panel drew up a preliminary list of 15 evidence-based "do not do" recommendations, obtained mainly from clinical practice guidelines. A Delphi technique was then applied in which the panellists rated and ranked each recommendation on a scoring scale, with consensus reached through a mathematical procedure for aggregating individual judgements based on the median and the interquartile range. Exceptions were defined, i.e. circumstances in which compliance with the criterion was not required, and the Global Quality Index (IC), expressed as a percentage, was used as the indicator to measure each criterion and allow its interpretation. Thus, for Recommendation 1 ("Do not remove hair systematically to reduce the risk of surgical site infection"), the criterion was defined as no shaving of the surgical site, with the exception of shaving on medical indication. The indicator was the Global Quality Index (IC = patients not shaved / total patients operated on), expressed as a percentage.
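The Delphi aggregation described above, based on the median and interquartile range of the panellists' scores, can be sketched as follows. The 1-9 rating scale, the consensus thresholds and the sample ratings are illustrative assumptions, not values taken from the SEMPSPH panel.

```python
import statistics

def delphi_consensus(scores, median_min=7.0, iqr_max=2.0):
    """Aggregate individual panel judgements: accept a recommendation when the
    median score is high enough and the interquartile range (IQR) is narrow
    enough. Both thresholds are illustrative assumptions."""
    med = statistics.median(scores)
    q1, _, q3 = statistics.quantiles(scores, n=4)  # quartiles of the ratings
    return med >= median_min and (q3 - q1) <= iqr_max

# Hypothetical ratings from a 10-member panel on a 1-9 scale
print(delphi_consensus([8, 9, 7, 8, 8, 9, 7, 8, 9, 8]))  # high median, narrow IQR
print(delphi_consensus([2, 9, 3, 8, 1, 9, 2, 7, 9, 5]))  # scattered scores, no consensus
```

In a real Delphi exercise this rule would be applied per recommendation across successive rounds, with panellists seeing the aggregate between rounds.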
For Recommendation 2 ("Do not continue antibiotics for more than 24-48 hours in hospitalised patients, unless there is clear evidence of infection"), the criterion was defined as continuation of antibiotic treatment for less than 24-48 hours in the absence of infection, with the exception of continuing antibiotic treatment beyond 48 hours in the presence of clear signs of infection. The indicator was the Global Quality Index (IC = patients not continuing antibiotic treatment / total patients operated on), expressed as a percentage. For Recommendation 3 ("Testing for Clostridium difficile toxin is not recommended in asymptomatic patients"), the criterion was defined as testing for Clostridium difficile toxin in symptomatic patients. The indicator was the Global Quality Index (IC = symptomatic patients / total requests for Clostridium difficile toxin testing), expressed as a percentage. For Recommendation 4 ("Do not routinely use nasal decontamination with topical antimicrobial agents aimed at eliminating Staphylococcus aureus to reduce the risk of surgical site infection, whether for cardiac or orthopaedic procedures"), the criterion was defined as no nasal decontamination with topical antimicrobial agents against Staphylococcus aureus. The indicator was the Global Quality Index (IC = patients not decontaminated / total patients undergoing cardiac and orthopaedic procedures), expressed as a percentage. For Recommendation 5 ("Routine replacement of peripheral venous catheters every 72-96 hours is not recommended"), the criteria were defined as being a patient with a peripheral venous catheter and no routine replacement of the catheter within 72-96 hours, with the exception of replacement every 72-96 hours on medical indication.
The indicator was the Global Quality Index (IC = patients without catheter replacement / total patients with peripheral venous catheters), expressed as a percentage. It was established that compliance should be high, ideally close to 100%, so the standard, the value considered acceptable, was set at 100%. An observational, prospective and descriptive study was designed to evaluate compliance with each recommendation. The study ran from 1 December 2018 to 31 January 2019. Patients were included according to criteria specific to each recommendation. Recommendation 1: "Do not remove hair systematically to reduce the risk of surgical site infection". Hospitalised patients who underwent surgery at the Hospital Universitario de La Princesa during this period were included; non-hospitalised patients undergoing major outpatient surgery were excluded. From the 580 eligible patients, the sample size was estimated to guarantee a 95% confidence level with 5% precision and an expected incidence of 50%, requiring the study of 231 interventions. The sample was obtained by simple random sampling among the included patients. The surgical intervention register, the computerised medical record and the patient's surgical preparation checklist were used as data sources. A data collection sheet was prepared, with demographic variables (age and sex) and process variables (department, shaving of the surgical site and surgical procedure performed). Recommendation 2: "Do not continue antibiotics for more than 24-48 hours in hospitalised patients, unless there is clear evidence of infection". Hospitalised patients who underwent surgery at the hospital and received prophylactic antibiotic treatment were included.
Non-hospitalised patients undergoing major outpatient surgery and patients who did not receive prophylactic antibiotic treatment were excluded. From the 421 eligible patients, the sample size was estimated to guarantee a 95% confidence level with 5% precision and an expected incidence of 50%, requiring the study of 201 interventions. The sample was obtained by simple random sampling among the included patients. The computerised medical record was used as the data source, and patients were followed up during their hospital stay. A data collection sheet was prepared with demographic variables (age and sex) and process variables: date of intervention, surgical procedure, surgical classification (clean, clean/contaminated, contaminated or dirty/infected), antibiotic prophylaxis used, duration of prophylaxis and signs of infection. Recommendation 3: "Testing for Clostridium difficile toxin is not recommended in asymptomatic patients". Requests for Clostridium difficile toxin testing for patients of the hospital and of health centres attached to the hospital's Microbiology laboratory during this period were included; patients not tested for Clostridium difficile toxin were excluded. From 401 requests, the sample size was estimated to guarantee a 95% confidence level with 5% precision and an expected incidence of 50%, requiring the study of 196 requests. The sample was obtained by simple random sampling among the included requests. The data source was the list of Clostridium difficile toxin tests performed in the Microbiology laboratory during the established period.
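The sample sizes quoted throughout this section (231 of 580, 201 of 421, 196 of 401, and later 167 of 295 and 153 of 255) are all consistent with the standard formula for estimating a proportion with a finite-population correction. The sketch below reproduces them, assuming the authors truncated the result to the integer below, which is what the reported figures suggest.

```python
import math

def sample_size(population, z=1.96, precision=0.05, expected_incidence=0.5):
    """Sample size for estimating a proportion: 95% confidence (z = 1.96),
    5% precision and 50% expected incidence, with a finite-population
    correction applied to the infinite-population size n0."""
    n0 = z**2 * expected_incidence * (1 - expected_incidence) / precision**2
    n = n0 / (1 + (n0 - 1) / population)  # finite-population correction
    return math.floor(n)  # truncation is an assumption that matches the reported figures

for population in (580, 421, 401, 295, 255):
    print(population, "->", sample_size(population))
# 580 -> 231, 421 -> 201, 401 -> 196, 295 -> 167, 255 -> 153
```

The uncorrected size n0 is about 384; the correction matters precisely because each eligible population here is of the same order of magnitude.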
A data collection sheet was prepared with demographic variables (age and sex) and process variables: centre from which the test request originated, requesting department, date of request and result of the Clostridium difficile toxin test. Recommendation 4: "Do not routinely use nasal decontamination with topical antimicrobial agents aimed at eliminating Staphylococcus aureus to reduce the risk of surgical site infection, whether for cardiac or orthopaedic procedures". Patients undergoing cardiac procedures and patients undergoing orthopaedic procedures at the hospital were included; patients undergoing other types of surgery were excluded. From the 295 eligible patients, the sample size was estimated to guarantee a 95% confidence level with 5% precision and an expected incidence of 50%, requiring the study of 167 interventions. The sample was obtained by simple random sampling among the included patients. The computerised medical record and the patient's surgical preparation checklist were used as data sources. A data collection sheet was prepared, with demographic variables (age and sex) and process variables (department and nasal decontamination with antimicrobial agents). Recommendation 5: "Routine replacement of peripheral venous catheters every 72-96 hours is not recommended". Patients hospitalised for 72 hours or more who had a peripheral venous catheter were included; non-hospitalised patients, patients hospitalised for less than 72 hours and patients without a peripheral venous catheter were excluded.
From the 255 eligible patients, the sample size was estimated to guarantee a 95% confidence level with 5% precision and an expected incidence of 50%, requiring the study of 153 patients. The sample was obtained by simple random sampling among the included patients. The computerised medical record was used as the data source, and patient care was followed up during the hospital stay. A data collection sheet was prepared, with demographic variables (age and sex) and process variables (admitting hospital department, length of stay, nursing care of the peripheral venous catheter, catheter replacement and replacement interval). The data obtained for each patient were tabulated in an Excel file, and five databases were created, one for evaluating compliance with each recommendation. Data were analysed with Excel 2011 and the Epi Info 7 statistical package. Quantitative variables were described as mean and standard deviation; qualitative variables were described as number of cases and percentage. Values of p<0.05 were considered statistically significant. With this method, a Global Quality Index (IC = patients meeting the criterion / total patients) was obtained for each study unit, making it possible to check whether the previously defined standards, i.e. compliance with the recommendations, had been met and, if not, to analyse possible causal factors. Throughout the study, the basic ethical principles set out in the Belmont Report were respected: respect for persons, beneficence, non-maleficence and distributive justice.
Likewise, the applicable legal requirements were complied with, as contained in the following laws: Law 41/2002, of 14 November, the basic law regulating patient autonomy and rights and obligations regarding clinical information and documentation; Law 14/2007, of 3 July, on Biomedical Research; and Organic Law 3/2018, of 5 December, on the Protection of Personal Data and guarantee of digital rights. Access to the information was authorised by the data controller, and obtaining informed consent from each patient was not considered necessary as this was a study to assess quality of care. The project was reviewed by the Research Ethics Committee of the Hospital Universitario de La Princesa, which issued a favourable report before the study began.
Recommendation 1. A total of 231 patients were included; 63.6% of the operated patients were men and 36.4% women. Mean age was 64.7 years (SD 16 years). The observed incidence of patients who were not shaved and therefore met the criterion was 83.55% (95% CI: 78.77-88.33%; 193 cases). The incidence of patients who were shaved was 16.45% (95% CI: 11.67-20.46%; 38 cases), all of them men. These patients were shaved on medical indication because the hair interfered with the surgical incision, and appropriate equipment was used in all cases; compliance with the criterion was therefore not required in these cases. The highest incidence of shaved patients was observed in the Haemodynamic Cardiology Department (10.39% of the total), with 24 cases (63.2%). Of these, 22 patients underwent cardiac catheterisation and 2 had a permanent pacemaker inserted. The table shows the incidence of hair removal in the departments that performed surgery during the study period. A Global Quality Index was obtained as the proportion of patients meeting the previously established criterion of no hair removal (193), including the allowed exceptions (38), out of the total number of patients included, giving a result of 100% (95% CI: 98.27-100%; 231 cases). Compliance with the previously defined standard, and hence with the recommendation, was therefore confirmed.
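The Global Quality Index and the incidence confidence intervals reported for Recommendation 1 can be reproduced with a simple proportion and a normal-approximation (Wald) interval. This is a sketch of the arithmetic, not the authors' actual Epi Info computation; note the interval reported for the 100% index itself (98.27-100%) would need an exact method, since the Wald formula collapses at p = 1.

```python
import math

def global_quality_index(compliant, exceptions, total):
    """Global Quality Index (IC): compliant cases plus allowed exceptions,
    as a percentage of all included cases."""
    return 100 * (compliant + exceptions) / total

def wald_ci95(successes, total):
    """Normal-approximation (Wald) 95% confidence interval for a proportion, in %."""
    p = successes / total
    half = 1.96 * math.sqrt(p * (1 - p) / total)
    return 100 * (p - half), 100 * (p + half)

# Recommendation 1: 193 non-shaved patients plus 38 allowed exceptions, out of 231
print(global_quality_index(193, 38, 231))  # 100.0
print(wald_ci95(193, 231))                 # about (78.77, 88.33), as reported
```

The same two functions reproduce, for example, the Recommendation 2 index of 93.53% (188 of 201) with an interval close to the reported 90.09-96.91%.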
Recommendation 2. A total of 201 patients were included, with a mean age of 63.7 years (SD 17.1 years); 53.2% were men and 46.8% women. The observed incidence of patients who did not continue antibiotic treatment beyond 24-48 hours and therefore met the criterion was 81.59% (95% CI: 76.23-86.95%; 164 cases). Of these, 77 cases (46.95% of the patients meeting the criterion and 38.31% of the total) finished treatment within 24 hours of surgery, and 87 patients completed antibiotic prophylaxis within 48 hours, representing 53.04% of the patients meeting the criterion and 43.28% of all included patients. The percentage of patients who continued antibiotic prophylaxis beyond 48 hours was 18.41%. The incidence of infection was 4.98% (95% CI: 1.97-7.98%; 10 cases), with the highest incidence in the General and Digestive Surgery Department (4 cases), two of them colon operations and ileostomies. A further 1.49% of the included cases (3 patients) were on antibiotic treatment at the time of surgery, and 11 patients (5.47%) underwent maxillofacial and otorhinolaryngological surgery classified as dirty. These 24 cases, 11.94% of the included patients, were considered exceptions in which compliance with the criterion was not required. The incidence of patients not meeting the criterion was 6.47% (95% CI: 3.07-9.87%; 13 cases). In these patients, prophylaxis was given for more than 48 hours after the end of surgery, without clear evidence of infection or any other reason justifying the duration of antibiotic prophylaxis.
The highest incidence of non-compliance with the criterion was observed in the Neurosurgery Department, with 3.48% of the included patients, accounting for 53.81% of the patients not meeting the criterion. The main procedures performed in these patients were two craniotomies and two transsphenoidal approaches. A Global Quality Index of 93.53% was determined (95% CI: 90.09-96.91%; 188 cases), obtained as the proportion of patients meeting the criterion (164), including the accepted exceptions (24), out of the number of patients included in the study. Compliance was below the previously defined standard, representing an opportunity for improvement. Recommendation 3. A sample of 200 requests for Clostridium difficile toxin testing was selected; 50% of the patients were men and 50% women. Mean age of the included patients was 67.7 years (SD 19.9 years). Of the requests, 79.50% were made within the Hospital Universitario de La Princesa, the Emergency Department being the largest requester (33 requests), followed by Internal Medicine (25). A further 12% of the requests came from Primary Care clinics. Finally, the hospital outpatient clinics accounted for 8.5% of the requests, of which 3.5% (7 requests) came from the Digestive Department outpatient clinics. Toxin testing was requested in 187 patients (93.5%) in whom it was indicated because they were symptomatic (unformed stools). Of the indicated requests, 3.5% gave a positive result (7 cases) and 90% were negative (180 cases). The incidence of requests not meeting the criterion because they were not indicated was 6.5% (95% CI: 3.08-9.92%; 13 requests). In two cases (1%), the request was not indicated because of formed stools; these requests were made by the Haematology outpatient clinic (1) and the Emergency Department (1).
A further 4.5% of the requests (9 cases) were received in duplicate, and in 1% (2 cases) there was an error in the reception of the sample. The Global Quality Index was 93.5% (95% CI: 90.08-96.92%; 187 cases), determined as the proportion of patients meeting the criterion of toxin testing in indicated patients (187) out of the total number of requests. Compliance was below the previously defined standard, representing an opportunity for improvement. Recommendation 4. A total of 167 patients were included. Nasal decontamination was not listed as a preoperative measure in the surgical preparation protocol of the Hospital Universitario de La Princesa, and no data on it were found in the computerised medical records. The operating theatre supervisors and the pharmacy department were consulted and confirmed that this topical prophylaxis was not used. Compliance of 100% (95% CI: 97.6-100%) could therefore be assumed. Recommendation 5. A total of 153 patients were studied; 41.2% were women and 58.8% men. Mean age was 67.1 years (SD 16.5 years). The observed incidence of patients with a peripheral venous catheter that was not routinely replaced within 72 hours, and who therefore met the criterion, was 83% (127 patients). In 66% of the patients (101), the catheter was never replaced, with the highest incidence in the Traumatology Department (16.34%). The incidence of replacement after more than 72-96 hours was 17%; the main causes were extravasation (7.84%) and phlebitis (3.92%). Catheters were replaced within 72 hours in 23 patients (15.03%) on medical indication, the main cause being extravasation, in 11.11% of cases (17 patients). These were considered exceptions.
The percentage of patients who did not meet the criterion was 1.96% (3 patients), whose catheters were replaced routinely approximately every 72 hours, without a medical indication or any reason for replacement recorded in the medical record. This was observed in the Traumatology Department (1.31%) and the Angiology and Vascular Surgery Department (0.65%). A Global Quality Index was obtained as the proportion of patients meeting the criterion (127) plus the patients considered allowed exceptions (23), out of the total number of patients included, giving a result of 98.04% (95% CI: 94.12-99.35%). Compliance was below the previously defined standard, representing an opportunity for improvement.
The Global Quality Index, calculated as the proportion of patients who met the previously established criterion of no hair removal (193), including the allowed exceptions (38), over the total number of patients included, was 100% (95% CI: 98.27-100%; 231 cases). Compliance with the previously defined standard, and thus with the recommendation, was therefore confirmed.

Recommendation 2. A total of 201 patients were included, with a mean age of 63.7 years (SD 17.1 years); 53.2% were men and 46.8% women. The observed incidence of patients whose antibiotic treatment did not continue for more than 24-48 hours, and who therefore met the criterion, was 81.59% (95% CI: 76.23-86.95%; 164 cases). Of these, 77 cases (46.95% of compliant patients and 38.31% of the total) finished treatment within 24 hours of surgery, and 87 patients concluded antibiotic prophylaxis within 48 hours, representing 53.04% of compliant patients and 43.28% of all patients included. The percentage of patients who continued antibiotic prophylaxis for more than 48 hours was 18.41%. The observed incidence of infection was 4.98% (95% CI: 1.97-7.98%; 10 cases), highest in the General and Digestive Surgery Department with 4 cases, two of them colon operations and ileostomies. In addition, 1.49% of included cases (3 patients) were on antibiotic treatment at the time of surgery, and 11 patients (5.47%) underwent maxillofacial or otorhinolaryngological surgery classified as dirty procedures. These 24 cases, 11.94% of included patients, were considered exceptions for which compliance with the criterion was not required.
An incidence of 6.47% (95% CI: 3.07-9.87%; 13 cases) of patients who did not meet the criterion was observed. In these patients, prophylaxis was administered for more than 48 hours after the end of surgery, without clear evidence of infection or any other cause accounting for the duration of antibiotic prophylaxis. The highest incidence of non-compliance was in the Neurosurgery Department, at 3.48% of included patients, representing 53.81% of non-compliant patients; the main procedures involved were two craniotomies and two transsphenoidal approaches. A Global Quality Index of 93.53% (95% CI: 90.09-96.91%; 188 cases) was determined, calculated as the proportion of patients who met the criterion (164), including the accepted exceptions (24), relative to the number of patients included in the study. Compliance was below the previously defined standard, indicating an opportunity for improvement.

Recommendation 3. A sample of 200 requests for Clostridium difficile toxin testing was selected; 50% of the patients were men and 50% women, and the mean age was 67.7 years (SD 19.9 years). Of the requests, 79.5% were made within Hospital Universitario de La Princesa, the Emergency Department being the service with the most requests (33), followed by Internal Medicine (25); 12% of requests came from Primary Care clinics. Finally, hospital outpatient clinics accounted for 8.5% of requests, 3.5% of them (7 requests) from the Digestive Department outpatient clinics. Toxin testing was requested in 187 patients (93.5%) in whom it was indicated because of symptoms (non-formed stools).
A positive result was obtained in 3.5% of the indicated requests (7 cases) and a negative result in 90% (180 cases). The incidence of requests that did not meet the criterion because testing was not indicated was 6.5% (95% CI: 3.08-9.92%; 13 requests). In two cases (1%), the request was not indicated because of formed stools; these requests came from the Hematology outpatient clinic (1) and the Emergency Department (1). A further 4.5% of requests (9 cases) were received in duplicate, and in 1% of requests (2 cases) an error occurred in the reception of the sample. The Global Quality Index was 93.5% (95% CI: 90.08-96.92%; 187 cases), calculated as the proportion of patients who met the criterion of toxin testing requested only when indicated (187) relative to the total number of requests made. Compliance was below the previously defined standard, indicating an opportunity for improvement.
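The Global Quality Index reported for each recommendation is the proportion of criterion-compliant cases plus allowed exceptions over all included cases, with a 95% confidence interval. A minimal sketch of that arithmetic (assuming a Wald normal-approximation interval, which the article does not state explicitly but which reproduces the interval reported for the toxin-testing audit, 187 of 200 requests):

```python
from math import sqrt

def quality_index(compliant: int, exceptions: int, total: int, z: float = 1.96):
    """Global Quality Index: (compliant + allowed exceptions) / total,
    with a Wald (normal-approximation) 95% confidence interval."""
    p = (compliant + exceptions) / total
    half = z * sqrt(p * (1 - p) / total)  # Wald half-width; an assumption here
    return p, max(0.0, p - half), min(1.0, p + half)

# Recommendation 3: 187 indicated toxin requests out of 200, no exceptions
p, lo, hi = quality_index(187, 0, 200)
print(f"{p:.1%} (95% CI: {lo:.2%}-{hi:.2%})")  # 93.5% (95% CI: 90.08%-96.92%)
```

For the catheter audit, `quality_index(127, 23, 153)` gives the reported 98.04%; its reported interval (94.12-99.35%) is asymmetric, which suggests an exact or Wilson interval was used there instead.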
The "No hacer" ("do not do") project aims to reduce healthcare interventions that are not cost-effective, as well as those of doubtful or no efficacy and effectiveness, promoting collaboration among the scientific societies to achieve continuous improvement in quality of care. The SEMPSPH joined the "No hacer" project in 2017 and presented its 5 recommendations in 2018. Owing to the short time elapsed between the Society's presentation of the recommendations and this study evaluating them, no reference data were found against which to compare the results obtained in Spain.
Thus, regarding hair removal, which until now has been used as a presurgical preparation measure, the evidence shows not only no benefit in preventing surgical site infection (SSI) but also an increased risk of SSI due to the skin abrasions produced by shaving. No significant differences in SSI incidence are found between no hair removal and the use of an electric razor, clippers, or chemical depilation, so these techniques may be chosen when hair removal is considered essential because it interferes with the surgical site (NICE, 2008). This evidence is reflected in Recommendation 1, in the good-practice guidelines of the Community of Madrid, and likewise in the Surgical Preparation Protocol of Hospital Universitario de La Princesa. Full (100%) compliance with the recommendation means that in no case was hair removed unnecessarily, reducing potential iatrogenic harm. This reflects adequate dissemination of the protocol and leads to improved quality of care and patient safety. On the other hand, there is evidence that patients who receive antibiotics unnecessarily or inappropriately are at risk of serious adverse effects without any clinical benefit. Incorrect antibiotic use contributes to the growth of antibiotic resistance, a serious public health problem that has worsened over recent decades. In this study, compliance with Recommendation 2, although close to 100%, was incomplete (93.53%). This amounts to only a few patients (13 cases) in whom the recommendation was not followed over a two-month period; however, the number would be larger across the 5,467 operations performed throughout 2018.
Possible causal factors include inadequate dissemination of the scientific evidence and the variability in clinical practice across departments. Hospital programs dedicated to promoting the correct use of antibiotics have been shown both to optimize the treatment of infections and to reduce adverse effects, improving quality of care. Clostridium difficile infection appears to be changing, with cases becoming more frequent and more virulent. Accurate diagnosis is critical, yet certain interventions lead to misdiagnosis. One of them, addressed in Recommendation 3, is Clostridium difficile toxin testing in asymptomatic patients: results may be falsely positive, leading to overdiagnosis and overtreatment. Compliance in this study was 93.5%. Although incomplete, the Microbiology laboratory did not perform the analysis on samples for which it was not indicated, limiting unnecessary healthcare interventions. This translates into more accurate diagnosis, with fewer requests for patients without diarrhea or with only a single episode, and greater weight given to key elements of the patient's clinical history. It is worth noting that 90% of the indicated requests yielded a negative result. On the other hand, although 13 inappropriate requests out of 200 is a small number, extrapolated to the total number of requests made in a year it amounts to a considerable number of errors. The main cause of non-compliance was duplicate samples, reflecting unnecessary consumption of healthcare resources; the main causal factor may be a lack of coordination among members of the medical team.
Regarding Recommendation 4, the NICE guidelines support the use of nasal mupirocin in those cases in which it may be a cause of SSI, taking into account the type of procedure, the patient's risk factors, and the potential impact of infection, while monitoring the antimicrobial resistance associated with mupirocin use. At Hospital Universitario de La Princesa, nasal mupirocin is not among the measures included in the Surgical Preparation Protocol, a consequence of the protocol's update by several hospital departments on the basis of the available scientific evidence. Explicitly stating in the protocol that this measure is not to be performed would be an improvement action, facilitating both compliance and data collection in future studies. On the other hand, many patients receive medication, fluids, or nutrients intravenously through a peripheral venous catheter during their hospital stay. These catheters are often replaced every 72 to 96 hours to prevent infection or discomfort; however, this practice increases healthcare costs and subjects patients repeatedly to an invasive procedure. Several cost-effectiveness analyses conclude that catheter replacement on medical indication reduces costs compared with routine replacement. Therefore, although ideal compliance with Recommendation 5 would be 100%, compliance close to 98%, achieved through replacement on medical indication, is a clinically efficient measure that does not compromise health outcomes and does improve quality of care, since it contributes to the sustainability and improvement of the health system and of the hospital.
Overall, compliance close to 100% with the evaluated recommendations reflects adequate dissemination of the scientific evidence and a commitment to the appropriate use of healthcare resources at Hospital Universitario de La Princesa. The main objective of the "No hacer" project is thus met, limiting unnecessary healthcare interventions and thereby promoting clinical safety and quality of care. The good results for Recommendations 1 and 2 are associated with the updating and dissemination of the hospital's Surgical Preparation Protocol. Nevertheless, there is still room for improvement. Disseminating the evidence, and the results obtained at the hospital, is a corrective measure that facilitates compliance and raises health professionals' awareness of the commitment to quality of care and the efficient use of resources. The limitations of this study are those inherent to observational studies, together with the short period analyzed; for Recommendation 4, the absence of relevant information precluded its analysis. From the information and results of the study it can be concluded that:
- Quality-of-care studies, and the methodology used for their evaluation, are useful and provide relevant information.
- For some recommendations (R1 and R4), compliance with "No hacer" is 100%.
- For the remaining recommendations (R2, R3, and R5), compliance is high but does not reach 100%, so these are areas for improvement.
Overall, compliance with the "No hacer" recommendations at Hospital Universitario de La Princesa is high; however, the results cannot be compared with others owing to the lack of published similar studies. It is advisable to carry out this type of study periodically and to disseminate the results in order to facilitate compliance.
The aim is to improve compliance with those recommendations that do not reach 100% and to maintain it at an adequate level in those that do.
Combating climate-induced health threats through Co-Constitutive Risk (CCR) Messaging: A

One Health, as defined by the Centers for Disease Control and Prevention (CDC) and the World Health Organization (WHO), embodies an integrated approach that recognizes the interdependence of human, animal, and environmental health. Interdisciplinary and transdisciplinary efforts have employed this approach to study issues such as antimicrobial resistance, vector-borne diseases, food animal diseases, and water contamination. Limited research, however, has explored how the intricacies of One Health issues, specifically, can be effectively communicated to diverse audiences. To address this gap, we propose One Health Communication as a new approach to enhancing traditional health communication in these contexts.

One Health and Co-Constitutive Risks

According to the CDC, the prevalence of vector-borne diseases is increasing due to rising temperatures facilitating the expansion of mosquito and tick habitats. Global climate change facilitates the spread of mosquito-borne pathogens that cause several forms of hemorrhagic fever—including dengue fever—to new environments, including parts of the United States. For instance, the CDC documented several isolated outbreaks of dengue fever in the US over the past several years, in the continental US (e.g., Texas, 2013) and Hawaii (in 2015). The increased prevalence in the US of what researchers call “neglected tropical diseases”, like dengue fever, could represent a major public health concern. Dengue, specifically, is concerning not only due to the disease’s high mortality rate, as virtually all those reinfected with the virus ultimately succumb to the illness, but because there is no known cure for the disease.
Combating the climate change-induced public health challenges of diseases like dengue, therefore, entails preventing disease onset by pursuing policies that either (a) rely on pre-existing vaccines to form the basis of mass childhood vaccination campaigns and/or (b) make an effort to mitigate rising global temperatures attributable to climate change.

Communicating risks to increase policy support

Although a frequent goal of health communication campaigns is to promote individual health-related behaviors (e.g., smoking cessation), such campaigns also can aim to increase support for specific policies that, if enacted, intervene at a societal level (e.g., ending the sale of flavored tobacco products, requiring labels on tobacco products). One way that typical health communication messages achieve these goals is by communicating potential harms and/or the relative risk of those harms. Traditional messages aiming to mitigate the spread of dengue may choose to emphasize the relative risks of contracting it. A One Health Communication approach, however, emphasizes the interconnected, or co-constitutive, nature of the risks inherent in One Health issues. For example, such an approach could highlight both the immediate risks of contracting dengue as well as the broader impact of climate change on those risks and, thus, aim to encourage support for policies targeting both mitigation of the disease and of climate change. A variety of theoretical frameworks have been used to design messages and examine their effectiveness in health-related contexts.
Although messages and campaigns focused on changing health behaviors are often informed by the Health Belief Model or the Theory of Planned Behavior, those focused on increasing or measuring policy support are often informed by (versions of) Framing Theory and/or differential receptiveness theories such as identity-protective reasoning and the cultural cognition theory of risk, particularly as it is generally accepted that policy support is heavily influenced by cultural worldviews and political ideology. In this study, we examine the potential effectiveness of a One Health Communication approach, co-constitutive risk messaging in the context of dengue fever, to influence support for vaccination policies to mitigate the potential proximal harms of dengue and green energy policies to mitigate the broader impacts of climate change. Because our outcome of interest is policy support, and not specific health behaviors, our study is informed by the cultural cognition theory of risk and message framing.

Cultural cognition theory of risk and message framing

The Cultural Cognition Theory of Risk posits that individuals’ perceptions of risks and benefits associated with various policies, technologies, and societal choices are shaped by their cultural values and group identities. That is, people are motivated to adopt beliefs about societal risks that resonate with the views and interests of groups with which they identify, reinforcing their connections within these groups. Though there are many ways in which values and worldviews could be ordered or grouped, this theory typically categorizes cultural values along two dimensions: from egalitarian to hierarchist and from individualist to communitarian.
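The two-dimensional typology described here is often operationalized by scoring respondents on each dimension and splitting at a cutoff. A purely illustrative sketch (the scale names, scoring, and cutoffs below are hypothetical, not the validated cultural cognition instruments):

```python
def worldview_quadrant(hierarchism: float, individualism: float,
                       h_cut: float = 0.0, i_cut: float = 0.0) -> str:
    """Map two centered worldview scores onto the four quadrants of the
    cultural cognition typology (illustrative median-split classification)."""
    h = "hierarchist" if hierarchism > h_cut else "egalitarian"
    i = "individualist" if individualism > i_cut else "communitarian"
    return f"{h}-{i}"

print(worldview_quadrant(1.2, 0.8))    # hierarchist-individualist
print(worldview_quadrant(-0.5, -1.1))  # egalitarian-communitarian
```

The two labels printed here correspond to the groups contrasted in the Zika framing studies discussed below.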
Greater egalitarianism reflects prioritization of equality, whereas greater hierarchism reflects preference for a structured distribution of roles; greater individualism reflects emphasis on personal freedoms and responsibility, whereas greater communitarianism prioritizes collective responsibility and well-being. In the current study, we focus only on the individualist to communitarian dimension. Understanding cultural orientations can help in crafting effective messages by aligning the framing of risks and interventions with targeted audiences’ worldviews. Again, cultural worldviews significantly influence how individuals perceive risks and assess the acceptability of interventions aimed at mitigating these risks, consistent with motivated reasoning. People with a strong individualist orientation may respond more positively to messages that frame risks in terms of personal consequences rather than collective dangers. Such individuals also may be more likely to resist interventions that they perceive as infringing on their personal freedoms, such as mandatory masking or bans on plastic straws. On the other hand, those with communitarian inclinations may be more receptive to framing that emphasizes the collective impact of risks and may show greater willingness to accept restrictions on their personal freedoms if these are seen as benefiting the wider community. A key challenge with co-constitutive risk messaging, however, is the potential for boomerang effects, where messages that bundle different risks can lead different audiences to perceive a lower risk than intended. For example, experimental studies examining media frames of the Zika virus showed that when the risk was linked to global climate change, participants who scored higher on hierarchism and individualism (i.e., hierarchist-individualists) tended to downplay Zika’s dangers compared to when they were presented with information solely about its public health risks.
However, these same individuals showed increased concern when Zika was framed as being exacerbated by immigration. Conversely, those participants who scored lower on hierarchism and individualism (and thus, higher on egalitarianism and communitarianism, i.e., egalitarian-communitarians) demonstrated consistent levels of concern for Zika, whether it was framed solely as a public health issue or linked to global climate change. Associating Zika risks with increased immigration, however, led egalitarian-communitarians to underestimate its threat compared to the public health framing alone. Thus, how risks are framed and/or connected can significantly influence public perception, particularly when these connections resonate differently across cultural groups.

Current study

The messages in this study were designed to emphasize the risk faced by either the individual or the general US adult population (collective) and suggest potential actions (policies) that might help mitigate the threat. The first set of hypotheses predict that co-constitutive risk (CCR) messaging, the One Health Communication strategy we test in this study, will be effective for increasing policy support for the immediate risk of dengue fever (e.g., vaccine policies) and for addressing the superordinate conditions created by climate change that exacerbate the threat (climate policies).

Hypothesis 1. Immediate Outcomes: Effects on Support for Vaccination Policies. Exposure to CCR messages that emphasize either the personal health risks or collective health risks of mosquito-borne illness attributable to climate change will be associated with increased (a) support for investment in dengue vaccine research, (b) intention to vaccinate against dengue, and (c) support for expanding dengue vaccination mandates.

Hypothesis 2. Superordinate Outcomes: Effects on Support for Climate Change Mitigation Policies.
Exposure to CCR messages that emphasize either the personal health risks or collective health risks of mosquito-borne illness attributable to climate change will be associated with increased support for climate change mitigation policy. Furthermore, the expected asymmetry in how Individualists versus Communitarians would respond to personal versus collective framing of risk is reflected in Hypothesis 3.

Hypothesis 3. Moderation by Cultural Cognitive Orientations. The effects of CCR messaging should be moderated by cultural cognitive orientations, such that Individualists are more likely to exhibit the aforementioned effects on (a) immediate and (b) superordinate outcomes when exposed to CCR messages that emphasize personal risk, while Communitarians should be more likely to exhibit effects on support for (c) immediate and (d) superordinate interventions when exposed to CCR messages that emphasize collective risk.
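Hypothesis 3 is a moderation claim: the effect of message frame on policy support should depend on where a respondent falls on the individualist-communitarian dimension. A common way to test such a claim is a regression with a frame × worldview interaction term. The sketch below uses simulated data purely for illustration (all variable names and effect sizes are hypothetical, not the study's):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# 1 = personal-risk frame, 0 = collective-risk frame (hypothetical coding)
frame_personal = rng.integers(0, 2, n)
individualism = rng.normal(0.0, 1.0, n)  # centered worldview score
# Simulate policy support with a frame x worldview interaction (H3 pattern)
support = (3.0 + 0.2 * frame_personal + 0.1 * individualism
           + 0.5 * frame_personal * individualism + rng.normal(0.0, 1.0, n))

# OLS: support ~ 1 + frame + individualism + frame:individualism
X = np.column_stack([np.ones(n), frame_personal, individualism,
                     frame_personal * individualism])
beta, *_ = np.linalg.lstsq(X, support, rcond=None)
print(dict(zip(["intercept", "frame", "individualism", "interaction"],
               np.round(beta, 2))))
```

A reliably positive interaction coefficient would indicate that the personal-risk frame works better for more individualist respondents, consistent with H3(a)-(b); the mirror-image prediction for Communitarians corresponds to a stronger frame effect at low individualism.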
Combating the climate change-induced public health challenges of diseases like dengue, therefore, entails preventing disease onset by pursuing policies that either (a) rely on pre-existing vaccines to form the basis of mass childhood vaccination campaigns and/or (b) make an effort to mitigate rising global temperatures attributable to climate change. Although a frequent goal of health communication campaigns is to promote individual health-related behaviors (e.g., smoking cessation), such campaigns also can aim to increase support for specific policies that, if enacted, intervene at a societal level (e.g., ending the sale of flavored tobacco products, requiring labels on tobacco products). One way that typical health communication messages achieve these goals is by communicating potential harms and/or the relative risk of those harms . Traditional messages aiming to mitigate the spread of dengue may choose to emphasize the relative risks of contracting it. A One Health Communication approach, however, emphasizes the interconnected, or co-constitutive, nature of the risks inherent in One Health issues. For example, such an approach could highlight both the immediate risks of contracting dengue as well as the broader impact of climate change on those risks and, thus, aim to encourage support for policies targeting both mitigation of the disease and of climate change. A variety of theoretical frameworks have been used to design messages and examine their effectiveness in health-related contexts. 
Although messages and campaigns focused on changing health behaviors are often informed by the Health Belief Model or the Theory of Planned Behavior , those focused on increasing or measuring policy support are often informed by (versions of) Framing Theory and/or differential receptiveness theories such as identity-protective reasoning and the cultural cognition theory of risk , particularly as it is generally accepted that policy support is heavily influenced by cultural worldviews and political ideology . In this study, we examine the potential effectiveness of a One Health Communication approach, co-constitutive risk messaging in the context of dengue fever, to influence support for vaccination policies to mitigate the potential proximal harms of dengue and green energy policies to mitigate the broader impacts of climate change. Because our outcome of interest is policy support, and not specific health behaviors, our study is informed by the cultural cognition theory of risk and message framing. Cultural cognition theory of risk and message framing The Cultural Cognition Theory of Risk posits that individuals’ perceptions of risks and benefits associated with various policies, technologies, and societal choices are shaped by their cultural values and group identities. That is, people are motivated to adopt beliefs about societal risks that resonate with the views and interests of groups with which they identify, reinforcing their connections within these groups. Though there are many ways in which values and worldviews could be ordered or grouped, this theory typically categorizes cultural values along two dimensions: from egalitarian to hierarchist and from individualist to communitarian. 
The Cultural Cognition Theory of Risk posits that individuals’ perceptions of the risks and benefits associated with various policies, technologies, and societal choices are shaped by their cultural values and group identities. That is, people are motivated to adopt beliefs about societal risks that resonate with the views and interests of groups with which they identify, reinforcing their connections within these groups. Though there are many ways in which values and worldviews could be ordered or grouped, this theory typically categorizes cultural values along two dimensions: from egalitarian to hierarchist and from individualist to communitarian. Greater egalitarianism reflects a prioritization of equality, whereas greater hierarchism reflects a preference for a structured distribution of roles; greater individualism reflects an emphasis on personal freedoms and responsibility, whereas greater communitarianism prioritizes collective responsibility and well-being. In the current study, we focus only on the individualist to communitarian dimension. Understanding cultural orientations can help in crafting effective messages by aligning the framing of risks and interventions with targeted audiences’ worldviews. Cultural worldviews significantly influence how individuals perceive risks and assess the acceptability of interventions aimed at mitigating those risks, consistent with motivated reasoning. People with a strong individualist orientation may respond more positively to messages that frame risks in terms of personal consequences rather than collective dangers. Such individuals may also be more likely to resist interventions that they perceive as infringing on their personal freedoms, such as mandatory masking or bans on plastic straws. Those with communitarian inclinations, on the other hand, may be more receptive to framing that emphasizes the collective impact of risks and may show greater willingness to accept restrictions on their personal freedoms if these are seen as benefiting the wider community. A key challenge with co-constitutive risk messaging, however, is the potential for boomerang effects, where messages that bundle different risks can lead different audiences to perceive a lower risk than intended. For example, experimental studies examining media frames of the Zika virus showed that when the risk was linked to global climate change, participants who scored higher on hierarchism and individualism (i.e., hierarchist-individualists) tended to downplay Zika’s dangers compared to when they were presented with information solely about its public health risks. However, these same individuals showed increased concern when Zika was framed as being exacerbated by immigration. Conversely, participants who scored lower on hierarchism and individualism (and thus higher on egalitarianism and communitarianism, i.e., egalitarian-communitarians) demonstrated consistent levels of concern for Zika whether it was framed solely as a public health issue or linked to global climate change. Associating Zika risks with increased immigration, however, led egalitarian-communitarians to underestimate its threat compared to the public health framing alone.
Thus, how risks are framed and/or connected can significantly influence public perception, particularly when these connections resonate differently across cultural groups. The messages in this study were designed to emphasize the risk faced by either the individual or the general US adult population (collective) and to suggest potential actions (policies) that might help mitigate the threat. The first set of hypotheses predicts that co-constitutive risk (CCR) messaging, the One Health Communication strategy we test in this study, will be effective for increasing policy support for the immediate risk of dengue fever (e.g., vaccine policies) and for addressing the superordinate conditions created by climate change that exacerbate the threat (climate policies). Hypothesis 1. Immediate Outcomes: Effects on Support for Vaccination Policies. Exposure to CCR messages that emphasize either the personal health risks or collective health risks of mosquito-borne illness attributable to climate change will be associated with increased (a) support for investment in dengue vaccine research, (b) intention to vaccinate against dengue, and (c) support for expanding dengue vaccination mandates. Hypothesis 2. Superordinate Outcomes: Effects on Support for Climate Change Mitigation Policies. Exposure to CCR messages that emphasize either the personal health risks or collective health risks of mosquito-borne illness attributable to climate change will be associated with increased support for climate change mitigation policy. Furthermore, the expected asymmetry in how Individualists versus Communitarians would respond to personal versus collective framing of risk is reflected in Hypothesis 3. Hypothesis 3. Moderation by Cultural Cognitive Orientations.
The effects of CCR messaging should be moderated by cultural cognitive orientations, such that Individualists are more likely to exhibit the aforementioned effects on (a) immediate and (b) superordinate outcomes when exposed to CCR messages that emphasize personal risk, while Communitarians should be more likely to exhibit effects on support for (c) immediate and (d) superordinate interventions when exposed to CCR messages that emphasize collective risk. Ethics statement The study was reviewed and granted approval (#H-43232) by the Institutional Review Board (IRB) Staff at Boston University who determined that the study qualifies for an exemption under the policies and procedures of the Human Research Subjects Program, category 2. See https://www.bumc.bu.edu/ohra/hrpp-policies/hrpp-policies-procedures/#10.2.4 . All experiments were performed in accordance with these guidelines. We obtained written consent (via a closed-form survey question) from respondents prior to their beginning the survey. The preregistration for the study’s experimental design and empirical expectations is available at https://osf.io/kx43r . Data Data for this study come from a YouGov-fielded survey of N = 2,200 US adults (18 and older). YouGov used propensity score matching procedures to ensure that our sample is nationally representative. The firm did this by first taking a random sample of respondents from the nationally representative American Community Survey (ACS), then they used propensity score matching techniques to determine which members of its large online opt-in panel most closely resemble each of the cases drawn from the ACS and invited those individuals to participate in the study. YouGov also provided us with post-stratification weights that account for any remaining deviations between the survey sample and demographic benchmarks from the US Census. We apply these weights when conducting our multivariate analyses. 
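The weighting step described above can be illustrated with a minimal sketch of a weighted proportion. The responses and weights below are fabricated for illustration; YouGov's actual post-stratification weights adjust for deviations from US Census benchmarks and are not reproduced here.

```python
# Minimal sketch of applying post-stratification weights when estimating a
# population proportion from survey data. Respondents and weights are
# hypothetical.

def weighted_proportion(responses, weights):
    """Weighted share of respondents answering 1 (e.g., 'support')."""
    total = sum(weights)
    return sum(r * w for r, w in zip(responses, weights)) / total

# Three hypothetical respondents: the third belongs to a group that is
# underrepresented in the sample, so their response carries more weight.
responses = [1, 0, 1]
weights = [0.8, 1.0, 1.5]

print(round(weighted_proportion(responses, weights), 3))  # prints 0.697
```

The same logic generalizes to the regression models reported below, where each respondent's contribution to the likelihood is scaled by their survey weight.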
According to an independent analysis by the Pew Research Center, YouGov has outperformed other online data vendors on accuracy. Design We tested our hypotheses by embedding a three-armed randomized controlled trial (RCT) into a nationally representative public opinion survey. In it, we exposed respondents to one of two co-constitutive risk (CCR) messages (vs. a pure control message pertaining to the history of baseball). A limitation of this design was that we did not include a condition with a typical health communication message emphasizing a single risk to compare to the co-constitutive risk conditions. This can be addressed in future research. Both CCR messages emphasized the risk of contracting dengue and how that risk will increase given global climate change. However, the messages varied in cultural cognitive framing, such that one CCR message (“individual risk”) emphasizes the spread of mosquito-borne infection as a risk to one’s personal health, while another (“collective risk”) emphasizes the public health risks of mosquito-borne infection. provides a side-by-side comparison of each CCR message, with differences in cultural cognitive framing highlighted in red. Analytic strategy We test Hypotheses 1 and 2 by constructing a series of ordered logistic regression models that regress ordinal indicators of each outcome described in the hypotheses on dichotomous indicators of whether respondents were assigned to either of the two messages presented in (with assignment to the pure control group serving as an analytical reference category). If our theoretical expectations are supported, both of these indicators should be positively (i.e., β > 0) and significantly (at the p < 0.05 level, two-tailed) associated with each of the aforementioned outcome variables. Note that, due to an oversight in our original pre-analysis plan, we code support for dengue vaccine mandates as a dichotomous outcome variable and model these views via logistic regression.
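The structure of the ordered logistic models described above can be sketched in miniature. The cutpoints and treatment coefficient below are purely illustrative, not our estimates; the sketch only shows how an ordered logit maps a shift on the latent scale into probabilities over the ordinal response categories.

```python
import math

def ordered_logit_probs(xb, cutpoints):
    """Category probabilities under an ordered logit:
    P(Y <= j) = logistic(cut_j - x'b), giving len(cutpoints) + 1 categories."""
    logistic = lambda z: 1.0 / (1.0 + math.exp(-z))
    cum = [logistic(c - xb) for c in cutpoints] + [1.0]
    return [cum[0]] + [cum[j] - cum[j - 1] for j in range(1, len(cum))]

# Illustrative values: a single treatment dummy with beta = 0.25 and three
# cutpoints separating four ordinal support categories.
beta, cuts = 0.25, [-1.0, 0.0, 1.0]
control = ordered_logit_probs(0.0, cuts)   # treatment dummy = 0
treated = ordered_logit_probs(beta, cuts)  # treatment dummy = 1

# A positive beta shifts probability mass toward the highest category.
print(round(treated[-1] - control[-1], 3))
```

In practice such models are estimated with packaged routines (e.g., statsmodels' `OrderedModel` with a logit link, or R's `polr`); the hand-rolled function above only exposes the link between coefficients and category probabilities.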
We do this both because the variable’s penultimate categories lack a clear causal ordering and because of our substantive interest in support for vaccine mandates (the first response option). Note that, because we randomly assign respondents into each treatment and the control, we do not expect to observe differences in demographic makeup across groups. Indeed, in a series of randomization checks included in the Supporting Information —calculated on the basis of respondents’ gender identity, racial identity, age, educational attainment, and party identification—we find no evidence that some respondents were more likely than others to be assigned to either of our two treatment groups (vs. the control). Correspondingly, we do not condition on any covariates in our models. We test Hypothesis 3 by amending the ordered logistic regression models described above to interact each treatment assignment indicator with a multi-item index denoting the extent to which people hold individualistic or communitarian cultural cognitive worldviews. If our theoretical expectations are supported, we would expect (a) that the interaction term for more individualistic people assigned to the “personal risk” condition will be positively and significantly associated with increases in each of the outcome variables listed in , and (b) that these effects will be significantly larger than those for more individualistic people assigned to the “communal risk” condition, as well as for communitarians assigned to the “personal risk” condition. We expect to observe a complementary pattern for communitarians assigned to the “communal risk” condition. Measures Outcome variables There are two primary sets of outcomes in this study.
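The randomization checks described above can be approximated with a small sketch: a Pearson chi-square statistic testing whether a demographic attribute is independent of treatment assignment. The counts below are hypothetical and chosen only to show the computation; the actual checks appear in the Supporting Information.

```python
# Sketch of a randomization (balance) check via the Pearson chi-square
# statistic for a contingency table of assignment group x demographic
# category. Counts are hypothetical.

def chi_square_stat(table):
    """Pearson chi-square statistic for a contingency table (list of rows)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (obs - expected) ** 2 / expected
    return stat

# Rows: control, personal-risk, collective-risk; columns: two hypothetical
# demographic categories. Near-equal proportions yield a small statistic.
table = [[360, 373], [355, 378], [362, 372]]
print(round(chi_square_stat(table), 3))  # small value suggests good balance
```

With two degrees of freedom, the statistic here falls far below the 5.99 critical value at the 0.05 level, the pattern a successful randomization should produce.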
The first set is a series of ordinal indicators of the extent to which survey respondents support policies aimed at promoting vaccination against neglected tropical diseases like dengue fever, and we refer to these as the immediate outcomes (Hypothesis 1). These policies include support for research and development into vaccination for mosquito-borne illnesses, the expansion of dengue vaccine requirements for children (as dengue fever vaccination is currently only approved for children aged 9–16), and (adult) respondents’ hypothetical intentions to vaccinate against dengue, should a vaccine for adults be approved by federal regulators. Each vaccine-related question was preceded by the following preamble: Aside from taking care to avoid mosquito bites, one simple way to prevent getting sick with dengue is to get vaccinated against the disease. Safe and effective vaccines are available for children aged 9–16, who have previously been infected with dengue. At this time only those traveling to areas where dengue is common are eligible for vaccination. As rising global temperatures attributable to climate change may increase the likelihood that mosquito-borne illnesses become more common in the United States, some argue that federal regulators ought to rethink this policy. The question wording and response options are available in . Note that the first outcome variable, Dengue Fever Vaccine Mandate, was coded such that support for mandate expansion (option A) takes on a value of 1, with all other response options taking on a value of 0. The second set of outcome variables are ordinal indicators measuring the extent to which respondents support policies aimed at mitigating the effects of climate change, and we refer to these as the superordinate outcomes (Hypothesis 2).
These policies (adapted from ) include support for research into sources of renewable energy, carbon dioxide regulation, carbon dioxide emission limits on power plants, and investment in diversifying the sources of energy production. Full question wording and response option information is available in . Note that all variables were scored such that the maximum ordinal category on each one corresponds to the highest possible levels of support for both the immediate (vaccine) and superordinate (climate) outcomes. Independent variables The primary explanatory variables used to test Hypotheses 1 and 2 are dichotomous indicators of whether survey respondents were assigned to read one of the two experimental treatments versus the control group. When testing Hypothesis 3, we add an intervalized, multi-item index assessing respondents’ placement on an individualist-communitarian continuum (M = 0.56, SD = 0.20). We construct this index by averaging responses across a series of 18 items—derived from Kahan et al. —that ask respondents to assess whether they agree or disagree with a series of statements designed to measure individualistic or communitarian cultural worldviews (e.g., views that the government interferes “far too much in our everyday lives”). We score the resulting (untransformed) scale to range from 0–1, such that a score of 1 is indicative of expressing strong individualistic preferences (α = 0.93). For simplicity, we will refer to this measure as individualism throughout our analyses. A full list of items used to build this scale can be found in the attached supplement.
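The construction of the individualism index can be sketched as follows. The item responses below are fabricated for illustration, and the 1–6 agree/disagree response scale is an assumption; the actual 18 items follow Kahan et al. and may use a different response format.

```python
# Sketch of building a 0-1 individualism index by averaging item responses,
# plus a Cronbach's alpha check of internal consistency. All data here are
# hypothetical (k = 18 items in the actual study; 3 shown for brevity).

def rescale_mean(items, lo=1, hi=6):
    """Mean of item responses, rescaled so that lo -> 0 and hi -> 1."""
    mean = sum(items) / len(items)
    return (mean - lo) / (hi - lo)

def cronbach_alpha(data):
    """data: list of respondents, each a list of k item scores."""
    k = len(data[0])
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [var([resp[j] for resp in data]) for j in range(k)]
    total_var = var([sum(resp) for resp in data])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Four hypothetical respondents answering three items.
data = [[6, 5, 6], [2, 1, 2], [4, 4, 5], [1, 2, 1]]
print([round(rescale_mean(r), 2) for r in data])  # each respondent's index
print(round(cronbach_alpha(data), 2))             # high alpha: consistent items
```

Because the fabricated respondents answer consistently across items, alpha is high here, mirroring the strong reliability (α = 0.93) reported for the real scale.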
Results of pre-registered analyses We begin by assessing the degree to which exposure to CCR messaging is associated with increased support for pharmaceutical interventions against the health risks posed by dengue fever (Immediate Outcomes; Hypothesis 1), as well as support for climate change mitigation policies (Superordinate Outcomes; Hypothesis 2). The results of the multivariate models devised to test these hypotheses (see Analytic Strategy) are presented in . Note that although we did not state a priori that we expected to observe asymmetries in effectiveness across differences in cultural cognitive message framing, we nevertheless account for this possibility in our models (as identified in our pre-analysis plan). The results presented in provide mixed support for Hypothesis 1. Consistent with Hypothesis 1, we find that participants who were exposed to CCR messages that emphasize the communal health risks of dengue fever (vs. exposure to the pure control) had greater prospective vaccination intentions (β = 0.25, p = 0.03; Column 2). Transforming that parameter estimate into predicted probabilities that hold all other covariates at their sample means, we find that this corresponds to approximately a 4-percentage point increase in the probability that respondents say that they are “very likely” to vaccinate across the treatment (25%) and control (21%) conditions. We also note that the results are correctly signed, albeit non-significant, with respect to support for policies aimed at expanding dengue vaccine requirements (Column 1) as well as investment in dengue vaccine research (Column 3). In contrast, although we made no a priori predictions about differences between message frames, we find that the personal risk CCR message bore no significant associations with any of the outcome variables used to test Hypothesis 1.
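The move from the logit coefficient to the roughly 4-point change reported above can be approximated with a back-of-the-envelope sketch: apply the estimated log-odds shift (β = 0.25) to the control group's probability of the top response category (21%). This ignores the model's other covariates and cutpoints, so it is only an approximation of the full predicted-probability calculation.

```python
import math

# Sketch: apply an ordered-logit treatment coefficient (beta = 0.25) as a
# log-odds shift to the control group's probability of answering
# "very likely" to vaccinate (0.21, from the text).

def shift_top_probability(p_control, beta):
    """Apply a log-odds shift of beta to a baseline probability."""
    log_odds = math.log(p_control / (1 - p_control)) + beta
    return 1.0 / (1.0 + math.exp(-log_odds))

p_treated = shift_top_probability(0.21, 0.25)
print(round(p_treated, 2))  # prints 0.25, consistent with the reported gap
```

That the crude calculation lands on the reported treatment-group figure (25%) is reassuring, though the paper's estimates properly hold the remaining covariates at their sample means.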
Furthermore, and surprisingly, we find no evidence in support of Hypothesis 2 (Columns 4–7). Neither personal nor collective risk messages were significantly associated with increased support for policies aimed at mitigating climate change. Next, we consider the possibility that the effectiveness of CCR messaging might be conditional on respondents’ cultural cognitive worldviews, such that individualists are more responsive to personal risk messages and collectivists are more responsive to communal risk messages (Hypotheses 3a and 3b). We present the aforementioned interactive models (see Analytic Strategy) in . These models again reveal mixed evidence in favor of the conditional effectiveness of CCR messaging with respect to its effects on vaccine attitudes. In this case, collectivists exposed to public health-focused CCR messages were significantly more likely to favor investing in the development of dengue fever vaccines (β = 1.59, p = 0.01). Substantively, these effects correspond to an approximately 15-percentage point increase in the predicted probability that respondents who express the strongest observed levels of collectivism (i.e., a score of 0 on the individualism scale) indicate that they “strongly agree” with raising taxes to fund dengue vaccine research across the treatment (87%) versus control (72%) groups. However, we detect no evidence of moderation by cultural cognitive worldviews on vaccination intentions or support for expanding vaccine requirements. Likewise, as was the case when testing Hypothesis 2, we find no evidence of effects of CCR messaging across cultural cognitive worldviews on support for climate change mitigation policy. Notably, however, individualism scores negatively predicted vaccine and climate policy attitudes across the message conditions. Post Hoc (Exploratory) analyses To this point, our pre-registered analysis plan has yielded mixed results in testing our theoretical expectations regarding the efficacy of CCR messaging.
We recognize, however, that we may have overlooked a potential moderating influence on CCR messaging effectiveness when registering our theoretical expectations. Specifically, it may be the case that CCR treatments inspire support for both pharmaceutical interventions and climate change mitigation policy when we consider individuals’ beliefs about anthropogenic climate change. People who accept the scientific consensus that changes in the planet’s climate are caused by human activities (anthropogenic climate change acceptance; ACC) tend to be more likely to express concern about its harmful effects on human life and support policies to curb greenhouse gas emissions . Correspondingly, individuals who accept and are already concerned about ACC may exhibit a comparatively lower capacity to be influenced by our CCR messages (i.e., a “ceiling effect”) because they already express a strong desire to take action to lessen its effects. We therefore hypothesize that CCR message exposure effectiveness may be further moderated by ACC acceptance, such that only those who doubt the reality of human-caused climate change could be influenced by the messages. We test this possibility by amending the models presented in Tables and to interact each treatment indicator (the models in ), as well as the treatment by cultural worldview interactions (the models in ), with a dichotomous indicator of whether or not survey respondents believe that “the earth is getting warmer mostly because of human activity such as burning fossil fuels” (see the for complete question-wording information). The results are presented in and Tables and summarized graphically in . Post hoc , we expected that CCR messages would most effectively influence the attitudes and behaviors of those who do not already view climate change as human-caused. 
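How the three-way interaction enters the amended models can be made concrete with a small sketch of the regressors each respondent contributes: the three constituent terms plus every pairwise and triple product. The values below are hypothetical.

```python
# Sketch of the design-matrix terms implied by a three-way interaction
# between a treatment dummy, the 0-1 individualism index, and a dichotomous
# ACC (human-caused climate change) acceptance indicator. Values are
# hypothetical.

def interaction_terms(treated, individualism, acc):
    t, x, a = float(treated), float(individualism), float(acc)
    return {
        "treated": t, "individualism": x, "acc": a,
        "treated:individualism": t * x,
        "treated:acc": t * a,
        "individualism:acc": x * a,
        "treated:individualism:acc": t * x * a,
    }

# A treated respondent at the sample-mean individualism score (0.56) who
# rejects anthropogenic climate change (acc = 0):
row = interaction_terms(treated=1, individualism=0.56, acc=0)
print(row["treated:individualism:acc"])  # prints 0.0 since acc = 0
```

Because every product involving the ACC indicator is zero for skeptics, the lower-order terms capture their treatment response, which is why the conditional effects for ACC skeptics must be read off combinations of coefficients (or predicted probabilities) rather than the three-way term alone.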
On the contrary, we find no evidence that exposure to either of the two CCR message treatments is associated with increased support for pharmaceutical interventions to combat dengue fever or climate change mitigation policy (in all cases, p > 0.05; two-tailed). However, consistent with Hypothesis 3, we find that ACC beliefs moderate message effectiveness, especially concerning the outcomes that measure the treatments’ influence on climate attitudes (as a reminder, please see Tables and in the method section for complete question-wording information). Specifically, we find that the three-way interaction between exposure to personal risk CCR messages, ACC acceptance, and individualistic attitudes is associated with increased support for additional government regulation of coal-fired power plants (β = 3.81, p = 0.02) and diversifying utilities’ clean power sources (β = 4.85, p < 0.01). We find an analogous pattern of results across all four climate policy outcomes for the CCR messages that emphasize collective risk. For those messages, CCR exposure is associated with significantly greater levels of support for government investment in renewable energy (β = 3.69, p = 0.02), as well as increased government regulation on coal-fired power plants (β = 3.59, p = 0.02) and clean power diversification (β = 4.68, p < 0.01). Exposure to the collective risk messages is also associated with support for government regulation of CO2 emissions (β = 2.93), although this effect only approaches conventional levels of two-tailed significance (p = 0.07). Treatment exposure, however, has no statistically discernible impact on support for pharmaceutical interventions when accounting for the possibility of moderation by ACC beliefs (p > 0.10 in all cases). Of course, these three-way interactive terms are difficult to interpret on their own.
Correspondingly, plots the predicted probability (y-axis) of indicating strong levels of support for each of the aforementioned climate policies for individuals who were exposed to each CCR message (solid vs. dashed lines, with personal risk messaging presented on the left-hand side of the figure, and collective risk messaging presented on the right), across levels of individualistic attitude endorsement (x-axis), for those who express skepticism about ACC. Note that we display predictions from all of the aforementioned models for ease of visual comparison. However, following Brambor and colleagues , we strongly caution against interpreting substantive effects from models that produced non-significant interaction terms. Thus, for reference, we suffix all significant interactions with an asterisk. demonstrates, somewhat surprisingly, that exposure to our CCR treatments—irrespective of cultural cognitive framing—is associated with significantly more robust support for pro-climate policies for individuals who hold more collectivist worldviews (as demonstrated by both the elevated position of the dashed line, as well as the non-overlapping confidence intervals). For example, the predicted probability of strongly supporting clean power regulations is 90% for people who express ACC skepticism, hold strongly collectivistic worldviews, and were exposed to the collective risk CCR message, compared to 71% in the control group (a 19-percentage point increase). As shows (contrary to our post hoc expectations), we observe an analogous pattern for messages emphasizing personal risk. While these results comport with our theoretical expectations regarding the asymmetric appeal of collective risk messages to those with less individualistic worldviews, we were surprised that our personal risk messages did not produce an analogous effect pattern among those with more individualistic worldviews. 
We begin by assessing the degree to which exposure to CCR messaging is associated with increased support for pharmaceutical interventions against the health risks posed by dengue fever (Immediate Outcomes; Hypothesis 1), as well as support for climate change mitigation policies (Superordinate Outcomes; Hypothesis 2). The results of the multivariate models devised to test these hypotheses (see Analytical Strategy) are presented in . Note that although we did not state a priori that we expected to observe asymmetries in effectiveness across differences in cultural cognitive message framing, we nevertheless account for this possibility in our models (as identified in our pre-analysis plan). The results presented in provide mixed support for Hypothesis 1. Consistent with Hypothesis 1, we find that participants who were exposed to CCR messages that emphasize the communal health risks of dengue fever (vs. exposure to the pure control) had greater prospective vaccination intentions (β = 0.25, p = 0.03; Column 2). Transforming that parameter estimate into predicted probabilities that hold all other covariates at their sample means, we find that this corresponds to approximately a 4-percentage point increase in the probability that respondents say that they are “very likely” to vaccinate across the treatment (25%) and control (21%) conditions. We also note that the results are correctly signed, albeit non-significant, with respect to support for policies aimed at expanding dengue vaccine requirements (Column 1) as well as investment in dengue vaccine research (Column 3).
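As a concrete illustration of that transformation, the short sketch below reproduces the reported 21%-to-25% gap from an ordered-logit treatment coefficient of β = 0.25. The cutpoint-minus-linear-predictor term is a hypothetical value chosen to match the 21% control baseline, not a quantity estimated from our data.

```python
import math

def logistic(z: float) -> float:
    """Standard logistic CDF: Lambda(z) = 1 / (1 + exp(-z))."""
    return 1.0 / (1.0 + math.exp(-z))

# In an ordered-logit model, P(Y = top category) = 1 - Lambda(tau - x'b),
# where tau is the cutpoint for the top category. Below, tau_minus_eta
# stands in for the cutpoint minus the linear predictor evaluated at the
# covariate means; it is a hypothetical value chosen to reproduce the
# 21% control-group baseline, not an estimate from the study's data.
beta = 0.25                            # treatment coefficient reported above
tau_minus_eta = math.log(0.79 / 0.21)  # implies a 21% control probability

p_control = 1.0 - logistic(tau_minus_eta)         # ~0.21 ("very likely", control)
p_treated = 1.0 - logistic(tau_minus_eta - beta)  # ~0.25 ("very likely", treated)
print(f"control: {p_control:.2f}, treated: {p_treated:.2f}")
```

The roughly 4-point gap illustrates how a modest log-odds coefficient translates into the percentage-point difference reported in the text.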
In contrast, although we made no a priori predictions about differences between message frames, we find that the personal risk CCR messages bore no significant association with any of the outcome variables used to test Hypothesis 1. Furthermore, and surprisingly, we find no evidence in support of Hypothesis 2 (Columns 4–7). Neither personal nor collective risk messages were significantly associated with increased support for policies aimed at mitigating climate change. Next, we consider the possibility that the effectiveness of CCR messaging might be conditional on respondents’ cultural cognitive worldviews, such that individualists are more responsive to personal risk messages and collectivists are more responsive to communal risk messages (Hypotheses 3a and 3b). We present the aforementioned interactive models (see Analytical Strategy) in . again reveals mixed evidence in favor of the conditional effectiveness of CCR messaging with respect to its effects on vaccine attitudes. In this case, collectivists exposed to public health-focused CCR messages were significantly more likely to favor investing in the development of dengue fever vaccines (β = 1.59, p = 0.01). Substantively, these effects correspond to an approximately 15-percentage point increase in the predicted probability that respondents who express the strongest observed levels of collectivism (i.e., a score of 0 on the individualism scale) indicate that they “strongly agree” with raising taxes to fund dengue vaccine research across the treatment (87%) versus control groups (72%). However, we detect no evidence of moderation by cultural cognitive worldviews on vaccination intentions or support for expanding vaccine requirements. Likewise, as was the case when testing Hypothesis 2, we find no evidence of effects of CCR messaging across cultural cognitive worldviews on support for climate change mitigation policy.
Notably, however, individualism scores negatively predicted vaccine and climate policy attitudes, across the message conditions. To this point, our pre-registered analysis plan has yielded mixed results in testing our theoretical expectations regarding the efficacy of CCR messaging. We recognize, however, that we may have overlooked a potential moderating influence on CCR messaging effectiveness when registering our theoretical expectations. Specifically, it may be the case that CCR treatments inspire support for both pharmaceutical interventions and climate change mitigation policy when we consider individuals’ beliefs about anthropogenic climate change. People who accept the scientific consensus that changes in the planet’s climate are caused by human activities (anthropogenic climate change acceptance; ACC) tend to be more likely to express concern about its harmful effects on human life and support policies to curb greenhouse gas emissions . Correspondingly, individuals who accept and are already concerned about ACC may exhibit a comparatively lower capacity to be influenced by our CCR messages (i.e., a “ceiling effect”) because they already express a strong desire to take action to lessen its effects. We therefore hypothesize that CCR message exposure effectiveness may be further moderated by ACC acceptance, such that only those who doubt the reality of human-caused climate change could be influenced by the messages. We test this possibility by amending the models presented in Tables and to interact each treatment indicator (the models in ), as well as the treatment by cultural worldview interactions (the models in ), with a dichotomous indicator of whether or not survey respondents believe that “the earth is getting warmer mostly because of human activity such as burning fossil fuels” (see the for complete question-wording information). The results are presented in and Tables and summarized graphically in . 
Post hoc , we expected that CCR messages would most effectively influence the attitudes and behaviors of those who do not already view climate change as human-caused. On the contrary, we find no evidence that exposure to either of the two CCR message treatments is associated with increased support for pharmaceutical interventions to combat dengue fever or climate change mitigation policy (in all cases, p > 0.05; two-tailed). However, consistent with Hypothesis 3, we find that ACC beliefs moderate message effectiveness, especially concerning the outcomes that measure the treatments’ influence on climate attitudes (as a reminder, please see Tables and in the method section for complete question-wording information). Specifically, we report in the that the three-way interaction between exposure to personal risk CCR messages, ACC acceptance, and individualistic attitudes is associated with increased support for additional government regulation of coal-fired power plants (β = 3.81, p = 0.02) and diversifying utilities’ clean power sources (β = 4.85, p < 0.01). We find an analogous pattern of results across all four climate policy outcomes for the CCR messages that emphasize collective risk. For those messages, CCR exposure is associated with significantly greater levels of support for government investment in renewable energy (β = 3.69, p = 0.02), as well as increased government regulation on coal-fired power plants (β = 3.59, p = 0.02) and clean power diversification (β = 4.68, p < 0.01). Exposure to the collective risk messages is also associated with support for government regulation of CO2 emissions (β = 2.93), although this effect only approaches conventional levels of two-tailed significance ( p = 0.07). Treatment exposure, however, has no statistically discernible impact on support for pharmaceutical interventions when accounting for the possibility of moderation by ACC beliefs ( p > 0.10 in all cases). 
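Because the three-way terms reported above do not speak for themselves, it may help to note that, in a linear-in-parameters specification, they combine into a single conditional treatment effect. The sketch below writes out that combination; every coefficient value in it is a hypothetical placeholder, not one of our estimates.

```python
# For a linear-in-parameters model
#   y = b0 + b1*T + b2*I + b3*A + b4*(T*I) + b5*(T*A) + b6*(I*A) + b7*(T*I*A) + ...,
# where T is treatment, I is individualism, and A is ACC acceptance,
# the marginal effect of treatment conditional on I and A is
#   dE[y]/dT = b1 + b4*I + b5*A + b7*I*A.
def conditional_treatment_effect(b1, b4, b5, b7, individualism, acc):
    return b1 + b4 * individualism + b5 * acc + b7 * individualism * acc

# Hypothetical placeholder coefficients, for illustration only:
B1, B4, B5, B7 = 0.80, -1.20, 0.30, -0.50

# Among ACC skeptics (A = 0), the effect varies with individualism through b4 alone:
effect_collectivist_skeptic = conditional_treatment_effect(B1, B4, B5, B7, individualism=0.0, acc=0)
effect_individualist_skeptic = conditional_treatment_effect(B1, B4, B5, B7, individualism=1.0, acc=0)
print(effect_collectivist_skeptic, effect_individualist_skeptic)
```

With these placeholders, the implied effect is 0.8 for a fully collectivist skeptic but −0.4 for a fully individualist one, which is why predicted probabilities across moderator values (rather than raw coefficients) are the appropriate quantity to interpret.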
Of course, these three-way interactive terms are difficult to interpret on their own. Correspondingly, plots the predicted probability (y-axis) of indicating strong levels of support for each of the aforementioned climate policies for individuals who were exposed to each CCR message (solid vs. dashed lines, with personal risk messaging presented on the left-hand side of the figure, and collective risk messaging presented on the right), across levels of individualistic attitude endorsement (x-axis), for those who express skepticism about ACC. Note that we display predictions from all of the aforementioned models for ease of visual comparison. However, following Brambor and colleagues , we strongly caution against interpreting substantive effects from models that produced non-significant interaction terms. Thus, for reference, we suffix all significant interactions with an asterisk. demonstrates, somewhat surprisingly, that exposure to our CCR treatments—irrespective of cultural cognitive framing—is associated with significantly more robust support for pro-climate policies for individuals who hold more collectivist worldviews (as demonstrated by both the elevated position of the dashed line, as well as the non-overlapping confidence intervals). For example, the predicted probability of strongly supporting clean power regulations is 90% for people who express ACC skepticism, hold strongly collectivistic worldviews, and were exposed to the collective risk CCR message, compared to 71% in the control group (a 19-percentage point increase). As shows (contrary to our post hoc expectations), we observe an analogous pattern for messages emphasizing personal risk. While these results comport with our theoretical expectations regarding the asymmetric appeal of collective risk messages to those with less individualistic worldviews, we were surprised that our personal risk messages did not produce an analogous effect pattern among those with more individualistic worldviews. 
It seems likely that, because all of the policies under investigation require some level of government action, which could be seen as infringing on individual freedoms, only those who value collective approaches to solving health and climate issues are responsive to our treatments. Although we did not specify this mechanism a priori , this could be a fascinating area for future investigation. Our results document the effectiveness of co-constitutive risk messaging (CCR), a method of One Health Communication , on taking both immediate action (i.e., on pro-vaccine policy and uptake intentions) and supporting superordinate action (i.e., on climate change mitigation policy) to stop the spread of climate-facilitated infectious disease. These effects occur through a variety of different channels. CCR message exposure exhibits both main and moderated (by cultural cognitive orientation) influence on the immediate outcomes of pro-vaccine attitudes and behavior. Still, it appears to have no effects on the more superordinate outcomes of climate policy attitudes. However, when we account for the possibility of “ceiling effects” among those who accept anthropogenic climate change (ACC) as real, we document strong effects of exposure to CCR messaging among those who hold collectivist cultural worldviews. Surprisingly, and inconsistent with our pre-registered theoretical expectations, we find little evidence that those with more individualistic worldviews were more responsive to CCR threats that emphasized the personal health risks of climate change. Although we hesitate to speculate post hoc as to why our theoretical expectations were not borne out in the data, one possibility could be, as we suggested in the results section, that the proposed government interventions, which can be seen as infringing on personal freedoms, were more salient to individualists than the risks posed by dengue from anthropogenic climate change. 
Relatedly, it is also possible that our treatments did not emphasize clearly enough either the severity or (increasing) probability of getting sick with insect-borne illnesses. We see this as both a limitation of our research and an opportunity for future work to develop, pilot, and test different messages emphasizing the personal health risks of climate change. More generally, we tend to observe the strongest effects of CCR messaging among those less convinced that climate change results from human activities. As noted throughout the paper, we suspect that moderation by ACC beliefs reflects the idea that those who already accept climate change as real and human-caused may not have much opportunity to update their health and/or climate-related beliefs in response to CCR messaging, i.e., because they are already precisely the types of people who report that they favor taking action to lessen the health risks borne by climate change. We, therefore, encourage future research in this area to anticipate the possibility of ACC belief moderation when devising extensions of this messaging approach and to always measure attitudes about the causes of climate change when researching climate messaging. Taken together, these findings offer admittedly mixed support for our pre-registered hypotheses. However, we believe that they represent an important first step in assessing the viability of CCR messaging as a One Health Communication approach. Growing scientific evidence highlighting the negative impacts of climate change has not convinced many US Americans of the urgency of climate action . 
Past research indicates that a non-trivial proportion of US Americans are either concerned about climate change (i.e., they believe that the climate is changing but tend to believe that climate impacts are still distant in time and space); cautious about climate change (they have not yet made up their minds); disengaged from climate change (they know little about it); or doubtful about climate change (they do not think global warming is happening, or they believe it is just a natural cycle). That is despite scholars drawing increasingly close connections between climate change and public health in recent years. Sadly, a lack of policy action on issues with long time horizons is frequently a feature of democratic politics, as citizens are often strongly biased toward policies addressing present problems over long-term ones. We believe that, in this context, CCR messaging and the broader One Health Communication paradigm offer some hope: by linking climate change to a specific disease, they can help make the non-obvious effects of climate change real and immediate in the minds of many Americans, especially those who are less committed to climate action due to their skepticism about global warming. Although more research is needed, CCR messaging could potentially even influence citizens who are already alarmed about climate change by motivating them toward more concrete climate action. This paradigm enables a more comprehensive and potentially effective engagement with diverse audiences on complex, interlinked issues beyond climate change. Of course, these are results from a single study offering only one CCR messaging application. Moreover, our work is presented in a single national context (the United States). We urge strategic communication researchers in both the US and beyond to consider whether these strategies hold promise in other national and cross-cultural contexts.
Beyond considering cross-national applications of CCR messaging, future work should expand on our approach in other One Health-related applications. Future work might also consider administering a “stronger” experimental treatment: although we detect important experimental treatment effects, our manipulation featured only a few minor word changes in the experimental primes, and more robust treatment vignettes would more fully reveal the effects that CCR messaging might have. Finally, future efforts to apply CCR messaging should account for different moderators, including the possibility that some worldviews and psychological predispositions might make someone more persuadable than others; for example, collectivists could be more persuadable than individualists simply because they tend to be higher on openness, an important psychological factor determining persuadability. Still, the results presented in this manuscript hold promise for strategic health and environmental communication. Principally, our work in what we term One Health Communication emphasizes that, in some respects, health and environmental communication represent two sides of the same proverbial coin. In other words, because climate risks can beget concerns about the spread of infectious disease, we encourage strategic communicators to make an effort to identify areas in which these two concerns ought to be raised in concert with one another. To do this, one concrete step that strategic health and climate communicators can take in the short term is to conduct pilot survey-based RCTs that assess the efficacy of CCR messages that fuse climate and health risks into the same strategic messages.
In addition to expanding the scope of health risks assessed in this piece (e.g., the spread of other insect-borne diseases attributable to climate change, such as Lyme disease), this applied work might also consider assessing the efficacy of content elements not tested in the present research; e.g., the possibility that providing visual risk-related imagery (like pictures of insect vectors and/or infected humans) may evoke comparatively stronger feelings of anxiety than messages lacking that imagery. After determining which content elements might enhance the effectiveness of CCR messages in an expanded range of public health domains, communicators might then consider “scaling up” pilot messaging approaches into larger (and more costly) field experimental interventions conducted on web, print, televised, and/or socially mediated platforms. This would allow communicators to assess the external generalizability of the effectiveness of CCR messaging. This would also enable communicators to consider how different content elements might enhance CCR messages’ abilities to both induce feelings of risk and, correspondingly, inspire behavioral action to mitigate those risks. Overall, we see our work not as the “final word” on CCR messaging but as offering a blueprint for future research in this area. We look forward to future efforts to build on the analyses offered here to better understand the viability of this and other One Health Communication strategies. 

S1 Table. Supplemental Randomization Checks. (DOCX) 

S2 Table. Supplemental Analyses. Post Hoc Reestimation of , with Moderation by ACC beliefs. (DOCX) 

S3 Table. Supplementary Analyses. Post Hoc Reestimation of , with Moderation by ACC beliefs. (DOCX) 

S1 Appendix. Item wording used in the survey. (DOCX)