Haptic experience to significantly motivate anatomy learning in medical students
73536371-2365-42b0-b28f-903d6ace1304
11363654
Anatomy[mh]
Teaching anatomy to medical students represents a constant challenge, due to the complex structure of the human body, its importance during the early stages of medical education, and its implications for future clinical practice, as it provides an understanding of structure and function. To achieve clinical integration, the student must become proficient in anatomical structures and their respective functions, which leads to acquiring medical terminology and prepares the future healthcare professional for determining a diagnosis. In addition, it is especially advantageous in certain fields, such as surgery, where it has been described to offer patients greater safety. Although cadaver dissection is considered the gold standard for learning anatomy, high maintenance costs, exposure to toxic chemical substances, the lack of professors trained in anatomical dissection, curricular limitations and integrative curricula, among other factors, have led to the implementation of different methods and tools to teach anatomy, such as lecture sessions, laboratories, prosection, videos, interactive screens (such as Sectra and Anatomage), plastination and others. To date, no teaching tool meets all curriculum requirements; therefore, the best way to teach modern anatomy is by combining multiple pedagogical resources that complement each other. Understanding 3D spatial anatomical relationships requires the student to have solid structural comprehension and 3D mental visualization skills. Rizzolo and Stewart (2006) and DeHoff et al. (2011) described tactile manipulation, in addition to involving other senses, as a great advantage of cadaver dissection; therefore, it might be associated with better understanding and retention of spatial information. In this context, the haptic experience arises as an innovative tool, promising to increase the student's motivation and facilitate a meaningful learning process. It seems to have a positive effect on long-term memory, with an impact on cognitive ability and especially on the student's satisfaction. A haptic experience uses the sense of touch, combined with sight, to explore and understand the form, texture, and 3D characteristics of an object. Moreover, physical manipulation allows active control of a model, permitting visualization from multiple perspectives and the capacity to establish relationships. Collectively, 3D models have been demonstrated to be versatile and easy to manipulate. Diverse modalities of haptic experiences, such as interactive 3D models and advanced haptic technologies, have been evaluated to determine their impact on anatomical knowledge acquisition and their capacity to develop a connection between theory and practice, both for undergraduate and postgraduate education. Applying this focus to teaching anatomy seeks not only to surpass the limitations of traditional methods, which are usually centered on passive memorization, but also to stimulate active participation by students in their autonomous learning process. In postgraduate clinical training, various studies have used haptic cues with 3D prototypes to strengthen the understanding of anatomy and surgical procedures as an aid in clinical practice. Using 3D printed models helps learners understand abstract concepts through spatial visualization and the establishment of relationships.
One such example is a study in which a 3D printed liver model with tumors was utilized to understand surgical procedures and establish better cooperation at different training levels. Another study used a physical model of an acetabular fracture to evaluate the effect of tactile feedback, allowing residents to feel the resistance, contours, textures, and edges of fractures. Acknowledging the importance of the haptic experience in the educational context, this research sought to provide valuable insights to teachers, curriculum designers and healthcare professionals interested in optimizing human anatomy teaching, promoting a pedagogical approach that not only transmits information, but also inspires a deep and long-term understanding of human anatomy. This study aimed to investigate the effectiveness of a haptic experience and painting on 3D plaster models as a motivating strategy to enhance meaningful learning of the shoulder's anatomy.
The study evaluated a haptic experience to motivate meaningful learning in anatomy between March 2021 and March 2023. To this end, skeletal elements of the shoulder were modelled in plaster to determine whether a tactile workshop and interaction with color markers would establish a better learning process, in comparison with the traditional method using a written 2D workshop. This study was carried out with undergraduate second-year medical students from Pontificia Universidad Javeriana, Bogotá.
Plaster bone model
To elaborate realistic physical bone models of the human shoulder, computed tomography (CT) images were obtained from volunteers after signing informed consent (FM-CIE-0113-18). Digital Imaging and Communications in Medicine (DICOM) files from the CT scans were segmented with an in-house algorithm and converted into a CAD format using an isocontour algorithm. Using a rapid prototyping system, the CAD files were used to 3D print scapula, humerus and clavicle prototypes in ABS plastic. Each prototype was then used to make a silicone mold into which plaster was poured and allowed to set. The piece was released from the mold and the excess material was removed to obtain a replica of the skeletal element to be used in the 3D workshop with the students. At least 12 prototypes of each bone were created.
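The in-house segmentation and conversion code is not included with the article; purely as an illustration of the kind of isocontour reconstruction step described above, a minimal sketch using openly available tools (pydicom, scikit-image, trimesh) might look as follows. The bone threshold of 300 HU and the file paths are illustrative assumptions, not values reported by the authors.

```python
# Minimal sketch (not the authors' in-house pipeline): reconstruct a printable
# bone surface from a stack of CT DICOM slices using an isocontour
# (marching cubes) at an illustrative bone threshold of ~300 HU.
from pathlib import Path

import numpy as np
import pydicom                 # reads DICOM slices
from skimage import measure    # marching cubes isosurface extraction
import trimesh                 # mesh handling and STL export for 3D printing


def dicom_series_to_stl(dicom_dir: str, stl_path: str, threshold_hu: float = 300.0) -> None:
    # Load all slices and sort them along the scan (z) axis.
    slices = [pydicom.dcmread(str(p)) for p in Path(dicom_dir).glob("*.dcm")]
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))

    # Stack pixel data and convert raw values to Hounsfield units.
    volume = np.stack([s.pixel_array.astype(np.int16) for s in slices])
    volume = volume * float(slices[0].RescaleSlope) + float(slices[0].RescaleIntercept)

    # Physical voxel spacing (z, y, x) so the exported mesh has real-world size.
    dz = abs(float(slices[1].ImagePositionPatient[2]) - float(slices[0].ImagePositionPatient[2]))
    dy, dx = (float(v) for v in slices[0].PixelSpacing)

    # Extract the bone isosurface and export it as an STL for rapid prototyping.
    verts, faces, _, _ = measure.marching_cubes(volume, level=threshold_hu, spacing=(dz, dy, dx))
    trimesh.Trimesh(vertices=verts, faces=faces).export(stl_path)


# Hypothetical usage: dicom_series_to_stl("ct_shoulder/", "scapula.stl")
```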
Study design
The study was approved by the Ethics Committee of Pontificia Universidad Javeriana, Bogotá, School of Medicine, project No. 8259 (FM-CIE-0113-18). A total of 85 second-year undergraduate medical students (62 females (73%) and 23 males (27%), aged 19 to 25) were invited to take part in this study; 77 participated anonymously, representing 91% of the fourth-semester class, with a gender distribution of 71% females and 29% males. It was made clear that participation would not influence their grades and that they could withdraw from the study at any moment. Signed informed consent was obtained from all participants (Act No. FM-CIE-0010-23). At the time of the study, subjects had completed two hours of lecture on the anatomy of the shoulder, without any practical component. Subjects were randomly divided into groups, assuming students held similar anatomical knowledge. A first group answered the conventional workshop (n = 24, females: 21, males: 3), herein referred to as 2D. A second group participated in the 3D workshop (n = 28, females: 21, males: 7), and a third group, referred to as control, decided not to participate in either of the workshops (n = 25, females: 13, males: 12) but did take the 10-question quiz at the end of the study. The same written workshop on the anatomy of the shoulder bones was distributed to the two workshop groups (2D and 3D). Students could organize themselves into subgroups of no more than four students. The activity was first explained to both groups by one of the professors leading the study. Subjects were handed a printed workshop with black and white photographs of the scapula, clavicle and humerus and a table to fill in. Students had 90 min to provide answers regarding the name of the skeletal element, bone laterality, and the view of the bone shown in the photograph. In addition, the workshop contained a table with the names of 18 shoulder muscles; subjects had to determine bone markings, muscle origins and insertions, and function, according to a given color code. To this end, they could use anatomy atlases and information from the internet (Fig. ). For the study, the 2D and 3D groups were placed in two different classrooms: the 2D group performed the workshop in a lecture classroom, whereas the 3D group carried out the activity in an anatomy laboratory, where each subgroup worked at a table with scapula, clavicle and humerus plaster models and color markers. For the 3D workshop, in addition to writing the answers on paper, bone markings (projections and depressions) and muscle insertions were painted on the plaster bone models using a color code. At the end of the activity, all three groups (2D, 3D and control) had to answer a 10-question quiz under examination conditions covering anatomical markings, bone laterality, view of the bone, muscle insertion, and muscle function. They had 20 minutes to answer the quiz, without access to any of their learning material; the maximum achievable score was 5.0 and the minimum 0. After this activity, a focus group was conducted with the two workshop groups. A week later, a graded survey was conducted; for the 2D workshop group it consisted of six questions addressing how the workshop contributed to their understanding of anatomical landmarks, muscle insertions, and articular movements. In addition, they were asked to grade the overall experience, how it contributed to their learning process, and whether the quiz questions were related to what they had learned in the activity. For the 3D workshop group, two additional questions were included, asking whether bone manipulation and painting on the plaster bone models added value to their learning process. Moreover, to gather feedback from all participants, two open questions regarding the two main strengths and weaknesses of the activity were answered by all students who participated.
Data analysis
This was a descriptive cross-sectional study. Quiz results are presented as mean ± standard deviation (SD). Normality was assessed with a Shapiro-Wilk test, and an ANOVA was carried out to establish significant differences among groups (p < 0.05). Survey responses are presented as percentages on a six-point Likert scale (1: very poor, 2: poor, 3: fair, 4: good, 5: very good, 6: excellent).
Stata software (College Station, TX, USA) version 17.0 was used to analyze all data. Graphs were made with GraphPad version 8.0 (Boston, MA, USA).
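The analyses above were run in Stata 17; as a rough illustration of the same workflow (normality check, one-way ANOVA across the three groups, and Likert percentages), an equivalent computation in Python could be sketched as follows. The score and rating arrays are placeholders, not the study's raw data.

```python
# Illustrative sketch of the reported analysis workflow (the study used Stata 17);
# the arrays below are hypothetical placeholders, not data from the study.
import numpy as np
from scipy import stats

# Hypothetical quiz scores (0-5 scale) for the three groups.
control = np.array([1.5, 2.0, 1.0, 2.5, 1.8])
group_2d = np.array([2.2, 1.9, 2.5, 1.7, 2.1])
group_3d = np.array([2.4, 1.8, 2.6, 2.0, 1.7])

# Normality check per group (Shapiro-Wilk), then one-way ANOVA across groups.
for name, scores in [("control", control), ("2D", group_2d), ("3D", group_3d)]:
    _, p_norm = stats.shapiro(scores)
    print(f"{name}: mean={scores.mean():.2f} ± {scores.std(ddof=1):.2f}, Shapiro-Wilk p={p_norm:.3f}")

f_stat, p_anova = stats.f_oneway(control, group_2d, group_3d)
print(f"ANOVA: F={f_stat:.2f}, p={p_anova:.3f}  (significant if p < 0.05)")

# Likert responses summarized as the percentage of answers in each category (1-6).
likert = np.array([4, 5, 6, 6, 5, 4, 3, 6, 5, 6])  # hypothetical ratings
labels = {1: "very poor", 2: "poor", 3: "fair", 4: "good", 5: "very good", 6: "excellent"}
for level, label in labels.items():
    print(f"{label}: {100.0 * np.mean(likert == level):.1f}%")
```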
Survey results
Regarding the question on how the students perceived the workshop overall (Fig. ), 3D students graded the workshop as very good (35.7%) and excellent (53.6%), whereas the 2D group graded it as fair (34.8%) and good (30.4%). On how it contributed to the learning process (Fig. A), the 3D group rated it as good (21.4%), very good (42.8%) and excellent (28.6%); in contrast, the majority of the 2D group considered it fair (43.5%). Understanding anatomical landmarks (Fig. B) was graded by the 3D group as good (39.3%) and excellent (50.0%), compared with the poor (27.3%), fair (27.3%) and good (40.9%) grades given by the 2D workshop students. For the question regarding muscle insertions (Fig. C), 63% of the students in the 3D group gave it an excellent mark, whereas 8.7% of the students in the 2D group considered it very poor. Last, understanding joint movement was graded by 3D workshop students mostly as good (35.7%), whereas 52.2% of the 2D group considered it poor (Fig. D). Because of the nature of the study design, students in the 3D workshop group answered two additional questions: manipulating the bones and its impact on meaningful learning was graded as excellent (67.9%), and although it also received a high mark (excellent, 64.3%), painting on the bones seemed to have a lower importance in their learning process (Fig. ).
Focus group results
Based on the focus group, the 3D workshop students commented that it was helpful to see structures in 3D and establish associations, which is difficult when working with 2D images. Most found it useful for understanding bone laterality, view, and muscle origin and insertion. It reinforced their knowledge, as it allowed them to grasp how the bone is structured, strengthening spatial location. Compared with working from 2D images, such as an anatomy atlas or the lecture on the subject, the students reported that touching the structure is different from reading about it. The 3D workshop promoted learning through collaborative work between students, who complemented each other's knowledge. From the focus group, it was evident that the tactile models allowed for a three-dimensional appreciation of the bones, their landmarks and respective muscle insertions, contributing to spatial metacognition; this result was not as strong for the 2D workshop. In contrast, students in the 2D workshop said they had to rely more on the teacher's help, and the process was more related to memorization than to understanding. Printed photographs of the bone do not allow good identification of bone markings. For both workshops, an anatomy atlas was of great help. Moreover, bone articulation and clinical correlation were not sufficiently reinforced in this workshop, as evidenced by the results from the survey and the quiz.
Even though most students from the 3D group graded the workshop as very good or excellent, quiz results (Fig. ) did not reveal a significant difference among the control (1.82 ± 0.88), 2D (2.05 ± 0.82) and 3D (2.09 ± 0.94) groups. However, 21% of the 3D group obtained a passing grade, compared with 16% of the 2D group and 8% of the control group. The highest grade (4.0 out of a maximum of 5.0) was obtained by two subjects in the 3D group (7%). The best grade for the 2D group was 3.7, obtained by one person (4%). Last, for the control group the highest mark was 3.5, obtained by one student (4%). Moreover, for the 3D group, five of the 10 questions had a greater percentage of students selecting the correct answer: a question regarding the levator scapulae muscle insertion; identifying the view of the clavicle; an arrow pointing to a humerus landmark, asking for the function of the muscle fibers inserting on the lesser crest of the humerus; an arrow pointing to the radial groove, asking for the structure associated with it; and an arrow pointing to the anatomical neck of the humerus, asking to identify the structure. For the 2D group, only one question had a greater percentage of students answering correctly (identifying the muscle inserting on the scapular landmark circled in red). Last, the control group had four questions for which a greater percentage of students answered correctly, related to the main function of the muscle inserting in the subscapular fossa, identifying the neck of the scapula, identifying the structure and function indicated by the arrow pointing at the coronoid fossa, and identifying the crest of the lesser tubercle.
This study aimed to develop new learning resources using plaster bone models to understand the shoulder's skeletal anatomy, focusing on bone landmarks and muscle insertions and origins. To this end, customized, highly accurate plaster skeletal elements cast from 3D printed prototypes were used to assess their efficacy as an anatomical teaching aid. To evaluate what students had learned, a quiz was applied to all subjects at the end of the activity. Additionally, to collect views on the two approaches evaluated (conventional 2D vs. 3D), a focus group and a survey were conducted to determine the subjects' educational benefits and perceptions. At present, objective evaluation of the comparative efficacy of conventional teaching resources and novel pedagogical tools, such as 3D prototypes, remains scarce in the literature, since most studies are based on student perception, attitude and enjoyment.
The objective of the present study was to test the hypothesis that students actively participating in the 3D workshop would obtain a significantly higher grade than 2D workshop or control subjects. However, the results did not reveal significant differences (Fig. ). As observed in the 3D focus group, students described that it would have been more beneficial to have a lecture followed by a lab session to internalize the information. Furthermore, subjects recounted that this was their first time studying this subject; if they had had a review session, they would have benefited more from the activity. In addition, they attributed the low performance to the fact that they had had high-stakes examinations in the two weeks prior. In contrast, in studies where a post-test objectively evaluated the efficacy of a 3D model, test conditions were different: in the Preece et al. study, subjects had access to their teaching aids, and in the Bao et al. study, training took place three times a week, with each session lasting 40 min, for four continuous weeks. Accordingly, in both of the aforementioned studies, test scores were significantly higher for the subjects learning through a haptic experience. Last, in the Huang et al. study, the subjective questionnaire demonstrated that the 3D experience was considered the most valuable and enjoyable learning instrument, suggesting that this positive quality of 3D models can be employed in developing educational resources. In the present study, it was evident that a haptic experience involving painting on 3D plaster models of skeletal elements aided the learning process of the shoulder's anatomy by enhancing the students' anatomical spatial awareness. It is known that there has been limited development of activities that support visuospatial and metacognitive skills in anatomy. Therefore, with this innovative approach, the limitations of traditional methods, usually focused on a surface approach to learning such as memorization, might be overcome. Preece et al. suggested that 3D physical models have a significant advantage over textbooks and virtual reality by improving visuospatial understanding. Furthermore, appreciating complex spatial relationships in 3D increases visual skills. In their acetabular fracture study, Huang et al. described that by touching the anatomical landmarks and fracture lines on the 3D models, students could obtain spatial details of the morphology of the fracture that could not be acquired with the other methods evaluated; they concluded that 3D models are an efficient learning tool. Hence, haptic cues may be crucial in learning about complicated structures. In the present study, 3D group participants were able to identify bone landmarks by touching the structure. Students became aware of bone landmarks that may not otherwise be noticeable in a 2D format (photograph, drawing or virtual image). The hand-held interactive experience allows for active control, permitting visualization from multiple perspectives. As concluded by Wainman et al., a physical model is superior to a computer projection because of stereoscopic vision of the 3D structure. In addition, the 3D model improves understanding, because the haptic experience develops the ability to integrate information, as described in the acetabular study: "form a complete chain from vision to touch, from plane to stereo, and from intact to fracture".
To achieve a deep learning approach, the student must understand the structure and manipulate the object to make sense of the relations between elements. Hence, 3D plaster models of the shoulder skeleton were fabricated. Brumpt et al. carried out a systematic review describing the value of 3D printed anatomical models. From their work, they selected 68 articles, of which 47 were designed from CT scans and 51 mentioned bone printing; however, the shoulder was mentioned in only one study. In the study by Garas and colleagues, 23 undergraduate health sciences students were exposed to plastinated specimens, 3D-printed models and cadaveric specimens of the external heart, shoulder, and thigh, where the shoulder was plastinated. The students then had to take a test with nine questions on pinned structures they were asked to identify. Afterwards, they were given a post-test survey with five questions on a Likert scale. Collectively, from Garas' study it was concluded that 3D printing can be an asset in the process of learning anatomy. However, the level of understanding assessed was very basic and not comparable with the present study. Ye et al. carried out a systematic review and meta-analysis covering the last decade. They included studies using post-training tests in which 3D printed models of various systems, such as the nervous system and abdominal organs, were used. Regarding student satisfaction, five of the six study results were significantly higher for the 3D group in comparison with conventional groups. Likewise, concerning accuracy in answering questions, two studies showed that the 3D group was significantly better than the conventional group. Collectively, subjective information obtained from surveys can be as important as test scores. In the present work, students from the 3D group described how important it was to touch, feel the texture, see the structure and establish proportions in the plaster bone models. The students expressed the added value provided by manipulating the three skeletal elements to establish associations and anatomical relations. Additionally, anatomical information was not fully understood from textbook reading or from explanations in a lecture. One student described how the main difficulty was establishing dimensions; the 3D spatial view allowed them to understand proportions and locations. Additionally, painting bone landmarks and muscle insertions made it easier to recognize their locations. Wainman et al. described how a haptic experience manipulating a 3D model enhanced the learning process by providing additional sensory spatial relationships that cannot be acquired by learning from 2D images. To further this learning experience, painting was included in the 3D workshop, reinforcing the learning process. Other researchers have used 3D printing and painting to learn anatomy. McMenamin and collaborators reported high-resolution 3D prints with accurate color reproduction of prosections based on CT scan images; their article described in depth the process of creating the models, yet no evaluation with students was carried out. In the present study, the overall experience was rated as very good or excellent by almost 90% of the 3D model group members. In contrast, 65% of the students in the 2D group rated the activity primarily as fair or good, and none of them rated it as excellent.
Likewise, a study carried out by Pandya, Mistry and Owens described the use of videoconferencing and tactile learning with 3D models to assess differences in undergraduate students' attitudes toward tactile and non-tactile learning. In their results, students rated tactile learning as statistically superior (p = 0.017). Furthermore, Reid et al. described a study in which five students participated in a special module entitled "Drawing and Anatomy" at the University of Cape Town. Reid's study coupled haptic exploration of a skeletal element, such as a skull, with one hand, with drawing with the other hand. The students were then interviewed mid-way through the intervention. Collectively, the experience resulted in an increased comprehension of the 3D form and the detail of anatomical landmarks and cavities. Likewise, we obtained similar answers from the 3D focus group herein. Other experiences using painting to learn anatomy were evaluated by Shapiro et al. In their study, they employed haptic surface painting to support learner engagement and spatial awareness, and described that haptico-visual observation can support spatial, holistic anatomy learning. Haptic sensing involves perceiving a variety of object features, such as shape, size, weight, surface texture, compliance, and thermal characteristics. In this manner, haptically acquired somatosensory information is also subjected to detailed analysis. In our study, the students surveyed in the 3D model group perceived that the haptic activity favored their overall learning process, rating it primarily as very good or excellent (71.5% of their answers), while in the group that used only 2D images, more than 60% perceived the contribution of the activity to their learning process as poor or fair. The haptic experiences in this study support the argument that their implementation favors meaningful, autonomous and collaborative learning, characteristics that are sought in all academic activities in current medical education. The opportunity to work with the 3D plaster models and actively participate in painting on them demonstrated a substantial impact on the learning of the medical students, 90% of whose ratings fell in the very good and excellent categories. It is evident that the plaster bone models provided 3D metacognition of the structures, consolidating knowledge and making learning more motivating and satisfying. Achieving a comprehensive knowledge of bone markings, laterality, muscle insertions and joint movements demands that the learner correctly orient the structures involved in space. The group that worked with the 3D plaster bone models graded these aspects in the survey as very good and excellent (between 85 and 90%); however, joint movement was not properly developed in this workshop. These same categories were rated between fair and poor (50–70%) by the 2D group. Collectively, the haptic experiences in this study were shown to favor meaningful learning, characterized by an autonomous and collaborative approach. Although the results of this study were satisfactory, one of the limitations observed was the duration of the workshop, which lasted only 90 min. As with the Wainman et al. study, learning time was brief. It could be expected that an additional 90-minute laboratory might allow students to recognize bone articulation and movements, rather than identifying a bone landmark without understanding its function.
Even though one of the learning objectives of this activity was to recognize different components of the shoulder in diagnostic images and to establish associations between them, this was not achieved. Anatomical understanding must precede the interpretation of diagnostic images, which were the means of objectively evaluating the 3D tool in the Preece study. Therefore, radiological images should also be included in the workshop, to verify whether learned concepts can be applied in a clinical setting. Moreover, a pre-test should have been carried out to assess the level of anatomy knowledge of all participants. Last, evaluations of this nature should not be implemented after midterm examinations, as this might affect students' performance. A highly accurate 3D plaster model was custom made so that students could appreciate the bones' landmarks, identify muscle origins and insertions, and understand their function. Such tools contribute to the development of skills that allow students to face various future situations in clinical practice with greater proficiency and confidence. The results from our study demonstrated that a haptic experience increased motivation and satisfaction. Furthermore, painting a particular bone landmark required the student to combine the senses of touch and sight to establish spatial relationships, thus reinforcing the learning process. Additionally, in the 3D workshop, students actively participated in their autonomous learning process. Furthermore, teamwork helped them solve questions and complete tasks while learning new concepts. Hence, collaborative learning was stimulated, as evidenced by the 3D model focus group. However, this workshop must be complemented with activities that increase the understanding of muscle movement and bone articulation for better integration into clinical settings.
Below is the link to the electronic supplementary material. Supplementary Material 1 Supplementary Material 2 Supplementary Material 3 Supplementary Material 4
When
937a0697-4e7f-4ec8-85bb-45643c957395
11863445
Forensic Medicine[mh]
Escherichia coli (E. coli) is one of the most prevalent gram-negative species. The following three broad categories of E. coli strains are of biological significance to mammals: commensal, intestinal pathogenic (InPEC), and extraintestinal pathogenic (ExPEC). Although E. coli is a benign commensal colonizing the mammalian intestine, some strains or pathotypes can cause a variety of intestinal and diarrheal disorders. For example, a minimum of six pathotypes have been described: enterohemorrhagic, enteropathogenic, enterotoxigenic, enteroaggregative, diffusely adherent, and enteroinvasive E. coli. Moreover, ExPEC can cause diseases such as urinary tract infections, bacteremia, septicemia, and meningitis. It is unclear how E. coli genetic diversity, virulence, and antimicrobial resistance affect biodiversity and wild animal conservation. Wild animals may be exposed to antimicrobial compounds and antimicrobial-resistant bacteria through interaction with anthropogenic sources such as human waste (garbage and sewage) and polluted waterways, livestock activities, or predation on affected prey, including livestock carcasses. Giraffes (Giraffa camelopardalis) are the tallest living animals and are kept in many zoos worldwide. Despite a strong interest in keeping captive giraffes healthy, the health management of the giraffe presents a significant challenge. Although routinely bred in zoos, giraffes continue to pose a problem, particularly with regard to feeding, because of the high risk of maternal rejection and death among both mother-reared and hand-reared calves. Although success rates have increased over time, intensive care of compromised calves remains underdocumented. There are still no definitive feeding standards, predicted weight gains, or recommendations for veterinary assistance. In addition, little research has been conducted on diseases affecting giraffes, which are primarily associated with the hooves and musculoskeletal system. There are few reports of E. coli disease in young giraffes. ExPEC infections are a serious threat to public health worldwide; urinary tract infections, severe newborn meningitis, major intra-abdominal infections, and, less frequently, pneumonia, intravascular device infections, osteomyelitis, soft tissue infections, or bacteremia are the most troublesome illnesses. Bacteremia can result in sepsis, which is defined as life-threatening organ dysfunction caused by a dysregulated immune response to infection. In this study, we describe the case of a giraffe that developed septicemia after an umbilical cord infection caused by E. coli. This case study may serve as a valuable reference and caution for veterinarians in zoos.
Clinical history
A female giraffe calf's mother died of severe trauma approximately 5 h after delivery; hence, the juvenile giraffe could not receive colostrum and had to be artificially fed milk powder (Holstein milk + 10% colostrum). The juvenile giraffe was able to stand on its own 3 days after birth and was in good condition. However, on the eighth day after birth, the juvenile giraffe began to show clinical signs of loss of appetite, slow walking, and depression. Lactasin (Lactaid®, Johnson & Johnson Inc., Guelph, Canada; 3 caplets given with food) was administered orally twice a day for 4 days during the course of the disease, but the treatment was ineffective.
On the 12th day after birth, the juvenile giraffe showed anorexia, tarsal joint swelling of the right hind limb, claudication, unwillingness to move, and a small amount of dirty yellow loose stool around the anus; it eventually became recumbent and died on the 14th day after birth.
Necropsy
A postmortem examination was performed within 2 h of the animal's death. Gross examination revealed a dark, red, swollen umbilicus (Fig. A) and a small amount of dirty yellow sticky feces on the perianal coat. Serofibrinous arthritis and periarticular serous necrotizing inflammation were present: the hock joint of the hind limb was swollen, the nearby subcutaneous tissue contained light yellow gelatinous material due to inflammatory edema, and the local skin was attached to the subcutaneous tissue and muscle (Fig. B). A cystic necrotic focus had formed at the adhesion site, with a red inflammatory response zone at the margin and yellow necrotic tissue in the central area. A large amount of pale yellow translucent inflammatory fluid and yellow flocculent fibrinous exudate had accumulated in the joint cavities of the wrist, hock, and hip joints (Fig. C). Serous omphalitis with severe gelatinous swelling of the umbilical opening was obvious. The umbilical vein and bilateral umbilical arteries were markedly thickened, with black and red adventitia and gelatinous edema of the surrounding connective tissue. The umbilical arteries were filled with dirty dark red necrotic material, and the intima was rough (Fig. D). Severe serofibrinous pericarditis, pleuritis, and peritonitis were observed: a large amount of pale-yellow translucent fluid and yellow-white flocculent fibrinous exudate was present in the pericardial, thoracic, and abdominal cavities, with slight adhesion of the local serous membranes (Fig. E and F). The kidneys and liver were swollen and dark red, with moist and glossy surfaces, and the submucosa of the renal pelvis was thickened and showed yellowish gelatinous edema. The lungs were enlarged, dark red in color, and covered with flocculent fibrinous exudate, and the interlobular interstitium was generally widened and filled with yellow translucent gelatinous exudate (Fig. A). The transverse diameter of the heart was significantly widened, and the epicardium was covered with flocculent yellowish-white fibrinous exudate. Hyperemia and edema of the abomasal mucosa and intestinal pneumatosis were observed.
Histopathology
Serous interstitial pneumonia was observed: the lobular interstitium was significantly widened and filled with homogeneous pink-stained serous fluid (Fig. A), together with a small amount of fibrin, diffusely distributed neutrophils, scattered or clustered small blue-stained bacilli, and large numbers of neutrophils within lymphatic vessels at all levels (Fig. B). Pulmonary hyperemia with sporadic serous fluid, erythrocytes, and neutrophils was found in the alveolar and bronchial lumens near the lobular interstitium (Fig. C and D). Serous necrotizing umbilical arteritis was present, with hyperemia, edema, and marked thickening of the tunica adventitia of the umbilical artery, which was filled with homogeneous pink serous fluid, scattered or diffusely infiltrating neutrophils, and scattered or clustered small blue-stained bacilli (Fig. E and F). Necrosis of the tunica intima and part of the tunica media, with diffusely distributed neutrophils and blue-stained bacterial clusters of varying sizes, was observed; there was a large amount of serous fluid, necrotic neutrophils, and erythrocytes in the lumen of the artery (Fig. F).
Mild hepatic sclerosis was observed: the hepatic interstitial connective tissue was mildly proliferated and widened, with an increase in small bile ducts; hepatic edema with a prominent space of Disse, incomplete hepatic sinusoid walls, hemolysis, and hepatocytes separated from each other were seen. Mild steatosis and scattered necrosis of hepatocytes in the central area of the hepatic lobule were observed. Renal hyperemia and edema, mild to moderate swelling of the renal tubular epithelial cells, occasional necrosis of the tubular epithelium in some renal tubules, and an increased neutrophil content in the renal pelvis were observed. In the adrenal glands, hyperemia and edema, loose capsules with scattered infiltrating neutrophils, and separation of cells in the zona fasciculata were observed. In the lymph nodes, lymphocyte depletion, fewer lymphoid nodules with inconspicuous germinal centers, and diffuse hemorrhage of the medulla were observed. In the spleen, hyperemia and edema, significantly reduced lymphocytes, and white pulp lymphoid nodules with sparse lymphocytes were observed. Mild to moderate cellular swelling of cardiomyocytes was observed. Serous necrotizing enteritis was present: significant edema and thickening of the small intestinal wall, a large amount of serous fluid, diffusely infiltrating neutrophils, and a necrotic mucosal layer were observed in the small intestine. The marginal acinar epithelial cells of the thyroid gland were partially necrotic. Blue-stained bacterial clusters of varying sizes or diffuse blue-stained small bacilli were present in the interstitium and serous membranes of most tissues and organs, as well as in small blood vessels and lymphatic vessels (Fig. A). This was accompanied by scattered or diffusely infiltrating neutrophils, particularly in lymphatic vessels filled with neutrophils (lymphatic spread). The endothelial cells were severely separated from the media of the small vessels because of edema.
Bacterial isolation and molecular identification
Pleural fluid, pericardial exudate, ascites, joint fluid, lung, liver, and umbilical artery wall samples were aseptically collected with an inoculation loop, inoculated on MacConkey and eosin-methylene blue (EMB) media, and cultivated at 37 °C for 24 h. Many small pink colonies grew on the MacConkey medium, and the EMB medium grew many small, round, shiny black colonies characteristic of E. coli. Using an inoculation loop, a small amount of the organism was collected to prepare a smear; Gram staining revealed gram-negative small rods with the same morphology as E. coli (Fig. B). The 16S rRNA of the cultured bacteria was then sequenced. We selected ten colonies from each plate (70 colonies in total) for polymerase chain reaction (PCR) detection and sequencing. A general primer set (10Fx: 5′-AGAGTTTGATCCTGGCTCAG-3′; 1509R: 5′-GTTACCTTGTTACGACTTCAC-3′) was used to amplify the 16S rRNA from all the colonies isolated from the giraffe calf samples. The following amplification conditions were used: initial denaturation at 95 °C for 3 min; 30 cycles of denaturation (30 s at 94 °C), annealing (30 s at 55 °C), and extension (1.5 min at 72 °C); and a final extension at 72 °C for 5 min. The amplified PCR products were analyzed on 1.5% agarose gels, purified, and sequenced. The sequences were compared with those in the NCBI database by BLAST searches. The results indicated that all 70 colonies were E. coli.
They also revealed a nucleotide sequence similarity of 99.16–99.79% to strains from human feces (CCFM8332), Yuncheng Salt Lake (YC-LK-LKJ9), poultry droppings (AKP_87), marine environments (CSR-33, CSR-59), wetland (CH-8), and a wastewater treatment plant (WTPii241) (Fig. C). The phylogenetic group of the E. coli isolate was identified using the PCR-based method developed by Clermont et al., in which E. coli is classified into four main phylogenetic groups (A, B1, B2, and D) based on the presence of three markers (chuA, yjaA, and TSPE4.C2) in the DNA. Crude DNA was extracted from colonies by lysing them in sterile water at 100 °C for 15 min, followed by centrifugation. The lysis supernatant was used for the polymerase chain reaction, following the conditions outlined by Clermont et al. The primers utilized in this investigation are detailed in Supplementary Table 1. PCR analysis of the isolate indicated its classification within phylogenetic group B1 (Fig. A). A total of twenty-five virulence genes were screened, including PAI, papA, fimH, kpsMT III, papEF, ibeA, fyuA, bmaE, sfa/focDE, iutA, papG allele III, hlyA, rfc, nfaE, papG allele I, kpsMT II, papC, gafD, cvaC, focG, traT, papG allele II, afa/draBC, cnf1, and sfaS. Each virulence gene was amplified by PCR using specific primers, also detailed in Supplementary Table 1. Thermal cycling conditions included an initial denaturation at 94 °C for 2 min, followed by 35 cycles of 94 °C for 1 min, annealing at a gene-specific temperature for 1 min, and extension at 72 °C for 1 min, with a final extension at 72 °C for 2 min. In this strain, six virulence genes (PAI, iutA, papG allele III, cvaC, sfaS, and afa/draBC) associated with adhesion, toxicity, and environmental response were identified (Fig. B). The E. coli strain was tested for antibiotic susceptibility using a disc diffusion method with 16 antibiotics, following CLSI guidelines. The resistance profile of the strain to the antibiotics tested is outlined in the Table, with all susceptibility results interpreted according to the CLSI guidelines. The strain exhibited resistance to ceftazidime, ceftriaxone, ciprofloxacin, levofloxacin, amoxicillin, and azithromycin, while demonstrating susceptibility to penicillin, oxacillin, lincomycin, clindamycin, ampicillin, and cotrimoxazole.
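For readers less familiar with disc diffusion testing, the interpretation step amounts to comparing each measured inhibition-zone diameter against drug-specific breakpoints. The sketch below illustrates only the logic; the breakpoint numbers and zone measurements are placeholders, not values taken from the CLSI tables or from this case, and real interpretation must use the current CLSI M100 breakpoints for Enterobacterales.

```python
# Illustrative sketch of interpreting disc diffusion zone diameters as
# susceptible (S), intermediate (I), or resistant (R). The breakpoints below are
# placeholder values for illustration only; actual interpretation requires the
# drug- and organism-specific tables in the current CLSI M100 document.
from typing import Dict, Tuple

# (resistant_max_mm, susceptible_min_mm) per drug -- placeholder numbers.
BREAKPOINTS: Dict[str, Tuple[int, int]] = {
    "ceftriaxone":   (19, 23),
    "ciprofloxacin": (21, 26),
    "azithromycin":  (12, 13),
}

def interpret(drug: str, zone_mm: float) -> str:
    """Classify one zone diameter for one drug as R, I, or S."""
    r_max, s_min = BREAKPOINTS[drug]
    if zone_mm <= r_max:
        return "R"
    if zone_mm >= s_min:
        return "S"
    return "I"

# Hypothetical zone measurements (mm), not the study's data.
for drug, zone in {"ceftriaxone": 14, "ciprofloxacin": 17, "azithromycin": 25}.items():
    print(f"{drug}: {zone} mm -> {interpret(drug, zone)}")
```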
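The triplex-PCR phylogrouping scheme of Clermont et al. (2000) used above can be restated compactly as a decision tree over the three markers. The sketch below is an illustration of that published key, not code from this study; the example call mirrors a chuA-negative, TspE4.C2-positive profile, which maps to group B1 as reported for this isolate.

```python
# Sketch of the Clermont et al. (2000) triplex-PCR decision tree that assigns an
# E. coli isolate to phylogroup A, B1, B2, or D from the presence (True) or
# absence (False) of the chuA, yjaA, and TspE4.C2 markers.
def clermont_phylogroup(chuA: bool, yjaA: bool, tspE4_C2: bool) -> str:
    if chuA:
        # chuA-positive isolates split on yjaA.
        return "B2" if yjaA else "D"
    # chuA-negative isolates split on TspE4.C2.
    return "B1" if tspE4_C2 else "A"

# chuA negative, TspE4.C2 positive -> group B1 (consistent with the giraffe isolate).
print(clermont_phylogroup(chuA=False, yjaA=False, tspE4_C2=True))  # -> "B1"
```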
coli was classified into four main phylogenetic groups (A, B1, B2, and D) based on the presence of three markers (chuA, yjaA, and TSPE4.C2) in their DNA. Crude DNA was extracted from colonies by lysing them in sterile water at 100 °C for 15 min, followed by centrifugation. The lysis supernatant was utilized for the polymerase chain reaction, following the conditions outlined by Clermont et al. . The primers utilized in this investigation are detailed in Supplementary Table 1. PCR analysis of the isolate indicated its classification within phylogenetic group B1 (Fig. A). A total of twenty-five virulence genes were identified, including PAI, pap A, fm H, kps MT III, pap EF, ibe A, fyu A, bma E, sfa / foc DE, iut A, pap G allele III, hly A, rfc , nfa E, pap G allele I, kps MT II, pap C, gaf D, cva C, foc G, tra T, pap G allele I, pap G allele II, afa / dra BC, cnf 1, and sfas . Each virulence gene was amplified using specific primers in PCR. The primers utilized in this investigation are detailed in Supplementary Table 1. Thermal cycling conditions included an initial denaturation cycle at 94 °C for 2 min, followed by 35 cycles at 94 °C for 1 min, annealing at a specific temperature for 1 min, and extension at 72 °C for 1 min, with a final cycle at 72 °C for 2 min. In this strain, 6 virulence genes (PAI, iut A, pap G allele III, cva C, sfas , afa / dra BC) associated with adhesion, toxicity, and environmental response were identified (Fig. B). E. coli strains were tested for antibiotic susceptibility using CLSI guidelines and a disc diffusion method with 16 antibiotics . The resistance profiles of the E. coli strains to the antibiotics tested are outlined in Table , with interpretation of all susceptibility results based on the CLSI guidelines . The strains exhibited resistance to ceftazidime, ceftriaxone, ciprofloxacin, levofloxacin, amoxicillin, and azithromycin, while demonstrating susceptibility to penicillin, oxacillin, lincomycin, clindamycin, ampicillin, and cotrimoxazole. Among neonatal hand-reared giraffes, failure of passive transfer of immunity (FPI) continues to be a problem . The cotyledonary placentas in giraffes transfer negligible antibodies. Therefore, newborns rely on colostrum consumption and the absorption of maternal antibodies across the intestines during the first 24–48 h after birth . FPI increases the risk of diarrhea, enteritis, septicemia, arthritis, omphalitis, and pneumonia in domestic ungulates . Passive immunity transfer during the newborn’s first week is crucial for the successful rearing of ruminant neonates. To ensure optimal and steady growth, milk replacers must have a composition similar to that of giraffe milk. Bovine milk and colostrum have been effectively utilized and advised for hand-rearing giraffes despite the lower fat and protein contents of cow’s milk and milk substitutes than that of giraffe milk . Until the regular consumption of solid food, milk should be consumed daily in amounts of 7–10% of the body weight (19,000–25,000 kcal/day) . A hand-fed giraffe calf (which did not receive colostrum) died of septicemia caused by E. coli in the present study. Septic arthritis and phlegmon are caused by trauma or systemic infection. No trauma was recorded in this giraffe pup. Therefore, systemic infection may have contributed to the septic polyarthritis and/or phlegmon observed in this study. 
Enteritis, pneumonia, and funisitis are common sources of infection in giraffe calves; enteritis and pneumonia were not recorded in giraffe calves before the development of arthritis . Furthermore, the lack of immunocompetence might have put the calves at a risk of the infection spreading systemically through the umbilical cord. Septic polyarthritis and/or phlegmon may be caused by systemic infection. A PCR and sequence analysis confirmed that E. coli was the cause of bacteremia in the present case. E. coli colonizes newborn pups’ gastrointestinal tract shortly after birth and typically coexists with its host without causing disease. However, certain strains with specific virulence attributes can cause a range of illnesses in immunocompromised hosts or when gastrointestinal barriers are compromised. Extraintestinal pathogenic E. coli (ExPEC) are characterized primarily by their site of isolation, with the most clinically significant groups being uropathogenic E. coli (UPEC), neonatal meningitis-associated E. coli (NMEC), avian pathogenic E. coli (APEC), and septicemic E. coli (SEPEC) . ExPEC strains have the ability to cause infections in various extraintestinal locations. In the present case, the ExPEC strain resulted in pneumonia, umbilical arteritis, hepatitis, nephritis, hemorrhagic lymphadenitis, necrotizing enteritis, and necrotizing thyroiditis in the baby giraffe. There is no doubt that this is a direct result of E. coli bacteremia. In order to initiate bacteremia, the ExPEC strain must successfully infiltrate initial sites of infection or colonization, disseminate throughout the bloodstream, and persist within the blood. Nevertheless, the ExPEC strain has the capability to access the bloodstream through various pathways. Bacteremia lacking a discernible origin is classified as primary, while secondary bacteremia may result from dissemination originating from an existing infection, such as pneumonia or urinary tract infections, or from contaminated medical equipment . In this case, however, the bacteremia was likely a result of an umbilical cord infection. Improper handling of the umbilical cord presents a potential risk of infection, as it serves as a significant entry point for pathogens in newborns. Therefore, it is strongly advised that veterinarians adhere to proper disinfection, sterilization, isolation, and other cleaning protocols to ensure optimal umbilical cord hygiene when handling neonates. ExPEC uses various factors to cause disease in animals, including adhesins, invasins, protectins, iron acquisition systems, and toxins . These factors help ExPEC adhere, invade, evade the immune system, colonize, proliferate, and spread throughout the body, leading to infection in animals . Other bacterial factors such as secretion systems, quorum sensing systems, transcriptional regulators, and two-component systems also play a role in ExPEC pathogenesis . In this study, the virulotyping revealed that the E. coli strain was positive for PAI, iut A, pap G allele III, cva C, sfa s, and afa / dra BC. Adhesins are bacterial components that help them stick to other cells or surfaces, increasing their virulence. Specific adhesins are adapted to colonize different environments. Virulence genes linked to adhesion include pap G allele III, sfas , and afa / dra BC. Iron is a crucial micronutrient necessary for the growth and proliferation of bacteria within the host following successful colonization and/or invasion. 
Among the most significant virulence plasmids associated with ExPEC virulence are ColV and ColBM, particularly those containing the aerobactin operon ( iut A/ iuc ABCD). This operon codes for high-affinity iron-transport systems that enable bacteria to acquire iron in low-iron environments, such as those found in host fluids and tissues. Our isolates carrying virulence genes were found to possess the iut A gene, which facilitates survival in low iron conditions. Antibiotics are commonly utilized for the prevention and treatment of ExPEC infections. However, the widespread use of antibiotics has been linked to the development of multidrug-resistant bacteria. The high levels of antibiotic resistance observed in ExPEC strains present a significant risk to human health, as antibiotic-resistant bacteria and genes can be transmitted through the food chain. Previous research has shown that ExPEC isolates exhibit resistance to multiple antibiotics , underscoring the importance of conducting antibiotic susceptibility testing to identify the most effective treatment option. In this particular instance, the E. coli strain exhibited broad-spectrum beta-lactamase production. β-Lactam antibiotics, particularly 3rd generation cephalosporins, are commonly prescribed for the treatment of serious community-onset or hospital-acquired infections caused by E. coli . Regrettably, β-lactamase production in E. coli continues to be a significant factor in the development of resistance to β-lactam antibiotics . β-lactamases are bacterial enzymes that render β-lactam antibiotics ineffective through hydrolysis. This study presents findings on septic polyarthritis and/or septicemia in juvenile giraffes, potentially attributed to insufficient colostrum intake and E. coli infection via the umbilical cord. Furthermore, the study elucidates the diverse array of virulence factors exhibited by the E. coli strain and underscores the pathogenic significance of these pathogens in animal health. Continued research is warranted to identify additional virulence factors and elucidate the pathogenic mechanisms, ultimately aiding in the development of an effective diagnosis and treatment strategy for managing giraffe colibacillosis. Supplementary Material 1. Supplementary Material 2. Supplementary Material 3.
null
e814789b-ddb6-4c6e-9a90-40082890bdd3
10819464
Pharmacology[mh]
Trollius chinensis Bunge, a perennial herb of the Ranunculaceae family, falls under the genus Trollius . Widely distributed in Northern China, T. chinensis is recognized for its high ornamental and medicinal value . Its dried flowers, known as Flos Trollii, serve as the medicinal component . There are more than 20 identified species in the genus Trollius . They are distributed mainly in the temperate and arctic regions of Asia, Europe, and North America, of which 16 are in China . It usually grows in peatlands, swamps, wet meadows, and banks of reservoirs, as well as in mountain areas up to the alpine zone . T. chinensis , with its significant ornamental and health-related compounds, is highly esteemed for applications in the food, medicine, and cosmetic industries . Traditionally, the Chinese have employed T. chinensis for medicinal and tea purposes, dating back to the Qing Dynasty and recorded in Supplements to the Compendium of Materia Medica (Qing Dynasty) as “bitter in taste, cold in nature, non-toxic, mainly used for heat-clearing and detoxifying” . It holds a prominent place in pharmacies, is frequently referenced in medical literature, and is listed in the Chinese Pharmacopoeia (Edition 2020) with five Chinese patent medicines. Pharmacological tests have substantiated T. chinensis ’s anti-inflammatory, anti-oxidant, anti-bacterial, and anti-viral properties, correlating closely with its chemical composition . To date, more than 100 compounds have been isolated from Trollius species; phytochemical investigations have demonstrated flavonoids, organic acids, coumarins, alkaloids, terpenoids, and prenylflavonoids as the main constituents of T. chinensis , with diverse biological activities . For instance, the flavonoid metabolites Orientin and poncirin found in T. chinensis exhibited significant antiviral activity against parainfluenza type 3 (Para 3) . Additionally, researchers have identified seventeen new labdane diterpenoid glycosides A–Q (1–17) in the dried flowers of T. chinensis , possessing therapeutic, antiviral, and antibacterial properties, establishing T. chinensis as a common anti-inflammatory drug and health tea . The flowers have traditional uses in treating respiratory infections, pharyngitis, tonsillitis, and bronchitis in Chinese medicine . The exploration of T. chinensis holds immense potential for novel medication research and therapeutic advancements . This review article aims to provide comprehensive information and highlight the potential values associated with the development of T. chinensis . Relevant literature was obtained from scientific databases such as TCMSP ( https://old.tcmsp-e.com/tcmsp.php , accessed on 21 April 2023), Pubchem ( https://pubchem.ncbi.nlm.nih.gov , accessed on 23 April 2023), Scientific Database of China Plant Species ( http://db.kib.ac.cn , accessed on 10 April 2023), Google Scholar ( https://xs.scqylaw.com , accessed on 5 April 2023), PubMed ( https://pubmed.ncbi.nlm.nih.gov , accessed on 5 April 2023), Baidu Scholar ( https://xueshu.baidu.com , accessed on 3 April 2023), Vip site (China Science and Technology Journal Database) ( http://www.cqvip.com , accessed on 3 April 2023), and CNKI site (Chinese National Knowledge Infrastructure) ( https://www.cnki.net , accessed on 3 April 2023).
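The databases named above are described in more detail in the next paragraphs. As a purely illustrative sketch of how a single constituent can be looked up programmatically against PubChem's public PUG REST interface, the snippet below queries example compound names (orientin and vitexin) for a few basic properties; the choice of compounds and properties is an assumption made here for illustration and does not reproduce the actual search workflow used in this review.

```python
# Illustrative sketch (not part of the original review): querying PubChem's
# public PUG REST interface for named constituents of T. chinensis.
# The compound names and the requested property list are example choices.
import requests

def pubchem_properties(name: str) -> dict:
    """Fetch basic properties for a compound name from PubChem PUG REST."""
    url = (
        "https://pubchem.ncbi.nlm.nih.gov/rest/pug/compound/name/"
        f"{name}/property/MolecularFormula,MolecularWeight,CanonicalSMILES/JSON"
    )
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    # The JSON payload nests results under PropertyTable -> Properties.
    return response.json()["PropertyTable"]["Properties"][0]

if __name__ == "__main__":
    for compound in ("orientin", "vitexin"):
        print(compound, pubchem_properties(compound))
```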
PubChem is the world’s most extensive collection of publicly available chemical data. Chemicals can be found using their names, structures, molecular formulas, and other identifiers, and the database provides information on biological activity, safety and toxicity, chemical and physical properties, patents, literature citations, and more. The PubChem database is made up of three sub-databases: Compound, Substance, and BioAssay. TCMSP includes 499 Chinese herbal medicines, with a total of 29,384 ingredients, 3311 targets, and 837 related diseases. It is a unique systems pharmacology platform for Chinese herbal medicines in which the relationships between drugs, targets, and diseases can be explored. This database platform provides information that includes the identification of active ingredients, compounds, and drug-target networks . The Database of China Plant Species is jointly constructed by the Kunming Institute of Botany, Chinese Academy of Sciences (KIB), the Institute of Botany, Chinese Academy of Sciences (IBS), the Wuhan Botanical Garden, Chinese Academy of Sciences (WBG), and the South China Botanical Garden, Chinese Academy of Sciences (SCBG). It covers more than 31,000 species of higher plants in more than 3400 genera and more than 300 families, and the data content mainly includes standard names of plant species, basic information, systematic taxonomic information, ecological information, physiological and biochemical characteristic descriptions, habitat and distribution information, and literature information. TCMSP, Pubchem, and the Web of Huayuan were used to find the chemical composition of T. chinensis . Most of the active components were obtained by searching for T. chinensis in TCMSP. Then, PubChem and the Web of Huayuan were used to obtain and validate information related to the chemical structure of the organic small molecules contained in the herb and their biological activities. The Web of China Plant Species Information Database was the primary source for the botanical collection of the genus Trollius . All the sites listed above are publicly accessible databases. Additional websites that gather literature on the development of T. chinensis research were also used in compiling this article. Diverse studies have been published in recent years; therefore, a comprehensive review is necessary. This paper reviewed the research progress of T. chinensis from six aspects, including botany, materia medica, ethnopharmacological use, phytochemistry, pharmacology, and quality control, using keywords for chemical constituents such as flavonoids and phenolic acids, for anti-inflammatory and antimicrobial effects, and for related terms such as pharmacological effects. We reviewed 350 related papers. This paper draws on over 120 articles on T. chinensis and documents some of the literature on chemical composition and pharmacological studies conducted from 1991 to 2023. Based on the search results from the Chinese herbal medicine series of the Chinese herbal medicine resource dictionary , Flora of China ( https://www.plantplus.cn/foc , accessed on 10 April 2023), Scientific Database of China Plant Species (DCP) ( http://db.kib.ac.cn , accessed on 10 April 2023), and other websites, and complemented by an extensive array of references, the genus Trollius comprises 26 species, as detailed in . T. chinensis , a perennial herb of medicinal significance, features dried flowers as its medicinal components .
The geographical distribution of T. chinensis mainly spans Asia, Europe, the temperate zones of North America, and the Arctic region. In China, it is located in Tibet, Yunnan, Sichuan, Qinghai, Xinjiang, Gansu, Shaanxi, Shanxi, Henan, Hebei, Liaoning, Jilin, Heilongjiang, Inner Mongolia, and Taiwan . Additionally, it is prevalent in Russia (Far East, Siberia, and Central Asia), North Korea, Inner Mongolia, Sakhalin Island (Sakhalin Island), Nepal, and Northern Europe . Thriving in light and moist conditions, T. chinensis flourishes best in deep, preferably heavy, and consistently moist soil, exhibiting resilience in full sun or partial shade. Typically growing at elevations between 1000 and 2000 m, it is frequently observed at approximately 1400 m in habitats with ample water and optimal light conditions, such as peatlands, marshes, wet meadows, reservoir banks, mountainous areas, and alpine areas . T. chinensis plants are glabrous, boasting columns reaching up to 70 cm in height . The stems, numbering 1–3, range from 3.5–100 cm tall, either unbranched or branched above the middle, with occasional basal or distal branching and sparse foliage featuring 2–4 leaves. Basal leaves, numbering 1–4, measure 16–36 cm in length and are characterized by long stalks, occasionally accompanied by 1–3 rosette leaves. The leaf blade is pentagonal, with dimensions of 3.8–12.5 cm, exhibiting a cordate, trilobated base; the petiole, measuring 12–30 cm, has a narrowly sheathed base. Cauline leaves mirror basal leaves, with lower leaves possessing long stalks and upper leaves being smaller, short-stalked, or sessile. The pedicel, mostly grey-green, extends 5–9 cm in length. Flowers appear solitarily terminal or in 2–3 cymes, with a diameter ranging from 3.8–5.5 cm. Sepals, numbering 6–19, measure 1.6–2.8 cm and exhibit varying colors among species, including pale purple, pale blue, white, golden yellow, yellow, or orange-yellow. The leaf blade is not green when dried and is isobovate or elliptic-obovate in shape. Petals, numbering 18–21, are narrowly linear, slightly longer than sepals or subequal to sepals apically attenuate, measuring 1.8–2.2 cm in length and 1.2–1.5 mm in width. Stamens, numerous and spirally arranged, range from 0.5–1.1 cm in length. Carpels, numbering 20–30, are sessile, and follicles are 1–1.2 cm in length and approximately 3 mm in width. Seeds are subobovoid, around 1–1.5 mm in length, black, and glossy. Flowering June–July, fruiting August–September . T. chinensis has various nicknames. T. chinensis was recorded in the Annals of Shan Xi Traditional Chinese Medicine as Golden Pimple. It has been recorded in Wild Plants of Shan Xi under Asian T. chinensis. Tropaeolum majus, T. chinensis was recorded as a Supplement to the Compendium of Materia Medica (Thirty Years of Qianlong, 1765) by Shanxi Tong Zhi. Liao’s History is also recorded in the Annals of Wu Tai Mountain and the Sea of Humanity under Nasturtium. In Liao’s History Ying Wei Zhi, T. chinensis is recorded as T. chinensis , and The Book of Pictorial Guide of Chinese Plants calls it a globeflower . T. chinensis was initially recognized as an ornamental plant. It was not until the Qing Dynasty that the medicinal value of T. chinensis was widely developed. The Record of Ennin’s Diary: The Record of a Pilgrimage to China in Search of the Law mentions that T. chinensis blooms in June and July . After that, in the Yuan Dynasty, the poet Zhou Boqi used T. 
chinensis as the title of the Book of the Squire of Shangdu Poems, left heroic verses with the objects, and recorded the characteristics of the flowers of T. chinensis in the notes of the Book of the Squire of Shangdu Poems. In the Qing Dynasty, the origin of T. chinensis was recorded in the Shanxi Tong Zhi. In the Annals of Mount Wu Tai, under the name of nasturtium, T. chinensis was associated with miracles to record articles. The Widely Manual of Aromatic Plants describes the golden yellow color of the flower, seven petals, and two layers; the heart of the flower is also yellow; there are several flowers on one stem; and so on, describing in detail the flowering period, flowering characteristics, and other botanical characteristics of T. chinensis . It appeared as a companion botanical drug to licorice in the description of licorice in the Bencao ZhengYao (Ming Dynasty, AD 1368–1644) but was not included in the book in its entirety . T. chinensis ’s medicinal functions were first recorded in Supplements to the Compendium of Materia Medica (Thirty Years of Qianlong, 1765) . Modern character descriptions and fluorescence identification of T. chinensis have been included in the Chinese Pharmacopeia (1977 edition). T. chinensis is a traditional Mongolian medicine and not a widely used medicinal herb. Initially, its sources of medicinal herbs were mainly wild, and due to the lack of commercial supply, fewer applications, regional herbs, and relatively limited clinical applications and research, as well as the cold nature of T. chinensis , some potential safety and efficacy issues, and other factors, it has not been included in the Pharmacopoeia of China since 1985 . The Chinese Pharmacopoeia (2020 edition) includes only five proprietary Chinese medicines: Jinlianhua Tablets, Jinlianhua Runhou Tablets, Jinlianhua Mixture, Jinlianhua Capsules, and Jinlianhua Granules . 5.1. Traditional Uses T. chinensis serves as both a traditional Chinese medicine and a frequently used ethnomedicine. The herb can improve heat clearance, detoxification, alleviation of oral/throat soreness, earache, eye pain, cold-induced fever, and vision improvement . Furthermore, it can effectively treat boils, poisons, and winds. The Shanhai Caozhuan briefly mentions T. chinensis as a remedy for boils, poisons, and all kinds of winds. Flowers are used in the Hebei Handbook of Traditional Chinese Medicine (1970) for chronic tonsillitis. T. chinensis is combined with Juhua and Guanaco, doubled in acute cases, or added with Yazhicao in equal parts. To treat acute otitis media, acute conjunctivitis, and other inflammatory diseases of the upper focus, T. chinensis and Juhua are each taken with three qian, and raw Gancao with one qian. Zhaobing Nan Fang records combining Nanshashen and Beishashen with 12 g of T. chinensis to promote yin and diminish fire, reducing spleen and kidney yin deficiency and inflammation caused by fire inadequacy. It is noted in the Manual of Chinese Herbal Medicine Commonly Used in Guangxi Folklore: Book I that T. chinensis has been utilized for alleviating eye inflammation and pain. Furthermore, T. chinensis , along with Wushuige and Mufurong, are recommended for treating malignant sores via compressing and pounding the affected site . 5.2. Current Use In 2003, the Administration of Traditional Chinese Medicine of China announced a prescription for preventing atypical pneumonia. The prescription, T. chinensis Tang, combined six botanical drugs, including T. 
chinensis , to clear away heat, detoxify toxins, disperse wind, and penetrate evil spirits. This prescription had a significant effect on atypical pneumonia and is now commonly used to prevent and treat “plague”, such as the new coronavirus . Its principal effects and clinical use for acute and chronic tonsillitis and other inflammatory conditions are recorded in the National Compendium of Chinese Herbal Medicine (1975). The pharmacological effects of T. chinensis are summarized in the Dictionary of Traditional Chinese Medicine (2006). To cope with the contemporary and rapidly changing lifestyle, the utilization of T. chinensis medicinal decoctions has diminished compared with previous times. Instead, they are now commonly consumed as patented medications—for example, Jinlianhua soft capsules and health products . Moreover, the petals and stamens of T. chinensis are widely employed as a flavoring agent in culinary contexts, imparting a distinctive taste to salads, desserts, and beverages. Moreover, it can be used as a coloring agent, food additive, and dyeing agent . It is also valued as an antioxidant component in cosmetics, including T. chinensis Pure Lotion and T. chinensis Spray. The ethnopharmacological uses of T. chinensis are shown in .
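For readers unfamiliar with the traditional dosage unit "qian" cited in the remedies above, a rough conversion to grams can be helpful. The factor used below (1 qian taken as about 3 g, roughly 3.125 g by the older weight standard) reflects common modern TCM convention and is an assumption added here for illustration, not a value stated in the sources quoted in this review.

```python
# Rough conversion of the traditional dosage unit "qian" mentioned above.
# Assumption: 1 qian is taken as about 3 g (common modern TCM convention;
# roughly 3.125 g by the older weight standard). Illustrative only.
QIAN_IN_GRAMS = 3.0

def qian_to_grams(qian: float) -> float:
    """Convert a dose expressed in qian to grams under the assumed factor."""
    return qian * QIAN_IN_GRAMS

# Example: "three qian" each of T. chinensis and Juhua, "one qian" of raw Gancao.
print(qian_to_grams(3))  # about 9 g
print(qian_to_grams(1))  # about 3 g
```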
According to the search results of TCMSP ( old.tcmsp-e.com/tcmsp.php , accessed on 21 April 2023), the Huayuan website ( www.chemsrc.com , accessed on 23 April 2023), PubChem ( https://pubchem.ncbi.nlm.nih.gov , accessed on 23 April 2023), and other websites combined with much of the literature review, the main components of T. chinensis include flavonoids, fatty acids, alkaloids, sterols, coumarins, tannins, and polysaccharides. 6.1. Flavonoids Flavonoids stand out as the predominant bioactive metabolites within Trollius chinensis flowers. Numerous studies have substantiated the manifold advantageous biological properties of flavonoids, encompassing anti-oxidation, anti-inflammatory, anti-viral, and anti-tumor characteristics . The flavonoids in T. chinensis consist primarily of flavone C-glycoside, flavone O-glycoside, dihydroflavone, and flavonols. Notably, flavone C-glycosides, predominantly hexose glycosides, exhibit unique stability due to a direct connection between the sugar group and the flavonoid parent nucleus via a c-c bond , forming a remarkably stable glycoside structure. The majority of flavone C-glycosides are situated at the flavone C-glycoside C-6 or C-8 positions, with a few occurring at the a-ring C-3 or C-4 positions. In T. chinensis , the flavone C-glycoside is positioned at the flavonoid A-ring C-8 positions . Polyphenols, mainly flavonoids, including Orientin, Vitexin, and isoflavin, are highly abundant among T. chinensis and are responsible for antiviral, antimicrobial, and antioxidant activities. The flavone C-glycoside includes Orientin, Vitexin, and isodoxanthin. Notably, Orientin, Vitexin, and Orientin -2″- O -β- l -galactoside emerge as the most abundant flavonoids in T. chinensis . Vitexin and Orientin glycosyl exhibit robust inhibitory effects against influenza virus, Staphylococcus aureus , and epidermis . In addition to flavone C-glycosides, flavone O-glycosides, such as Quercetin and Isoquercetin, are also discernible in T. chinensis . Noteworthy is the enhanced stability and reduced hydrolysis susceptibility of flavonoid carbosides like Orientin . The therapeutic potential of these constituents extends to the treatment of age-related macular degeneration, cancer, cardiovascular disease, and skin repair following UV damage. Refer to and for further details. 6.2. Organic Acids The concentration of phenolic acids in T. chinensis surpasses only that of flavonoids. Specifically, Veratric acid stands out with a notably high concentration of 0.86–0.91 mg.g −1 . Intriguingly, a distinct study revealed that the bioavailability of phenolic acid constituents in T. chinensis surpassed that of its flavonoid counterparts . Organic acids in T. chinensis encompass both phenolic and fatty acids. Phenolic acids predominantly constitute derivatives of benzoic acid, further classified into two categories.
The first category lacks a free hydroxyl group, including Veratric acid, benzonic acid, methyl veratrate, globeflower acid, etc. The second category possesses free hydroxyl groups, including vanillic acid, methyl-p-hydroxybenzoate, p-hydroxybenzonic acid, etc. . T. chinensis houses a repertoire of 21 fatty acids, with saturated fatty acids as the primary components, and a total of 21 elements, constituting 57.95% of the detected substances. Palmitic acid and tetradecanoic acid exhibit relatively substantial content within saturated fatty acids. Additionally, nine types of unsaturated fatty acids comprise 30.35% of the total, featuring oleic acid, linoleic acid, palmitoleic acid, 3-(4-hydroxy-3-methoxybenzene) -2-acrylic acid, 3-(4-hydroxy-benzene) -2-acrylic acid, 4-phenyl-2-butenic acid, 3-phenyl-2-acrylic acid, (E) -11-eicosanoic acid, and (Z, Z, Z) -9, 12, 15-octadecanotrioleic acid . Of significant note, three crucial phenolic acids—proglobeflowery acid (PA), globeflowery acid (GA), and trolloside (TS)—have been isolated from the flowers of T. chinensis . Pharmacological investigations have underscored their diverse biological activities, strongly correlated with the flower’s efficacy in treating respiratory infections, tonsillitis, bronchitis, and pharyngitis . Refer to and for detailed insights. 6.3. Alkaloids Alkaloids, a prominent category of nitrogenous phytochemicals widely distributed in medicinal plants, stand out as crucial constituents in T. chinensis . The exploration of T. chinensis alkaloids remains limited, with only five of these compounds identified thus far. The principal pyrrolidine alkaloids include Senecionine and Integerrimine, the isoquinoline Trolline and Indole (R)-nitrile-methyl-3-hydroxy-oxyindole), and adenine . Notably, Trolline emerges as the most abundant among these five ingredients . Investigations indicate that T. chinensis flowers possess the highest total alkaloid content, while roots and branches exhibit the lowest concentrations. Among them, Trolline, an isoquinoline first discovered in T. chinensis , demonstrates significant antiviral and antibacterial activities. Refer to and for detailed data. 6.4. Other Chemical Components In addition to the aforementioned three primary active components, the flowers contain trace amounts of sterols, coumarins, tannins, and polysaccharides. Although these components exist in relatively low concentrations, their pharmacological effects are manifold, holding substantial potential for development. T. chinensis polysaccharides consist of neutral and acidic monosaccharides, predominantly comprising mannose (Man), rhamnose (Rha), galacturonic acid (GalA), glucose (Glu), galactose (Gal), arabinose (Ara), and fucose (Fuc) . T. chinensis also harbors compounds like xantho-phyll-Epoxyde (C 40 H 56 O 3 ) and trollixanthin (C 40 H 56 O 3 ). The yellow pigment in T. chinensis , characterized as a fat-soluble pigment, exhibits remarkable stability under neutral and acidic conditions . An undescribed phenolic glycoside, phenol A, isolated from T. chinensis flowers via spectroscopic methods, has revealed both its structural composition and pharmacological actions, including anti-inflammatory and antibacterial properties . Furthermore, T. chinensis encompasses eight trace elements: Fe, Mg, Cu, Zn, Mn, Cr, Pb, and As. Research indicates minimal variations in Ca and Fe levels across T. chinensis from different regions, while more pronounced differences exist in Mn, Cu, and Zn levels . For a comprehensive overview, consult and . 
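As a small consistency check on the empirical formula C 40 H 56 O 3 quoted above for xanthophyll epoxide and trollixanthin, the molar mass implied by standard atomic weights works out to roughly 585 g/mol. The short sketch below performs that arithmetic; it is an illustration added here, not a calculation reported in the cited studies.

```python
# Molar mass implied by the empirical formula C40H56O3 quoted above for
# trollixanthin / xanthophyll epoxide. Standard atomic weights; illustrative only.
ATOMIC_WEIGHTS = {"C": 12.011, "H": 1.008, "O": 15.999}

def molar_mass(formula_counts: dict[str, int]) -> float:
    """Sum of atomic weight times atom count over the formula."""
    return sum(ATOMIC_WEIGHTS[element] * count for element, count in formula_counts.items())

print(round(molar_mass({"C": 40, "H": 56, "O": 3}), 1))  # about 584.9 g/mol
```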
7.1. Antiviral Effect A study exploring the antiviral properties of T. chinensis revealed that its five active components—Vitexin, Orientin, Trolline, Veratric acid, and Vitexin-2″- O -β- l -galorientin—exert their effects by modulating Toll-like receptors (a critical class of protein molecules associated with non-specific immunity/natural immunity). Specifically, the T. chinensis soft capsule demonstrated in vitro inhibition of human coronavirus OC43 replication, accomplished through the regulation of TLRs to suppress elevated expression of host cell cytokines such as IL-1B, IL-6, and IFN-a mRNA induced by viral infection. These findings substantiate the inhibitory mechanism of the T. chinensis soft capsule against the virus .
In a molecular docking analysis, 26 active components of T. chinensis, including Rutin, Luteolin-7- O -glucoside, Kaempferol, Genistin, Apigenin, Scutellarin, Orientin, Daidzin, Vitexin, 3′-Hydroxy Puerarin, Puerarin, Daidzein, 3′-Methoxypuerarin, 2″- O -Beta- l -Galactoside, Rosmarinic acid, Progloboflowery acid, Caffeic acid, Protocatechuic acid, Ferulic acid, Veratric acid, Indirubin E, Oleracein E, Trollioside, Carbenoside I, 2″- O -(2‴-methyl butanol)isodangyloxanthin, 2″- O -(2‴-methylbutyryl) Vitexin, and glucose veratrate, were observed to bind to the Mpro protein (the hydrolase of the 2019-nCoV novel coronavirus) primarily through hydrogen bonds. This Mpro-binding activity affirms the potential of T. chinensis against novel coronaviruses . By influencing pivotal anti-inflammatory and immunomodulatory targets and pathways, such as tumor necrosis factor-α (TNF-α), HIF-1, and Toll-like receptor (TLR) signaling, T. chinensis exhibits anti-influenza virus effects, particularly against influenza A . The antiviral action of T. chinensis has been scrutinized through network pharmacology. While network pharmacology analyses offer valuable insights into pharmacological research, their reliance on network interactions between biomolecules and extensive databases introduces challenges related to data quality and reliability. Furthermore, the intricate nature of biological systems, limited experimental data, and the evolving understanding of drugs and targets require cautious consideration of credibility, necessitating further validation through pharmacological experiments . Chicken embryos served as the medium for influenza virus cultivation, with the inhibitory effect of T. chinensis alcohol extract on viral proliferation in chicken embryo allantoic fluid evaluated through a chicken erythrocyte agglutination test. The results substantiated the direct inactivation of the influenza A virus by T. chinensis alcohol extract in vitro. In a parallel experiment involving influenza A virus inoculation into chicken embryos, the T. chinensis alcohol extract effectively curbed the proliferation of the virus within the embryos . In a mouse model infected with influenza A (H1N1) virus, the study categorized the subjects into the control group, TGC group ( T. chinensis crude extract gavage group), VI1~3 groups (virus infection model 1~3 groups), and VI + TGC 1~3 groups (treatment 1~3 groups), each comprising 10 mice. Notably, the aqueous extract of T. chinensis exhibited the potential to enhance the antiviral ability of mice. Subsequent comparative analyses validated the initial findings, establishing that aqueous extracts of T. chinensis augmented antiviral capacity in mice, whereas alcoholic extracts of T. chinensis directly inactivated the influenza A virus . Furthermore, the aqueous extract of T. chinensis demonstrated potent inhibitory activity against the Cox B3 virus, achieving an inhibitory concentration of 0.318 mg/mL. The total flavonoids in this study displayed varying inhibitory activity against the respiratory syncytial virus, influenza A virus, and parainfluenza virus, with inhibitory concentrations of 20.8 μg/mL and 11.7 μg/mL for Vitexin and Orientin, respectively . Notably, 60% ethanolic extracts of T. chinensis and total flavonoids exhibited weak effects, with Protopanaxanthic acid among the organic acids demonstrating the weakest antiviral ability. While T.
chinensis showed effectiveness against the influenza A virus, its impact on the influenza B virus was not significant . Comparative assessments revealed that the alcoholic extract of T. chinensis soup displayed greater antiviral effects than the aqueous decoction of T. chinensis soup. Additionally, higher-purity T. chinensis soup extract exhibited a more robust inhibitory effect on the influenza virus: 80% T. chinensis soup extract and secondary 95% T. chinensis soup extract demonstrated superior antiviral effects compared with 60% T. chinensis soup extract . A study delved into the material basis of the UPLC-DAD-TOF/MS fingerprinting profile (ultra-performance liquid chromatography-tandem diode array detector-time-of-flight mass spectrometry) of T. chinensis , establishing its potential as the active agent against EV71 (enterovirus 71). The key active ingredients of T. chinensis in combating EV71 included Guaijaverin acid, an unidentified alkaloid, P-hydroxybenzene-malic acid, and 2″- O -acetyl Orientin . In the broader context, T. chinensis flowers emerged as a valuable contributor to the anti-influenza virus activity of the overall formula, exhibiting relatively few side effects. The synergistic effect of T. chinensis , particularly in formulations like T. chinensis soup, has proven effective as a treatment for influenza virus . In summary, the findings indicate that the antiviral mechanism of T. chinensis predominantly revolves around impeding the virus-receptor binding process and restraining the cytokine/chemokine response. The unrefined flower extract derived from T. chinensis shields the host from inflammatory damage by intervening in the TLRs, encompassing TLR3, TLR4, and TLR7. This intervention leads to a reduction in the secretion of inflammatory factors, ultimately manifesting antiviral effects . 7.2. Antioxidant Effect The varied pharmacological impacts of Orientin in T. chinensis , particularly its potent antioxidant effect, surpass those attributed to Vitexin. This discrepancy may be attributed to the structural disparity between Orientin and Vitexin: the antioxidant activity of flavonoids with an o-diphenol hydroxyl group on the B-ring is notably more robust than that of flavonoids possessing a single phenolic hydroxyl group on the B-ring . To assess the antioxidant capacity of Orientin and Vitexin in T. chinensis concerning D-galactose-induced subacute senescence in mice, D-galactose was administered intraperitoneally . The experimental outcomes revealed that Orientin and Vitexin effectively elevated total antioxidant capacity (T-AOC), superoxide dismutase (SOD), and glutathione peroxidase (GSH-Px) activities, as well as Na + -K + -ATPase and Ca 2+ -Mg 2+ -ATPase activities, in the kidney, liver, and brain tissues of senescent mice. Notably, Orientin demonstrated superior efficacy over Vitexin in augmenting T-AOC activity within the organism . Enhanced Na + -K + -ATPase activity mitigates impaired sodium ion transport and the associated metabolic disorders , whereas elevated intracellular Ca 2+ levels adversely impact the cytoskeleton and membrane structure of neuronal cells, diminishing their stability and increasing membrane permeability, thereby contributing to the senescence process .
In contrast, the glycosides of Orientin and Vitexin pruriens act as antioxidants by positively modulating the activity of membrane transporter enzymes within tissue cells. Remarkably, Orientin exhibited greater efficacy than Vitexin in enhancing the activity of these tissue cell membrane transporter enzymes . The robust antioxidant potential of Orientin, exceeding that of poncirin and further surpassing total flavonoids, has been corroborated in various studies. Both Orientin and Vitexin demonstrate the ability to scavenge superoxide anion, hydroxyl radical, and DPPH radical, effectively safeguarding the erythrocyte membrane. Specifically, Orientin displayed notable scavenging efficacy within the concentration range of 2.0–12.0 μg/mL. In contrast, Vitexin exhibited hydroxyl radical scavenging within the concentration range of 0–1.0 μg/mL, achieving maximum scavenging efficiency at 1.0 μg/mL, followed by a decline in scavenging effect with increasing Vitexin concentration . The pharmacological mechanism underlying the antioxidant action of T. chinensis encompasses several key facets: (1) Scavenging of free radicals: The active constituents in T. chinensis , particularly flavonoids, exhibit potent free radical scavenging capabilities. This capacity enables the neutralization of free radicals both inside and outside the cell, thereby mitigating oxidative stress-induced damage . (2) Stimulation of antioxidant enzyme activity: the active ingredients in T. chinensis stimulate the activity of antioxidant enzymes by stimulating the intracellular antioxidant enzymes such as superoxide dismutase, glutathione peroxidase, etc. . This stimulation enhances the efficacy of the antioxidant system, fortifying cells against oxidative damage. In conclusion, T. chinensis safeguards cells from oxidative damage through the dual mechanisms of scavenging free radicals and enhancing antioxidant enzyme activity. These combined actions underscore the efficacy of T. chinensis as a potent antioxidant therapeutic agent. 7.3. Anti-Inflammatory Effect The anti-inflammatory prowess of T. chinensis primarily targets the upper segment of the triple energizer, encompassing the area above the diaphragm within the human body. This region predominantly involves organs such as the stomach and throat, extending through the diaphragm and chest, including the heart, lungs, viscera, head, and face. Both the aqueous extract and 95% ethanol extracts of T. chinensis manifest robust anti-inflammatory activities. Notably, within the repertoire of compounds contained in T. chinensis , flavonoids such as Robinin, Quercetin, Vitexin, and Orientin exhibit heightened anti-inflammatory efficacy. Particularly, Vitexin and Orientin, due to their anti-inflammatory and soothing properties, along with peptide anti-histamine attributes, are deemed suitable for managing acute allergic skin conditions such as rash and eczema, as well as respiratory allergic diseases . Current domestic research on T. chinensis underscores its potential in treating upper respiratory tract infectious diseases, including nasal mucosal diseases, by deploying an anti-inflammatory mechanism that engages multiple metabolites, targets, and pathways. Among the identified core targets, TNF and mitogen-activated protein kinase 1(MAPK1) take precedence, with the cancer factor pathway emerging as a pivotal route . Additionally, Toll-like receptors 3, 4, and 7 (TLR3/4/7) have been proposed as promising common anti-inflammatory targets for T. chinensis constituents. 
This includes Vitexin, Orientin, Trolline, Veratric acid, and Vitexin-2″- O -galactoside, as discerned through the integration of network pharmacology and molecular docking techniques . Respiratory inflammation, arising from diverse pathogens, microbial infections, influenza, nitrative stress, and compromised immune systems, can be effectively addressed by T. chinensis . Its therapeutic spectrum extends beyond treating nasal mucosa inflammation to positively impacting upper respiratory infections. Leveraging data mining, an enrichment analysis of the top 20 pathways linked to the targets and metabolites of T. chinensis in upper respiratory tract infection treatment identified quercetin as a highly probable active compound. This conclusion was derived from the “metabolite-target-signaling pathway” network analysis . Moreover, T. chinensis preparations exhibit therapeutic potential against upper respiratory tract infections by reducing serum inflammatory factors in patients, including IL-8, IL-6, TNF-alpha, C-reactive protein, and procalcitonin, along with modulation of T-cell subpopulation ratios . Additionally, Orientin-2″- O -β- l -galactoside and Veratric acid have been identified for their anti-inflammatory effects . In the clinical realm, the combination of amoxicillin sodium and potassium clavulanate has demonstrated the potential to reduce treatment duration and enhance therapeutic efficacy in children with acute tonsillitis . In summary, T. chinensis harbors a repertoire of anti-inflammatory compounds, including Vitexin, Orientin, Trolline, Veratric acid, and Vitexin-2″- O -galactoside. Notably, Quercetin may also contribute significantly to its anti-inflammatory activity . Specifically, Orientin demonstrates efficacy in attenuating LPS-induced inflammation by impeding the production of inflammatory mediators and suppressing the expression of Cyclooxygenase 2 (COX-2) and Inducible nitric oxide synthase (iNOS) . Vitexin-2″- O -galactoside exhibits substantial inhibitory effects on lipopolysaccharide (LPS)-induced inflammation, as evidenced by its impact on key factors such as tumor necrosis factor-α (TNF-α), interleukin-1β (IL-1β), iNOS, and COX-2 expression. Additionally, it mitigates the production of reactive oxygen species and inhibits the NF-κB and extracellular signal-regulated kinase (ERK) signaling pathways, leading to anti-neuroinflammatory activity. However, the pharmacological mechanisms underlying the anti-inflammatory effects of the other components remain elusive. 7.4. Antitumour Flavonoids derived from T. chinensis exhibit notable inhibitory effects on active cancer cells. Specifically, the total flavonoids from T. chinensis demonstrate the capacity to impede the proliferation of tumor cells by activating the mitochondrial pathway . T. chinensis extracts manifest robust inhibitory influences on various cancer cell lines, including leukemia K562 cells, HeLa cells, esophageal cancer Ec-109 cells, lung cancer NCI-H446 cells, human non-small cell lung cancer A549 cells, and human carcinoma HT-29, MCF-7, and HepG2 cells, among others . Moreover, the total flavonoid extract of T. chinensis significantly retards the growth and proliferation of MCF-7 cells.
This involvement is characterized by the activation of caspase-3 and caspase-9, leading to induced cell apoptosis within a concentration range of 0.0991 to 1.5856 mg/mL . Non-alcoholic fatty liver disease (NAFLD) stands as a clinical pathologic syndrome , with its incidence in China reaching a significant 29.2%, demonstrating an annual increase . The complex interplay of metabolic disorders, such as dyslipidemia, hypertension, hyperglycemia, and persistent abnormalities in liver function tests, is closely associated with NAFLD . Elevated lipid levels induce expression changes in HepG2 cells (hepatoma cells) . In an investigation into the impact of total flavonoids from T. chinensis on HepG2 cell function induced by high sugar levels, it was observed that oxidative stress levels in hepatocytes and the metabolic balance of reactive oxygen species (ROS) in HepG2 cells were intricately linked to intracellular fat accumulation. The study conclusively demonstrated that total flavonoids from T. chinensis exhibit a specific therapeutic effect on HepG2 cells by influencing disease-associated processes. Tissue cultures were employed to compare the effects of high glucose concentrations and varying doses of total flavonoids from T. chinensis on HepG2 cells. The proliferative tendencies of lipid substances are directly correlated with ROS levels; higher lipid accumulation corresponds to elevated ROS levels. Elevated glucose concentrations intensified ROS levels, while total flavonoids from T. chinensis effectively attenuated ROS levels, thereby influencing HepG2 cells. In vitro, total flavonoids from T. chinensis demonstrated a capacity to reduce lipid substance accumulation, presenting a promising avenue for the improved treatment of NAFLD . The ethanol extract derived from the total flavonoids of T. chinensis has been observed to induce apoptosis in HT-2 cells through the endogenous mitochondrial pathway. In addition, specific constituents of T. chinensis, namely Orientin and Vitexin, have demonstrated inhibitory effects on human esophageal cancer EC-109 cells. The apoptotic induction of EC-109 cells by both Orientin and Vitexin was found to correlate with increased drug action time and elevated drug concentrations. Significantly, Orientin surpassed Vitexin in effectively inhibiting the growth and apoptosis of EC-109 cells . At the administration dose of 80 μM, Orientin demonstrated a more potent apoptotic effect on EC-109 cells compared with Vitexin at the same concentration, registering apoptotic rates of 28.03% and 12.38%, respectively, within the concentration range of 0.91 to 1.5856 mg/mL. Elucidating the pharmacological mechanism underlying Orientin’s action, specifically in the context of esophageal cancer cells (EC-109), involves the up-regulation of P53 expression and concomitant down-regulation of Bcl-2 expression. This dual modulation positions Orientin as a prospective therapeutic agent for esophageal cancer. Utilizing the total flavonoids of T. chinensis as a model drug, our exploration delved into the molecular-level relationship and mechanism of these flavonoids, shedding light on their antitumor activity. A pertinent discovery was that Orientin affected HeLa, augmenting the Bax/Bcl-2 protein ratio. This manifested as an increase in Bax protein levels coupled with a decrease in Bcl-2 protein levels, thereby triggering apoptotic protease activation. Consequently, this inhibition of HeLa cell proliferation underscores the therapeutic potential of Orientin in cervical cancer treatment. 
While the notable anti-tumor activity of T. chinensis extract is evident, the specific mechanistic intricacies remain elusive. Putatively, this metabolite’s impact on the signaling pathways within tumor cells plays a pivotal role. T. chinensis is observed to down-regulate the anti-apoptotic genes Bcl-2 and Bcl-xL while concurrently up-regulating pro-apoptotic genes such as Bax, caspase-9 , and caspase-3 at the mRNA level. A concomitant suppression of COX-2 gene expression in tumor cells is also linked to the inhibition of proliferation of diverse tumor cell lines. The inhibitory effect extends to HT-29 human colon cancer cells, with T. chinensis flavonoids proving efficacious in restraining cell proliferation. The concentration-dependent inhibition of human non-small cell lung cancer A549 cells, the induction of apoptosis in lung cancer A549 cells, and the anti-lung cancer role demonstrated by these flavonoids underscore their potential therapeutic relevance. Moreover, the ability of T. chinensis flavonoids to impede the progression of K562 cells, retaining them in the G0/G1 phase, elucidates their protective role against leukemia. Additionally, beyond the total flavonoid components, the total saponins of T. chinensis showcase robust antitumor activity, albeit without significant advantages over other pharmaceutical agents .
7.5. Antibacterial Effect
T. chinensis manifests broad-spectrum bacteriostatic activity against both Gram-positive cocci and Gram-negative bacilli, including Pseudomonas aeruginosa, Staphylococcus aureus , Diplococcus pneumoniae, and Shigella dysenteriae. The pivotal antibacterial constituents of T. chinensis are its flavonoids, notably Orientin and Vitexin . In vitro assessments utilized the Minimum Inhibitory Concentration (MIC) and Minimum Bactericidal Concentration (MBC) as benchmarks for analyzing Escherichia coli , Salmonella , Staphylococcus aureus , Bacillus subtilis , Streptococcus mutans , Streptomyces , Rhodotorula , Aspergillus niger , and Candida albicans . The 30% ethanolic extract of T. chinensis exhibited notable antibacterial efficacy, particularly inhibiting Streptococcus mutans, suggesting a potential therapeutic avenue for dental caries. T. chinensis total flavonoids, along with Orientin and Vitexin, exhibited notable inhibitory effects on Gram-positive cocci while demonstrating no discernible impact on Gram-negative bacilli and fungi. Their most pronounced inhibitory activity was observed against Staphylococcus aureus , with the order of inhibitory strength being Orientin = Total flavonoids > Vitexin. Specifically, the lowest inhibitory and bactericidal concentrations were determined to be 0.15625 mg·mL −1 and 0.625 mg·mL −1 for Orientin and total flavonoids, respectively. Additionally, these components demonstrated considerable inhibitory activity against Streptococcus mutans, with the antibacterial efficacy ranking as Orientin > Total flavonoids > Vitexin. Notably, the lowest inhibitory concentration and bactericidal concentration of Orientin were 0.15625 mg·mL −1 and 0.625 mg·mL −1 , surpassing the efficacy of Vitexin . In investigations exploring the bacteriostatic activity of various T. chinensis preparations, the Staphylococcus aureus suspension was clarified at concentrations of 225 mg/mL for Jinlianhua tablets, 56.25 mg/mL for Jinlianhua capsules (Jiaonang), 450 mg/mL for Jinlianhua granules, and 56.25 mg/mL for the T. chinensis oral solution. For Bacillus subtilis , clarification occurred at concentrations of 56.25 mg/mL for Jinlianhua tablets, 14.0625 mg/mL for T.
chinensis capsule, 225 mg/mL for T. chinensis granules, and 28.125 mg/mL for T. chinensis oral solution. Notably, the T. chinensis oral solution displayed no inhibitory effect against Escherichia coli . These experiments revealed that susceptibility to the four T. chinensis preparations followed the order Bacillus subtilis > Staphylococcus aureus > Escherichia coli , and that the minimum inhibitory concentrations (MICs) against Staphylococcus aureus and Bacillus subtilis varied among preparations, ranked from strongest to weakest as Jinlianhua capsules, Jinlianhua mixture, Jinlianhua tablets, and Jinlianhua granules . In the in vitro bacteriostatic efficacy assessment, the total flavonoids extracted from T. chinensis exhibited robust inhibitory effects against common pathogenic organisms, including Staphylococcus epidermidis , Staphylococcus aureus , Escherichia coli , Streptococcus viridans , Salmonella paratyphi A , and Salmonella paratyphi B . Notably, the total flavonoids demonstrated considerable protective effects in Staphylococcus aureus -infected mice, showcasing a dose-dependent reduction in the 48-h mortality of the infected mice . The yellow pigment of T. chinensis , composed of xanthophyll epoxide and trollixanthin, also displayed bacteriostatic properties, with varying degrees of inhibition against Staphylococcus aureus , Bacillus subtilis , and Escherichia coli , showing increased activity with escalating concentrations. Tecomin, a glucose ester of Veratric acid, exhibited effective inhibition against Staphylococcus aureus and Pseudomonas aeruginosa, with MICs of 0.256 and 0.128 mg/mL, respectively . Proglobeflowery acid has emerged as an effective treatment for Pseudomonas aeruginosa-induced inflammatory skin reactions. Inhibitory effects were observed for proglobeflowery acid, Vitexin, and Orientin against Bacillus subtilis , Staphylococcus epidermidis , Staphylococcus aureus , and Micrococcus luteus . T. chinensis total flavonoids, Vitexin, Orientin, and proglobeflowery acid displayed inhibitory effects on Staphylococcus aureus and Staphylococcus epidermidis , with MICs of 50 and 25 μg/mL, 100 and 25 μg/mL, 25 and 25 μg/mL, and 200 and 200 μg/mL, respectively. For Micrococcus luteus and Bacillus subtilis , the MICs were higher than 200 μg/mL . In this investigation, the T. chinensis extract and its three metabolites exhibited potent inhibitory effects on the four Gram-positive cocci. The total flavonoids and Vitexin, which are present at the highest content, demonstrated strong inhibition against Staphylococcus aureus and Staphylococcus epidermidis , with Orientin being especially potent, while proglobeflowery acid (PA) demonstrated relatively weak inhibition against these two bacteria . The study further revealed that PA had robust inhibitory action against Pseudomonas aeruginosa and Staphylococcus aureus , with MIC values of 16 and 200 mg/L, respectively. Additionally, PA exhibited modest antiviral activity (IC50 of 184.2 μg/mL) against Para 3. Conversely, GA displayed significant antiviral efficacy against influenza A, as evidenced by its IC50 value of 42.1 μg/mL. With a MIC of 128 mg/L, TS demonstrated moderate inhibitory activity against Streptococcus pneumoniae . The antibacterial pharmacological mechanism underlying the action of T. chinensis predominantly revolves around impeding regular bacterial growth processes, reflected in elevated extracellular nucleic acid and soluble protein levels in treated bacteria.
The underlying damage to the cell membrane alters membrane permeability, inducing the efflux of vital metabolic substances crucial for cellular viability or the influx of the detrimental drug solution. Such interactions significantly impact bacterial growth, thereby realizing the intended inhibitory effects. The drug concentration exhibits a positive correlation with both the rate of inhibition of bacterial growth and the rate of inhibition of biofilm formation .
7.6. Others
The main active components of T. chinensis , the total flavonoids, also have analgesic and antipyretic effects. Studies have shown that the flavonoids can significantly reduce fever induced by endotoxin (ET, the lipopolysaccharide component of the Gram-negative bacterial cell wall; ET-induced fever is a standard model for screening antipyretic drugs and exploring antipyretic mechanisms). Total flavonoids can also reduce the contents of the endogenous pyrogens TNF-α and IL-1β in the serum of febrile rabbits and then inhibit the production and release of PGE2 in the cerebrospinal fluid of rabbits by inhibiting the ET-induced production or release of TNF-α and IL-1β, thereby reducing fever, increasing heat loss, and restoring body temperature to normal. Reducing the production of endogenous pyrogens such as IL-1 and TNF-α is the pharmacological basis of the antipyretic effect of the total flavonoids . One experiment was divided into two parts. To investigate the anti-inflammatory effect of T. chinensis , animals were divided into a blank group, a positive control group, low- and high-dose aqueous stem-and-leaf extract groups, and low- and high-dose alcoholic stem-and-leaf extract groups. To verify the analgesic effect of T. chinensis , a parallel set of animals was divided into a blank group (distilled water, 20 mL/kg), a positive control group (100 mg/kg), low-dose (12 g/kg) and high-dose (24 g/kg) aqueous extract groups, and low-dose (12 g/kg) and high-dose (24 g/kg) alcoholic extract groups. The extracts of T. chinensis stem and leaf showed anti-inflammatory and analgesic effects . One study further investigated the antitussive, anti-inflammatory, and analgesic effects of T. chinensis . The study showed that all dose groups of the total flavonoid extract of T. chinensis exerted a significant antitussive effect; with increasing total flavonoid extract dose, the cough latency in mice was prolonged and the number of coughs was reduced, and the greater the tracheal phenol red excretion, the more pronounced the antitussive and expectorant effects. The antitussive and expectorant effects were more evident in the high-dose group of the total flavone extract of T. chinensis than with a patent medicine cough syrup. In addition, the high-dose group treated with the whole flavonoid extract of T. chinensis showed significantly reduced xylene-induced ear swelling, fewer pain reactions, and an improved hot plate pain threshold. Moreover, statistical analysis showed that the total flavonoid extract of this drug had effects similar to those of nonsteroidal anti-inflammatory drugs commonly used in clinics. It was confirmed that the total flavone extract had notable anti-inflammatory and analgesic effects and that the total flavones of T. chinensis were also helpful in myocardial ischemia-reperfusion injury.
Experimental studies have shown that its mechanism of action is to enhance the activities of superoxide dismutase (SOD) and glutathione peroxidase (GSH-Px), reduce the content of malondialdehyde (MDA), reduce the area of myocardial infarction, inhibit the release of myocardial enzymes, and reduce the apoptosis of myocardial cells, thereby playing a corresponding therapeutic and relieving role . In addition, Orientin and Vitexin in T. chinensis could improve membrane transport in d-galactose-induced aging mice, which may be helpful for clinical applications in treating acute respiratory distress syndrome . A study exploring the antiviral properties of T. chinensis revealed that its five active components (Vitexin, Orientin, Trolline, Veratric acid, and Vitexin-2″- O -β- l -galactoside) exert their effects by modulating Toll-like receptors (a critical class of protein molecules associated with non-specific, innate immunity). Specifically, the T. chinensis soft capsule demonstrated in vitro inhibition of human coronavirus OC43 replication, accomplished through the regulation of TLRs to suppress the elevated expression of host cell cytokines such as IL-1β, IL-6, and IFN-α mRNA induced by viral infection. These findings substantiate the inhibitory mechanism of the T. chinensis soft capsule against the virus . In an examination of 26 active components, including Rutin, Luteolin-7- O -glucoside, Kaempferol, Genistin, Apigenin, Scutellarin, Orientin, Daidzin, Vitexin, 3′-Hydroxy Puerarin, Puerarin, Daidzein, 3′-Methoxypuerarin, 2″- O -Beta- l -Galactoside, Rosmarinic acid, Proglobeflowery acid, Caffeic acid, Protocatechuic acid, Ferulic acid, Veratric acid, Indirubin E, Oleracein E, Trollioside, Carbenoside I, 2″- O -(2‴-methyl butanol)isodangyloxanthin, 2″- O -(2‴-methylbutyryl) Vitexin, and glucose veratrate, these constituents of T. chinensis were observed to bind to the Mpro protein (the main protease of the 2019-nCoV novel coronavirus) primarily through hydrogen bonds. This binding showcased Mpro protein-binding activity, affirming the potential of T. chinensis against novel coronaviruses . By influencing pivotal anti-inflammatory and immunomodulatory targets and acting on multiple inflammatory and immunomodulatory pathways such as tumor necrosis factor-α (TNF-α), HIF-1, and Toll-like receptor (TLR) signaling, T. chinensis exhibits anti-influenza viral effects, particularly against influenza A . The antiviral action of T. chinensis has been scrutinized through network pharmacology. While network pharmacological analyses offer valuable insights into pharmacological research, their reliance on network interactions between biomolecules and extensive databases introduces challenges related to data quality and reliability. Furthermore, the intricate nature of biological systems, limited experimental data, and the evolving understanding of drugs and targets require cautious consideration of credibility, necessitating further validation through pharmacological experiments . Chicken embryos served as the medium for influenza virus cultivation, with the inhibitory effect of T. chinensis alcohol extract on viral proliferation in chicken embryo allantoic fluid evaluated through a chicken erythrocyte agglutination test. The results substantiated the direct inactivation of the influenza A virus by T. chinensis alcohol extract in vitro. In a parallel experiment involving influenza A virus inoculation into chicken embryos, the T. chinensis alcohol extract effectively curbed the proliferation of the virus within the embryos .
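The “metabolite-target-pathway” and molecular docking analyses described above typically assemble a compound-target network and rank putative core targets by their connectivity before any docking or wet-lab validation. The sketch below is only a schematic illustration of that ranking step; the edge list is invented for demonstration, whereas real studies draw targets from curated databases and confirm them experimentally.

    import networkx as nx

    # Hypothetical compound-target edges (illustrative only, not curated data)
    edges = [
        ("Orientin", "TNF"), ("Orientin", "MAPK1"), ("Orientin", "PTGS2"),
        ("Vitexin", "TNF"), ("Vitexin", "NOS2"),
        ("Quercetin", "TNF"), ("Quercetin", "MAPK1"), ("Quercetin", "IL6"),
        ("Trolline", "TLR4"), ("Veratric acid", "TLR4"),
    ]
    G = nx.Graph()
    G.add_edges_from(edges)

    targets = {t for _, t in edges}
    # Rank candidate core targets by degree, i.e., how many compounds act on them
    for target in sorted(targets, key=G.degree, reverse=True):
        print(target, G.degree(target))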
In a mouse model infected with influenza A (H1N1) virus, the study categorized the subjects into the control group, TGC group ( T. chinensis crude extract gavage group), VI1~3 groups (virus infection model 1~3 groups), and VI + TGC 1~3 groups (treatment 1~3 groups), each comprising 10 mice. Notably, the aqueous extract of T. chinensis exhibited the potential to enhance the antiviral ability of mice. Subsequent comparative analyses validated the initial findings, establishing that aqueous extracts of T. chinensis augmented antiviral capacity in mice. Conversely, alcoholic extracts of T. chinensis directly deactivated the influenza A virus . Furthermore, the aqueous extract of T. chinensis demonstrated potent inhibitory activity against the Cox B3 virus, achieving an inhibitory concentration of 0.318 mg/mL. The total flavonoids in this study displayed varying inhibitory activity against the respiratory syncytial virus, influenza A virus, and parainfluenza virus, with inhibitory concentrations of the viruses being 20.8 μg/mL and 11.7 μg/mL for Vitexin and Orientin, respectively . Notably, 60% ethanolic extracts of T. chinensis and total flavonoids exhibited weak effects, with Protopanaxanthic acid among the organic acids demonstrating the weakest antiviral ability. While T. chinensis showed effectiveness against the influenza A virus, its impact on the influenza B virus was not significant . Comparative assessments revealed that the alcoholic extract solution of T. chinensis soup displayed greater antiviral effects than the aqueous decoction of T. chinensis soup. Additionally, higher-purity T. chinensis soup extract exhibited a more robust inhibitory effect on the influenza virus. Specifically, 80% T. chinensis soup extract and secondary 95% T. chinensis soup extract demonstrated superior antiviral effects compared with 60% T. chinensis soup extract . A study delved into the material basis of the UPLC-DAD-TOF/MS fingerprinting profile (ultra-performance liquid chromatography-tandem diode array detector-time-of-flight mass spectrometry) of T. chinensis , establishing its potential as the active agent against EV71 (enterovirus 71). The key active ingredients of T. chinensis in combating EV71 included Guaijaverin acid, an unidentified alkaloid, P-hydroxybenzene-malic acid, and 2″- O -acetyl Orientin . In the broader context, T. chinensis flowers emerged as a valuable contributor to the anti-influenza virus activity of the overall formula, exhibiting relatively few side effects. The synergistic effect of T. chinensis , particularly in formulations like T. chinensis soup, has proven effective as a treatment for influenza virus . In recapitulation, the findings indicate that the antiviral mechanism of T. chinensis predominantly revolves around impeding the virus-receptor binding process and restraining the cytokines/chemokines response. The unrefined flower extract derived from T. chinensis shields the host from inflammatory damage by intervening in the TLRs, encompassing TLR3, TLR4, and TLR7. This intervention leads to a reduction in the secretion of inflammatory factors, ultimately manifesting antiviral effects . The varied pharmacological impacts of Orientin in T. chinensis , particularly its potent antioxidant effect, surpass those attributed to Vitexin. This discrepancy may be attributed to the structural disparity between Orientin and Vitexin. 
The antioxidant activity of flavonoids with an o-diphenol hydroxyl group on the B-ring is notably more robust compared with that of flavonoids possessing a single phenol hydroxyl group attached to the B-ring . To assess the antioxidant capacity of Orientin and Vitexin in T. chinensis concerning D-galactose-induced subacute senescence in mice, D-galactose was administered intraperitoneally . The experimental outcomes revealed that Orientin and Vitexin in T. chinensis effectively elevated the total antioxidant capacity (T-AOC) and increased superoxide dismutase (SOD), glutathione peroxidase (GSH-Px), Na + -K + -ATPase, and Ca 2+ -Mg 2+ -ATPase activities in the kidney, liver, and brain tissues of senescent mice. Notably, Orientin demonstrated superior efficacy over Vitexin in augmenting T-AOC activity within the organism . The former ATPase mitigates impaired sodium ion transport and the associated metabolic disorders , while impairment of the latter leads to elevated intracellular Ca 2+ levels, which adversely impact the cytoskeleton and membrane structure of neuronal cells, culminating in diminished stability and heightened membrane permeability, thereby contributing to the senescence process . In this context, Orientin and Vitexin act as antioxidants by positively modulating the activity of membrane transport enzymes within tissue cells. Remarkably, Orientin exhibited greater efficacy than Vitexin in enhancing the activity of these tissue cell membrane transport enzymes . The robust antioxidant potential of Orientin, exceeding that of Vitexin and further surpassing the total flavonoids, has been corroborated in various studies. Both Orientin and Vitexin demonstrate the ability to scavenge superoxide anion, hydroxyl radical, and DPPH radical, effectively safeguarding the erythrocyte membrane. Specifically, Orientin displayed notable scavenging efficacy within the concentration range of 2.0–12.0 μg/mL. In contrast, Vitexin exhibited hydroxyl radical scavenging within the concentration range of 0–1.0 μg/mL, achieving maximum scavenging efficiency at 1.0 μg/mL, followed by a decline in scavenging effect with increasing Vitexin concentration . The pharmacological mechanism underlying the antioxidant action of T. chinensis encompasses several key facets: (1) Scavenging of free radicals: the active constituents in T. chinensis , particularly flavonoids, exhibit potent free radical scavenging capabilities. This capacity enables the neutralization of free radicals both inside and outside the cell, thereby mitigating oxidative stress-induced damage . (2) Stimulation of antioxidant enzyme activity: the active ingredients in T. chinensis stimulate intracellular antioxidant enzymes such as superoxide dismutase and glutathione peroxidase . This stimulation enhances the efficacy of the antioxidant system, fortifying cells against oxidative damage. In conclusion, T. chinensis safeguards cells from oxidative damage through the dual mechanisms of scavenging free radicals and enhancing antioxidant enzyme activity. These combined actions underscore the efficacy of T. chinensis as a potent antioxidant therapeutic agent. The anti-inflammatory prowess of T. chinensis primarily targets the upper segment of the triple energizer, encompassing the area above the diaphragm within the human body.
This region predominantly involves organs such as the stomach and throat, extending through the diaphragm and chest, including the heart, lungs, viscera, head, and face. Both the aqueous extract and 95% ethanol extracts of T. chinensis manifest robust anti-inflammatory activities. Notably, within the repertoire of compounds contained in T. chinensis , flavonoids such as Robinin, Quercetin, Vitexin, and Orientin exhibit heightened anti-inflammatory efficacy. Particularly, Vitexin and Orientin, due to their anti-inflammatory and soothing properties, along with peptide anti-histamine attributes, are deemed suitable for managing acute allergic skin conditions such as rash and eczema, as well as respiratory allergic diseases . Current domestic research on T. chinensis underscores its potential in treating upper respiratory tract infectious diseases, including nasal mucosal diseases, by deploying an anti-inflammatory mechanism that engages multiple metabolites, targets, and pathways. Among the identified core targets, TNF and mitogen-activated protein kinase 1 (MAPK1) take precedence, with the cancer factor pathway emerging as a pivotal route . Additionally, Toll-like receptors 3, 4, and 7 (TLR3/4/7) have been proposed as promising common anti-inflammatory targets for T. chinensis constituents.
8.1. Analysis Methods
Currently, the market for the Chinese herbal medicine T. chinensis has not been unified with respect to source varieties: in addition to Trollius chinensis Bunge as the primary source of the medicinal botanical drug, Trollius ledebourii Reichenbach, Trollius macropetalus Fr., and other species have also been the subject of considerable research on resource exploitation and utilization for medicinal use. Hence, the quality of T. chinensis on the market is confusing, and it is difficult to distinguish good material from bad. The 1977 edition of the Chinese Pharmacopoeia analyzes the quality of botanical drugs from two perspectives: physical identification and chemical identification. The 1998 edition of the Beijing Standards for Chinese Materia Medica also includes a microscopic identification method for determining authenticity. The 2019 edition of the Anhui Provincial Standard for the Preparation of Chinese Medicinal Tablets records an identification method based on thin-layer chromatography, in which the chromatogram of the test article obtained by experimental treatment and the chromatogram of the control botanical drug show spots of the same color at the corresponding positions on the thin-layer plate. The evaluation method in the 2018 edition of the Hubei Quality Standard for Traditional Chinese Medicinal Materials specifies that the moisture content of T. chinensis should not exceed 13.0%, the total ash content should not exceed 9.0%, and the leachate content shall not be less than 35.0%. The content of Orientin (C 21 H 20 O 11 ) must not be less than 1.0% when measured by high-performance liquid chromatography and calculated on the dry product.
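As a simple worked example, the provincial limits quoted above can be turned into a batch-release check; the assay values below are placeholders for a hypothetical sample, not real measurements.

    # Limits from the 2018 Hubei provincial standard cited above
    LIMITS = {"moisture_max": 13.0, "total_ash_max": 9.0, "leachate_min": 35.0, "orientin_min": 1.0}

    # Hypothetical assay results for one batch (percent; orientin calculated on the dry product)
    batch = {"moisture": 11.2, "total_ash": 7.8, "leachate": 38.5, "orientin": 1.24}

    checks = {
        "moisture": batch["moisture"] <= LIMITS["moisture_max"],
        "total ash": batch["total_ash"] <= LIMITS["total_ash_max"],
        "leachate": batch["leachate"] >= LIMITS["leachate_min"],
        "orientin (HPLC)": batch["orientin"] >= LIMITS["orientin_min"],
    }
    print("conforms" if all(checks.values()) else "does not conform", checks)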
In addition to the identification methods recorded in the pharmacopeia and local standards, the fluorescence reaction identification method, the micro-sublimation test, FTIR identification, and the DNA barcode molecular identification method for Chinese herbal medicines can also be used to verify the authenticity of T. chinensis . In the micro-sublimation test, yellowish snow-like crystals can be observed on the slide . The FTIR profile of T. chinensis can be obtained by FTIR identification; differences in peak shape, peak position, and peak intensity in the profile can elucidate differences in the components, their compositions, and their ratios among T. chinensis botanical drugs from different origins, habitats, varieties, and growth years, as well as different drying methods and extraction solvents, allowing a more accurate quality analysis to determine the authenticity of T. chinensis . DNA barcode molecular identification of Chinese herbal medicines identifies botanical drugs by studying polymorphisms in their genetic material and can rapidly discriminate species . At present, with the rapid development of molecular identification technology and in-depth mining of plant genetic information, molecular identification methods have been widely used in the standardization of traditional Chinese medicine identification. For example, an early DNA molecular identification technique for T. chinensis , random amplified polymorphic DNA (RAPD) labeling, was used to identify T. chinensis by observing the electrophoretic band patterns of PCR-amplified DNA, and samples of T. chinensis could be classified according to their origins by using the RAPD technique . A DNA barcode identification method for T. chinensis was established using ITS2 sequences, and a neighbor-joining (NJ) phylogenetic tree was constructed to accurately identify T. chinensis , Trollius lilacinus Bunge, and Artemisia annua L. In addition, high-performance liquid chromatography (HPLC) coupled with mass spectrometry (MS) can be used to identify the chemical composition and characteristics of traditional Chinese medicines (TCM). Using proteins as the informative substance, with the protein band patterns of different varieties serving as the basis for identification, it has been observed that the protein bands of different varieties of T. chinensis differ significantly in number, intensity, and distribution . In addition to the above methods, X-ray diffraction and X-ray fluorescence analysis can also be used to characterize the grain features of T. chinensis and to establish a primary X-ray diffraction database for rapid identification of the authenticity of T. chinensis and its powder .
8.2. Quality Evaluation Method
To ensure the quality and therapeutic efficacy of T. chinensis , establishing quality analysis methods for the active ingredients is key to quality control. The quality of T. chinensis can be identified and evaluated through the establishment of content determination standards, the use of fingerprinting evaluation methods, and other approaches that can provide a reference for the further development and utilization of T. chinensis . At present, the quality evaluation of T. chinensis is mainly based on chemical content determination, i.e., HPLC fingerprinting, with Orientin and Vitexin as the index components of the method .
In some studies, these two metabolites are combined with phenolic acid or alkaloid and other metabolites as quality evaluation indexes to improve the comprehensiveness of evaluation, and HPLC is the main evaluation method at present .
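The ITS2 barcoding workflow described in Section 8.1 (align candidate sequences, compute pairwise distances, and build a neighbor-joining tree) can be sketched with Biopython as follows; the alignment file name is a placeholder, and the simple identity-based distance model is only one possible choice.

    from Bio import AlignIO, Phylo
    from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

    # Placeholder file: pre-aligned ITS2 sequences of the test samples and reference species
    alignment = AlignIO.read("its2_aligned.fasta", "fasta")

    distance_matrix = DistanceCalculator("identity").get_distance(alignment)
    nj_tree = DistanceTreeConstructor().nj(distance_matrix)  # neighbor-joining tree

    Phylo.draw_ascii(nj_tree)  # authentic samples should cluster with their reference species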
Based on ancient texts and modern research, this paper reviews the herbal testimonies, traditional uses, phytochemistry, pharmacological activities, and quality standards of T. chinensis to provide new ideas for future research on T. chinensis . According to ancient texts, T. chinensis can reduce inflammation, eliminate heat and toxins, and enhance visual clarity. It is particularly effective in managing sore throats, swollen gums, and oral gingival pain caused by heat. Based on recent phytochemical and pharmacological studies, T. chinensis possesses anti-inflammatory, antiviral, antitumor, antibacterial, and antimicrobial effects, which are especially good for treating virus-induced colds and various types of inflammation, such as respiratory inflammation. It was initially recorded as an ornamental plant in various ancient books. Since its initial inclusion in the Compendium of Materia Medica as a traditional Chinese medicine in 1765 during the Qing Dynasty, T. chinensis has been widely developed for its medicinal properties and employed in health care products and various dosage forms following current processing technology. Over 180 compounds from T. chinensis have been isolated and identified. The main active components of T. chinensis are flavonoids, alkaloids, and organic acids. Objective evaluations are emphasized in recent studies of T. chinensis , where the focus is mainly on the flavonoids Orientin and Vitexin. These two compounds are the most important and representative of T. chinensis , with less research on the other active components.
Various domestic and international investigations indicate that flavonoids account for most of the pharmacological effects of T. chinensis . First, regarding the medicinal employment of T. chinensis , historical records specify that its dried flower is the primary medicinal part. Contemporary experimental research likewise concentrates on the flower of T. chinensis ; however, chemical makeup and pharmacology evaluations of its roots, stems, and leaves are limited. Moreover, most research on the phytochemical metabolites of T. chinensis concentrates on crude extracts and flavonoids, including Vitexin, Orientin, and Orientin-2″- O -β- l -galactopyranoside, whereas studies on the alkaloids and organic acids present in T. chinensis are lacking, with only a limited number of articles on this topic. Second, studies have shown that both crude extracts and active constituents of T. chinensis have a wide range of pharmacological activities, and these modern pharmacological studies support most of the traditional uses of T. chinensis as a folk medicine. However, there is still a gap in the systematic research on T. chinensis . Many pharmacological studies on its crude extracts or active constituents are not in-depth enough, and few in vitro experiments exist. These pharmacological activities must be further confirmed by in vivo animal experiments and combined with clinical applications. This direction will provide a solid foundation for developing novel drug-lead compounds. For example, the antitumor effect of T. chinensis has not yet been verified in relevant animal experiments . Third, most studies on the pharmacological activities of T. chinensis have focused on uncharacterized crude extracts, making it difficult to clarify the link between the isolated compounds and their biological activities. Systematic pharmacological studies on compounds isolated from T. chinensis are therefore needed. In addition, research on many pharmacological activities of the crude extracts or compounds of T. chinensis , such as its anti-inflammatory effects, currently relies on network pharmacology and molecular docking techniques, with only very few in vitro experiments available for validation, and the exact mechanisms of the inhibitory activities remain unclear; therefore, further studies to better reveal the precise molecular mechanisms of the pharmacological activity of the drug appear necessary. Fourth, in some ancient texts, T. chinensis was used together with other botanical drugs to treat chronic inflammation. However, almost no studies have been carried out to investigate the formulae of T. chinensis or to reveal synergistic or antagonistic effects; this area remains almost blank. Therefore, drug interactions between certain botanical drugs and T. chinensis seem to be a new direction worth further exploration. Fifth, T. chinensis was included in the 1977 edition of the Chinese Pharmacopoeia, but this variety was not included in the 1985–2020 editions. Although this paper summarizes the identification methods of T. chinensis in other pharmacopeias and standards, the provisions on authenticity identification and quality evaluation methods of T. chinensis are not comprehensive compared with those for other Chinese medicinal materials. For example, Trollius ledebourii Rchb. is an alternative source of T. chinensis . However, the different base plants of T.
chinensis have not been included in the pharmacopeia like other Chinese botanical drugs, which limits the further development and utilization of T. chinensis . In addition, although other plants of the same genus have been used as substitutes for T. chinensis in some places, there is no unified market standard for their evaluation; product types, specifications, and grades in the medicinal materials market are confusing, which easily leads to problems with efficacy and safety. At present, the commonly used identification methods for T. chinensis vary, and microscopic and macroscopic (character) identification make it difficult to distinguish T. chinensis from related Trollius species. Molecular identification technology still needs to be further improved, and new DNA molecular marker technologies must be developed. Species identification methods based on analyzing and comparing ribosomal DNA, such as ITS barcoding, still require T. chinensis samples from more locations and species to refine the relevant studies and further verify the applicability of the method. In summary, T. chinensis serves not only as an ornamental plant and a tea source but also as a significant medicinal and food crop, possessing wide-ranging pharmacological and nutritional value. Nonetheless, more in-depth and comprehensive clinical utility studies are needed to establish the plant’s safety and effectiveness. Various compounds have been identified in T. chinensis , although the work done so far has been insufficient. Furthermore, additional research is necessary to determine the precise molecular mechanisms of these active ingredients in specific diseases. Future investigations should emphasize active metabolites other than flavonoids to uncover novel compounds and pharmacological effects. Thus, systematic studies on the phytochemistry and bioactivity of T. chinensis are essential for future research endeavors. This review is intended to serve as a valuable reference for developing and applying T. chinensis.
Evidence certainty in neonatology—a meta-epidemiological analysis of Cochrane reviews
1c3a3bc5-559b-452a-baad-e593d6a26131
11814034
Pediatrics[mh]
Clinical care has moved away from traditional intuition-based approaches towards evidence-based medicine (EBM). EBM aspires to provide the best possible care for patients, supported by high-quality evidence. Nonetheless, high-quality evidence is a rather unclear term; thus, objective measurements of its certainty have been introduced. The most widely used framework to address evidence certainty is the Grading of Recommendations Assessment, Development and Evaluation (GRADE). GRADE is crucial in translating scientific evidence into medical recommendations and is also a valuable tool to help clinicians better understand the gathered evidence. High-quality randomized controlled trials (RCTs) represent the foundation of this evidence. Since the beginning of the twenty-first century, the publication rate of high-quality research concerning neonates has been declining. Many factors contribute to this, but the most significant are the rising standards used to define the quality of clinical research, which evolved mainly in adult medicine and are not easy to reproduce in neonatology. In fact, high certainty of evidence typically requires several large, high-quality randomized controlled trials whose results are consistent with each other. Nonetheless, this is uncommon in neonatal trials due to varying co-interventions and definitions for multifaceted and multifactorial outcomes (e.g., bronchopulmonary dysplasia). This variation makes it difficult to combine results and estimate the effectiveness of interventions and the certainty of the evidence. Being born extremely prematurely usually means that the newborn is in a critical condition and has many medical problems occurring simultaneously and influencing each other. Therefore, it is challenging for a single intervention to have a significant impact on an outcome, making it unfair to deem these interventions unsuitable in neonatology. Thus, we hypothesized that the certainty of the available evidence is relatively low, and we designed a meta-epidemiological review to examine the certainty of evidence in the latest Cochrane neonatal reviews and to investigate whether the number of trials and enrolled patients is associated with the certainty of evidence. Protocol We performed a systematic meta-epidemiological review, whose protocol was registered in Open Science Framework and is available from https://osf.io/7k6s8/ . The protocol was agreed upon before commencing the search and included the search criteria, analysis plan, and a full description of the methods. Search process and screening For this work, we searched Cochrane neonatal reviews published between January 2022 and May 2024 from the Cochrane review register. We decided to focus on the past 2 years to provide the most recent view of neonatal evidence. Furthermore, we hypothesized that these reviews would be similar in terms of methodology, considering that guidance on risk of bias was updated in 2019, and GRADE guidance is continuously updated. The search results were then uploaded to Covidence software (Veritas Healthcare, Melbourne, Australia) for abstract screening. Two authors (TV and IK) performed the screening process independently, and disagreements, if any, were resolved by reaching a mutual consensus or by consulting a third author. Inclusion and exclusion criteria We included all Cochrane reviews on interventions that targeted neonates and had at least one meta-analysis performed for which the evidence certainty was rated according to GRADE criteria.
If the review remained qualitative and did not perform any statistical pooling of the results, it was excluded. Data extraction and treatment Data from the reviews were extracted to a pre-designed Excel spreadsheet. First, 20% of the reviews were extracted independently by two authors (TV and IK) to pilot and test the extraction process, and as there were no conflicts, one author (TV) extracted the remaining 80% of the review data. The following information was extracted from each included review and for each outcome: year of publication, intervention, control, patient population, setting, number of studies, number of participants, effect estimates, and evidence certainty rating as declared by the meta-analysis according to the GRADE classification. This information was extracted from the presented summary of findings tables. We classified the patient populations analyzed by each Cochrane meta-analysis into three groups as follows: (1) preterm neonates (gestational age less than 37+0 weeks), (2) term neonates (gestational age 37+0 weeks or more), and (3) neonates of any gestational age. Furthermore, the meta-analyzed interventions were classified as ventilation, nutrition, medication, and others. Finally, the outcomes were classified as "subjective" or "objective" from the perspective of the outcome assessor. Objective outcomes were, for example, laboratory parameters and clinical measurements that are measured by standardized methods. Death, intraventricular hemorrhage, and bronchopulmonary dysplasia were also considered objective clinical outcomes. On the contrary, outcomes that may be affected by knowledge of the received intervention were considered subjective (e.g., duration of invasive ventilation, need for reintubation, time to discharge, pain). For full transparency, all the extracted data are available in the online supplementary file. Statistics For categorical variables, we presented absolute numbers with proportions. We used cross-tabulation and the chi-squared or Fisher's exact test, as appropriate, to examine differences in categorical outcomes between groups. We presented the findings for different interventions and outcomes in a traffic-light plot where green indicates high, yellow indicates moderate, orange indicates low, and red indicates very low certainty. The mean numbers of studies and patients in each evidence certainty category were analyzed with ANOVA followed by a Bonferroni post hoc test. A sensitivity analysis with the Kruskal–Wallis test was also performed. Statistical analyses were performed using SPSS 29.0 (IBM, Chicago, IL, USA), and a p value less than 0.05 was considered statistically significant. We have reported the findings of this review according to the meta-epidemiological extension of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guideline.
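To make the analysis plan concrete, the sketch below shows, in Python, how a cross-tabulation with a chi-squared test, a one-way ANOVA with a simple Bonferroni-adjusted post hoc step, and a Kruskal–Wallis sensitivity analysis could be run; it is not the authors' code. The row and column totals of the contingency table match the counts reported later in this review, but the individual cell values and the per-outcome trial counts are invented for illustration.

```python
import numpy as np
from itertools import combinations
from scipy import stats

# Contingency table: rows = intervention type, columns = GRADE certainty
# (high, moderate, low, very low). Cell values are an invented split;
# only the row/column totals follow the figures reported in this review.
counts = np.array([
    [2, 25, 50, 38],   # feeding      (115 outcomes)
    [0, 15, 35, 25],   # ventilation  (75 outcomes)
    [4, 30, 75, 71],   # medication   (180 outcomes)
    [2, 19, 35, 17],   # other        (73 outcomes)
])
chi2, p, dof, _ = stats.chi2_contingency(counts)
print(f"Chi-squared = {chi2:.2f}, dof = {dof}, p = {p:.4f}")

# Hypothetical numbers of trials behind outcomes in each certainty category.
trials = {
    "very low": [2, 3, 1, 4, 2],
    "low":      [4, 5, 3, 6, 7],
    "moderate": [8, 10, 12, 9, 11],
    "high":     [15, 18, 20, 14, 16],
}
f_stat, p_anova = stats.f_oneway(*trials.values())
h_stat, p_kw = stats.kruskal(*trials.values())      # sensitivity analysis
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}; Kruskal-Wallis: p = {p_kw:.4f}")

# Bonferroni-style post hoc step: pairwise t-tests judged at an adjusted alpha.
pairs = list(combinations(trials, 2))
alpha_adj = 0.05 / len(pairs)
for a, b in pairs:
    _, p_pair = stats.ttest_ind(trials[a], trials[b])
    flag = "significant" if p_pair < alpha_adj else "ns"
    print(f"{a} vs {b}: p = {p_pair:.4f} ({flag} at alpha = {alpha_adj:.4f})")
```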
We screened 55 Cochrane reviews and included 49 of these for our analysis; six articles were excluded due to lack of quantitative meta-analysis in the review (Fig. ). The included 49 reviews reported a total of 443 outcomes whose certainty of evidence was evaluated.
Of the included articles, 14 reviews with 115 outcomes focused on feeding interventions, 8 reviews with 75 outcomes on ventilation, 17 reviews with 180 outcomes on medications, and 10 reviews with 73 outcomes were classified as miscellaneous. Overall, the certainty of evidence was reported to be high in 8 (1.8%), moderate in 89 (20.2%), low in 195 (44.0%), and very low in 151 (34.0%) of the outcomes (Fig. ). There were significant differences in certainty of evidence among reviews focusing on different interventions ( p < 0.001). For instance, the highest proportion of very low certainty was in the medication reviews, and the highest proportion of low certainty was in feeding reviews; ventilation and other interventions had rather similar certainty (Fig. ). No outcome in the ventilation reviews had high certainty of evidence. Outcomes classified as subjective had a significantly greater proportion of very low certainty (100 out of 243, i.e., 41.2%) than objective outcomes (50 out of 198, i.e., 25.3%; p < 0.001), and, finally, certainty was best in the reviews focusing solely on preterm neonates (Fig. ). Reviews with at least one outcome with high certainty of evidence included significantly more trials and enrolled patients compared with those with low certainty ( p < 0.001 for both the number of studies and patients; Fig. A, B, respectively). Results of post hoc analyses are shown in Fig. : the numbers of studies and patients differed significantly between outcomes of all certainty levels, with the only exception of the moderate vs. high certainty comparison. A sensitivity analysis with the Kruskal–Wallis test similarly showed p < 0.001 for both the number of studies and patients. Specifically, reviews with at least one outcome with high certainty had approximately 3 and 1.5 times more studies or patients than those with very low or low certainty, respectively. Eight outcomes had high certainty of evidence, and their details are presented in Table . The interventions with high certainty for at least one outcome were early developmental intervention programs, lumbar puncture position, oral dextrose gel, methylxanthine for apnea prevention, musical/vocal interventions for preterm neonates, and indomethacin and ibuprofen in patients with patent ductus arteriosus (PDA). Interestingly, only four (50%) of the high-certainty evidence outcomes were based on a pharmacological intervention. Furthermore, four reviews reported null findings, i.e., no clear difference between the intervention and control groups. In general, we found that most outcomes in the Cochrane neonatal reviews published in the last 2 years have low or very low certainty of evidence. Only about 2% of the outcomes had high certainty, and around 20% had moderate certainty. These percentages varied slightly across different categories, such as type of intervention, but the overall proportions of certainty were almost identical within these categories. We also found a significant association between certainty of evidence and the number of trials and enrolled patients. However, our main finding was the worrisome and strikingly low number of reviews with high certainty for different interventions and outcomes. For example, none of the studies on ventilation interventions had high certainty of evidence. There are many possible explanations for this result. First, there is a high variance in the number of trials comparing different non-invasive and invasive ventilation modalities.
For example, the use of non-invasive neurally adjusted ventilatory assist and nasal high-frequency oscillatory ventilation was studied in only 5 small trials and approximately 10 randomized studies, respectively. Second, understanding the subjectivity of the outcome assessments plays a key part, as ventilation strategies are hard to blind from attending clinicians. A recent meta-epidemiological review on this matter found a worrying rate of incorrectly performed risk of bias assessments leading to downgrading of certainty, and the proportion of incorrect assessments was similar between Cochrane and non-Cochrane reviews. Nonetheless, the recent NASONE trial showed that it is possible to compare complex ventilation strategies over a long time with an assessor-blinded design and reduce possible bias. Over and above this, it is important to remember that ventilation cannot be considered a single intervention, like a pharmaceutical therapy. Ventilation is a complex technique delivered in various modes and, even within the same mode, with several possible combinations of ventilatory parameters and thus several therapeutic strategies. This is even more complex for non-invasive ventilation, where the use of different interfaces plays a significant role in ventilation efficacy, patient-ventilator interaction, and comfort. The lack of detailed standardization of the ventilatory intervention can significantly downgrade the quality of results, and only recent trials have implemented this concept. Finally, a ventilatory strategy may be applied to neonates with completely different lung mechanics and pathophysiology (e.g., respiratory distress syndrome, bronchopulmonary dysplasia, or neonatal acute respiratory distress syndrome): this also plays a role in downgrading the quality and demands close pathophysiological phenotyping to enroll homogeneous populations. This is the basis for personalized neonatal respiratory care and currently represents an important need. The number of trials and enrolled patients was associated with the certainty of evidence. In other words, outcomes that were more widely studied had higher certainty. This is consistent with previously published meta-epidemiological reports and is worrisome. In fact, as previous reports date to 2006 and 2013, the situation has not significantly improved over time. As previously reported, the majority of the studies and reviews focus on preterm neonates. In fact, critically ill preterm neonates are more numerous than term infants, and the two age groups present with very different diseases and comorbidities. Thus, not only are preterm neonates more commonly admitted to NICUs than term babies, but the latter present with more complex and rarer life-threatening conditions, such as meconium aspiration, that are more difficult to study in randomized trials. The association between the certainty of evidence and the number of trials and enrolled patients was expected since, according to the GRADE classification, an intervention needs a sample size large enough to produce precise results. One of the domains in the GRADE assessment is the imprecision of the outcome estimate, and imprecision is mostly based on the interpretation of the effect estimates, whose confidence intervals are narrower for more common events and larger populations.
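As a purely illustrative aside on this imprecision point (the numbers below are hypothetical and not drawn from any included review), the following sketch shows how the 95% confidence interval of a risk ratio narrows when the same underlying risks are observed in a larger trial:

```python
import math

def rr_with_ci(events_tx, n_tx, events_ctrl, n_ctrl, z=1.96):
    """Risk ratio with a 95% CI using the standard log-normal approximation."""
    rr = (events_tx / n_tx) / (events_ctrl / n_ctrl)
    se_log_rr = math.sqrt(1/events_tx - 1/n_tx + 1/events_ctrl - 1/n_ctrl)
    lower = math.exp(math.log(rr) - z * se_log_rr)
    upper = math.exp(math.log(rr) + z * se_log_rr)
    return rr, lower, upper

# Same underlying risks (20% vs 25%) in a small and a large hypothetical trial.
for n_per_arm in (100, 1000):
    rr, lo, hi = rr_with_ci(int(0.20 * n_per_arm), n_per_arm,
                            int(0.25 * n_per_arm), n_per_arm)
    print(f"n = {n_per_arm} per arm: RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```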
It is also interesting that four of the eight high certainty outcomes reported so-called null findings, where the analyzed intervention did not show evidence of a difference compared with the control intervention. The classification of the certainty of evidence may change if reviews are updated, and the outcome estimates usually become more precise. It is more common for very low certainty evidence to be reclassified into higher categories than for high certainty outcomes to be downgraded. Continuous research efforts to improve the quality of neonatal care are needed. Traditionally, this means designing more and larger randomized trials. However, meta-epidemiological reviews of the existing literature are important, as these hold potential to spare resources from unnecessary trials. Strengths and limitations The main strength of our work was the full transparency and respect of best review practices without any protocol violations. The main limitation comes from the fact that we classified the review interventions and outcomes after data extraction and classified the outcomes as subjective and objective based on our own experience and opinion. To reduce this and promote transparency, we have provided the whole data as a supplementary file. A further limitation was that we had a protocol deviation: we initially planned to use dual data extraction but performed the extraction by a single author after 20% of the reports had been extracted. This decision was based on the fact that there were no issues with the extraction, as it was made directly from the summary of findings tables as they were presented. Another limitation may be that we decided to focus solely on Cochrane reviews and on a relatively narrow time window (i.e., 2022–2024). We decided so because (1) Cochrane reviews are conducted rigorously using the same formats and protocols, and (2) we wanted to have a picture of the most recent situation. In fact, we considered that the risk of bias tool was revised in 2019 and implemented into use only in 2020; this means that the protocols for the reviews published in 2022 were mostly written in 2020–2021. However, it could be that the results here are influenced by the short timeline, and a longer study period could have altered the results in either direction. Thus, a future study should investigate whether the results presented here would differ over a wider time window or when comparing Cochrane and non-Cochrane meta-analyses. A clear limitation of our results was the lack of detailed extraction of the GRADE domain assessments and the reasons for downgrading the evidence certainty. Although these were provided in the Cochrane reviews, we did not extract this information, and this will be the object of future meta-epidemiological studies. Thus, future research should analyze the GRADE assessments in more detail and clarify the reasons for downgrading the certainty and whether these decisions have been correctly taken.
Only 2% of the outcomes had high certainty of evidence in the Cochrane neonatal reviews published in the last 2 years. The certainty was significantly associated with the number of included trials and participants. Neonatal trials and reviews still need more and larger studies to improve the overall quality of the evidence. Below is the link to the electronic supplementary material. Supplementary file1 (XLSX 73 KB)
Evaluation of the Accuracy of Cameriere, Modified Cameriere and Willems and Blenkin-Evans Methods for Turkish Children
01451b74-55c1-462e-bb85-9da55f1eb760
11806340
Dentistry[mh]
Children and adolescents may encounter situations such as forced marriage, illegal adoption, human trafficking, involvement in crime, or losing their families due to major natural disasters or wars. These circumstances can lead to children becoming undocumented migrants or losing the documents that prove their identity. In such cases, the absence of chronological age (CA) information highlights the importance of determining physiological age. Furthermore, under Turkish law, individuals' criminal responsibility can vary significantly depending on their age group, with ages 12 and 15 being particularly crucial distinctions. In dentistry, determining physiological age is crucial for diagnosis and treatment planning, especially for paediatric and orthodontic specialists. Physiological age is based on the developmental level of various systems and tissues. Due to factors such as race, hormones, genetics, and nutrition, there can be discrepancies between an individual's CA and physiological age. Various methods have been developed to estimate physiological age from radiographs, such as evaluating skeletal or dental development. Each dental age estimation method has generally been developed on individuals from a particular ethnic background. The Demirjian Method (DM) was established using the French-Canadian child population, while the Willems Method (WM) is a modification of DM tailored to the Belgian-Caucasian population. The Blenkin-Evans Method (BEM) was adapted from DM to suit Australian children. These methods are applied using the mineralisation chart created by Demirjian et al. On the other hand, the Cameriere Method (CM) is a measurement-based approach initially developed for Italian children and subsequently modified by Kış et al. for Turkish children (Modified Cameriere Method (MCM)). It is known that different genetic and environmental factors can lead individuals with the same CA to have different body characteristics and developmental levels. Therefore, before using an age estimation method for legal purposes in any population, it is essential to validate the accuracy and applicability of the method for that specific population. This study aims to compare the agreement between CA and dental age estimated using the Cameriere method (CM), modified Cameriere method (MCM), Willems method (WM), and Blenkin-Evans method (BEM) on panoramic radiographs of children from Karaman province and its surroundings in the Central Anatolia region of Turkiye. The study seeks to identify which method provides the closest estimate of the actual age of children in our region, aiming to assess the suitability of these methods for age estimation and serve as a reference for future studies. Ethical approval This cross-sectional retrospective study was conducted according to the Declaration of Helsinki and approved by the Karamanoğlu Mehmetbey University Faculty of Medicine Local Scientific Medical Research Ethics Committee with protocol number 09-2023/13. Before radiography, written informed consent was obtained from all patients' parents. No personal information other than the patient's date of birth, gender, and the date of the radiograph was used in the study. Study population and sampling criteria To evaluate all four methods, this study included 953 individuals aged between 6 and 14.99 years who presented for routine examination and follow-up at the Karamanoğlu Mehmetbey University Faculty of Dentistry between August 2021 and September 2023.
All digital panoramic images examined in the study were obtained using a PCH-2500 (Vatech, Gyeonggi-do, Korea) digital panoramic x-ray machine with 60 to 75 kVp, 5 to 10 mA, and 15 s radiation time parameters. A total of 953 panoramic radiographs were saved in 'TIF' (Tagged Image File) format from the faculty's radiograph archive to the study computer (3.10 GHz Intel 10th Generation i5 with 8 GB RAM, Windows 10 Professional operating system, and a 21.5-inch flat panel color screen (Lenovo ThinkVision S22e-20), 1920 × 1080 pixels) and categorised into folders based on age groups and gender. The inclusion criteria were as follows: • No anomalies or bilateral pathologies in the maxillofacial region or history of trauma, • No history of orthodontic treatment, • No bilateral permanent tooth missing, decay, restoration, root canal treatment, or apical pathology, • Systemically healthy individuals, • Non-migrants (those whose TR Identity Number does not start with 99 or 98 or who do not have a temporary asylum card). Of these, 337 panoramic radiographs were excluded from the study due to not meeting the inclusion criteria (n = 26 for bilateral tooth missing/decay/root canal treatment, n = 14 with a history of trauma, n = 7 with a history of orthodontic treatment, n = 10 with pathology, n = 280 radiographs that were not Grade 1 quality based on the UK National Radiological Protection Board criteria, n = 0 immigrants). The diagnostic quality of the radiographs was evaluated by an oral and maxillofacial radiologist with 8 years of experience (M.G.) based on the UK National Radiological Protection Board criteria. Only Grade 1 radiographs were included in the study. As a result, the 616 individuals included in the study were divided into 18 groups as follows: girls aged 6 to 6.99 years (6F) and boys (6M), girls aged 7 to 7.99 years (7F) and boys (7M), girls aged 8 to 8.99 years (8F) and boys (8M), girls aged 9 to 9.99 years (9F) and boys (9M), girls aged 10 to 10.99 years (10F) and boys (10M), girls aged 11 to 11.99 years (11F) and boys (11M), girls aged 12 to 12.99 years (12F) and boys (12M), girls aged 13 to 13.99 years (13F) and boys (13M), and girls aged 14 to 14.99 years (14F) and boys (14M). For this study, the power calculation was based on a study by Şahin et al. using the G*Power 3.1 software package (v.3.1.9.7, Universitat Kiel, Kiel, Germany). With a 5% margin of error, a 95% confidence interval, and an effect size of 0.25, the sample size needed was determined to be a minimum of 486 across 18 groups for a power value of 95%. Calculation of chronological age (CA) CA was calculated by subtracting the date of birth from the date the radiograph was taken. For ease of calculation, it was expressed in decimals (e.g., if the individual's age was 10 years and 3 months at the time of the radiograph, they were included in the 10-year age group as 10.25 years). Application of Cameriere's method (CM) The apical width of the seven left mandibular teeth (Ai, i = 1, ..., 7) and the length of the teeth (Li, i = 1, ..., 7) were measured using the ImageJ program (ImageJ, NIH, Maryland, USA) at ×150 magnification on panoramic images as shown in . These measurements were then substituted into the relevant formula (Age = 8.971 + 0.375g + 1.631x5 + 0.674N0 – 1.034s – 0.176sN0) to determine the individual's Estimated Age (EA) (xi = Ai/Li (i = 1, ..., 7), x5 = A5/L5, s: the sum of the Ai/Li of the open apices, g: 1 for boys and 0 for girls, N0: the number of teeth with complete root development).
For teeth with two roots, the apical widths of both roots were summed to calculate the tooth's Ai. Application of the modified Cameriere method (MCM) The apical width and tooth length data used for the Cameriere method were substituted into the relevant formula modified by Kış and colleagues for Turkish children (Age = 9.876 + 0.361g + 0.663N0 – 0.927s – 0.057N0s) to determine the individual's EA. Application of Willems method (WM) The mineralisation stages of the seven left mandibular teeth on the panoramic radiographs were assessed using the mineralisation chart created by Demirjian et al., and the developmental stages of the teeth were determined. The scores from the gender-specific tables created by Willems et al. were summed to determine the individual's EA. Application of Blenkin and Evans method (BEM) The measurement data used for the WM were also utilised for the BEM. The developmental stages of the teeth labeled as A, B, C, D, E, F, G, and H in the WM were converted to 1, 2, 3, 4, 5, 6, 7, and 8, respectively, for the BEM. Simplified Maturity Scores (SMS) were obtained by summing these numbers, and the individual's EA was determined using the gender-specific tables prepared by Blenkin and Evans. For all methods, if a tooth in the left mandibular region was missing, decayed, had dilaceration or apical pathology, or had fillings or root canal treatment, the measurements of the corresponding tooth on the right mandibular side were used instead. After the initial measurements for each method, the measurements of 62 randomly selected individuals from the same study group were repeated four weeks later under the same conditions by the same researcher, a paediatric dentist with 8 years of experience (T.N.Ş.), to assess intra-observer reliability. The RANDBETWEEN formula was used for randomisation in Microsoft Excel (Microsoft Corp.). Inter-observer reliability was statistically evaluated by comparing the measurements taken by a second observer (M.G.) under the same conditions using the same radiographs with those taken by the first observer (T.N.Ş.). Statistical analysis All statistical analyses for the study were conducted using SPSS 27.0 (IBM Inc). Descriptive statistics were presented as the mean absolute error, standard deviation, median, minimum (min), and maximum (max). The Kruskal-Wallis test was used to compare the estimated age values across age groups. Pairwise comparisons between age groups were performed based on the critical difference post-hoc analyses of the Kruskal-Wallis test. Spearman's rho correlation analysis was applied to determine the correlation values between the estimated age values obtained from the different methods. To assess intra-observer and inter-observer reliability, intraclass correlation coefficient (ICC) analysis was conducted for the apical width and length measurements used in the CM and MCM, and Kendall's tau-b, Somers' d, and gamma analyses were performed for the developmental stage data of the teeth used in the WM and BEM. A type-I error value of p < 0.05 was considered statistically significant in all analyses.
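As an illustration of how the CM and MCM regression formulas quoted above turn the radiographic measurements into an estimated age, a minimal Python sketch is given below; it is not the software used in this study, the handling of a closed second-premolar apex (x5 = 0) is an assumption, and the Ai/Li ratios in the example are invented.

```python
def cameriere_age(open_apex_ratios, n0, male):
    """Cameriere formula as quoted above.
    open_apex_ratios: dict mapping tooth index (1-7) to Ai/Li for teeth
    with OPEN apices only; n0: number of teeth with complete root development."""
    g = 1 if male else 0
    s = sum(open_apex_ratios.values())        # sum of normalized open apices
    x5 = open_apex_ratios.get(5, 0.0)         # second premolar ratio (assumed 0 if apex closed)
    return 8.971 + 0.375 * g + 1.631 * x5 + 0.674 * n0 - 1.034 * s - 0.176 * s * n0

def modified_cameriere_age(open_apex_ratios, n0, male):
    """Modification by Kış et al. for Turkish children, as quoted above."""
    g = 1 if male else 0
    s = sum(open_apex_ratios.values())
    return 9.876 + 0.361 * g + 0.663 * n0 - 0.927 * s - 0.057 * n0 * s

# Hypothetical boy with four fully developed roots and three open apices.
ratios = {5: 0.08, 6: 0.15, 7: 0.40}          # invented Ai/Li values
print(f"CM estimate : {cameriere_age(ratios, n0=4, male=True):.2f} years")
print(f"MCM estimate: {modified_cameriere_age(ratios, n0=4, male=True):.2f} years")
```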
A total of 616 children were included in the study, with 52.2% being girls (n = 321) and 47.8% boys (n = 295). The distribution of the number of patients across the age groups was relatively balanced, as shown in . The age groups with the highest number of individuals were observed to be 14 years (14.6%) and 8 years (13.1%) among girls, and 8 years (13.2%) and 13 years (13.2%) among boys.
The EAs obtained using CM, MCM, WM, and BEM were compared for each gender and age group. Significant differences were found for the ages estimated by each method for both genders ( p < .001). Consequently, the test statistics of the applied analyses were compared. Accordingly, the test statistics for girls were as follows: CM (293.49), MCM (292.48), WM (290.82), and BEM (289.67). When the test statistics for boys were analysed, the highest value belonged to CM (267.36), followed by MCM (267.20), WM (262.02), and BEM (260.19), which were very close to each other. According to the correlation analysis, a very high correlation was found between the actual age values of the individuals and the age values obtained by the different methods in both boys and girls. In girls, the highest correlation value was calculated between CA and the EA obtained by CM ( r = 0.957). In boys, the EA obtained by CM was also the most highly correlated with CA ( r = 0.951); very similar correlation values were obtained with MCM ( r = 0.950). For both genders, the highest correlation among the methods themselves was found between CM and MCM ( r = 0.996). The differences between the individuals' CA values and the ages estimated by the different methods were calculated, and the difference values were compared according to age groups. In girls, the difference between CA and CM-derived EA values differed significantly across age groups ( p < .001). Up to 10 years of age, the difference values were negative, whereas positive values were observed in the following age groups. The highest mean difference was observed in patients aged 14 years, and the lowest mean age difference was observed in patients aged 8 years. Since MCM age values were estimated to be greater than CA values, a positive mean age difference was obtained only for the 14-year-old patient group. The mean values were higher than the mean difference values obtained with CM. The mean differences between MCM-EA, WM-EA, and CA values differed significantly between age groups ( p < .001). However, since the mean values obtained for the age groups were close to each other, only the 13- and 14-year age groups and the 7- and 10-year age groups formed significantly different pairs in the pairwise comparisons. The lowest difference was observed in the 13-year age group, and the highest difference value was observed in the 11-year age group. The age differences estimated by BEM also differed significantly between age groups ( p < .001), and a large number of significant pairwise comparisons were obtained. A positive age difference was found only for the 14-year age group, and the lowest age difference was for the 9-year age group. When analysed as all girls regardless of age group, the mean CM difference was 0.22, the mean MCM difference was -0.49, the mean WM difference was -0.45, and the mean BEM difference was -0.20. In boys, the mean difference between CA and the age values obtained with the four different estimation methods showed a significant difference between age groups ( p < .001). The highest difference between age groups was observed with CM. The mean values of the 10-, 11-, and 14-year age groups generally differed from those of the other age groups. The mean differences with MCM were generally close to each other ( p < .001); however, the value of the 14-year age group generally differed. A significant difference was observed between age groups for WM ( p < .001).
However, since the mean values of the 6-, 10-, 11-, and 14-year age groups were very close to zero, and the difference value of the 10-year age group was even 0, the prediction was considered quite good. The mean age difference values for BEM showed a significant difference between age groups ( p < .001). The values of the 6- and 14-year age groups were quite close to CA, and the 6-year age group value generally differed from those of the other age groups. When analysed as all boys without considering age groups, the mean CM difference was calculated as 0.17, the MCM difference as -0.52, the WM difference as -0.43, and the BEM difference as -0.12. For CM, MCM, WM, and BEM, the percentages of absolute difference values within 1 year were 77.88% and 78.3%, 69.15% and 68.81%, 61.68% and 67.11%, and 73.83% and 75.93% for girls and boys, respectively. The relationship between age groups and EA-CA for the four methods is presented in . In the boxplot, horizontal lines are located at the median of the data inside the boxes; box height gives the interquartile range, and whiskers show the range. Accordingly, the EA closest to the CA was obtained using the WM in groups 6F, 11F, 6M, 10M, and 11M; the CM in groups 7F, 8F, 9F, 10F, 12F, 7M, 8M, and 9M; the MCM in group 14F; and the BEM in groups 13F, 12M, 13M, and 14M. The average measurement time of Observer 1 was calculated as 2 minutes 09 seconds per patient with CM, 2 minutes with MCM, 1 minute 12 seconds with WM, and 51 seconds with BEM. Accordingly, as seen in , BEM was the most practical method, requiring the least time. While CM, WM, and BEM were found to be generally valid for this sample group, the MCM method was thought to need more studies. To evaluate intra- and inter-observer agreement, the apical opening width and tooth length measurements used in the methods were performed three times on 62 individuals. For all measurements and methods, both intra- and inter-observer agreement was excellent, above 0.90 (range: 0.941-1.000). The purpose of age determination is to establish an individual's physiological age, in the absence of information about their CA or in forensic situations, in a way that does not cause legal losses. For this purpose, teeth or bones are utilised. A wide variety of factors, such as gender, race, environmental and geographical factors, systemic diseases, syndromes, congenital disorders, endocrine disorders, and nutritional disorders, are known to play a role in bone development. The sequence and timing of tooth eruption are known to be less sensitive to environmental influences or endocrine disorders than skeletal growth and maturation. Therefore, dental age determination methods were evaluated in this study. Although radiographic methods are considered simple, rapid, reliable, and the most suitable methods for age determination in terms of cost/benefit, they still have some ethical concerns due to the potential side effects of X-rays. Therefore, this study avoided additional radiation exposure by using panoramic radiographs from the faculty archive that had previously been taken for diagnosis or treatment of patients who applied to our institution. The study included healthy individuals between the ages of 6 and 14.99 years, and the exclusion criteria applied to the radiographs were the same as in similar studies, owing to the limitations of the methods evaluated in the study.
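For readers who want to reproduce the "absolute difference within 1 year" accuracy metric reported above, a minimal Python sketch is shown below; the EA/CA pairs are invented and serve only to demonstrate the calculation.

```python
def within_one_year_pct(estimated_ages, chronological_ages):
    """Percentage of cases with |EA - CA| <= 1.00 year."""
    hits = sum(1 for ea, ca in zip(estimated_ages, chronological_ages)
               if abs(ea - ca) <= 1.0)
    return 100.0 * hits / len(estimated_ages)

# Invented EA/CA pairs for five children (years, decimal format).
ea = [7.1, 8.9, 10.4, 12.8, 14.2]
ca = [7.5, 8.2, 11.6, 12.5, 13.9]
print(f"{within_one_year_pct(ea, ca):.1f}% of estimates within +/- 1 year")
```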
Also, the maximum age that can be measured with each method is as follows for boys and girls, respectively: CM: 14.06 and 13.69 years, MCM: 14.87 and 14.51 years, WM: 16.03 and 15.99 years, and BEM: 14.6 and 14.33 years. It is known that different genetic and environmental factors can cause people with the same CA to have different body characteristics and developmental levels. In countries like Turkiye, where people of diverse ethnic backgrounds coexist, it is crucial to extensively test and validate age estimation methods across a wide range of populations and regions to ensure their reliability. WM and BEM are dental age determination methods developed by researchers using data from their own populations. In both methods, the developmental stages of the teeth are determined using the table prepared by Demirjian et al. CM is a measurement-based method developed for the population of the researcher who devised it and allows calculation of an individual's EA by entering the data into a specific formula. Kış et al. similarly modified the formula of Cameriere et al. for Turkish children. While examining the suitability of these methods for the study group, the same tooth developmental stage data were used for the WM and BEM, and the same apical opening and length data were used for the CM and MCM. According to the literature review, no other study was found that evaluated the suitability of the MCM formula for the Turkish population other than that of Kış et al., and no other study evaluated the BEM for Turkish children other than that of Çarıkçıoğlu & Değirmenci. In addition, to the best of our knowledge, this study is also the first to compare these two methods with WM and CM for Turkish children. Therefore, it is believed that the MCM requires further validation through more tests before it can be reliably applied to Turkish children. While DM is known to overestimate the CA of Turkish children, WM, a method developed from DM, gives results that are closer to CA in most studies and is more accurate than DM in terms of age determination. Therefore, WM was preferred over DM in this study. While it was the method closest to CA for boys (0.08), it underestimated the age of girls (-0.45). Gulsahi et al., in their pioneering study of CM for the Turkish population, reported that CM is relatively fast, inexpensive, practical, and user-friendly, and that it can be used in forensic situations. We believe this may be due to genetic similarities between Italians and Turks. Among the studies comparing WM and CM for the Turkish population, a study conducted with 400 children aged 6 to 14.99 years in the Central Black Sea region (0.76 and 0.49 years with WM and -0.10 and -0.17 years with CM in girls and boys, respectively) and another study conducted with 636 children aged 6 to 14.99 years living in the Thrace region reported that CM gave results closer to CA than WM. In a study conducted with 1024 children aged 6 to 15.99 years living in the Northwest Anatolia region (0.1 and 0.35 years with WM, -0.53 and -0.48 years with CM for girls and boys, respectively) and in a study conducted with 330 children from Central Anatolia aged 5 to 15.90 years living in the same region as our study (-0.062 and -0.05 years with WM, -0.55 and -0.6 years with CM for girls and boys, respectively), WM was found to be more appropriate than CM.
In this study, the EA-CA difference was found to be -0.45 and -0.43 with WM and 0.22 and 0.17 with CM for girls and boys, respectively, and it was observed that CM gave results closer to the actual age for both genders. This is likely because the samples in the other studies came from different regions of residence, whereas in the study that evaluated the same region, the sample size and age groups were different. In forensic medicine, the acceptable accuracy of EA-CA for persons up to adolescence has been reported to be ± 1.00 years. In other words, in addition to the mean difference, the percentage of absolute difference values within one year is also important for evaluating the suitability of a method for a population. Ozveren et al. (77.3% for girls and 84.6% for boys) and Hato et al. (79.9% for girls and 80.6% for boys) reported that CM provided a higher percentage accuracy in absolute difference values within 1 year compared with WM. This is similar to the current study (77.8% for boys and 78.3% for girls). On the other hand, Çarıkçıoğlu & Değirmenci (81.5% for girls and 76.6% for boys) reported that WM had a higher percentage of accuracy than CM. This may be because, in addition to the samples coming from different climates, different methods may give more accurate or highly inaccurate results in different age groups. The limitations of this study include the inclusion of children from a single region and the lack of time measurement during the application of the methods, which resulted in subjective assessments. Therefore, if a method is to be evaluated or modified for a specific population, it should be assessed with the largest possible sample size. In a country like Türkiye, where various ethnic backgrounds coexist, individuals from multiple regions should be included in the study to ensure broader representativeness. When the groups are evaluated separately, the closest results to CA for both sexes were obtained with CM in the Central Anatolia region, around Karaman province. However, when the mean age difference for all age groups is evaluated by gender, the closest results to all individuals' CA were obtained using the BEM. This is thought to be because, for the CM, the maximum age limit is 14.06 years for boys and 13.69 years for girls. The MCM, a modification for the Turkish population, was the method with the highest mean difference from CA in our study group. For this reason, if a valid age determination method is to be modified for a population, children from different provinces should be included in the sample group, especially in countries like Turkiye, where many different ethnic origins live together. Idea/Concept: Şahin, Güleç; Design: Şahin; Control/Supervision: Şahin, Güleç; Data Collection and/or Processing: Şahin, Güleç; Analysis and/or Interpretation: Şahin, Güleç; Literature Review: Şahin, Güleç; Writing the Article: Şahin, Güleç; Materials: Şahin, Güleç. None disclosed.
Bibliometric comparison of Nobel Prize laureates in physiology or medicine and chemistry
a0b555c4-ea96-4e10-b684-2b24027b0a4c
11422443
Physiology[mh]
The Nobel Prize is an annual award founded by the Swedish engineer, inventor, and entrepreneur Alfred Nobel (1833–1896) (Hansson et al. ). The Nobel Prize is awarded to those researchers whose work has been of the greatest benefit to humanity in the year in question. It is awarded in the fields of physics, chemistry, physiology or medicine, literature, and peace efforts and is regarded as the highest scientific honor in the respective disciplines. There has also been an award in the field of economics since 1969, but this is not officially categorized as a Nobel Prize. Since the foundation was established in 1901, 609 Nobel Prizes have been awarded to 975 laureates, of which the Nobel Prize in physiology or medicine has been awarded to 225 persons to date. A Nobel Prize can be awarded to several researchers, each of whom is then considered a Nobel Prize laureate. As a rule, however, a Nobel Prize is not awarded to more than three researchers. The Nobel Prize in physiology or medicine has been awarded by the Nobel Assembly at Karolinska Institute since 1901 ( https://www.nobelprize.org/about/the-nobel-assembly-at-karolinska-institutet/ ; last accessed on 03/18/2024). In his will, Nobel had stipulated that the prizes should be awarded to the most worthy, regardless of their nationality, and he made no mention of gender. He decided to establish a foundation that would award annual prizes to researchers whose discoveries or inventions had contributed to the well-being of humanity in the previous year (Zárate et al. ). The gender gap in the number of Nobel Prize candidates and laureates in the fields of physiology or medicine is striking (Hansson and Fangerau ). The Nobel Prize Committee has been criticized for appearing to ignore the contributions of women in science (Mahmoudi et al. ; Silver et al. ; Valian ; Wade ). Many Nobel Prizes have direct or indirect pharmacological relevance (Table ). This background prompted us to perform a bibliometric analysis of the Nobel Prize laureates in physiology or medicine and chemistry (in this field only topics related to pharmacology) from 2006 to 2022. Most importantly, we wished to answer the question of whether there is any bias against women in this group. We selected the last 15 years at the start of the project to capture contemporary research. In addition to that, the history of the Nobel Prize is also a history of changing processes in science and medicine (Hansson et al. ). Therefore, we wanted to analyze the current awarding practice. The 16th year was added because that prize was being awarded when we collected the data, in order to remain as up-to-date as possible. The focus on recent Nobel Prizes also allows us to perform important comparisons with papers on gender aspects in science encompassing a similar historical period (Zehetbauer et al. ; Zöllner and Seifert ). Table provides an overview of the Nobel Prize laureates analyzed. The year of award, name, gender, year of birth, nationality of the laureate, research topic honored by the Nobel Prize, research institution, and country of the institution are provided, all publicly available ( https://www.nobelprize.org ). Every laureate is identified by a number used throughout this paper. In this paper, we are not so much considering individual laureates as overarching patterns. Only in occasional cases do we mention a specific laureate to highlight a specific trait. For an in-depth analysis of individual Nobel Prize laureates, the reader is referred to the excellent work of Hansson et al.
The present paper is meant to provide a general bibliometric analysis of contemporary Nobel Prize laureates in the sense of a meta-analysis to identify overarching patterns and mechanisms underlying awarding of the Nobel Prize. The list of Nobel Prize laureates was compiled via the Nobel Prize website ( https://www.nobelprize.org ). Nobel laureates ( n = 55) from the fields of physiology or medicine and chemistry (in this field only topics related to pharmacology) were listed according to their age and gender, their nationalities, their publications, citations and research rankings, and subsequently their productivity peaks and their research locations. The inclusion criteria were all Nobel Prize laureates from the years 2006–2022 in the fields of physiology or medicine, supplemented by prize laureates in the field of chemistry who were honored for a research topic related to pharmacology. For each researcher, a bibliometric analysis was performed using the Clarivate database ( https://clarivate.com/products/scientific-and-academic-research/research-analytics-evaluation-and-management-solutions/ ; last accessed 06/08/2023). The Journal Impact Factor, which is calculated annually by Clarivate Analytics and published in the Journal Citation Reports, is widely used to compare journals. It is now frequently used to assess the quality of journals, although this use is controversial. For this work, the publication numbers for each research year of each individual Nobel Prize laureate were retrieved from Clarivate and analyzed by linear regression. Furthermore, with these data, we analyzed the publication peaks of the Nobel Prize laureates. In addition, the nationalities of the Nobel Prize laureates and their locations of research were compiled and analyzed from university websites and the Nobel Prize website. In a further step, the statistical data analysis was initially carried out using the Statistical Package for the Social Sciences software (SPSS® Version 25), ANOVA (variance analyses of women and men), and Excel. We used GraphPad 8 to create the graphs, the statistical software R with the package ggplot2 for the relevant tests for frequency distribution, mean value determination, t-tests, p-values, and Pearson r, and Excel to display the pie charts illustrating the percentage differences between women and men. Whenever possible and meaningful, the results of women were compared with the results of men. We calculated cross-tabulations with Cramér's V and the significances for the number of Nobel Prize laureates, correlations to show the connections between publications and citations, one-factorial ANOVA calculations and linear regressions to calculate the correlations when comparing female and male Nobel Prize laureates, and mean value determinations to compare the female and male results and the respective standard deviations. The results were presented and visualized in different graphics to show the respective totality, the female characteristics, and the male characteristics. We analyzed 41 Nobel Prize laureates (74.5%) from physiology or medicine, and 14 Nobel Prize laureates (25.5%) from chemistry (Fig. ). At 18.2%, the proportion of female award laureates was significantly lower than that of male award laureates (81.8%) (Fig. ). There is a clear difference between the genders in the subjects awarded the Nobel Prize: in physiology or medicine, only 14% of the prize laureates were women, while the proportion of women in chemistry was 36%.
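To make the group comparisons and correlations described above concrete, the following minimal Python sketch illustrates the type of computation involved. It is an editorial illustration with hypothetical example values, not the authors' original SPSS/R/GraphPad workflow.

    import numpy as np
    from scipy import stats

    # Hypothetical example data; the real values come from the laureate table.
    age_female = np.array([60, 62, 55, 48, 66])   # age at award, female laureates
    age_male = np.array([70, 65, 72, 61, 69])     # age at award, male laureates

    # Welch's t-test for the difference in mean age at award between genders
    t_stat, p_age = stats.ttest_ind(age_female, age_male, equal_var=False)
    print(f"mean age women {age_female.mean():.1f}, men {age_male.mean():.1f}, p = {p_age:.3f}")

    # Pearson correlation between number of publications and citations per laureate
    publications = np.array([150, 300, 80, 450, 220])
    citations = np.array([12000, 30000, 5000, 60000, 18000])
    r, p_r = stats.pearsonr(publications, citations)
    print(f"Pearson r = {r:.3f} (p = {p_r:.3f})")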
There was a significant difference between the genders ( p = 0.039) in relation to the average age of Nobel Prize laureates at the time of the Nobel Prize awarding. On average, the age of female laureates was 60.1 years and of male laureates 67.4 years (Fig. ). The oldest male and female Nobel Prize laureates were 85 and 84 years old, respectively. The youngest male and female Nobel Prize laureates were 46 and 48 years old, respectively. The standard deviation is larger for male Nobel Prize laureates than for female Nobel Prize laureates. Figure shows the nationalities of the Nobel Prize laureates. The USA dominated Nobel Prize awards, among both women (40%) and men (51%). However, notably, among women, three countries were represented that were not present among men. Specifically, female Nobel Prize laureates were recorded from Israel, Australia, and China. Conversely, the UK, Japan, Germany, Sweden, Denmark, Canada, India, Italy, Ireland, and Luxembourg were represented among men, but not women. Figure shows the relation between the number of publications and citations of all Nobel Prize laureates (panel A) and separately for women (panel B) and men (panel C). The point cloud of female Nobel Prize laureates is more scattered than that of male Nobel Prize laureates. Among the male laureates, two laureates stand out as having a significantly higher number of publications and citations than all other laureates. There are no such features among women. The Pearson correlation between citations and publications was r = 0.763 for women and r = 0.667 for men. The slope is almost identical for women and men, with a slightly flatter slope for women. Thus, there are no major differences between the genders. Figure shows the individual distribution of publications of Nobel Prize laureates (panel A). Both among men and women, there is a huge variation in the number of publications, ranging from more than 1,200 (Nobel Prize laureate No. 25) to 0 (Nobel Prize laureate No. 26). Overall, most publications of Nobel Prize laureates were published before the award (mean value for women 273.9; for men 284.5). After the award, the mean value of publications for women was 47.6, and for men 48.8. This reflects the fact that the award is usually given in late stages of the career (see Fig. ). However, it should also be noted that most of the researchers are still actively engaged in science after the Nobel Prize award. Figure shows the individual H-index (Hirsch index) distribution among Nobel Prize laureates. Hirsch defined the H-index as “an index to quantify an individual’s scientific research output. A scientist has index H if H of his or her papers have at least H citations each and the other papers have ≤ H citations each” (Hirsch ). The H-index is therefore intended to describe the reception of publications by individual academics in the scientific community. There is a huge variation in the H-index of the Nobel Prize laureates, ranging from > 200 (Nobel Prize laureate No. 35) to 0 (Nobel Prize laureate No. 26). The mean value for women is 78.78, and 90.20 for men (panel B), with men having a much larger variation than women. The age-adjusted H-index was calculated by dividing the H-index by the age of the Nobel Prize laureates. The results show that women and men do not differ significantly in terms of their age-adjusted H-index (Fig. ). There was a large variation in this parameter, ranging from > 2.5 (Nobel Prize laureates No.
32 and 35) to 0 (Nobel Prize laureate No. 26). The mean value for women is 1.238 and for men 1.26, with a larger variance among men (0.36) than among women (0.250). Figure shows the average number of publications per year. The yellow line in panel A shows the year of the Nobel Prize awarding. The years to the left of 0 describe the time before the awarding (with a minus in front of the numbers), while the numbers to the right describe the years after the awarding. The number of publications is highest on average at approximately 10 per year for around 20–24 years prior to receiving the Nobel Prize. However, the differences between the individual Nobel Prize laureates are very large. Women and men reach their productivity peak at about the same age. The 20 years immediately before the Nobel Prize awarding (especially the last two years) are more productive for Nobel Prize laureates than the time after the Nobel Prize (Fig. ). The average age of the year with the most publications to date is 53.44 years for female Nobel Prize laureates and 55.31 years for male Nobel Prize laureates. The standard deviation is significantly wider for male Nobel Prize laureates than for women (Fig. ). There was no significant difference between the groups. Figure shows the research locations at the time of the awarding. Adding up the researchers from Stanford University, the Scripps Institute, Rockefeller University, Harvard University, Yale University, and the University of California, Berkeley (all USA) yields 36% (and therefore more than 1/3), but no individual university is significantly overrepresented. Most of the other research locations are evenly distributed. Panel B shows the research locations of the female awardees. The 10 female awardees conducted research at 10 different universities, but 50% conducted research at a US university. Among the male awardees (panel C), there is also a fairly balanced distribution of research universities. In a direct comparison of countries, however, 58% of all award laureates conduct their research in the USA, 12% in Japan, 17% in the UK, and just 10% in four other countries. A limitation of our work is the small database of female Nobel Prize laureates. In addition, we focused on quantifiable bibliometric parameters. Furthermore, there is a very large variation among the individual career paths and productivities of individual Nobel Prize laureates that is not appreciated by our analysis. Most strikingly, even without a single publication and, hence, a non-existent bibliometric track record, important scientific achievements can be made, e.g., by laureate No. 26. We had to limit our bibliometric analysis to a certain calendar date, but it cannot be excluded that, in the future, the recognition of female scientists who have already been awarded the Nobel Prize will change. Even though the Nobel committees' mandate is to honor scientific achievements for the benefit of humankind, their interpretation of this criterion was primarily based on their assessment of the groundbreaking nature of the science, while the applied or practical utility of the discovery or bibliometric values such as the number of publications, citations, or H-index assessed in the current study are at best secondary factors when awarding the prize (Källstrand ). In fact, some Nobel Prize laureates (e.g., Nos. 17, 18, 23, 26, 40) have only a few publications or none at all. Hansson et al. state that it is difficult to measure this “greatest benefit to mankind” or brilliance in science in an objective way.
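For readers unfamiliar with the metric, the following short Python sketch shows how the H-index quoted above and the age-adjusted H-index used in this study are computed. It is an editorial illustration with a hypothetical citation record, not part of the original analysis.

    def h_index(citations):
        """Largest H such that H papers have at least H citations each (Hirsch's definition)."""
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # Hypothetical publication record: citation counts of individual papers
    citations_per_paper = [250, 120, 80, 45, 30, 12, 9, 3, 1, 0]
    h = h_index(citations_per_paper)

    # Age-adjusted H-index as used in the study: H-index divided by the laureate's age
    age_at_award = 67
    print(h, round(h / age_at_award, 3))   # 7 and 0.104 for this toy record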
To the best of our knowledge, this is the first study that aims to provide a bibliometric comparison of female and male Nobel Prize laureates. Based on numerous studies pointing to discrimination against women in science (Ceci and Williams ; Moss-Racusin et al. ; Ball ; Beaudry and Larivière ; Ceci and Williams ; Charyton et al. ; Harding ; Kulis and Sicotte ; Lubinski et al. ; Ma et al. ; Ross et al. ), it cannot be excluded that even among this group of absolute elite scientists, some sort of discrimination occurs. However, looking at numerous bibliometric parameters, we did not obtain evidence for a bias against women. For crucial parameters such as publications before the Nobel Prize, citations, age-adjusted H-index, productivity peak, and research location, we did not find evidence for systematic discrimination of female Nobel Prize laureates relative to male Nobel Prize laureates. Instead, women were awarded the Nobel Prize at a significantly younger age than men, although both genders have a similar age with regard to the peak of research productivity. Thus, surprisingly, our study shows that the research accomplishments of female Nobel Prize laureates are actually recognized earlier than those of men. This strongly argues against the Nobel Prize committee being discriminatory against women, although the current Nobel assembly is male-dominated. There are six Nobel Committee members for physiology or medicine, five male members and just one female member ( https://www.nobelprize.org/about/the-nobel-committee-for-physiology-or-medicine/ ; last accessed 03/29/2024). In the case of systematic discrimination against women, we would have expected female Nobel Prize laureates to be much older than their male counterparts and to need many more publications and citations and a higher H-index. This was, however, not the case. We also did not notice overrepresentation of a specific country or research institution among female Nobel Prize laureates. Thus, it appears that the current Nobel Committee tries to look for the best candidates for the Nobel Prize independently of gender. This is supported by the fact that, concerning contemporary Nobel Prize laureates in the topics discussed here (Table ), there has never been such an egregious case of omitting a female scientist as the non-consideration of Rosalind Franklin, who made seminal contributions to the identification of the DNA structure (Conti ). The most controversial case of non-consideration for the Nobel Prize in recent times in the fields considered here probably concerns a male (Salvador Moncada for the nitric oxide/cGMP pathway), where bias against him coming from a developing country was speculated to have played a role (Lancaster ). In the present study, representation of citizens from developing countries is poor as well (Table ). Scientists coming from developed countries dominate the field regarding Nobel Prize awards. The number of female Nobel Prize laureates with a relation to pharmacology is much smaller than the number of male Nobel Prize laureates. A gender gap is not only observed for the Nobel Prize but also for other scientific awards (Hansson ). Hence, our present study complements the current knowledge on gender imbalance concerning scientific awards. The study of Zehetbauer et al.
showed that the number of female first authors in pharmacology-related papers, mostly reflecting PhD students and postdocs, is much higher than the number of female senior authors, the latter reflecting group leaders conducting independent research. This study suggests that the major drop in the number of female researchers occurs at the transition from the PhD student and postdoc stage to the group leader stage. This career stage often collides with family planning. Thus, a major factor accounting for the small number of female Nobel Prize laureates is the smaller number of female researchers who enter an intellectually independent research career: an unwritten prerequisite for becoming eligible for the Nobel Prize. All of the Nobel Prize laureates in Table fulfill the criterion of long-term research as an intellectually independent investigator. But it must also be taken into consideration that both female and male scientists are not just passive objects in a career system but that they also make active decisions about what they do and what they do not do in their scientific careers (Zöllner and Seifert ). The latter study showed that female German pharmacologists invest much less in social capital (scientific visibility in the German science community via the journal “Biospektrum”) than their male counterparts, although they are very much encouraged to do so by the Executive Board of the German Pharmacological Society and although the time effort needed to become visible is low. Visibility is important for being recognized as a potential award candidate. The study also noted substantial gender differences between various scientific fields regarding investment in visibility. The aspect of voluntary conscious decisions of individuals is, unfortunately, substantially underrated in the current gender discussion in science. The group of Nobel Prize laureates is a very small group of elite researchers, and only a minority of all important research accomplishments is awarded the Nobel Prize (Pohar and Hansson ). Thus, it will be very important to expand this type of bibliometric research to a larger population of scientists, independently of an award. One approach could be to analyze the group of the leading 10,000 or 100,000 scientists globally, relying on an integrative approach including the number of publications, citations, and H-index. The advantage of analyzing many scientists is that it is much easier to analyze cultural differences among different countries. It will also be worthwhile, 10 years from now, to repeat the current study and compare the Nobel Prize laureates from 2006 to 2022 with those from 2023 to 2032. Interviews should be conducted with scientists regarding their professional choices. Lastly, it will be important to analyze the contributions of scientists from developing countries, both male and female, who may not have received the Nobel Prize.
Validation of entrustable professional activities for use in neonatal care residency programs
2959b178-891e-4fe5-8f2b-ad6b1d28c2d5
11662742
Pediatrics[mh]
Since the beginning of the 21st century, graduate and postgraduate medical training in different countries has been transitioning to competency-based education (CBE) as a response to the concern regarding the quality of medical education and the lack of social accountability and training flexibility. In Brazil, a core competencies matrix has been proposed for graduate students, and the Ministry of Education has already established the competencies for most medical specialties and areas of expertise, including neonatology. , , However, the transition to a competency-based model is not straightforward, probably because of the difficulty in assessment. The gaps between a well-designed competency structure and the assessment of clinical practice might contribute to the problem. In 2005, the concept of entrustable professional activities (EPA) was introduced in medical education to bridge the gap between competence, assessment, and clinical practice. An EPA is a unit of professional practice, or a profession-specific task, to be entrusted to students once they have demonstrated the integration of essential knowledge and appropriate skills and attitudes. EPAs should be core activities that define the practice of a specialty. While competencies are descriptors of the individual's personal qualities, EPAs describe the tasks that must be performed in the workplace. The decision to entrust a student to perform a given professional activity depends on the transfer of responsibility between the supervisor and the medical resident, and it is called an entrustment decision. Entrustment decisions entail subjectivity, and supervisors often make such decisions in uncertainty. To help reduce the subjectivity of entrustment, Ten Cate et al. defined levels of supervision, which are assigned to those evaluated according to the skills already acquired and demonstrated by them. A set of EPAs can be used to define the framework of a specialty curriculum. An EPA-based curriculum has the potential to link clinical training to the daily work of physicians and can be built using the following steps: (i) identification of core EPAs; (ii) description of each EPA (title, specifications, limitations, required knowledge, skills and attitudes, source of information to inform entrustment decision, expected level of supervision, and expiration date); (iii) definition of tools to monitor and record residents ´ performance (e.g. portfolios); (iv) allowing flexibility in the training pathway(6). A set of EPAs has already been validated for use in Pediatric Cardiac Critical Care in the United States. In neonatal care, EPAs have already been described by the American Board of Medical Specialties (ABMS) and by the Royal College of Physicians and Surgeons of Canada (RCPSC). , The curriculum created by the ABMS is based on seven EPAs referring to training in pediatrics, with an additional five EPAs specific to neonatology. The RCPSC establishes 24 EPAs necessary for residency training in neonatology, with increasing complexity. In the present study, the authors aimed to define and develop a set of EPAs that could be used to link clinical training and assessment of neonatal medicine residents. Study design A qualitative study was conducted in two phases to develop EPAs for use in assessing neonatology medical residents in the hospital components of neonatal care. The first study phase involved drafting a list of EPAs and the second phase aimed at content validation using the modified Delphi method. 
Participants The authors invited coordinators from six neonatal medicine residency programs in Belo Horizonte to participate in the first phase of the study, and five agreed to participate. The invited hospitals were referral centers for perinatal health in Minas Gerais State, being responsible for the labor and delivery of high-risk pregnant women and the care of their newborn babies. The number of participants in the first phase was defined based on a similar study that invited five pediatric intensive care unit (PICU) physicians to draft a set of EPAs for Dutch PICU fellows. For the second phase, the authors invited a convenience sample of 50 neonatal care physicians and medical residents from Minas Gerais, Brazil. The sample size was chosen to reflect the usual size for Delphi studies, between 15 and 60 participants. Procedures The stepwise procedure of this study is shown in . Design of entrustable professional activities (EPAs) A committee of five neonatal care coordinators drafted a list of EPAs between November 2021 and June 2022. The list was aligned with the competencies related to neonatal hospital care established by the Ministry of Education for Residency Programs of Neonatology in Brazil (2). All committee members had more than ten years of experience in resident supervision in neonatology residency programs. Validation of entrustable professional activities using the modified Delphi method The initial list of EPAs (title, description, and specification) was submitted to a group of neonatal care physicians and residents to obtain consensus using a modified Delphi method. This method involves the iterative process of developing and distributing a questionnaire of statements, based on the content that needs to be validated, to a group of participants; after each round of responses, the researchers analyze the results and improve the content based on the observations. The process is repeated until the best possible level of consensus is reached. The Delphi method was chosen because it is an approach to developing consensus widely used in medical education research, particularly in designing EPAs. , Since this method does not require participants to interact directly, it minimizes undue dominance by specific individuals and guarantees anonymity. The panel members were asked to rate the indispensability and clarity of each EPA on a five-point Likert scale. Finally, panelists were asked to rate the comprehensiveness of the list of EPAs and to provide comments and suggestions for improvement. After each round of consensus, the committee reviewed the results and, if necessary, revised the EPAs. This study phase was conducted between July and September 2022. Data analysis In each round of consensus, the median, mode, interquartile range, and content validity index (CVI) for 'indispensability' and 'clarity' were calculated for the Likert scale data of each EPA. The CVI, the degree to which an instrument has an appropriate sample of items for the measured construct, was calculated as the number of panelists who gave one of the two highest ratings for each EPA, divided by the total number of panelists. CVI values can vary from 0 to 1. As a cutoff score, the authors determined that a CVI of 0.8 or greater indicates sufficient content validity, a CVI between 0.70 and 0.79 implies that the item needs revision, and a CVI below 0.70 indicates elimination of the item, based on the literature.
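For illustration, the CVI calculation and the cutoffs described above can be expressed in a few lines of Python. This is an editorial sketch with hypothetical ratings; the thresholds mirror those stated in the text, but the code is not part of the original study.

    from statistics import median

    def content_validity_index(ratings):
        """Share of panelists giving one of the two highest ratings (4 or 5 on a 5-point Likert scale)."""
        return sum(1 for r in ratings if r >= 4) / len(ratings)

    def classify_item(ratings):
        cvi = content_validity_index(ratings)
        if cvi >= 0.8:
            decision = "sufficient content validity"
        elif cvi >= 0.70:
            decision = "needs revision"
        else:
            decision = "eliminate item"
        # Items with a median rating below 4 are also returned to the committee for revision
        below_consensus = median(ratings) < 4
        return cvi, decision, below_consensus

    # Hypothetical ratings from 10 panelists for one EPA
    ratings = [5, 4, 4, 5, 3, 4, 5, 4, 2, 5]
    print(classify_item(ratings))   # (0.8, 'sufficient content validity', False)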
If the median for an item was below the predetermined consensus level of 4 for 'indispensability' and 'clarity,' that item was revised by the coordinators' committee. Ethical approval was obtained from the Institutional Review Board of Unifenas (CAAE 56,484,122.7.0000.5143), and all participants signed informed consent. Five participants from four different medical centers (three public and one private hospital) agreed to participate in the committee responsible for drafting the EPAs. The coordinators' committee drafted seven EPAs and sent them to a panel of 50 neonatal care physicians and medical residents for content validation. In the first Delphi round, responses were obtained from 37 (74%) participants. Most respondents were women (89.2%), aged between 40 and 49 years (45.9%), board-certified in neonatology (64.9%), had more than ten years of clinical experience (67.6%), and worked in Belo Horizonte (86.5%). Five (13.5%) panel members were neonatal care residents, and four (10.8%) neonatal care physicians had less than five years of clinical experience. After the first round, all EPAs had a CVI above 0.8 and reached the predefined threshold for indispensability and clarity. Regarding the comprehensiveness of the proposed EPAs, no suggestions were made for adding another EPA, and the final list is shown in . The following suggestions were incorporated into the preliminary list of EPAs: (i) In EPA 3 (“Providing rooming-in care for the newborn”), the element “promoting breastfeeding” was added, and (ii) in all other EPAs, the element “clinical documentation” was added. The researchers then conducted a second round to validate the revised EPAs and 13 responses were obtained. The CVI for indispensability and clarity in the second round was similar to that obtained in the first, with scores ranging from 0.97 to 1. A complete description of one of the EPAs is shown as an example . A detailed specification of the seven EPAs is presented as Supplementary Material. In the present study, the authors reached a consensus on seven EPAs to be used in designing and assessing neonatology residency programs. By defining the EPAs' titles and describing their specifications and limitations, the authors aimed to contribute to the initial steps in implementing EPA-based assessment in neonatology training in Brazil. Other specialties in Brazil have been discussing and defining EPAs for specialty training. The Brazilian Federation of the Association of Gynecology and Obstetrics (FEBRASGO) has already defined 21 EPAs for residency training. As far as the authors know, this study is the first to define and develop a set of EPAs as the basis for competency-based teaching of neonatology in Brazil. An EPA-based curriculum must be constructed by a broad group of stakeholders involved in the specialist's daily routine.
Although the consensus group was composed mostly of individuals with extensive experience, the authors intentionally included medical residents and early career specialists, who comprised almost a quarter of the participants. In addition, the EPAs must reflect the context where the physician will practice after training. The authors used the Delphi method to reach a consensus on the EPAs that reflect the practice of the neonatal care specialist in Brazil. It is important to emphasize that the objective of the Delphi method is not necessarily to obtain absolute consensus on all items but to achieve the maximum possible convergence of opinion. The modified Delphi method was chosen because it has been widely used in studies aiming to define and validate EPAs in medical education. In most of these studies, consensus is reached in two or three rounds, and a high level of agreement among responses was observed, similar to what was found in the present study. The number of EPAs that need to be assessed and entrusted in a medical residency program varies according to the area of expertise and regional/national legal requirements for certification. An excessive number of EPAs may risk increasing the complexity of assessment and the demand on clinical supervisors. Conversely, broader and fewer EPAs can reduce the complexity and allow a more holistic view of the resident. Ten Cate suggests that an adequate number of EPAs for a complete residency program should vary between 20 and 30. However, a recent Dutch study defined only nine EPAs to train Pediatric Intensive Care Unit residents. In addition, the American Board of Pediatrics (ABP) has proposed ten EPAs for pediatric intensive care medicine and five for neonatology. Finally, a Delphi study with program directors of the Accreditation Council for Graduate Medical Education (ACGME)-certified neonatal-perinatal medicine fellowships defined thirteen EPAs for neonatology. In this study, the authors defined seven EPAs, which seemed adequate for a subspecialty in which the resident has already been trained in pediatrics. When defining the set of EPAs for this study, the authors followed a framework based on “service provision,” one of the most commonly used. In such a perspective, the EPAs are broad and general, reflecting activities as they are usually assigned to the training physician. Entrustment decisions in this “service provision” framework assume adequate experience with diseases and procedures usually encountered during these services. A drawback of this framework is its lack of case specificity, which demands careful attention to sampling in assessment and makes summative entrustment decisions challenging. A limitation of the study concerns the national validity of the set of EPAs, as the original list of EPAs was drafted by experts from a single state in Brazil, and consensus was obtained from specialists from the same region. However, since the list of EPAs was built considering the national matrix of competencies, the authors may assume that neonatal medicine residency programs in other parts of Brazil could use it as a component of a competency-based assessment strategy. Seven EPAs were defined and developed for use in neonatal medicine residency programs. Implementing competency-based assessment in postgraduate medical education is challenging, and this set of EPAs might be an important step towards operationalizing such a strategy in neonatal care training. The authors declare no conflicts of interest.
Uptake of pediatric patient-reported outcome and experience measures and challenges associated with their implementation in Alberta: a mixed-methods study
82fa73de-8383-4793-b10f-cb31b7d6ac09
10353095
Pediatrics[mh]
In recent years, there has been a shift in healthcare provision, pivoting towards a more Patient- and Family-Centered Care (PFCC) framework for healthcare decision-making . In pediatrics, PFCC emphasizes partnership and collaboration with patients and families when formulating and individualizing their treatment plans. The importance of such care strategies has led to the recognition of PFCC as a central indicator for high-quality health care in patient-clinician interactions . The goal of PFCC is to empower patients and their families in their care by ensuring that their voices are heard and respected. Instead of traditional physician-dominated consultations, patients are encouraged to participate in a dialogue surrounding their own healthcare decisions and develop a collaborative relationship with clinicians and health systems . In recognizing what is important to patients, healthcare providers and health systems can adapt and improve their services to best-fit patients' and families' needs, a crucial step towards providing more comprehensive and efficacious healthcare . One effective way to involve patients and families in conversations about their health is through the use of Patient-Reported Outcome Measures (PROMs) and Patient-Reported Experience Measures (PREMs) . PROMs and PREMs are standardized and validated questionnaires that allow patients to self-report their current health status and experiences receiving care, respectively . PROMs inquire about a patient's functional capacity (generic or disease-specific) and wellbeing. They measure intrinsic outcomes, such as functional status and health-related quality of life (HRQOL) . Disease-specific PROMs can help address particular disease symptoms impacting health conditions and outcomes . Alternatively, PREMs measure care aspects related to the experience of a health encounter which includes patient-provider communication, the clinical environment, or efficiency in healthcare delivery. Thus, PREMs help capture patients' and families' feedback regarding their experience interacting with the healthcare system . PREMs typically provide information for quality improvement or program evaluation initiatives. Together, results from PROMs and PREMs can be used to provide PFCC . Despite the indisputable benefits of using PROMs and PREMs to deliver PFCC, their implementation lags in routine pediatric clinical care . Previous research has identified implementation barriers in adult patients, including the assurance of patient comprehension, fears of workflow obstruction, limited capacity to integrate responses into clinical care, and insufficient technological infrastructure to facilitate survey completion . The use of PROMs and PREMs in pediatric populations poses additional challenges, such as assessing the capacity of the patient to effectively comprehend survey questions and weighing the benefits of by-proxy survey completion, while still ensuring that the patient's voice is being heard . In Canada, Alberta Health Services (AHS) provides all healthcare services within the province. AHS has established a Patient First Strategy , an organization-wide initiative to improve PFCC practices, including patient engagement and partnership . Within AHS, there are sporadic uses of PROMs and PREMs in clinical care and research, as well as a general lack of integration of PROMs and PREMs in routine clinical care, especially in pediatric health services. 
To facilitate province-wide implementation of pediatric PROMs and PREMs, it is essential first to understand the current use of these measures in Alberta. It is equally essential to explore the perspectives of current pediatric PROM and PREM users to understand current practices and the system-level challenges these users face. Therefore, this mixed-methods study aims to understand the current uptake of pediatric PROMs and PREMs in Alberta and the challenges associated with their implementation in routine clinical care. Design We conducted a convergent-parallel mixed-methods study comprised of quantitative and qualitative arms. The convergent-parallel study design is an approach of concurrently collecting complementary qualitative and quantitative data on the same phenomenon, followed by the convergence of data to facilitate a more comprehensive interpretation . At the methods level, the integration of quantitative and qualitative data was achieved by bringing together data from two arms of analysis and comparison to understand the current uptake of pediatric PROMs and PREMs, as well as study participants' perceptions of the challenges associated with their implementation in Alberta. Ethics Ethical approval for this study was obtained from the University of Calgary's Research Ethics Board (REB21-01441), with all study participants providing verbal consent prior to participating in the qualitative interview and implied consent prior to completing quantitative surveys. Administrative approval was also obtained from Alberta Health Services (AHS). Study setting and participants This study was conducted in the Canadian province of Alberta. Alberta is Canada's fourth-most populous province and is served by AHS, Canada's first and largest province-wide fully integrated health system. The pediatric health ecosystem in Alberta includes two tertiary pediatric hospitals, Stollery Children's Hospital in Edmonton and Alberta Children's Hospital in Calgary. There are also five regional hospitals with a limited number of dedicated pediatric units. AHS has also established the Maternal Newborn Child and Youth Strategic Clinical Network (MNCY SCN™), one of 11 SCNs™ established as learning health systems to facilitate the translation of the latest evidence into practice. Additionally, the Alberta Children's Hospital Research Institute (ACHRI), affiliated with the University of Calgary, and the Women and Children's Health Research Institute (WCHRI), affiliated with the University of Alberta, serve as two major academic pediatric research institutions. Participants of this study were healthcare professionals with experience using pediatric PROMs and PREMs and/or interested in using these measures in practice, quality improvement or for clinical research. Participants were comprised of pediatric clinicians, pediatric health services researchers and community care providers. Materials For the study's quantitative arm, a survey was developed by our team to capture the current uptake of pediatric PROMs and PREMs in Alberta. This survey included 24 questions (see Additional file : Appendix 1). It focused on variables of interest, such as the name of the specific measure used, the type of health setting, mode of administering the measure, reasons for use (i.e., research, quality improvement, program evaluation, mental health, etc.), date of initial use, and methods of data reporting. This survey was designed in Qualtrics (Qualtrics, Provo, Utah, USA). 
For the qualitative arm of the study, an interview guide (see Additional file : Appendix 2) was developed to explore participants' knowledge, experiences, and perceptions of using pediatric PROMs and PREMs in their respective clinical practice or health services research projects. Data collection All the data were collected between May 2021 and April 2022. Participants were recruited by disseminating study information in regular newsletters sent by the Departments of Pediatrics, ACHRI, WCHRI, and the AHS MNCY SCN™. The study invitation included a link to complete the anonymous survey through Qualtrics. In addition, a list of potential participants was compiled based on publicly available information about professions and positions in AHS. These potential participants were also sent emails inviting them to complete the survey. The study recruitment information shared through these channels also included an invitation to contact the study coordinator (SB) if the participants wished to be interviewed for the qualitative arm of the study. In addition, a snowball sampling approach was also utilized to recruit participants for the qualitative interviews. All qualitative interviews were conducted virtually via Zoom. Before each interview, verbal consent was obtained from each participant. Interview participants received a $20 gift card to acknowledge their time and insights. All the interviews were audio-recorded and transcribed verbatim. Data analysis Quantitative data collected through surveys were imported into MS Excel for descriptive statistical analysis. The users of pediatric PROMs and PREMs were categorized by their primary affiliation, clinical area of use, and whether they used only PROMs, PREMs, or both. A list of PROMs and PREMs was also compiled based on the responses provided in the quantitative survey. Lastly, the uses of pediatric PROMs were categorized into clinical care, research, and care evaluation. Similarly, the uses of pediatric PREMs were categorized into quality improvement, research, and care evaluation. A pie chart was created to demonstrate the frequency of different pediatric PROM and PREM modes of administration, which included mail, phone, email, e-survey at the clinic, and paper (in the clinic or a secure portal). Qualitative data were transcribed verbatim and imported into NVivo 12 (QSR International Pty Ltd, Melbourne, Australia) to guide coding, organizing, and synthesis of the data. In the first step, two randomly chosen interview transcripts were coded independently by three research team members (SB, SR, and MZ) to develop a codebook consisting of code definitions and associated quotes. Some changes were made to the codebook when additional categories were identified in subsequent interviews. Then, a researcher (SB) iteratively coded the remaining transcripts using this codebook and identified the patterns in the form of themes. Key statements demonstrating the beliefs of participants were attributed to themes and sub-themes. Final themes and sub-themes were shared with other team members to seek their feedback on thematic groupings and the selection of supporting quotes. These themes and sub-themes were then narratively described along with the de-identified quotes illustrating participants' core beliefs on the specific theme. Finally, results from quantitative and qualitative analyses were integrated and narratively interpreted to find convergence, divergence, contradictions, or relationships between quantitative and qualitative study findings.
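As a simple sketch of the descriptive analysis described above, the frequency of administration modes could be tabulated as follows. This is an editorial illustration with made-up survey rows; the original work used MS Excel.

    from collections import Counter

    # Hypothetical survey responses: mode of administration reported by each participant
    modes = [
        "email", "e-survey at clinic", "paper", "email", "phone",
        "mail", "e-survey at clinic", "paper (secure portal)", "email",
    ]

    counts = Counter(modes)
    total = len(modes)
    for mode, n in counts.most_common():
        print(f"{mode}: {n} ({100 * n / total:.0f}%)")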
The quantitative and qualitative data were collected concurrently, and a merging approach was used to integrate the findings from both arms of the study . First, the findings from the quantitative arm of the study are reported, followed by the findings from the qualitative arm. Finally, qualitative and quantitative data integration was accomplished through a joint display and contiguous narrative approach at the interpretation and reporting level . Quantitative data Twenty-eight people participated in the quantitative survey; however, six of these participants opened but did not complete any of the survey questions.
Therefore, only data from 22 participants were included in the final quantitative analysis (See Table ). Fifty-nine percent ( n = 13) of the participants had a primary affiliation with AHS. The most common area where pediatric PROMs and PREMs were used was general child health (18%, n = 4), followed by respirology (15%, n = 3) and rehabilitation (15%, n = 3). Most participants (60%, n = 13) completing the quantitative survey used both PROMs and PREMs. One participant did not provide information on their uses for pediatric PROMs, so among 21 respondents (See Table ), the most common reason for PROM use was research (81%, n = 17), followed by clinical care (71%, n = 15) and care evaluation (52%, n = 11). Only 14 participants used PREMs, among which the most common application was for research (71%, n = 10), followed by quality improvement (64%, n = 9) and care evaluation (57%, n = 8). Since participants may have been using pediatric PROMs and PREMs for multiple purposes, the numbers reported for the types of uses were not mutually exclusive. The most common modes of administering PROMs and PREMs were through email (27%, n = 7) and electronic completion at the healthcare facility (27%, n = 7) (See Fig. ). There was significant variation in the current use of pediatric PROMs and PREMs in Alberta (See Table ). In total, 33 unique PROMs were identified by the participants. The pediatric PROMs used in Alberta ranged from generic instruments such as the EQ-5D-Youth and Pediatric Quality of Life Inventory (PedsQL™) , to disease-specific measures like the Knee injury and Osteoarthritis Outcome Score (KOOS) and Children’s Dermatology Life Quality Index (CDLQI) . On the other hand, only six unique pediatric PREMs were identified across all participants. The pediatric PREMs identified included generic PREMs, such as the Child Hospital Consumer Assessment of Healthcare Providers and Systems (Child-HCAHPS) , and condition-specific PREMs like the Measure of Processes of Care (MPOC) . Overall, participants identified 33 PROMs and 6 PREMs, showing diversity in the types of pediatric PROMs and PREMs currently being used in Alberta, with their mode of administration ranging from email to traditional paper–pencil modes. The purposes of using PROMs and PREMs were similarly diverse, including research, clinical care, quality improvement, and care evaluation. Qualitative data We interviewed 14 participants for the qualitative arm of this study, with thematic saturation successfully reached (see Table ). While nine of the 14 participants openly expressed interest in being interviewed, all of them willingly consented. Two participants were purposively recruited because they were known users of pediatric PROMs and PREMs. In addition, two participants were included through snowball sampling, and one participant was contacted through their publicly available profile. Half of the participants were primarily affiliated with AHS, with the remainder affiliated with the University of Alberta, the University of Calgary, or a community organization. All interviews were held over a period of nine months (from May 2021 to January 2022) and lasted between 29 and 48 minutes. Table shows themes and sub-themes around the current use of pediatric PROMs and PREMs in Alberta, as well as the challenges associated with their implementation in routine clinical care.
Qualitative interviews were conducted by SB and SA, who have received graduate-level academic training in qualitative research methodology and have experience conducting interviews and focus groups. Below we have described the themes and sub-themes surrounding the current use of pediatric PROMs and PREMs in Alberta, as well as the challenges associated with their implementation in routine clinical care. Use of Pediatric PROMs and PREMs in Alberta One purpose of the qualitative inquiry was to understand how PROMs and PREMs are being used in pediatric health settings across Alberta. This larger theme focused on the specialty-specific implementation and participant rationales behind PROM and PREM use. Specialty-specific implementation Since study participants came from diverse backgrounds, they were able to provide an overview of the different clinical areas in which PROMs and PREMs are used. “Our other study is a care for disease study for children who have neurodevelopmental disabilities.” (HCP -10). “I think most of everything we measure in pain medicine, including pain severity, is patient-reported” (HCP – 02). Often different health systems will choose a few generic or disease-specific PROMs and PREMs to implement in routine clinical care; however, these statements demonstrate how the study participants also came from diverse clinical backgrounds (i.e., pain medicine) where great importance is placed on the use of patient-reported measures. Rationale for using PROMs and PREMs Despite a lack of province-wide implementation of pediatric PROMs and PREMs, some clinicians and health service researchers were using PROMs and PREMs. After probing these participants further on their rationale for using these measures, four additional sub-themes emerged concerning their beliefs about the utility of these measures. Offering greater insights into patients' conditions Study participants considered these measures as tools that provide them more information about how a certain disease or health encounter impacts their patients (and families). “I think PROMs help with getting a better view, how the patient feels overall, so I think that is where I'm confident that it really helps us.” (HCP—02). Tracking outcomes over time Participants also endorsed the use of PROMs and PREMs to track patients' trajectories by monitoring patients' health outcomes and experiences over more extended periods of time. They believed that such long-term monitoring of PROMs and PREMs data in clinical care helps them and their clinical teams to improve their patients' health outcomes and experiences. “The next three months we're going to do again and if the next time we have the same we’re- we’re going to make this step that we're going to increase treatment or stop treatment.” (HCP – 11). Promoting shared decision making Participants believed that using PROMs and PREMs helps to promote shared decision making. According to them, since these measures are directly reported by patients and/or their family caregivers, they can help evaluate different treatment options that matter most to the patients and/or family caregivers. “I think it helps really a lot with decision making and trying to make these difficult decisions of stopping or adding a medication.” (HCP – 04).
Facilitating patient management Participants at the frontlines of providing clinical care felt that PROMs and PREMs offered them greater insights into patients' conditions, highlighting that the patient perspectives captured by PROMs and/or PREMs enabled them to better manage their patients’ symptoms and provide PFCC. “I think they're very important to integrate a patient's perspective, it helps you to make a good management plan going forward and you've got often, you get what matters to the patient rather than what you think matters to the patient.” (HCP – 10). Training requirements Although participants either knew about PROMs and PREMs or were already using them in their clinical practice or health services research, they highlighted the desire to receive more training on the science behind developing PROMs and PREMs and on the optimal ways to use PROMs and PREMs data in clinical care. “As a clinician, one needs to be familiar with the specific tools and how they’re used, what they show, broadly, what's the evidence behind them? Because I think it's important to understand that broadly…” (HCP – 06). “They need those evidence-informed teaching tools to be able to provide that consistent information to families that will make them much happier because they won't be confused and frustrated. And that's a better experience.” (HCP – 05). Administration of pediatric PROMs and PREMs Our quantitative data showed that participants were using different modes to administer and collect PROMs and PREMs data, so we explicitly asked participants about their experience using different modes of administration and any specific challenges associated with them. All the qualitative data on administration modality for PROMs and PREMs were grouped into this theme. “For us they are all on paper and then we have to transfer them into the electronic system, which also brings another complication because that could potentially also again, like you know, put a bias in it because we don't transfer exactly what has been put on paper” (HCP – 03). “All the PREMs and PROMs will be under that sink (if in paper form), in the cupboard under the sink. It's much easier with electronic data collection platforms now” (HCP – 01). In this theme, participants highlighted the challenges with traditional modes of administering PROMs and PREMs (i.e., paper-based) and underscored the importance of moving towards electronic administration. Study participants also proposed creating a repository of PROMs and PREMs results, which could be utilized for multiple purposes, including clinical care and research. Challenges associated with PROM and PREM implementation Study participants faced, or anticipated facing, several challenges with implementing PROMs and PREMs in clinical care. The majority of the challenges shared by study participants were associated with clinicians, with patients and family members, or with the health system at large; therefore, we divided this theme into these three sub-themes, respectively. Clinician-associated challenges Limited capacity to address PROM- and PREM-identified issues Although participants overwhelmingly supported the use of PROMs and PREMs in Alberta, they were also sceptical about their abilities to address some of the issues identified by the measures, suggesting that these might be outside their scope of practice.
Participants stated that sometimes their patients might disclose information about how their clinical condition has impacted their social life or mental health, but healthcare providers might not be trained to deal with such issues or might not have adequate supports. “You know, you're asking a patient ‘tell me how you feel?’ and then they tell you ‘I feel crap,’ and then you’re saying, ‘I'm sorry, we don't have the resources to do anything about it.’ Right?” (HCP – 03). Personal apprehension about the use of PROMs and PREMs According to study participants, some of their peers might have personal apprehensions about the utility of PROMs and PREMs, as well as about the non-suitability of specific PROMs and PREMs to clinicians' personal style of practicing medicine. The use of PROMs and PREMs was compared to any other intervention, which has early adopters as well as laggards, who are slow to adopt the change; such hesitancy may arise for various reasons, including personal reservations about the intervention itself. Participants believed that one of the reasons for slow adoption might be that some of their colleagues consider PROMs and PREMs to be a nuisance rather than a useful tool for clinical practice. “I suspect some will intuitively get it more readily than others. Some will be a bit slower; some will say yeah, this is useless.” (HCP – 01) Other barriers Some other barriers mentioned by the participants included interruptions in clinical flows and an inability to select the right measure for the right scenario. One important barrier relevant to the implementation of PROMs and PREMs in Alberta was identified as “implementation context”. Participants felt that because of the size of Alberta’s health system, each pediatric healthcare facility had cultivated a distinct work culture. Therefore, a lack of understanding of each clinical context could constitute a major barrier to the province-wide implementation of PROMs and PREMs. “I think you need to know the limitations of the PROMs and PREMs, and you also need to know if they fit in the- in the context.” (HCP – 12) Patient and family-associated challenges Study participants believed that it is equally essential to engage patients and family caregivers in order to successfully use PROMs and PREMs in routine clinical care. Lack of understanding of the importance of PROMs and PREMs Since PROMs and PREMs require active engagement from patients and family members, participants underlined the importance of ensuring that patients and families understand the value of using such measures. “What is my most experience is that you make sure that patients understand what patient-reported outcome means.” (HCP – 11). Capacity to complete PROMs and PREMs Similarly, participants felt that some patients and family members might be interested in completing these measures but might not have the capacity to actually complete them. This lack of capacity could be attributed to issues such as parents’ burden of caring for more than one child, the length of PROM and PREM measures, and a lack of language proficiency. Therefore, limited capacity to complete these measures was viewed as a barrier that could potentially hamper the uptake of PROMs and PREMs in routine clinical care. “… those are usually much more extensive PROMs, which sometimes is a bit of a burden on the families, of course, because it's a lot of questionnaires that need to be filled.” (HCP – 14).
System-level challenges Lastly, several challenges were associated with the infrastructure and policies within AHS. Some of the system-level challenges were also attributed to the COVID-19 pandemic. Connect Care AHS has recently rolled out a province-wide electronic medical record system called Connect Care, with most participants expressing very high hopes for Connect Care's ability to facilitate the use of pediatric PROMs and PREMs in Alberta. Currently, however, the lack of integration of PROMs and PREMs within Connect Care was identified as a major system-level challenge. “Connect Care will help us with that, we're not there yet, we're working on it.” (HCP – 03) Policy mandate Participants believed that without policy mandates to incorporate PROMs and PREMs in routine clinical care, it would be difficult to scale and spread the use of PROMs and PREMs in Alberta. A similar experience was shared by one key participant who explained the impact of policy making on increasing the uptake of PROMs and PREMs in cancer care. “If you look at cancer care, I mean they've got this PREMs and PROMs stuff down because they've had bundles of money for years because Cancer Research is actually embedded in the act, in the Cancer Care Act. Did you know that? Do you know that that's not embedded in any other clinical care? But it's embedded in the Cancer Care Act, which is why if you've got a policy, the money has to follow the policy” (HCP – 01). Impact of COVID-19 pandemic Lastly, participants shared some challenges they faced in using PROMs and PREMs that were specific to the COVID-19 pandemic. At the time of data collection, AHS was using virtual tools to provide clinical care for non-acute patients. Several participants shared that they did not believe this format was suitable for administering PROMs or PREMs. “It's very hard to do PROMs because they're on paper (and appointments are) through Zoom, so we miss a lot of PROMS and PREMs” (HCP – 12) Convergence of quantitative and qualitative findings Table is a joint display illustrating the convergence of our findings from the quantitative and qualitative arms. As is evident in the findings from both arms of the study, most quantitative and qualitative findings complemented each other. In Table , the left column lists major findings from the quantitative arm, and the right column highlights complementary findings captured through qualitative interviews. The quantitative and qualitative findings were predominantly convergent; we did not find any divergent or contradictory findings. The qualitative arm of the study was more exploratory, so it provided additional unique findings that highlighted some of the challenges associated with implementing pediatric PROMs and PREMs in Alberta.
The growing evidence base around the effectiveness of PROMs and PREMs in supporting PFCC is irrefutable . AHS is Canada's largest integrated health system and has enacted the Patient First Strategy . However, as shown by the results of our study, PROMs and PREMs are not consistently incorporated into routine pediatric clinical care. This mixed-methods study was conducted to understand the current use of pediatric PROMs and PREMs in Alberta and the challenges associated with their implementation in routine clinical care. This study identified great variation in the types of health settings where pediatric PROMs and PREMs are currently being used. It also showed the diversity in the types of PROMs and PREMs and the purposes for using them. The modes of administering PROMs and PREMs ranged from the traditional paper–pencil mode to email and electronic platforms. Most of the study participants used PROMs and PREMs for research, followed by clinical care, quality improvement, and care evaluation. The challenges in implementing PROMs and PREMs in routine clinical care were associated with physicians, patients and family caregivers, and the overall health system. In Alberta, women account for over 80% of the healthcare workforce, which explains the proportionally higher number of women in our study. A recent systematic review emphasized that organizations implementing PROMs need to invest time and resources into ‘designing’ a PROMs strategy and ‘preparing’ the organization to use PROMs . Another recent study from the Netherlands found that PREMs implementation strategies need to focus on designing and preparing implementation at the patient-clinician interaction level . These studies highlight healthcare organizations' role in facilitating the implementation of PROMs and PREMs. The quantitative arm of this study found 33 PROMs and 6 PREMs currently being used in pediatric health systems in Alberta. This number might look large, but there are hundreds of PROMs and PREMs developed by researchers and health systems based on their specific needs . Another recently published systematic review of childhood PROMs identified 89 generic PROMs, including 110 versions . Therefore, the number of disease- or condition-specific PROMs could be considerably greater. Similarly, our team's systematic review of pediatric PREMs identified 49 pediatric PREMs being used worldwide . This illustrates that large health systems like AHS need to strike a balance between standardization, by implementing a few PROMs and PREMs across the province, and adaptation according to individual unit or clinician needs. Participants offered several rationales for using PROMs and PREMs. According to them, these measures offered greater insights into patients’ conditions and experiences, promoted shared decision-making, facilitated patient management, and helped track patient outcomes and experiences over time.
These were the primary rationales for developing PROMs and PREMs and have been reported widely in the adult population as well . Our study confirms similar uses of these measures in pediatric healthcare. At a broader level, participant-identified uses of pediatric PROMs and PREMs included research, clinical care, quality improvement, and care evaluation. These uses are also highlighted in relevant published literature on this topic . In fact, the literature highlights the potential of PROMs in transforming healthcare if the individualized and aggregated PROMs data is used in clinical care, research, or care evaluation . Similar use of PREMs also has great potential to improve health system performance . The findings from our study show that clinicians and health service researchers in Alberta rightly use pediatric PROMs and PREMs but face many challenges identified through the qualitative arm of the study. Some of the challenges, such as personal apprehensions about PROMs and PREMs, and the inability to address issues identified by PROMs and PREMs, can be mitigated by engaging clinicians in the process of selecting these measures and jointly creating clinical management pathways . Patient and family-associated challenges could be mitigated by educating patients and families, and supporting them through the completion of PROMs and PREMs before and after clinical encounters . The major system-level challenge identified in the literature and our study is the lack of integration of PROMs and PREMs within electronic medical records . AHS is currently rolling out Connect Care, a province-wide electronic medical records system . AHS plans to implement PROMs and PREMs through Connect Care in the future, so some of the system-level challenges may be mitigated. Another important challenge identified by participants was associated with policy mandates by health systems to integrate PROMs and PREMs in clinical care. The US Food & Drug Administration and the European Medicines Agency have mandated the use of PROMs to support labelling claims . Similarly, the National Health Service (NHS) of England has mandated the use of PROMs for certain elective surgeries . These policy mandates have been effective in standardizing the use of PROMs across the healthcare system. Therefore, AHS should also consider developing recommendations and policy mandates to support the use of pediatric PROMs and PREMs across Alberta. There is growing evidence around the use of pediatric PROMs and PREMs in different health systems worldwide ; however, to our knowledge, this is the first study to use a mixed-methods approach to comprehensively understand the experiences and perspectives of PROMs and PREMs users within a large integrated health system. Some of the findings from this study would be helpful for other pediatric health systems, recognizing, however, that every health system is unique and some of our findings may be highly specific to the Alberta context. AHS could utilize the findings from this study to develop a province-wide pediatric PROMs and PREMs implementation strategy. Strengths and limitations Combining qualitative and quantitative methods within the same study allows for a more comprehensive understanding of the phenomenon under investigation by strengthening and validating the results .
This study's convergent mixed methods approach gathered complementary data to provide a comprehensive and multidimensional understanding of the uptake of pediatric PROMs and PREMs in Alberta and the system-level challenges associated with their implementation. There were also several limitations to this study. First, despite our efforts to reach all the users of pediatric PROMs and PREMs for the quantitative arm of the study, we might have missed some users of pediatric PROMs and PREMs in Alberta. In addition, participation in this study was voluntary, so selection bias might have excluded people who use PROMs and PREMs but did not wish to participate. The qualitative arm of the study was more exploratory, so it elicited more unique findings. However, due to the concurrent nature of this study, unique themes identified through qualitative interviews could not be measured through quantitative surveys. Lastly, the COVID-19 pandemic had disrupted the healthcare system, so the results might not reflect post-pandemic times. Currently, a large research program is underway to generate evidence to support the province-wide integration of pediatric PROMs and PREMs in Alberta using KidsPRO, an innovative e-health solution. The KidsPRO program will utilize findings from this study. However, future studies should comprise a larger sample size and be conducted in non-pandemic times. Although integrating PROMs and PREMs in clinical care is recognized as an effective way to deliver PFCC, their use is limited in pediatric healthcare systems in Alberta. This study shows significant variation in the types of PROMs and PREMs, the rationales for their use, and the modes of administration, demonstrating the diverse and sporadic use of these measures in Alberta. Our study also highlights the lack of a standardized approach to implementing pediatric PROMs and PREMs in Alberta.
The findings from this study could help healthcare organizations like AHS to develop evidence-based PROM and PREM implementation strategies in routine pediatric clinical care. Additional file 1: Appendix 1. Quantitative survey to collect data on the current use of pediatric PROMs and PREMs in Alberta. Appendix 2. The interview guide to collect qualitative data on the use of pediatric PROMs and PREMs in Alberta and the challenges associated with their implementation.
Update of the DVO Guideline 2023 "Prophylaxis, Diagnosis and Therapy of Osteoporosis in Postmenopausal Women and in Men Aged 50 Years and Older" – What is new for rheumatology?
4909484c-3989-4b78-acf3-47827c1a19f4
11147822
Internal Medicine[mh]
Unchanged from the 2017 guideline, rheumatoid arthritis, axial spondyloarthritis (ankylosing spondylitis), and systemic lupus erythematosus are listed as risk factors for osteoporosis. With regard to drug therapies, systemic administration of glucocorticoids, taking dose and duration of use into account, continues to be associated with a significant increase in fracture risk at the spine and hip. Dual-energy X-ray absorptiometry (DXA) remains the preferred standard method for measuring bone mineral density. According to the current recommendations, DXA bone mineral density measurement should be performed at the lumbar spine and at both proximal femora (femoral neck and total femur). However, only the total hip T-score enters into the assessment of the treatment threshold. With regard to quantitative computed tomography there is a change: if a bone mineral density measurement by quantitative computed tomography is available (at least two evaluable vertebral bodies between thoracic vertebra 12 and lumbar vertebra 4) and the measured trabecular bone mineral density is < 80 mg/ml hydroxyapatite, these data can be used to assess fracture risk. One of the most important innovations is that the prediction of the 10-year fracture risk has been abandoned and replaced by the 3-year risk of femoral neck and vertebral fractures. In contrast to the 2017 guideline, the current version does not define a fracture risk threshold for diagnostics (DVO guideline 2017: 20% osteoporotic fracture risk over 10 years). In the presence of corresponding risk factors (e.g., vertebral or femoral neck fracture), basic osteological diagnostics should be performed in postmenopausal women and in men aged 50 years and older. Because of the low fracture risk before menopause and in men under 50 years of age, basic osteological diagnostics are generally not recommended, although, taking the overall clinical situation into account, such diagnostics may be initiated in the sense of case finding. From a rheumatological perspective, the conditions rheumatoid arthritis, spondyloarthritis, and systemic lupus erythematosus, as well as systemic (oral) glucocorticoid administration, are particular risk factors that warrant initiating diagnostics. In women older than 70 years, bone mineral density measurement is performed even in the absence of risk factors, provided there is a corresponding therapeutic consequence. Unchanged from the DVO guideline 2017, a calcium intake of at least 1000 mg per day and vitamin D3 supplementation of 800–1000 IU daily are recommended. New in the DVO guideline 2023, three treatment thresholds for specific osteological therapy are defined, based on the 3-year fracture risk for vertebral and femoral neck fractures: 3% to < 5%, 5% to < 10%, and 10% or more. The treatment thresholds are calculated using the tables provided in the guideline (see Fig. ). A digital calculator for determining the treatment thresholds is planned to be implemented in spring 2024.
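Purely as an illustration of how these three thresholds act as a decision rule, the following minimal Python sketch maps an already estimated 3-year fracture risk to the therapy category described above. The function name and the plain percentage input are assumptions made for this sketch; the actual DVO calculator derives the risk from age, sex, DXA results, and the weighted risk factor tables, and clinical judgement can override the thresholds in high-risk situations.

```python
def dvo_treatment_category(three_year_risk_percent: float) -> str:
    """Map an estimated 3-year risk of vertebral/femoral neck fracture (in %)
    to the treatment category named in the DVO guideline 2023.

    Illustrative sketch only: the real guideline calculator derives the risk
    from age, sex, bone mineral density, and weighted risk factors."""
    if three_year_risk_percent >= 10:
        return "initiate osteoanabolic therapy (followed by an antiresorptive sequence)"
    if three_year_risk_percent >= 5:
        return "specific therapy indicated; osteoanabolic therapy may be considered"
    if three_year_risk_percent >= 3:
        return "specific therapy indicated, primarily antiresorptive"
    return "no specific therapy indicated by the risk threshold alone"


if __name__ == "__main__":
    # Print the category for a few example risk estimates.
    for risk in (2.0, 4.0, 7.5, 12.0):
        print(f"{risk:>4.1f}% -> {dvo_treatment_category(risk)}")
```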
Furthermore, in patients with a high fracture risk in the context of strong fracture risk factors, the guideline allows earlier initiation of specific osteological therapy independently of the three treatment thresholds mentioned above. The guideline does not provide a clear definition of this high-risk situation. Table provides an overview of the duration of specific osteological therapy according to the DVO guideline 2023. At a 3-year fracture risk of 3% to < 5%, specific osteological therapy is recommended; antiresorptive drugs should primarily be used here. Osteoanabolic therapy may be considered at a fracture risk of 5% to < 10%, whereas at a fracture risk of 10% or more osteoanabolic therapy should be initiated. Osteoanabolic therapy is administered as sequential therapy, i.e., the osteoanabolic phase is followed by antiresorptive therapy to maintain bone mineral density. Teriparatide or romosozumab (within its approved indication) can be used as osteoanabolic therapy. Indications for teriparatide are postmenopausal osteoporosis, osteoporosis in men, and glucocorticoid-associated osteoporosis, each with a high fracture risk. By contrast, romosozumab can only be used in postmenopausal osteoporosis with a markedly increased fracture risk. Furthermore, a post hoc analysis showed an increase in lumbar spine bone mineral density of 16.6% after 2 years with the romosozumab-denosumab sequence, compared with 15.2% for the romosozumab-alendronate sequence. According to the approval status and the treatment recommendations, teriparatide therapy may only be administered once, whereas the sequence of romosozumab followed by an antiresorptive agent can be repeated as often as required. It should be noted that romosozumab is only approved for the treatment of postmenopausal osteoporosis. In the approval studies of romosozumab, numerically more ischemic cardiovascular events (0.8%), ischemic cerebrovascular events (0.8%), and cardiovascular deaths (0.8%) were observed than with alendronate (ischemic cardiovascular events 0.3%, ischemic cerebrovascular events 0.3%, cardiovascular death 0.6%), so romosozumab should not be used in patients with a previous myocardial infarction or ischemic stroke. In this context, cardiovascular risk must be evaluated before starting romosozumab and weighed against the fracture risk. Important contraindications to teriparatide include chronic kidney disease (CKD) stages G4 and G5 as well as hypercalcemia, hyperparathyroidism, Paget's disease, and osseous malignancies including skeletal metastases. In addition, risk factor gradients were defined for each risk factor, which can lower the treatment threshold with respect to antiresorptive or osteoanabolic therapy.
If several risk factors are present, the risk factor gradients of the two strongest risk factors should be multiplied; however, within the groups "vertebral fractures", "rheumatology and glucocorticoids" (exception: axial spondyloarthritis), and "fall-risk-associated risk factors/geriatrics", two risk factors should not be multiplied with each other, so that the fracture risk is not overestimated (a simplified illustrative sketch of this combination rule is given below, after the summary points). Table provides an overview of the risk factors. Rheumatological risk factors with a risk gradient that are associated with a change in the treatment threshold are axial spondyloarthritis, rheumatoid arthritis, and glucocorticoids, the latter weighted according to the daily dose in prednisolone equivalent and the duration of therapy. Systemic lupus erythematosus is a risk factor for osteoporosis; nevertheless, it is not listed as a risk factor with a risk gradient. In the view of the guideline authors, systemic lupus erythematosus has a low prevalence in the general population, and the disease is usually managed at specialized rheumatological-osteological centers with the appropriate expertise. The guideline additionally notes that patients with systemic lupus erythematosus are often under 50 years of age and have a high risk of osteoporosis owing to inflammatory activity and glucocorticoid use. For these reasons, basic osteological diagnostics should be performed before the age of 50 as a case-by-case decision and, if necessary, specific osteological therapy should be initiated. The guideline newly defines osteoanabolic therapy in the setting of glucocorticoid treatment: in individuals with a high fracture risk receiving glucocorticoid therapy with prednisolone (> 5 mg prednisolone equivalent per day for more than 3 months), osteoanabolic therapy with teriparatide should be used as the primary option. Bisphosphonates are the only substance class for which the updated guideline discusses evaluating a treatment pause once the risk falls below the treatment threshold. If the fracture risk remains high, therapy should be continued. A treatment pause can be discussed after 5 years of alendronate or 3 years of zoledronate, provided no osteoporotic fractures occurred before or during therapy. In addition, the femoral neck T-score should be > −2.5 standard deviations and no further risk factors should be present. After pausing bisphosphonate therapy, monitoring of bone turnover markers as well as bone mineral density by DXA is recommended. The DVO guideline 2023 again points out in detail that denosumab should not be discontinued without subsequent antiresorptive therapy, as a significant increase in fracture risk is otherwise observed (vertebral fractures before denosumab therapy 16.4%; vertebral fractures during denosumab therapy 2.2% versus vertebral fractures after discontinuation of denosumab 10.3%). For this reason, denosumab should be continued without a time limit. This is particularly important in individuals with chronic kidney disease, since all bisphosphonates as well as
raloxifene are not approved in chronic kidney disease stages CKD G4 and G5 and, in some cases, in stage G3. If pausing or discontinuing denosumab therapy is necessary, subsequent intravenous therapy with zoledronate must be given (1 to 2 applications, depending on the course of bone mineral density and bone turnover markers). The first dose of zoledronate is administered at the end of the denosumab dosing interval. If zoledronate cannot be given, alendronate may be prescribed as an alternative. After denosumab therapy has been stopped, bone turnover markers must be checked at months 3 and 6 after discontinuation, and depending on the findings an earlier additional application of zoledronate may be indicated. If bone turnover markers are not determined, the guideline recommends administering zoledronate 6 and 12 months after the last denosumab dose. From an osteological and rheumatological perspective, the treatment of osteoporosis in renal insufficiency and the management of glucocorticoid-induced osteoporosis play a decisive role. With regard to impaired renal function, it should be noted that osteoporosis can occur in addition to renal osteodystrophy. The terms renal osteodystrophy and osteoporosis should not be equated: osteoporosis is associated with a reduction in bone mineral content and changes in bone microarchitecture with an increased fracture risk, whereas renal osteodystrophy is characterized by hypercalcemia, hypocholesterolemia, hyperparathyroidism, calcitriol deficiency, and an elevated FGF-23 concentration. Patients with progressively impaired renal function have an increased fracture risk. In this context, the treatment of osteoporosis in impaired renal function should also be addressed by corresponding guideline recommendations. As in the previous versions, the updated DVO guideline does not take a detailed position on the treatment of glucocorticoid-induced osteoporosis and its differential therapy; reference is made instead to the 2021 recommendations of the Deutsche Gesellschaft für Rheumatologie on the management of glucocorticoid-induced osteoporosis. These two topics, which are important from an osteological-rheumatological perspective, are again not addressed in the current DVO guideline 2023. In summary, the key innovations of the DVO guideline 2023 are the shift of fracture risk assessment to the 3-year threshold and the use of sequential therapy for osteoporosis, with primary osteoanabolic therapy followed by antiresorptive therapy. Key points: consideration of the total hip T-score (measured by DXA bone mineral density measurement) for determining the treatment threshold; specific treatment thresholds depending on the 3-year fracture risk; primary osteoanabolic therapy at a 3-year fracture risk of 10% or more.
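To make the risk factor combination rule described earlier more concrete, here is a small hypothetical Python sketch: the gradients of the two strongest risk factors are multiplied unless both belong to the same factor group, in which case only the stronger one is applied. The group labels and numeric gradients are placeholders rather than values from the guideline tables, and the sketch deliberately omits the axial spondyloarthritis exception.

```python
from dataclasses import dataclass


@dataclass
class RiskFactor:
    name: str
    group: str       # e.g. "vertebral fractures", "rheumatology/glucocorticoids"
    gradient: float  # multiplicative effect on fracture risk (placeholder value)


def combined_gradient(factors: list[RiskFactor]) -> float:
    """Multiply the two strongest risk factor gradients, but never combine two
    factors from the same group, so that fracture risk is not overestimated."""
    if not factors:
        return 1.0
    ranked = sorted(factors, key=lambda f: f.gradient, reverse=True)
    strongest = ranked[0]
    for other in ranked[1:]:
        if other.group != strongest.group:
            return strongest.gradient * other.gradient
    return strongest.gradient  # remaining factors all share the strongest factor's group


# Hypothetical example; the gradients are illustrative, not guideline values.
factors = [
    RiskFactor("prior vertebral fracture", "vertebral fractures", 2.0),
    RiskFactor("prednisolone > 5 mg/day", "rheumatology/glucocorticoids", 1.5),
    RiskFactor("rheumatoid arthritis", "rheumatology/glucocorticoids", 1.4),
]
print(combined_gradient(factors))  # 2.0 * 1.5 = 3.0
```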
Salvaging From Limb Amputation in an Acute Complicated Type B Aortic Dissection Patient
d5315fdf-240a-4064-96ae-21e277411d88
11838834
Surgical Procedures, Operative[mh]
BACKGROUND Aortic dissection is a condition in which an intimal tear allows blood to pass through the tear and into the aortic media, splitting the vessel into a true lumen and a newly formed false lumen. It is associated with genetic disorders such as Marfan syndrome, Ehlers-Danlos syndrome, and Loeys-Dietz syndrome, or may result from cardiovascular risk factors including smoking, hypertension, and familial hyperlipidemia . The incidence of aortic dissection is approximately 5–30 cases per million population annually, with a higher prevalence in men than in women. Most cases occur between the ages of 50 and 70. Type B aortic dissection with complications, as classified by Stanford, is a cardiovascular emergency requiring urgent intervention . OBJECTIVE The aim of this article was to report a complicated Stanford type B aortic dissection with acute limb ischemia and compartment syndrome, successfully managed with limb preservation and aortic repair. CASE PRESENTATION History of presentation A male patient in his 60s with poorly controlled hypertension presented with severe chest pain radiating to the back and right leg ischemia, characterized by coldness, numbness, motor loss, and absent arterial pulses. Past Medical History The patient had a medical history of chronic hypertension and dyslipidemia, both of which had been poorly managed. Investigations On admission, his blood pressure was 200/120 mmHg in both arms. The right leg was cold, with no detectable pulses in the femoral, popliteal, or dorsalis pedis arteries. The patient had lost motor function in the right leg, showed restricted knee joint movement, and had sensory disturbances. Emergency bedside echocardiography, computed tomography (CT) angiography of the aorta and iliac arteries, and blood tests were performed immediately to evaluate basic parameters and creatine kinase (CK) levels. Echocardiography revealed a non-dilated ascending aorta, no pericardial effusion, and normal left ventricular size and systolic function. CT angiography of the aorta and iliac arteries showed a thoracic aortic dissection extending through the descending aorta to the aortoiliac bifurcation. The true lumen was compressed by the false lumen. The dissection reduced blood flow to the celiac trunk and caused complete occlusion of the right external iliac artery . Laboratory blood tests showed an elevated CK level of 241 mg/dL. Management The patient was treated with intensive medical management, including pain control and heart rate and blood pressure stabilization, while preparing for emergency intervention. A thoracic aortic endovascular stent-graft was placed to seal the entry tear and restore blood flow to the right lower limb. The intervention utilized a Relay thoracic stent-graft (32–28 mm diameter, 200 cm length, Bolton Medical), with access via the left common femoral artery . The goals of the thoracic aortic intervention were to seal the primary entry tear, expand the true lumen to improve perfusion of the abdominal organs, and allow the right common iliac artery to reopen and restore blood flow to the right leg. Post-intervention imaging, however, revealed no blood flow to the right leg. The team therefore decided on further limb revascularization by stenting the right iliac artery; a Supera™ Peripheral Stent 8x100 (Abbott) was successfully deployed, re-establishing blood flow to the right iliac artery.
Despite the successful intervention, the patient continued to experience severe pain and swelling, with loss of pulses, sensation, and motor function in the right leg. The leg became tense and swollen. An emergency ultrasound of the lower limb arteries showed no evidence of thrombotic obstruction in the bilateral lower limb vessels, an empty vascular lumen, and marked soft tissue edema in the right leg. Laboratory results showed an alarming increase in CK levels from 241 mg/dL to 194,106 mg/dL. The patient underwent complete fasciotomy of the right thigh and calf to relieve compartment pressure, with hemostasis achieved. Postoperatively, the patient received antibiotics, aggressive fluid resuscitation (4–6 liters of 0.9% sodium chloride per day), and forced diuresis to manage rhabdomyolysis, and hemodialysis was considered. Nutritional support was also optimized. Follow-up After eight days, once the limb had regained normal coloration, pulses were detectable in the femoral, popliteal, and dorsalis pedis arteries, and sensory function was fully restored, the patient underwent fasciotomy wound closure. He was discharged in stable condition, with right leg muscle strength graded at 4/5 and fully restored sensation. At one-month follow-up, the patient demonstrated independent ambulation, and the fasciotomy scars on the thigh and lower leg had healed completely. He reported no chest or leg pain, and his blood pressure was well controlled on the prescribed regimen. Follow-up MSCT angiography confirmed proper placement of the thoracic stent-graft without displacement. The graft maintained 50% blood flow through the left subclavian artery, ensured adequate perfusion to the abdominal arteries, and demonstrated excellent patency of the stented right common iliac artery. DISCUSSION The management of this case highlights the complexity of acute limb ischemia secondary to Stanford type B aortic dissection and the critical role of a multidisciplinary approach. The decision to perform thoracic aortic stent-graft placement and extend stenting into the right common iliac artery was guided by thorough imaging analysis, which revealed compression of the true lumen without thrombosis. This minimally invasive approach effectively sealed the aortic tear, restored lower limb perfusion, and preserved critical vascular structures. Post-procedure complications, such as cerebral, spinal cord, and organ ischemia, as well as acute kidney injury, were significant concerns. However, the timely diagnosis and management of acute compartment syndrome, a severe complication resulting from reperfusion injury, were pivotal in the patient's recovery. Understanding the pathophysiology of compartment syndrome, characterized by increased intracompartmental pressure leading to reduced perfusion and ischemic injury, allowed for early intervention with fasciotomy. This procedure alleviated compartmental pressure, mitigated tissue damage, and prevented progression to permanent muscle and nerve injury. Reperfusion injury, exacerbated by oxidative stress, microvascular obstruction, and interstitial edema, posed a unique challenge. By employing vascular ultrasound and CT imaging, the team accurately diagnosed the etiology of limb ischemia and identified the secondary complication of compartment syndrome. These findings informed a tailored therapeutic strategy involving fasciotomy, fluid resuscitation, and careful wound management.
This case underscores the importance of an integrated clinical, imaging, and surgical approach in managing complex cardiovascular emergencies. Prompt recognition and intervention not only stabilized the patient's aortic dissection but also preserved limb function, demonstrating the value of multidisciplinary teamwork in addressing life-threatening complications effectively. CONCLUSION Stanford type B aortic dissection complicated by limb ischemia is a cardiovascular emergency requiring urgent intervention. Timely monitoring, early consultation with an orthopedic physician to understand the pathophysiology of acute compartment syndrome induced by reperfusion injury, and early diagnosis with prompt fasciotomy to prevent irreversible tissue damage are all crucial to preserving the limb and ensuring the success of aortic repair.
Clinico-Pathological Spectrum of Alveolar Soft Part Sarcoma: Case Series from a Tertiary Care Cancer Referral Centre in India with a Focus on Unusual Clinical and Histological Features
960dfd41-c31f-4a55-acc1-1b4e9d633af0
11131567
Anatomy[mh]
Alveolar soft part sarcomas (ASPS) are rare soft tissue tumors of uncertain histogenesis with a distinctive histomorphological appearance of variably discohesive epithelioid cells arranged in nests and a specific translocation, t(X;17)(p11.2;q25), resulting in ASPSCR1-TFE3 fusion. Marked histologic overlap with other tumors, occurrence at unusual sites, and unusual clinical presentation with a mass at a metastatic site prior to identification of the primary tumor make the diagnosis challenging. The differential diagnoses include a broad range of mesenchymal and non-mesenchymal neoplasms such as paraganglioma, PEComa, granular cell tumor, and metastatic carcinomas such as renal cell carcinoma, hepatocellular carcinoma, and adrenal cortical carcinoma. The present study analyzes the clinical, histopathological, and immunohistochemical profile of ASPS, along with clinical outcomes wherever available. Particular emphasis was placed on unusual histological features. The differential diagnosis and potential pitfalls in the current era of an increasing spectrum of TFE3-rearranged tumors are highlighted. The study is retrospective; all cases with a histopathological diagnosis of ASPS from 2012 to 2021 were retrieved from the archives of the department of Oncopathology at a tertiary care cancer center. Demographic, clinical, and radiological data were retrieved from the case records. Cases lacking either immunohistochemistry (IHC) or paraffin blocks were excluded. Histomorphological and immunohistochemical characteristics were analyzed in each case, and TFE3 immunohistochemistry was performed wherever it was not already available. The histological parameters evaluated were growth pattern, presence of crystals confirmed by periodic acid-Schiff stain with diastase (PAS-D), nuclear features, presence of inflammation, fibrous septa, vascular invasion, necrosis, cystic change, and myxoid change. A total of 22 patients with ASPS (0.4%) were identified out of 5541 soft tissue sarcomas from 2012 to 2021. The patient age range was 2-47 years and the median age was 27 years. The M:F ratio was 0.8:1. The most common site was the lower extremity in 45% (10/22) of the cases, followed by the upper extremity in 27.3% (6/22), the retroperitoneum in 18.1% (4/22), and one case each in the head and neck, chest wall, and lung. Clinical, radiological, and outcome details are given in . Tumor size varied from 3 cm to 22 cm with a mean of 7.8 cm. Lymph node metastasis was seen in only 2 cases. Distant metastasis (54.5%; 12/22) was more frequent than lymph node metastasis. Of these 12 cases with metastasis, 91.7% of the patients had synchronous metastasis while three showed metachronous metastasis. The lung was the most common site (90.9%), followed by the brain, bone, and liver. In one case of ASPS of the forearm, an unusual site of metastasis was the bilateral nasal cavities, with biopsy showing a submucosal tumor. Five of these patients had metastases at multiple sites. Metastasis preceded detection of the primary tumor in two cases: one presented with a posterior fossa mass and the other with a pathological fracture of the right femur, and both were diagnosed as ASPS on biopsy. Subsequently, PET revealed a primary mass in the left iliac region and the right thigh, respectively. Thus, most cases presented with AJCC stage IV at the time of diagnosis (54.5%; 12/22), followed by stage IIIa (22.7%; 5/22), stage I (13.6%; 3/22), and stage II (9.1%; 2/22).
One patient with T-ALL developed ASPS in the left paravertebral location post remission. The sibling of this patient also had T-ALL and developed glioblastoma 4 years post remission. The patient was further evaluated and diagnosed with constitutional mismatch repair deficiency syndrome (CMMRD), with a homozygous deletion (chr7:6026910; delC) detected in exon 11 of the PMS2 gene. Microscopically, all cases showed a multilobular architecture separated by fibrotic bands. The predominant architectural pattern of the tumor cells within the lobules was the organoid pattern (81.8%; 18/22), followed by the alveolar pattern (n=4), with nests encircled by sinusoidal capillary vasculature . The size of the nests was variable, with the number of cells per nest ranging from 10 to as many as 200. Focal solid areas without any intervening vasculature were seen in 3 cases . Thick fibrotic bands were seen in 50% (n=11) of the cases . The rare architectural features noted were infiltration of single cells into septa and focal spindling of tumor cells, in 3 cases each . Cytologically, tumor cells were epithelioid or polygonal with abundant eosinophilic granular cytoplasm in 91% of the cases and predominantly clear cytoplasm in 2 cases. It was also noted that the cytoplasm was more condensed near the nucleus and cleared towards the edge of the cell. The classically described round to oval nuclei with vesicular chromatin, prominent eosinophilic nucleoli, and anisonucleosis were a major feature (>50% of the tumor nuclei) in only 31.8% (7/22) of the cases, while in 68.2% (15/22) of the cases the majority of the nuclei showed wrinkling and a concave nuclear contour without nucleoli, described as apple-bite nuclei . Rare nuclear features included binucleation (n=13), multinucleation (n=8), pleomorphism (n=4), nuclear grooves (n=3), and an intranuclear inclusion (n=1) . Mitotic activity in ASPS is generally rare, with only 5 cases showing occasional mitoses. Necrosis was infrequent and only focally seen, in 6 cases. Lymphovascular emboli were a common finding, seen in 50% of the cases. None of our cases showed perineural invasion. Intratumoral hemorrhage in the center of the nests was seen in 2 cases. Many cells with PAS-D-positive, rod-like crystalline structures in a sheaf-like or stacked configuration in the cytoplasm were seen in 5 cases, while such structures were seen in only occasional cells in 3 cases . There was no significant inflammatory host response in any case. Focal intratumoral lymphocytes were present in 2 cases, but peritumoral lymphocytes were seen in only one case. Other inflammatory cells such as plasma cells, granulocytes, and histiocytes were absent. IHC was performed in all cases to rule out paraganglioma, PEComa, granular cell tumor, and metastatic carcinomas such as renal cell carcinoma, hepatocellular carcinoma, and adrenocortical carcinoma that can mimic ASPS, as guided by the clinical context and morphological features. All cases showed diffuse nuclear positivity for TFE3 and consistent negativity for AE1/AE3, EMA, vimentin, HMB45, PAX8, MyoD1, SMA, synaptophysin, and chromogranin. Only two cases showed focal S100 positivity, while one showed focal desmin positivity . Histomorphological and IHC details are given in . All except one patient with localized (stage I-III) disease were treated with surgical resection with clear margins, with no evidence of disease on follow-up.
Response was noted in 5 cases with tumor size <5 cm, while only 3 of 16 cases with a diameter >5 cm showed no evidence of disease on follow-up. Outcome was not affected by site in our study. A single patient with stage I ASPS of the lung was treated with chemotherapy and radiotherapy, with no response but rather progression of disease, with metachronous metastasis to the liver and bone. All patients with disseminated (stage IV) disease were treated with anthracycline-based chemotherapy and radiotherapy (30 Gy in 10 fractions). On regular follow-up, radiological assessment showed no response but rather progression of disease, with an increase in the size of the tumor at the primary site as well as in the size of the metastases. Of the 4 paediatric patients (age <17 years), a response was noted only in one case each of stage I and stage II disease, while the other two, with stage IV disease, showed no response. No hospital death was reported in any patient. All patients were alive with disease during the limited follow-up period, which ranged from 4 to 108 months. ASPSs are rare soft tissue tumors that constitute <1% of all soft tissue sarcomas. The present study mirrors these findings, with ASPS accounting for only 0.4% of all soft tissue sarcomas diagnosed over a period of 10 years. Studies have established that ASPS more commonly affects young adults; concordantly, the age range in the present cohort was 2-47 years, with four pediatric patients. The literature has a well-documented female-to-male predominance before the age of 30 years, with a reversed ratio at older ages. Our study also corroborates these findings, with an M:F ratio of 0.6:1 in patients less than 30 years of age, while all three patients older than 30 years were male. However, Rekhi et al. reported a male preponderance in their study. The prominent predilection for the extremities in our series is also well reflected in earlier studies. A rare site seen in the present series was primary pulmonary ASPS in a 25-year-old male without evidence of a soft tissue tumor elsewhere at the time of initial diagnosis, as confirmed by PET scan. To the best of our knowledge, only three cases of primary pulmonary ASPS have been reported in the English literature to date. The clinical course in our series illustrates the high incidence of metastatic disease at the time of diagnosis, seen in half of the cases. Many studies have reported metastatic disease at diagnosis in 55% to 65% of patients. The most common metastatic site was the lung, while brain metastasis was always part of disseminated metastasis and never occurred in isolation, a phenomenon also observed by Portera et al. and Keyton et al. In our study, metastases were detected prior to identification of the primary in two cases, a phenomenon also encountered by other authors. One of our cases, with the primary in the forearm, also presented with metastasis in the nasal cavity, which is not reported as a site of metastasis in any of the large series, though rare cases of primary sinonasal ASPS have been reported. Metastases to the lymph nodes are uncommon and were seen in only 2 cases in the present cohort; Portera et al. reported lymph node metastasis in only a single patient out of 70 cases. Our study includes the first reported case of ASPS in a patient with CMMRD. CMMRD is a childhood cancer predisposition syndrome caused by biallelic pathogenic variants in one of four mismatch repair (MMR) genes, i.e., MLH1, MSH2, MSH6, and PMS2.
It is classically associated with hematological, brain, and intestinal malignancies but rarely with sarcoma. Only 30 MMR-deficient bone and soft tissue sarcomas, including 3 ASPS, have been reported in the literature. ASPS metastasis to the breast is considered extremely rare and has been reported in only a handful of cases, but was seen in one of our cases. ASPS is known to have a very classical histomorphology, showing little variation from case to case and site to site. However, the diagnosis is challenging because of morphological overlap with other tumors, particularly on small biopsies, at uncommon sites of occurrence, or when a metastatic site is evaluated prior to identification of the primary, such as in the biopsies from the posterior fossa, bone, and nasal cavity in the present series. Difficulties are further compounded by the occurrence of rare morphologic features, particularly in biopsies, such as a solid pattern, clear cytoplasm, and unusual nuclear features. With regard to pattern, the tumor always had a lobular architecture with variably thick fibrous septa separating the lobules. We noted a significant preponderance of a 'non-alveolar' organoid growth pattern over the alveolar pattern, despite the name of the entity. This needs to be kept in mind, particularly when looking at a small biopsy. Clear cytoplasm, seen as a dominant feature in two of our cases, raises the possibility of confusion with other clear cell tumors. The cells were also found to have a feathery cytoplasm, condensed around the nucleus and pale at the periphery, giving a lacy-skirt appearance. Most studies in the literature, including the WHO 2013 and the latest WHO 2020 classification of tumors of soft tissue, have emphasized vesicular nuclei with prominent eosinophilic nucleoli as a characteristic feature of ASPS, but this was not the most prominent finding in the present series. The dominant nuclear feature (>50% of tumor nuclei) in nearly 68.2% of the cases was bland nuclei with marked nuclear folding leading to concave, apple-bite, and crenated nuclei without nucleoli; these nuclei were focal in the remaining patients. These features were first observed by Fanburg-Smith et al. and Chatura et al. in lingual ASPS, but they were a universal finding in the present series, independent of site. We also observed focal nuclear grooves in 3 cases, which are not documented in the literature. An intranuclear inclusion was seen in one case and was also observed in two cases by Rekhi et al. Awareness of these nuclear features is important; the absence of classical vesicular nuclei should not deter one from the diagnosis of ASPS, particularly in small biopsies. The exact molecular pathogenetic relationship between specific cellular-level structural features and cancer genes is not known. Nucleolar enlargement is classically associated with increased ribosome production, and production of new ribosomes appears essential for cell-cycle progression. Nuclear envelope irregularity may be an effect of downstream signaling of the aberrant transcription factor ASPSCR1-TFE3 altering the structure of the nuclear membrane. Other rare features such as multinucleation and pleomorphism have also been observed in other studies, but with no prognostic significance. Focal mucinous and cystic change reported in the literature were not seen in any of our cases.
Based on morphology, the differential diagnoses considered in the present study were paraganglioma, granular cell tumor, metastatic renal cell carcinoma, adrenocortical carcinoma, hepatocellular carcinoma, rhabdomyosarcoma, PEComa, and melanoma. Previously there was no specific marker for the diagnosis of ASPS, but the discovery of an unbalanced t(X;17) resulting in fusion of the ASPL gene on chromosome 17 to the TFE3 gene on chromosome X changed this scenario. Recently, novel HNRNPH3-TFE3, DVL2-TFE3, and PRCC-TFE3 fusions have also been identified. Thus, immunodetection of the C terminus of the TFE3 protein in ASPS was considered a diagnostic landmark, but it should be interpreted carefully since the list of tumors with TFE3 immunopositivity is increasing. Cathepsin K is a cysteine protease abundantly expressed by osteoclasts, and its expression is driven by microphthalmia transcription factor (MITF). TFE3 also belongs to the same transcription factor subfamily as MITF. It is hypothesized that the TFE3 fusion proteins function like MITF in these neoplasms and thus activate cathepsin K expression, which can be detected by IHC. TFE3 rearrangements are not specific to ASPS but have also been identified in a subset of PEComas and in MiT family translocation renal cell carcinoma, both of which are morphological mimickers of ASPS. TFE3 immunoreactivity is not specific for TFE3-rearranged tumors: Williams et al. documented TFE3 positivity in four cases of granular cell tumor, while Rekhi et al. observed TFE3 positivity in 28.5% of granular cell tumors. Cathepsin K immunoexpression is also non-specific and has been reported in renal cell tumors, granular cell tumors, and numerous additional sarcomas including Kaposi sarcoma, liposarcoma, chondrosarcoma, undifferentiated pleomorphic sarcoma, and leiomyosarcoma. Granular cell tumors are diffusely immunopositive for S100, SOX10, and inhibin, which are negative in ASPS. There was focal weak S100 positivity in one of our tumors. Cytoplasmic granules can also be seen in granular cell tumor, but the PAS-positive, diastase-resistant, rod-like/rhomboid crystalline inclusions seen in 36.4% of the cases in the present series are specific for ASPS and can be highlighted with MCT1 and CD147 immunostains, while cytoplasmic granules in granular cell tumor are CD68 positive. Though TFE3 positivity has been reported in paraganglioma, immunopositivity for neuroendocrine markers, with S100 highlighting sustentacular cells, helps differentiate it from ASPS. Negativity for PAX8, pan-cytokeratin, and CD10 helps rule out renal cell carcinoma, which is further substantiated by the absence of a renal mass on radiology. Negative immunostaining for vimentin and Melan-A ruled out adrenocortical carcinoma. Negativity for S100-P, HMB45, and Melan-A in tumor cells ruled out melanoma. Focal desmin positivity was seen in two of our cases, but the lack of nuclear positivity for MyoD1 and myogenin ruled out rhabdomyosarcoma. PEComa is differentiated from ASPS by its reactivity for HMB45, but aberrant expression of HMB45 has recently been reported in ASPS; although both tumors can be TFE3 rearranged, the diagnosis of ASPS was favored based on the presence of PAS-D-positive needle-like crystals. Translocation analysis can be performed when necessary and is the diagnostic 'gold' standard, but one should be aware of other TFE3-rearranged tumors while interpreting the results. None of our cases showed extensive mitoses or necrosis, which are considered classical features of high-grade sarcoma.
Although the biological behavior of ASPS is aggressive, the FNCLCC histological grading system is not applied to these tumors; all ASPS are by definition considered high grade. The management of ASPS typically involves surgical resection for localized disease, which was performed in 8 cases and was curative. Anthracycline-based chemotherapy with or without radiotherapy was given for disseminated tumors with metastases in 10 cases and for localized disease in one case. It was largely ineffective, with no response in any case; rather, progression of disease was noted in all treated cases in the present series. Novel therapies are being sought and evaluated in clinical trials, and molecularly targeted treatment has been increasingly utilized. Vascular endothelial growth factor receptor-targeted TKIs such as pazopanib, crizotinib, sorafenib, anlotinib, sunitinib, and cediranib, as well as MET kinase inhibitors, have been explored in clinical trials for metastatic disease with promising results. We argue against a future role for immunotherapy in ASPS, since a very focal intratumoral inflammatory host response was seen in only two cases and only one case showed a minimal lymphocytic response at the tumor edge. ASPS has morphological and immunohistochemical overlap with many mesenchymal and non-mesenchymal tumors. Diffuse strong nuclear TFE3 positivity is sensitive for ASPS in an appropriate clinicoradiological context, but awareness of TFE3 positivity in other tumors is vital. It is imperative to employ a panel of markers in order to distinguish alveolar soft part sarcoma from its differential diagnoses. Due to the high propensity for early metastasis in ASPS, even at the time of presentation, a complete metastatic workup and long-term follow-up are recommended. ASPS is associated with slow progression and resistance to conventional cytotoxic chemotherapy. The study was approved by the institutional research committee of GCRI (review number IRC/2022/P-79), assuring fulfilment of legal and ethical criteria. The authors received no financial support for the research, authorship, and/or publication of this manuscript. The authors declare that they have no competing interests. Data are available on request from the corresponding author.
Obstetrician-gynecologist perceptions and utilization of prescription drug monitoring programs
36756204-ac94-4554-a05c-14c267b0fe44
7793317
Gynaecology[mh]
Introduction The US Centers for Disease Control and Prevention (CDC) issued guidelines in 2016 recommending that clinicians review their state Prescription Drug Monitoring Program (PDMP) data when initiating and/or continuing opioid therapies under certain clinical circumstances. PDMPs provide clinicians with opioid and other controlled substance dispensing histories and other measures for patients in their care. Following the release of the CDC guidelines, the American College of Obstetricians and Gynecologists (ACOG) and the American Society of Addiction Medicine (ASAM) jointly released a committee opinion to clarify recommendations for obstetrician-gynecologists (OB/GYNs) who treat patients who are prescribed or may use opioids during pregnancy, medically or non-medically. The ACOG-ASAM recommendations endorse OB/GYN usage of PDMPs as a primary prevention tool for opioid-related adverse events. As of mid-2020, most states a) mandate that all controlled substance prescribers register with their state PDMP and b) require all or certain prescribers to check the PDMP when initiating controlled substance prescriptions, particularly for US Drug Enforcement Agency (DEA) Schedule II opioids. Physician use of PDMPs increases when administrative registration with the state is mandated, and prescribers reportedly comply with PDMP usage mandates. However, prescribers across multiple specialties report that stand-alone PDMP data are difficult to access and incorporate into their workflow. OB/GYNs in particular view PDMPs as less effective, positive, or useful than other primary care physicians do. In this literature, OB/GYN sample sizes are low, and OB/GYNs have sometimes been grouped with "other" prescriber specialties, making it difficult to understand their nuanced PDMP use and perceptions. One study in Washington Medicaid reported that OB/GYNs had the second lowest uptake in both PDMP registration and usage when compared with other physician specialties. Since OB/GYNs are the primary source of care for many women and provide the majority of care during pregnancy, they are well positioned to provide screening and intervention for opioid-related sequelae. The purpose of this study was to assess OB/GYN utilization and perceptions of their state PDMP, stratified by practice location in states with and without mandated PDMP query. Methods 2.1 Instrument development A workgroup consisting of an OB/GYN, a pharmacist, and health services researchers reviewed survey items from several publicly available state-level PDMP survey instruments. Survey items from previously published instruments were adapted for OB/GYNs to assess the perception of PDMP effectiveness, knowledge of PDMP functions, and self-reported use of PDMPs. The survey instrument was reviewed and approved by the ACOG District XII Committee on Health Care for Underserved Women prior to release and is available in Supplementary Materials. 2.2 Study design and protocol The study design was a cross-sectional survey. The research team partnered with ACOG leadership, who oversaw dissemination of the survey link and accompanying study description and explanation via email to a random sample of 5000 ACOG members with an active license to practice in the United States in May 2018. A reminder email was sent each week following the initial email invitation for a period of 6 weeks, and the survey link remained active for one week following the final reminder in July 2018.
Survey responses were anonymous, but email read receipt data from the invitation were collected to calculate an adjusted response rate. Data were collected in Qualtrics (Qualtrics, Provo, Utah, USA). The University of Florida Institutional Review Board reviewed and approved this study. 2.3 Analysis Response frequencies were calculated for each item and all surveys with >1 item response were included in the analysis (n = 397). State regulatory environment was classified as "mandatory" or "voluntary" based on the legal requirements for PDMP query (as of July 2018) and the physician's primary practice location. Chi square analysis was used to compare differences in response distribution between respondents practicing in mandatory versus voluntary PDMP states. A priori significance was set at 0.05. Qualitative and free-text survey items were analyzed and coded for instances of similar thematic content by 3 reviewers, and, in instances of disagreement, our OB/GYN acted as a fourth and deciding vote. All analyses were conducted in Excel and SAS 9.4 (SAS Institute Inc., Cary, North Carolina, USA). Results A total of n = 1470 survey invitations were opened and read, resulting in an adjusted response rate of 27% (n = 397 surveys completed).
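For illustration, the comparison described in Section 2.3 amounts to a chi-square test on a contingency table of responses by state regulatory environment. The sketch below is a minimal, hypothetical example: the counts are placeholders rather than study data, and the use of Python's scipy (rather than the Excel/SAS workflow the authors describe) is an assumption made purely for demonstration.

```python
# Minimal sketch of the Section 2.3 comparison: chi-square test of response
# distributions for respondents in mandatory vs. voluntary PDMP states.
# Counts are hypothetical placeholders, not data from this survey.
from scipy.stats import chi2_contingency

# Rows: mandatory-state vs. voluntary-state respondents.
# Columns: example response categories (e.g., agree / neutral / disagree).
observed = [
    [120, 60, 40],  # mandatory states (hypothetical)
    [20, 10, 8],    # voluntary states (hypothetical)
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p_value:.3f}")
print("difference significant at 0.05" if p_value < 0.05 else "no significant difference")

# The adjusted response rate reported above: completed surveys / opened invitations.
print(f"adjusted response rate = {397 / 1470:.0%}")  # ~27%
```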
About a third of respondents were in private practice settings, and few were still considered trainees (60.7% classified as Attending). Most respondents practiced in a mandatory PDMP state (80.6%), 9.6% practiced in voluntary PDMP states, and 9.8% did not indicate their practice location. The majority were currently registered with the PDMP (77.6%). To gauge OB/GYN familiarity and understanding of PDMP data, respondents were asked to identify what information is provided by the PDMP from a list of options. Approximately, 30% were unaware that the PDMP identifies the prescriber writing each prescription and nearly half of respondents were unaware that the PDMP identifies dispensing pharmacies. A summary of other respondent characteristics is shown in Table . Those practicing in mandatory versus voluntary states perceived the primary purpose of PDMPs differently (Table ) and the majority of respondents suspected that 0 to 10% of their patients misuse or abuse opioids (Fig. ). In free-text responses regarding the primary purpose of PDMPs, a majority of respondents that selected “other” purpose expressed frustration with PDMP usage and/or mandatory use laws (n = 14, Table ). Three content themes of PDMP purpose emerged from these free-text responses: 1. Increase in physician burden [sample response: “To burden physicians with police work”], 2. Skepticism of government involvement [sample response: “Government bull [expletive]”], and 3. Oversight of prescriber activity [sample response: “So that state government and legislators can say they are doing something about the “opioid crisis””]. Respondents report most frequently querying the PDMP for patients that are currently using or prescribed opioids, and when they treat patients suspected of drug abuse (Fig. ). Respondents most frequently report taking action as a result of using the PDMP by confirming prescription fills (31.3% in mandatory states; 23.7% in voluntary states), followed by speaking with patients about controlled substance use (27.8% mandatory states; 26.3% voluntary states). About 1 in 5 respondents indicated they confirmed doctor shopping behaviors as a result of querying the PDMP. No respondents reported referring patients to law enforcement (0%) and Child Protective Services referrals were also rare (1.9% in mandatory states; 0.0% in voluntary states; Table ). Overall, 53% of OB/GYNs agreed that “…mandating prescriber use of the PDMP was a good idea.” A greater proportion (58.3%) of respondents practicing in voluntary states agreed or strongly agreed with this statement (Fig. ). Discussion Our study is the largest to-date on OB/GYN perceptions and use of their state PDMPs, and is among the first to assess perception of opioid use among the patients in their care. These findings suggest that OB/GYN perceptions may be tied to experience with the PDMP as evidenced by a significantly different stated purpose of the PDMP when examined by practice legal environment. The skepticism expressed by many respondents regarding PDMP effectiveness as a primary prevention tool for several opioid-related sequelae is concerning, despite recommendations. The findings regarding PDMP utility as a primary prevention tool were documented in a separate report analyzing these same data. A recent survey of ACOG Fellows and Junior Fellows reported that most OB/GYN respondents continue to prescribe opioids for a variety of indications, but few reported adherence to opioid prescribing guidelines. 
In that ACOG survey, 81% of respondents also reported that they were unaware that the primary source of diverted opioids was prescriptions from friends and family members. 4.1 Clinical and research implications Many states have recently adopted legislation to restrict opioid prescribing and dispensing by limiting quantities of outpatient prescriptions of opioids for acute pain, and several other states have similar legislation under consideration. Additionally, federal legislation has been proposed to limit new opioid prescriptions for acute pain conditions to a 7-day supply. These changes in the medico-legal landscape suggest that all prescribers, including OB/GYNs, will be checking PDMPs more frequently. Of particular importance for OB/GYN clinical practice, pregnancy may be the only time a woman with opioid use disorder or other forms of SUD engages in medical treatment, which suggests that OB/GYNs are optimally positioned for screenings and interventions. The delegate model for PDMP usage, whereby a prescriber assigns responsibility for logging in and obtaining reports to another qualified health professional, has been demonstrated to be more cost-effective than prescriber-initiated PDMP query and could reduce time and resource burden for OB/GYNs. As of 2020, all states (with the exception of Missouri, which is the only state that has not yet implemented a statewide PDMP) permit prescriber delegates to access the PDMP. After resolving workflow issues regarding PDMP access, however, there is evidence to suggest that physicians are uncertain about how and when to discuss information gleaned from PDMPs with their patients. This uncertainty may contribute to decreased perceptions of PDMP utility. 4.2 Strengths and limitations This study employed evidence-based practices for maximizing physician response rates, including the use of multiple, timely follow-up invitations, as well as delivery of the invitation via a trusted professional association (here, ACOG). Despite these efforts, the response rate to this survey is in line with typical response rates for web-based surveys to physicians that do not include financial incentives. An additional limitation is that we were reliant on self-reported measures of OB/GYN PDMP usage and were unable to compare these self-reports with patterns of actual PDMP use.
Conclusions ACOG members are diverse in their perceptions regarding the utility and purpose of PDMPs, though the majority agree that PDMPs are a primary prevention tool for drug abuse and diversion. However, a knowledge translation gap may still exist, as only a third of OB/GYNs report checking the PDMP for their patients with opioid prescriptions. Increased training is needed regarding the clinical utility of PDMPs, along with practical guidance for incorporating the PDMP into OB/GYN practice. The authors wish to thank the leadership of the American College of Obstetricians and Gynecologists (ACOG) for disseminating the survey to ACOG members and for providing feedback on the survey instrument. Additionally, the authors would like to thank the members of ACOG District XII for providing comment on the preliminary findings. Preliminary findings were presented at the 2019 ACOG Annual Clinical and Scientific Meeting in Nashville, Tennessee. Conceptualization: Amie Goodin, Chris Delcher, Joshua Brown, Dikea Roussos-Ross. Data curation: Jungjun Bae. Formal analysis: Amie Goodin. Methodology: Amie Goodin, Chris Delcher, Joshua Brown, Dikea Roussos-Ross. Project administration: Amie Goodin. Supervision: Amie Goodin, Chris Delcher, Dikea Roussos-Ross. Visualization: Jungjun Bae. Writing – original draft: Amie Goodin. Writing – review & editing: Amie Goodin, Jungjun Bae, Chris Delcher, Joshua Brown, Dikea Roussos-Ross. Supplemental Digital Content
Bouncing Beyond Adversity in Oncology: An Exploratory Study of the Association Between Professional Team Resilience at Work and Work-Related Sense of Coherence
d5c40e2e-338b-462d-a63d-4aba9bfb502e
11592751
Internal Medicine[mh]
Team resilience at work refers to “the capacity of a group of employees to collectively manage the everyday pressure of work and remain healthy; to adapt to change and to be proactive in positioning for future challenges” (p. 14). Given the seriousness, complexity, and rapid evolution of cancer care, nurses provide and coordinate patient-centered care within a wide variety of interdisciplinary team models, which may include oncologists, pharmacists, psychologists, social workers, and others . Oncology teams carry out their work in an unprecedented challenging environment . During COVID-19 rebound years, clinicians, managers, and support staff continue to experience additional challenges as growing caseloads and time pressures combine with workforce shortages, increasing administrative burdens and breakdowns in communication and coordination across the cancer trajectory . The unavoidable impacts of cancer on the whole life of people living with and beyond cancer has shifted professional practice from individuals to interdependent teamwork and partnership with patients and families . While interdependency is key for effective oncology teamwork , the COVID-19 pandemic fundamentally shifted professional work toward the obligation to manage one’s own practice, leading to struggles with the established dynamics of team functioning . At the organizational level, network-based structures and collaborative governance remain underdeveloped. This constrains the capacity to face the volatility, uncertainty, complexity, and ambiguity of cancer care that can compromise patient outcomes and generate additional workload and burden for professionals . Oncology teams use determination and creativity to contend with complex problems in a “system in crisis” and offer person-centered care, but the effort can put their physical and mental health at risk . From a system perspective, oncology teams confront national cancer programs focused on performance and are expected to achieve top-down objectives that may be at odds with grassroots team functioning . Oncology teams are, to varying degrees, living these entangled adverse situations that contribute to an increased incidence of burnout and mental health problems in cancer care professionals , particularly in women oncologists and nurses . The pandemic has only worsened these adverse situations. The accumulation of both chronic and acute adversity qualifies as a so-called wicked problem characterized by blurred definition, where there are multiple people with vested and mostly competing values, and where the evolving dynamics in the system are confusing . Scholars highlight that experience of adversity is essential for teams to build resilience . These “wicked problems” raise a number of questions, making oncology teams a fertile ground for understanding resilience at work and contributing new knowledge that has implications for care provided by nurses and other team members: Why do oncology teams not let adversity define them? How do oncology team members continue to find sense in their work? What is so important to them in this work? These questions highlight the importance of understanding team resilience at work in oncology as more than a buzzword heard everywhere since the pandemic. Despite the lack of consistency in the definition of team resilience , this study chooses a pragmatic approach. 
Team resilience at work is defined as a non-linear and ongoing process of minimizing the impact of adversity, managing to bounce beyond and mending while learning for the future . Bouncing beyond refers to the ability of teams to overcome difficult situations and return to previous functioning or emerge stronger as the result of a dynamic process . Team resilience at work can be developed by emphasizing team resources and strengths in a context characterized by complexity and uncertainty , while avoiding the stigmatization of team members who are coping with distress, feelings of burnout, or maladaptive coping strategies . Striving for effective and responsive care, team resilience at work in oncology can be viewed as a very important resource that helps to respond to a “noble calling” to care for those affected by cancer and maintain a sense of coherence at work despite adversity . We conceptualize this asset as a “generalized resistance resource” (GRR) within Antonovsky’s salutogenic model . To the best of our knowledge, this is the first time that team resilience in the context of oncology care has been analyzed as a GRR in terms of its relation to the sense of coherence components. GRRs refer to biological, material, or psychosocial factors, bringing capacities to manage the stressors with which a person or a group have to cope . The salutogenic model of health presumes a reciprocal and dynamic relationship between GRRs and sense of coherence. In the work-specific domain, sense of coherence is defined as the perceived comprehensibility, manageability, and meaningfulness of a person’s current work situation . These three components of the so-called Work-SoC, applied to teamwork in oncology, refer to the cognitive aspect of perceived professional practice as structured, consistent, and clear; the extent to which team members perceive a fair balance between job demands and the resources they can access to face acute and chronic adverse situations or implement complex interventions ; and the emotional aspect of sharing a work situation considered worthy of both engagement and involvement. Work-SoC is influenced—and can be modified—by interactions between individuals or groups and by characteristics of the work context (e.g., structures, rules and protocols, and processes) . Building on a salutogenic model , team resilience at work may provide team members with sets of perceived work experiences characterized by cohesion, participation in shaping outcomes, and a workload balance. However, there is a dearth of research into the possible reciprocity between team resilience at work and a work-related sense of coherence in oncology teams. This study explores whether team resilience at work in oncology is associated with a work-related sense of coherence. We assume that a higher level of team resilience at work is associated with a more positive perception of the work-related sense of coherence as a means of not letting adversity define oncology teams. It also examines whether both team members’ and context characteristics influence the association between these two main variables. 2.1. Study Design This exploratory study was part of a larger research project that aimed to better understand how a multi-component intervention improves resilience at work in oncology teams in Québec (Canada) . The present study used a cross-sectional design to analyze data on team resilience at work and work-related sense of coherence across a sample of oncology team members. 2.2. 
Participants Participants were from four Integrated Health and Social Service Centres (IHSSC or IUHSSC when it includes a university center) embedded in the national cancer network. The National Cancer Program in Québec (Canada) is part of the Ministry of Health and Social Services, the governing authority of the publicly funded healthcare system. One of the key elements of the cancer program is interdisciplinary team-based care operationalized through relational and cognitive proximity within and between teams as part of a “network-of-networks” . Oncology departments include interdisciplinary teams providing direct care (oncologists, nurses, pharmacists, social workers, psychologists, physical therapists, and nutritionists) supported by managers and clerical staff. Participants were eligible to participate in the study if they were a health professional, a manager, or a support staff member in the oncology department; worked at least 20 h per week; and had worked in oncology without a long-term leave during the 12 months before data collection. A total of 209 team members returned the online questionnaire, of which 189 fully completed it and were included in the analysis. Using SEM Power Calculation by MacCallum et al. and with a Goodness of Fit Index (GFI) of 0.88 as alternative hypothesis vs. 0.80 as null hypothesis and degrees of freedom of 71, a risk α of 0.05, we obtain a power of 1 − β = 0.99. 2.3. Procedures Following ethics approval, the PI presented the study at regular oncology team meetings as a pre-notification before the distribution of the e-questionnaire . A designated local collaborator held the list of eligible team members, e-mailed invitations to complete the questionnaire, and the research professional checked the eligibility of team members who manifested an interest in participating. If eligible, a unique login link was sent to access the e-consent and e-questionnaire. Two reminders at 7 and 14 days were transmitted after the initial e-mail to boost the response rate. The questionnaire did not include any questions asking for identifying details. All responses were anonymous and collected using an individual SurveyMonkey account. Data collection was performed from 21 February 2022 to 19 June 2023. 2.4. Measures Team resilience at work was operationalized with the R@W Team Scale (TR@W) French version . This questionnaire has 42 items measured on a 7-point scale (from 1 = strongly disagree to 7 = strongly agree, with reverse scoring for negatively phrased items). Items are grouped into seven subscales, all with an acceptable value of alpha; however, the value of the full scale over 0.90 may suggest that some items are redundant . McEwen and Boyd developed and validated the TR@W scale among employees ( n = 344) across government, private, and non-profit sectors . The results for the full scale (42 items) show a mean score of 4.29 (SD = 0.83), while, together, the seven subscales explain 63% of the variance. To the best of our knowledge, only one study has previously used the TR@W in the healthcare sector, among registered nurses in long-term care homes in the province of Ontario, Canada ( n = 306; mean score = 4.5; SD = 1.21) . Minor adaptations to items were made to the French version to render questions more specific to the oncology setting, where it has not been used previously. 
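As a purely illustrative aside, scoring a 7-point instrument that includes negatively phrased items, as described for the TR@W scale above, typically reverses those items before averaging. The sketch below assumes the common transformation of 8 minus the raw response on a 1-7 scale; the item indices and responses are hypothetical and do not reproduce the actual R@W Team Scale content.

```python
# Hypothetical sketch of Likert scoring with reverse-scored items on a
# 7-point scale (1 = strongly disagree ... 7 = strongly agree), as used by
# the TR@W questionnaire. Reversed items are assumed to score as 8 - response.
from statistics import mean

REVERSED_ITEMS = {3, 7}  # hypothetical indices of negatively phrased items

def mean_item_score(responses: dict[int, int]) -> float:
    """Average item score after reversing negatively phrased items."""
    adjusted = []
    for item, value in responses.items():
        if not 1 <= value <= 7:
            raise ValueError(f"item {item}: response {value} is outside the 1-7 range")
        adjusted.append(8 - value if item in REVERSED_ITEMS else value)
    return mean(adjusted)

# Example respondent: item number -> response.
respondent = {1: 6, 2: 5, 3: 2, 4: 7, 5: 4, 6: 6, 7: 3}
print(f"mean scale score: {mean_item_score(respondent):.2f}")
```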
The Work-Related Sense of Coherence (Work-SoC-9) French version has 9 items grouped into three subscales: comprehensibility (4 items describing perceived work situation as structured, consistent, and clear), manageability (2 items describing perceived availability of resources to face work demands), and meaningfulness (3 items describing perception that work is worthy of involvement) . The full scale has a Cronbach’s alpha of 0.83. Participants respond to items according to their current and general work situation on a 7-point differential semantic scale, ranging from positive to negative perception (e.g., meaningless–meaningful). Bauer and Jenny (2007) suggest that working conditions directly affect Work-SoC. A systematic review of the literature reports that the general Sense of Coherence (SoC) scale has been used in studies among individual nurses in the work context of oncology . To the best of our knowledge, only one study has used the Work-SoC-9 in oncology . That study, undertaken in Switzerland, examines the association between the Work-SoC-9 and oncology nurses’ confidence to implement an intervention supporting self-management in cancer patients. The findings support those in the validity study of the Work-SoC-9 . Sociodemographic and professional characteristics included age, gender, gender-related roles at work, education level, type of profession, work experience, work experience in the oncology team, work status, and role. Gender-related role at work was determined using the Labor Force Gender Index (LFGI), a four-item questionnaire deemed representative of the Canadian labor market dealing with social role rather than biological sex . The construction of the LFGI represented the sum of scores for the components, resulting in a score ranging between 0 and 10 for each respondent. The quality of life at work was assessed with the QoLW Thermometer (scale 0–100): (0–25, red = problem zone; 26–50, yellow = needs-improvement zone; and 51–100, green = good-QoLW zone) . The perceived impact of work adversity specific to the COVID-19 pandemic was measured on a visual analog scale (0 = no impact; 100 = most significant impact). presents the full scales and subscales, along with the related number of items and Cronbach’s alpha. 2.5. Statistical Analysis Survey data were exported from SurveyMonkey to an Excel spreadsheet. Statistical analyses were conducted using SAS software, version 9.4 . Results with a p < 0.05 were considered significant. Descriptive statistics per item and subscale we used to summarize the variables. Analyses stratified by gender (LFGI) were explored (given the high proportion of female nurses) to determine if gender-specific aspects influence the association between TR@W and Work-SoC. Internal consistency was determined for variables using standardized Cronbach’s alpha. Descriptive statistics (mean and standard deviation) served to examine distribution. Our sample size was large enough that we could use listwise deletion to handle missing data. We used structural equation modeling (SEM), which is a combination of multiple regression and factor analysis that deals with measured and latent variables, to test complex models. We formulated a hypothetical model of relationships between TR@W and Work-SoC and sociodemographic factors as covariates and performed SEM to find relationships between variables. We used the standardized ß values to identify significant relationships between the variables. Some of these relationships were bidirectional. 
To evaluate model fit, we used the SRMR (Standardized Root Mean Squared Residual) as an absolute indicator of goodness of fit, as well as the RMSEA, CFI, GFI, and TLI as relative fit indicators with the conventional cut-off values (i.e., SRMR < 0.08; RMSEA < 0.08; CFI > 0.90; GFI > 0.90; TLI > 0.90).
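To make the two quantitative checks described in Section 2.5 concrete, the sketch below computes Cronbach's alpha from its standard formula and compares a set of fit indices against the conventional cutoffs quoted above. This is an illustration only: the random data are placeholders, NumPy stands in for the SAS 9.4 workflow actually used, and the fit-index values echo those reported later in the Results purely as example input.

```python
# Illustrative sketch (not the study's SAS code): Cronbach's alpha from its
# standard formula, plus a check of SEM fit indices against the conventional
# cutoffs listed in Section 2.5.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(0)
fake_scale = rng.integers(1, 8, size=(189, 9))  # 189 respondents x 9 items, random
print(f"alpha on uncorrelated random data: {cronbach_alpha(fake_scale):.2f}")  # near zero

# Fit indices (values as reported in the Results) vs. conventional cutoffs.
fit = {"SRMR": 0.05, "RMSEA": 0.09, "CFI": 0.94, "GFI": 0.89, "TLI": 0.91}
cutoffs = {"SRMR": ("<", 0.08), "RMSEA": ("<", 0.08),
           "CFI": (">", 0.90), "GFI": (">", 0.90), "TLI": (">", 0.90)}
for name, value in fit.items():
    op, threshold = cutoffs[name]
    meets = value < threshold if op == "<" else value > threshold
    print(f"{name} = {value:.2f} (cutoff {op} {threshold}): {'meets' if meets else 'misses'} cutoff")
```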
In the original validation study by McEwen and Boyd, the full scale (42 items) showed a mean score of 4.29 (SD = 0.83), and, together, the seven subscales explained 63% of the variance. To the best of our knowledge, only one study has previously used the TR@W in the healthcare sector, among registered nurses in long-term care homes in the province of Ontario, Canada ( n = 306; mean score = 4.5; SD = 1.21) . Minor adaptations were made to items of the French version to render questions more specific to the oncology setting, where the scale had not been used previously.
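Because internal consistency (standardized Cronbach's alpha) and subscale descriptives are reported for both instruments, a short sketch of how these quantities can be computed may be useful; it is only an illustration, and the file name and item columns are hypothetical.

```python
# Hedged sketch: standardized Cronbach's alpha and per-subscale descriptives,
# with listwise deletion as described in the analysis plan.
import numpy as np
import pandas as pd

def standardized_alpha(items: pd.DataFrame) -> float:
    """alpha_std = k * mean_r / (1 + (k - 1) * mean_r), where mean_r is the
    average off-diagonal inter-item correlation and k the number of items."""
    corr = items.corr().to_numpy()
    k = corr.shape[0]
    mean_r = corr[np.triu_indices(k, k=1)].mean()
    return k * mean_r / (1 + (k - 1) * mean_r)

# Hypothetical item columns for the three Work-SoC-9 subscales (1-7 responses).
subscales = {
    "comprehensibility": ["soc1", "soc2", "soc3", "soc4"],
    "manageability":     ["soc5", "soc6"],
    "meaningfulness":    ["soc7", "soc8", "soc9"],
}

df = pd.read_csv("oncology_team_survey.csv")
all_items = [c for cols in subscales.values() for c in cols]
df = df.dropna(subset=all_items)  # listwise deletion

for name, cols in subscales.items():
    score = df[cols].mean(axis=1)
    print(f"{name}: M = {score.mean():.2f}, SD = {score.std(ddof=1):.2f}, "
          f"alpha = {standardized_alpha(df[cols]):.2f}")
```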
3.1. Response Rate The response rate was 26%. A total of 209 respondents returned the questionnaires, and 20 were not included in the study because of missing data rates of more than 20%, leaving 189 questionnaires included in the study. 3.2. Participant Characteristics reports participant demographics, work history, and role description in the oncology team. The mean age was 42.63 years (SD = 10.28), and the most frequent survey categorizations were female (80.42%), nurses (39.26%), university education-level completed (61.41%), and less than 10 years of experience in oncology (59.52%). The Labor Force Gender Index was 4.67 (SD = 1.72). 3.3. Perceived Team Resilience at Work and Work-Related Sense of Coherence in Context The present study's Cronbach's alpha values reported in were 0.96 for the TR@W overall scale and 0.66 for the Work-SoC-9 overall scale, showing very reliable and reliable levels, respectively . presents descriptive statistics and correlations between study variables. Participants reported high levels of team resilience at work (M = 4.87; SD = 1.09), with the highest score for capability (M = 5.42; SD = 1.18) and the lowest score for self-care (M = 4.35; SD = 1.44). The work-related sense of coherence was also high (M = 5.42; SD = 1.25), with subscale scores for meaningfulness (M = 5.98; SD = 1.14), comprehensibility (M = 4.88; SD = 1.09), and manageability (M = 4.64; SD = 1.06). Age was significantly and negatively related to the TR@W full scale, but non-significantly and positively related to Work-SoC-9. Quality of life at work was rated at M = 65.14 (SD = 22.82), representing a positive perception, and it was significantly associated with the two main variables and all subscales. COVID-19 was significantly and negatively related to the Work-SoC-9 overall score, more specifically to comprehensibility, but no association was found with TR@W. LFGI was significantly and positively related to both variables. 3.4. Model Fit The model presented in shows that all TR@W subscales had a very high impact (standardized Cronbach's alpha = 0.96), especially the resourcefulness subscale. The subscale best at explaining the Work-SoC was comprehensibility (R = 0.89). Strong team resilience at work allowed individuals to maintain a work-related sense of coherence. The fit index values CFI = 0.94, TLI = 0.91, GFI = 0.89, SRMR = 0.05, and RMSEA = 0.09 showed a good adjustment of the observed relationships to the theoretical model. The first hypothesis was confirmed, showing a significantly positive relationship between the two main variables, TR@W and Work-SoC.
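To make the comparison with the pre-specified cut-offs explicit, the minimal sketch below checks the reported indices against the thresholds stated in the analysis plan; the helper function is purely illustrative, and the values are those reported above.

```python
# Conventional cut-offs stated in the analysis plan:
# SRMR < 0.08, RMSEA < 0.08, CFI/GFI/TLI > 0.90.
CUTOFFS = {
    "SRMR":  ("<", 0.08),
    "RMSEA": ("<", 0.08),
    "CFI":   (">", 0.90),
    "GFI":   (">", 0.90),
    "TLI":   (">", 0.90),
}

def check_fit(indices: dict) -> dict:
    """Return True/False per index depending on whether it meets its cut-off."""
    result = {}
    for name, value in indices.items():
        operator, threshold = CUTOFFS[name]
        result[name] = value < threshold if operator == "<" else value > threshold
    return result

reported = {"CFI": 0.94, "TLI": 0.91, "GFI": 0.89, "SRMR": 0.05, "RMSEA": 0.09}
print(check_fit(reported))
# CFI, TLI, and SRMR meet the conventional thresholds; GFI and RMSEA fall just short.
```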
The analysis revealed a positive correlation between TR@W and gender female (R = 0.20) and between Work-SoC and Labor Force Gender Index (LFGI; R = 0.19), but it revealed a negative correlation between TR@W and age (R = −0.19) and between Work-SoC and perceived impact of COVID-19 on teamwork (R = −0.15). Moreover, our results showed a significant positive reciprocal relationship between TR@W and Work-SoC (R = 0.20) and between Work-SoC and TR@W (R = 0.39). 4.1.
Reciprocal and Positive Relationship Between TR@W and Work-SoC This study explored the relationship between team resilience at work and work-related sense of coherence in the specific context of oncology. The findings confirmed our initial assumption of a positive relationship between these two variables. Our model appeared “good”, considering the CFI and TLI values of more than 0.90 and the SRMR at 0.05 . This suggests that Antonovsky’s salutogenic model is valuable in empirical understanding of the associations between oncology team resilience and the sense of coherence at work. Without denying the importance of pathogenic mental health risks, moral distress, and burnout seen in oncology , the salutogenic approach illuminated factors that could actively promote teams’ capacity to bounce beyond adversity situations. Additionally, the substantial correlations between these two variables and subscales described in converged with the model of team resilience at work , which suggested that it has a function of sensemaking while team members problematize the situation, make sense of it together, and maintain or restore their teaming mechanisms (e.g., coordination, collaboration, and communication) to provide cancer care. Our unprecedented empirical demonstration in the specific context of oncology confirmed that team resilience has kinship with sense of coherence, quality of life, and interdisciplinarity under the salutogenic umbrella . This would suggest that there may be different levels of GRR under the health salutogenic umbrella (e.g., individual, group, and organizational). Going back to our question, why do oncology teams not let adversity define them, the three highest mean scores of the TR@W subscales suggested that perseverance was achieved through connectedness and that it improved capability. This revealed what Chatwal et al. (2023) call the “noble calling” that stimulates job engagement and offers gratitude and meaning , and it was reflected in especially high meaningfulness scores in our study. The connectedness reflected that team members had to work together interdependently and provide backup behaviors that characterize higher levels of interdisciplinary teamwork bringing various perspectives and collective efforts to generate solutions to adversity-induced situations. These teaming processes were protected during the COVID-19 pandemic with the creation of “sanctuary” zones in oncology that were designed to protect patients with immunodeficiency and consequently maintained the team together, although comprehensibility was not always present. The three other subscales revealed that the robustness needed to recover from the unexpected or to avoid quality-of-care failure despite adversity depended on team alignment and resourcefulness. These TR@W subscale scores that remained above 4.5 out of 7 may indicate that team members had confidence in their resources to bounce beyond and maintain focus on their role of being responsive to whole-person needs, even though they would have appreciated other resources outside the team. This required organizational agility and robust governance to overcome pessimism and find new ways of doing things when usual practices appeared neither feasible nor attractive. For example, telehomecare was introduced during the COVID-19 lockdowns without adequate planning but had since become a routine practice. Not surprisingly, the lowest scores were on the self-care subscale, supporting previous studies . 
One strategy to foster a culture of self-care may be to develop awareness of coherent work experiences and reduce tensions between job resources and job demands . At the collective level of the team, "team-care" involved a combination of socially supportive communication and backup behaviors from colleagues through sharing, supporting, and leading with compassion . Although the fourth element of the Quadruple Aim in healthcare involves improving the work life and well-being of care teams , a real culture of promoting and deploying stress management routines and healthy work environments has yet to emerge. Attieh and Loiselle found that resilience was key to sustainable team functioning during COVID-19 . However, they pointed to the scarcity of empirical studies on team resilience in oncology. Despite its limitations, the present study furthered efforts to raise awareness and better pinpoint to what extent team resilience and work-related sense of coherence are associated in this specialized healthcare domain. 4.2. Practical Implications Our study results have several practical implications for maintaining or improving resilience as a GRR to cope with adversity situations in oncology teams. First, the multifaceted and dynamic nature of situations means that there is no room for blame and that problems will never be "solved" once and for all. When teams face such "wicked problems" , developing a shared definition of problems is a starting point for finding innovative ways of aligning a myriad of nursing and healthcare professional roles to suit the local challenges. Drawing on each member's talents, knowledge, skills, and responsibilities helps make available the team's resources to provide people-centered responsive care . Carefully balancing professional bounded autonomy and interdependency among nurses and other healthcare professionals helps avoid turf disputes along the cancer trajectory and reduce sources of stress. Positive teaming processes could prevent a loss of meaningfulness, voluntary (or not) renunciation, and "desilience", the opposite of resilience . Second, visible and committed managers who foster mutual trust and the expression of a plurality of points of view facilitate achievement of common goals that depend on team alignment and robustness, dimensions of resilience at work . Deliberate support is needed for team connectedness and sense of belonging. The capacity for joint action reflects comprehensibility on multiple levels and dimensions and produces creative problem-solving dynamics, which in turn are associated with manageability and perseverance in achieving oncology team goals. Third, in a deliberative multi-stakeholder symposium, attendees identified practical interventions aimed at enhancing resilience of professional care providers in oncology . One of these was enhancing the articulation of evidence-based professional practice and patients' experiential knowledge. This was described as a strategy to mitigate gaps in responsive cancer care by recognizing the patient's role as a legitimate team partner. A second was raising awareness among policy leaders and decision makers of the importance of team resilience in oncology. Symposium attendees also recommended caution regarding the "tyranny of happiness" that resilience can impose. Placing responsibility for managing irreducible problems on the shoulders of individual healthcare professionals creates the risk of generating stigma around stress and burnout.
Fourth, a number of strategies have been shown to benefit dimensions that received the lowest scores in our study (self-care, alignment, and resourcefulness). These include mindfulness-based stress reduction, continuing education and training , reducing administrative burden and overtime , monitoring team member well-being and burnout metrics and providing resources , facilitating work-family balance, and allowing flexible schedules . Although there is a need for more research evidence in oncology, innovative approaches may help team members. For example, the drama triangle framework suggests that people create their own stories to make sense of interpersonal relationships and their environment and can become trapped in the cycle of victim (poor me), persecutor (blame others), and rescuer (elevated need to help). Understanding the role team members play may help people move beyond these stories and foster a constructive sense of coherence at work . Another interesting avenue to maintain meaningfulness is the 3P's framework known for its three areas of learned optimism: permanence (seeing adversity as temporary or permanent), pervasiveness (how adversity affects your own and others' lives), and personalization (whether adversity is your own fault or just happens) . Finally, the WHO undertook a thorough review of the role of the arts in improving health and well-being, which opens new doors to resilience in healthcare settings . Participatory arts classes, writing stories or keeping a diary, and drawing classes or art-appreciation classes have been found to enhance feelings of support in daily emotional challenges, help identify team issues for doctors and nurses, improve interdisciplinary teamwork, and increase tolerance for ambiguity. This review also reports that art activities can reduce exhaustion and death anxiety and increase emotional awareness in those working in end-of-life care. While interventions to optimize team resilience in oncology require more research work, there are feasible means of managing adversity and minimizing its impact. Translating these promising activities calls for mobilization at the individual, organizational, and policy levels and for prioritizing nurses' and other team members' health and well-being in an evolving system and society. 4.3. Limitations One of the limitations of the present study was the conceptual suitability of questionnaires that were not specifically designed for healthcare workplaces. We chose the TR@W because it addressed the capacity to manage everyday pressures common in the healthcare sector and because of the quality of the methodology . Our study, using the French-language version with minor adaptations to the healthcare setting, showed excellent internal consistency, with Cronbach's alpha = 0.96 for the full scale and values ranging from 0.83 to 0.93 for the subscales. These were similar to values achieved with the original instrument. Data were normally distributed, and reliability levels satisfied criteria for empirical studies and were similar to those found in other studies using TR@W . The conceptual suitability issue also emerged with the Work-SoC-9 questionnaire. The choice built upon Antonovsky's theoretical basis of salutogenesis and its complementarity with the huge number of studies focusing on pathogenic aspects of the COVID-19 pandemic. Moreover, the influence of a volatile working environment and individual characteristics on the Work-SoC was already confirmed by Vogt et al. (2013) with cross-sectional data.
Our first utilization of these questionnaires in oncology teams raised endogeneity as a potential bias to our study findings due to an omitted variable related to the wicked nature of adversity of work in oncology, to simultaneity related to how both main variables affect each other, to measures that may not be sufficiently sensitive to change, or to the convenience sample that meant team members on sick leave were not included in the study .
Our results suggest that the French version of these questionnaires can be used in the healthcare sector, which paves the way for future research using these tools. This is a considerable effort in the current post-pandemic climate that places multiple time and workload pressures on healthcare teams. The participants were all directly involved in, and knowledgeable about, how practice changed during the COVID-19 pandemic, which contributed to generating real-world data, raised confidence in the results, and increased the usefulness of the findings to inform decision-making. Future research could use longitudinal designs to follow the same teams over longer time periods. Such designs would increase the robustness of the relationship with context, although this might also be blurred by endogeneity.
Editorial: Rising stars in cardiovascular endocrinology 2022
d9bb4ff7-ebf7-4708-89c0-54c047cf70fc
10042285
Physiology[mh]
The author confirms being the sole contributor of this work and has approved it for publication.
Clinical, neuropathological, and immunological short‐ and long‐term feature of a mouse model mimicking human herpes virus encephalitis
3c294d9d-a8f1-48ce-82ee-8e2608ad1ba3
9048517
Pathology[mh]
INTRODUCTION Neurotropic alphaherpesviruses including herpes simplex viruses 1 and 2 (HSV-1, HSV-2) and Varicella Zoster Virus (VZV) are major human pathogens which can cause devastating neurological diseases . Alphaherpesviruses typically initiate productive infections in mucosal epithelial cells. Subsequently, peripheral sensory nerve endings are infected and viral particles are transported retrogradely within their axons to ganglia of the peripheral nervous system (PNS) to establish reactivatable, life-long latency . HSV-1 infection is the major cause of Herpes Simplex Encephalitis (HSE) . HSE occurs sporadically and is characterized by high mortality of up to 70%, if undiagnosed and untreated, with only a minority of patients returning to normal life . Patients initially present with nonspecific clinical signs, which worsen with disease progression and include disorientation, aphasia, changes in mental status, disorders of cranial nerves IV and VI, or seizures . Specifically, behavioral abnormalities occur which include hypomania, Klüver–Bucy syndrome, and memory impairment . Survivors suffer from debilitating life-long sequelae such as speech dysfunctions, behavioral, memory, and cognitive alterations, and epilepsy . Apart from the usually severe course, a few subacute forms of HSE have been described . Further, rare cases of relapsing or chronic CNS inflammation have been reported in both immunocompetent and immunocompromised individuals . HSE is characterized by asymmetrical necrotizing inflammation which is mainly restricted to the temporal as well as to the frontal lobe and insular cortex . Macroscopically, brains of surviving HSE patients reveal predominantly unilateral atrophy and yellow-brownish discoloration resulting from necrosis and microhemorrhages of affected brain areas . Histologically, destruction of grey and white matter, necrosis of cortical neurons, and glial activation, as well as leptomeningeal and scattered parenchymal infiltration with lymphocytes and histiocytes, are present . Extensive edema has been reported , whereas glial nodules are frequently detected . Extratemporal involvement of HSE has been described in more than half of the cases and includes lesions in the frontal and parietal cortex, occipital lobe, basal ganglia, brain stem, and pons . Intranuclear inclusion bodies are only inconsistently observed in neurons and astrocytes . Rather rare reports include calcification within necrotic brain areas or granulomatous inflammation with foci of mineralization . Viral antigen-containing neurons and astrocytes appear predominantly in amygdaloid nuclei, cortex and white matter of the lateral olfactory striae, entorhinal cortex, subiculum, hippocampus, insula, and cingulate gyrus and to a lesser extent in olfactory bulbs and pons . Despite decades of intensive research, major questions regarding the pathogenesis of HSE remain unanswered. Of central importance is the elucidation of factors enabling HSV-1 to traverse the peripheral nervous system for access to the central nervous system (CNS), specifically to the temporal lobe. Moreover, it is unclear why HSV-1 infection leads to fatal encephalitis in some individuals, but normally results in life-long latency without any clinical signs. In addition, subclinical HSE with recurring reactivation events and associated psychiatric disorders was already discussed decades ago .
Valuable knowledge has been gained from a variety of animal models to better understand pathogenesis of the disease, although the data are quite heterogeneous because of the large number of different virus strains, inoculation routes, and variable genetic backgrounds of the animals . In this context, especially HSV‐1‐infected mice fail to reflect crucial elements of human encephalitis appropriately, because mice develop brain stem or cerebellar encephalitis rather than inflammation associated with the temporal lobe. Although mice are generally highly susceptible to HSV‐1 infection, disease outcomes as well as mortality differ widely between inbred mouse strains . Highly susceptible mice usually develop severe neurological signs and succumb to death shortly after infection , preventing a more detailed investigation of long‐term damage and lesions associated with behavioral abnormalities as seen in human patients. We have recently reported on an animal model for HSE which revealed striking analogies to human disease . In our study, 6‐ to 8‐week‐old female CD‐1 mice developed marked meningoencephalitis after intranasal infection with a mutant of the neurotropic alphaherpesvirus pseudorabies virus (PrV), which is closely related to HSV‐1 and usually fatal in all animal species except for pigs. This PrV mutant, designated PrV‐ΔUL21/US3Δkin, lacks tegument protein pUL21 and carries a mutation in the active site of the pUS3 protein kinase. While the deletion of most nonessential viral genes had either no or only a slight effect on neuroinvasion, neurovirulence and survival time of infected mice , most mice surprisingly survived infection with PrV‐ΔUL21/US3Δkin . As in human HSE, lymphohistiocytic inflammation with pronounced neuronal necrosis was predominantly confined to the temporal as well as to the frontal lobes and insular cortex. With progression of the inflammatory reaction, mice revealed behavioral abnormalities such as “star gazing,” which seem to be comparable to behavioral alterations in humans suffering from HSE. Strikingly, only few mice developed severe disease between day 10 and 13 post infectionem (pi), while the majority of infected animals were only moderately affected and able to survive despite extensive neuropathological changes. In the present study, we analyzed survival as well as clinical and histopathological short‐ and long‐term consequences and compared the inflammatory reaction and associated neuropathological changes to further validate the PrV‐mouse model for human HSE. MATERIAL AND METHODS 2.1 Animal experiments All animal experiments were approved by the State Office for Agriculture, Food Safety and Fishery in Mecklenburg‐Western Pomerania (LALFF M‐V) with reference number 7221.3‐1‐064/17. ARRIVE guidelines 2.0 were followed as reported below. In general, 6‐ to 8‐week‐old female CD‐1 mice were purchased from Charles River Laboratory and housed in groups of maximum five animals in conventional cages type II L under BSL 2 conditions at a temperature of 20–24°C. Mice were kept under a 12 h light–dark cycle (day light intensity 60%) with free access to food (ssniff Ratte/Maus – Haltung) and clean drinking water. Bedding (ssniff Spezialdiäten Abedd Espen CLASSIC), nesting (PLEXX sizzle nest), and enrichment material (PLEXX Aspen Bricks medium, mouse smart home, mouse tunnel) were provided. An acclimatization period of at least 1 week was allowed prior to inoculation. 
Animals were anesthetized with 200 µl of a mixture of ketamine (60 mg/kg) and xylazine (3 mg/kg) dissolved in 0.9% sodium chloride, which was administered intraperitoneally. Afterwards, a total of 5 µl of PrV-∆UL21/US3∆kin suspension in cell culture medium was inoculated into each nostril (1 × 10⁴ plaque-forming units [PFU]). Control mice were inoculated with cell culture supernatant from rabbit kidney (RK13) cells (minimum essential medium [MEM] + 5% fetal calf serum [FCS]) accordingly. Mice were monitored 24/7 and scored for clinical signs as described earlier . The animals were sacrificed under deep isoflurane anesthesia by cardiac bleeding and final decapitation. In order to allow an unbiased investigation, as well as considering animal welfare conditions, treatment and time points of analysis were determined for each individual animal prior to the experiment by simple randomization. Blinding was performed during allocation of animals and data analysis. The minimum number of animals necessary in the different exploratory experiments was calculated based on a disease incidence of 80% in inoculated animals and 0% in mock-infected mice (power = 0.8, α = 0.1). 2.1.1 Long-term investigation Long-term effects were determined clinically and histopathologically over 6 months in an exploratory study. PrV-∆UL21/US3∆kin-infected animals ( n = 5 each) were analyzed by histology at 28, 35, 42, 49, 84, and 168 days pi. Mock-inoculated mice ( n = 6) were included as control. 2.1.2 Neurohistopathological analyses of early inflammation We reused mouse brain tissue sections obtained from the previous experiment to explore pathomorphological changes and the spatial distribution of infiltrating immune cell populations during the first 21 days of infection in detail. Mice sacrificed at 2, 8, 12, 15, and 21 days pi ( n = 3) served as positive material. Mock-infected mice were included as control ( n = 3). 2.1.3 Immunological analyses of early inflammation To assess the neuroinflammatory response by flow cytometry, PrV-∆UL21/US3∆kin-infected ( n = 6) as well as control mice ( n = 4) were sacrificed at 2, 8, 12, 15, and 21 days pi to identify and quantify inflammatory infiltrates and cytokine levels in the brain. 2.1.4 Neurohistopathological analyses of severely diseased mice PrV-ΔUL21/US3Δkin-infected animals from the different experiments ( n = 6) that showed severe clinical signs or were found dead were analyzed histopathologically to explore the severe disease outcome. Animals from the first trial to determine the mean time to death and the kinetic study as well as the long-term experiment (this study) were included. 2.2 Virus PrV-ΔUL21/US3Δkin was generated in a PrV-Kaplan (PrV-Ka) background as described previously . The virus was propagated in RK13 cells grown at 37°C in MEM supplemented with 10% FCS (Invitrogen). 2.3 Histopathological analysis For histopathological investigation, the skull was removed and the head was fixed in 4% neutral-buffered formalin for at least 1 week, followed by decalcification for 3 days in Formical 2000 (Decal, Tallman, NY). From all heads, eight coronal head sections were obtained, embedded in paraffin wax, and cut into 3 or 5 µm thick slices, respectively, for further histological and immunohistochemical evaluation . The slices were mounted on Super-Frost-Plus-Slides (Carl Roth GmbH, Karlsruhe, Germany) and stained with hematoxylin–eosin for detailed neuropathological analysis of CNS inflammation. 2.3.1 Special stains Axonal density was visualized by Bielschowsky's silver impregnation.
Dewaxed paraffin sections were treated with 0.25% potassium permanganate solution (3 min) and rinsed in distilled water. Afterwards, 1% potassium sulfate solution was applied to the sections (1 min). Sections were rinsed in tap and distilled water before samples were probed with 2% silver nitrate solution overnight. Sections were rinsed in distilled water (3–5 s, 2 times) and incubated with 10% ammoniacal silver nitrate solution (10 min). Sections were dipped in distilled water (5 s) and reduced in 4% formalin. Myelination was evaluated using Luxol Fast Blue–Cresyl Violet staining. Dewaxed paraffin sections were treated with xylol (2x 2 min), 99.5% (2x 3 min), 95%, 80%, 70%, and 50% 1-propanol (3 min each), and distilled water (3 min). After incubation in isopropyl alcohol (15 min), the sections were left in Luxol Fast Blue solution (24 h, 57°C), rinsed with distilled water, and differentiated in 0.05% lithium carbonate solution (15 s) and 70% ethyl alcohol (15 s). Sections were rinsed with distilled water, counterstained with 0.1% Cresyl Fast Violet solution, dehydrated in 96% ethyl alcohol (2x 4 min), isopropanol (1x 4 min), and butyl acetate (1x 4 min), and coated with Entellan (Merck, Darmstadt, Germany). Mineralization was investigated using the von Kossa stain. As described above, dewaxed paraffin-embedded sections were rehydrated and subsequently incubated with 5% silver nitrate solution (120 min) in the dark. After washing in distilled water, the sections were treated with 1% pyrogallic acid (4 min) and 4% sodium thiosulfate (5 min). The sections were rinsed in tap water (10 min), counterstained with nuclear fast red (5 min), washed in distilled water, and dehydrated through graded alcohol. Hemosiderosis following hemorrhages was assessed by Prussian Blue staining. Rehydrated paraffin-embedded tissue sections were immersed in 1% hydrochloric acid and 2% potassium ferrocyanide (30 min) and rinsed in distilled water, followed by counterstaining with nuclear fast red (5 min). Sections were rinsed in distilled water and dehydrated. 2.3.2 Immunohistochemistry Infiltrating immune cell populations were identified using antibodies against Iba-1 (FUJIFILM Wako, polyclonal rabbit anti-rat, 1:800, for monocytes and macrophages), CD3 (DAKO, polyclonal rabbit anti-human T cell CD3 A452, 1:100, for T cells), and CD79a (DAKO, monoclonal mouse anti-human CD79αcy clone HM57, 1:50, for B cells). Antibodies against glial fibrillary acidic protein (GFAP) (abcam, polyclonal rabbit anti-bovine, 1:100) stained astrocytes. PrV infection was visualized using an in-house-generated rabbit polyclonal antibody against glycoprotein B . Dewaxed and rehydrated paraffin-embedded sections were treated with 3% hydrogen peroxide (10 min, Merck, Darmstadt, Germany) to block endogenous peroxidases. To demask antigenic sites in tissue (except sections for PrV gB), sections were treated either with 10mM citrate buffer (2x 5 min, microwave, 500W, for GFAP, CD3) or with 10mM Tris–EDTA buffer (10mM Tris base, 1mM EDTA solution, 15 min, microwave, 500W, for CD79a), followed by incubation in undiluted normal goat serum (30 min). Primary antibody incubation was followed by biotinylated goat anti-rabbit IgG (1:200; Vector Laboratories, Burlingame, CA, for GFAP) or goat anti-mouse IgG (1:200, Vector Laboratories, Burlingame, CA, for CD79a) and subsequent avidin–biotin–peroxidase (ABC) complex (Vector Laboratories) for 30 min at room temperature. For CD3 staining, sections were treated with the EnVision®+ System – HRP (DAKO).
Positive antigen–antibody reactions were visualized using AEC substrate (DAKO, Hamburg, Germany). After rinsing with deionized water, the sections were counterstained with Mayer's hematoxylin for 10 min and mounted with Aquatex (Merck). 2.3.3 Scoring of neurohistopathological changes In order to characterize PrV-∆UL21/US3∆kin-induced encephalitis during the acute phase of infection in more detail, the following brain regions were stained with hematoxylin and eosin and analyzed histopathologically: brainstem (BS) including medulla oblongata and pons, mesencephalon (MES), diencephalon (DI), temporal lobe (TL) including hippocampus, parietal lobe (PL), and frontal lobe (FL). The above-mentioned brain areas were analyzed for inflammatory changes according to a recently published protocol with slight modifications as given in Table . Neuronal necrosis and spongiform changes (Table ) were scored only in the TL, which was the most affected brain region. Axonal density in the white matter was scored according to a recent protocol . Demyelination was evaluated as published earlier . Parenchymal mineralization and hemosiderosis were scored as absent or present. Scoring of all four parameters assessed in the TL during the acute phase of infection is given in Table . 2.3.4 Scoring of inflammatory cells (immunohistochemistry) Temporal lobe infiltration by CD3+ T cells, CD79+ B cells, and Iba-1+ microglia/macrophages was determined during the acute phase of infection and scored as illustrated in Table , based on a recent protocol with a few adaptations. Astrogliosis based on GFAP immunoreactivity was assessed as absent or present. Infiltration of cells was evaluated at 20x or 40x magnification (high power field = HPF). 2.4 Cell preparation and antibody staining for flow cytometric analysis Brain samples were prepared for single-cell isolation according to a recently published protocol with slight modifications. Briefly, after removal from the skull, brains were immediately transferred to ice-cold PBS and kept on ice. The cerebellum was removed and the remaining brain cut into small pieces on ice. For cell isolation, brain pieces were pressed through a cell strainer (70 µm, BD Biosciences, Heidelberg, Germany), homogenized, and taken up in 2 ml cOmplete™ Mini EDTA-free protease inhibitor cocktail (Roche, Basel, Switzerland). Half of the homogenate was used for the analysis of the infiltrating immune cells or cytokines. The homogenate was centrifuged (286 × g, 4°C, 5 min), and the supernatant was discarded. The cell pellet was resuspended in 1 ml digestion buffer (Liberase with low thermolysin concentration, diluted to 2 U/ml in Hanks Balanced Salt Solution [HBSS] containing calcium [Ca] and magnesium [Mg]) and incubated for 30 min at 37°C with gentle agitation. The suspension was pressed through a cell strainer (70 μm), washed with 10 ml of DNAse-free washing buffer (HBSS [Ca/Mg free] containing 10% FCS), and centrifuged (286 × g, 18°C, 5 min). The supernatant was discarded, and the cell pellet was carefully resuspended in 5 ml density gradient medium (25%, room temperature) and centrifuged (521 × g, 18°C, 30 min, acceleration/deceleration = 0). The myelin layer and the supernatant were aspirated, and the cell pellet was resuspended in 10 ml DNAse-free washing buffer and centrifuged again (286 × g, 10°C, 5 min). The supernatant was discarded, and the cells were resuspended in 100 µl of cold washing buffer.
Cell counting and assessment of cell viability were achieved using trypan blue staining (dilution 1:10). For flow cytometric antibody staining, the cell pellet of 1 ml brain homogenate was suspended in FACS buffer (PBS containing 0.1% sodium azide and 0.1% BSA) and treated with CD16/CD32 Fc-receptor blocking reagent (2.5 μg/ml). Cells were stained with primary antibodies listed in Table for 15 min at 4°C in the dark. For staining of whole blood, erythrocytes were lysed after surface staining with conventional lysis buffer (1.55 M NH4Cl, 100 mM KHCO3, 12.7 mM Na4EDTA, pH 7.4, in distilled water). Gating is shown in SI . 2.5 Cytokine assay For cytokine analysis in the brain, the LegendPlex™ Mouse Anti-Virus Response Panel was used to quantify 13 mouse cytokines, including the interferons IFN-α, IFN-β, and IFN-γ; the interleukins IL-1β, IL-6, IL-10, and IL-12; the chemokines CCL2, CCL5, CXCL1, and CXCL10; as well as TNF-α and GM-CSF, according to the manufacturer's instructions (BioLegend, Koblenz, Germany). 2.6 Statistical analysis Statistical analyses and graphical visualization of data were performed using GraphPad Prism (Version 8.4.2). To analyze brain immune cell infiltration and cytokines, ordinary one-way ANOVA with Holm-Sidak's post hoc test was performed to compare infected animals from 2, 8, 12, 15, and 21 days pi to all control mice. Values with p ≤ 0.05 were considered significant and are indicated by asterisks .
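For orientation, the comparison described here (each infected time point against the pooled controls, with Holm-Sidak adjustment of the pairwise p-values) can be sketched in Python as follows; this is only an approximation of the GraphPad procedure, and the file name, column names, and cell population are hypothetical.

```python
# Hedged sketch of the statistical comparison: one-way ANOVA across groups,
# then each infected time point vs. pooled controls with Holm-Sidak correction.
import pandas as pd
from scipy import stats
from statsmodels.stats.multitest import multipletests

df = pd.read_csv("brain_infiltrates.csv")         # hypothetical: columns 'group', 'cd8_count'
groups = ["control", "dpi2", "dpi8", "dpi12", "dpi15", "dpi21"]
samples = [df.loc[df.group == g, "cd8_count"] for g in groups]

f_stat, p_overall = stats.f_oneway(*samples)      # ordinary one-way ANOVA
print(f"ANOVA: F = {f_stat:.2f}, p = {p_overall:.4f}")

# Pairwise comparisons of each infected time point against the controls.
control = df.loc[df.group == "control", "cd8_count"]
pvals = [stats.ttest_ind(df.loc[df.group == g, "cd8_count"], control).pvalue
         for g in groups[1:]]
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="holm-sidak")
for g, p, r in zip(groups[1:], p_adj, reject):
    print(f"{g} vs control: adjusted p = {p:.4f}, significant = {r}")
```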
Primary antibody incubation was followed by biotinylated goat anti‐rabbit IgG (1:200; Vector Laboratories, Burlingame, CA, for GFAP) or goat‐anti‐mouse IgG (1:200, Vector Laboratories, Burlingame, CA, for CD79a) and subsequent avidin–biotin–peroxidase (ABC) complex (Vector Laboratories) for 30 min at room temperature. For CD3 staining, sections were treated with the EnVision®+ System – HRP (DAKO). Positive antigen–antibody reaction was visualized using AEC substrate (DAKO, Hamburg, Germany). After rinsing with deionized water, the sections were counterstained with Mayer's hematoxylin for 10 min and mounted with Aquatex (Merck).

2.3.3 Scoring of neurohistopathological changes

In order to characterize PrV‐∆UL21/US3∆kin‐induced encephalitis during the acute phase of infection in more detail, the following brain regions were stained with hematoxylin and eosin and analyzed histopathologically: brainstem (BS) including medulla oblongata and pons, mesencephalon (MES), diencephalon (DI), temporal lobe (TL) including hippocampus, parietal lobe (PL), and frontal lobe (FL). The above‐mentioned brain areas were analyzed for inflammatory changes according to a recently published protocol with slight modifications as given in Table . Neuronal necrosis and spongiform changes (Table ) were scored only in the TL, which was the most affected brain region. Axonal density in the white matter was scored according to a recent protocol . Demyelination was evaluated as published earlier . Parenchymal mineralization and hemosiderosis were determined as absent or present. Scoring of all four parameters assessed in the TL during the acute phase of infection is given in Table .

2.3.4 Scoring of inflammatory cells (immunohistochemistry)

Temporal lobe infiltration by CD3 + T cells, CD79 + B cells, and Iba‐1 + microglia/macrophages was determined during the acute phase of infection and scored as illustrated in Table based on a recent protocol with few adaptations. Astrogliosis based on GFAP immunoreactivity was assessed as absent or present. Infiltration of cells was evaluated at 20x or 40x magnification (high‐power field, HPF).
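The semiquantitative scoring described in Sections 2.3.3 and 2.3.4 assigns per‐animal scores (0–3, or absent/present) to individual brain regions and time points. A minimal sketch of how such scores could be tabulated per region and day post infection is shown below; the example records and the use of the median as summary statistic are illustrative assumptions, not the authors' actual workflow (the published scores are reported in the Tables and Figures).

```python
import pandas as pd

# Hypothetical example records: one semiquantitative inflammation score (0-3)
# per animal, brain region, and time point (days post infection).
records = [
    {"animal": "m1", "dpi": 8,  "region": "TL", "score": 1},
    {"animal": "m2", "dpi": 8,  "region": "BS", "score": 1},
    {"animal": "m1", "dpi": 12, "region": "TL", "score": 3},
    {"animal": "m2", "dpi": 12, "region": "TL", "score": 3},
    {"animal": "m2", "dpi": 12, "region": "BS", "score": 2},
    {"animal": "m3", "dpi": 21, "region": "TL", "score": 2},
]
df = pd.DataFrame(records)

# Median score per region and time point; combinations without a record remain NaN
summary = df.pivot_table(index="dpi", columns="region", values="score", aggfunc="median")
print(summary)
```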
Cell preparation and antibody staining for flow cytometric analysis

Brain samples were prepared for single‐cell isolation according to a recently published protocol with slight modifications. Briefly, after removal from the skull, brains were immediately transferred to ice‐cold PBS and kept on ice. The cerebellum was removed and the remaining brain cut into small pieces on ice. For cell isolation, brain pieces were pressed through a cell strainer (70 µm, BD Biosciences, Heidelberg, Germany), homogenized, and taken up in 2 ml cOmplete™ Mini EDTA‐free protease inhibitor cocktail (Roche, Basel, Switzerland). Half of the homogenate was used for the analysis of the infiltrating immune cells or cytokines. The homogenate was centrifuged (286 × g , 4°C, 5 min), and the supernatant was discarded. The cell pellet was resuspended in 1 ml digestion buffer (Liberase with low thermolysin concentration at 2 U/ml in Hanks' Balanced Salt Solution [HBSS] containing calcium [Ca] and magnesium [Mg]) and incubated for 30 min at 37°C with gentle agitation. The suspension was pressed through a cell strainer (70 μm), washed with 10 ml of DNAse‐free washing buffer (HBSS [Ca/Mg free] containing 10% FCS), and centrifuged (286 × g , 18°C, 5 min). The supernatant was discarded, and the cell pellet was carefully resuspended in 5 ml density gradient medium (25%, room temperature) and centrifuged (521 × g , 18°C, 30 min, acceleration/deceleration = 0). The myelin layer and the supernatant were aspirated, and the cell pellet was resuspended in 10 ml DNAse‐free washing buffer and centrifuged again (286 × g , 10°C, 5 min). The supernatant was discarded, and the cells were resuspended in 100 µl of cold washing buffer.
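Cell number and viability in this work‐up are determined by trypan blue exclusion at a 1:10 dilution, as noted in the flow cytometry section above. A small worked example of the underlying arithmetic is sketched below; it assumes counting in a standard Neubauer chamber (one large square corresponds to 0.1 µl), which the text does not state explicitly.

```python
def cells_per_ml(mean_count_per_large_square: float, dilution_factor: float = 10.0) -> float:
    """Neubauer chamber estimate: cells/ml = mean count per large square x dilution x 1e4."""
    return mean_count_per_large_square * dilution_factor * 1.0e4

def viability_percent(unstained: int, stained: int) -> float:
    """Trypan blue-excluding (unstained) cells are considered viable."""
    return 100.0 * unstained / (unstained + stained)

# e.g. 45 unstained and 5 blue cells per large square at the 1:10 dilution
print(cells_per_ml(50), viability_percent(45, 5))   # 5,000,000 cells/ml, 90% viability
```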
RESULTS

3.1 Long‐term dynamics and clinical signs after PrV‐∆UL21/US3∆kin infection

In our first study , we monitored PrV‐∆UL21/US3∆kin‐infected mice for 21 days and investigated viral spread and inflammatory reaction in a detailed kinetic experiment. The animals developed meningoencephalitis starting at day 9 pi. Interestingly, the majority of mice showed only mild‐to‐moderate clinical signs or remained completely asymptomatic despite an extensive inflammatory reaction, which could be detected until the end of the study. The animals were able to survive, with the exception of three mice, which died or had to be euthanized between 9 and 13 days pi. Notably, localization of viral antigen, which was detectable until day 15, and the very pronounced inflammatory response localized to the temporal lobe were largely comparable to human HSE. Mice also developed behavioral alterations, including star gazing, which may resemble abnormalities observed in human patients. Based on these data, we aimed to investigate the further course of infection, including clinical alterations as well as central nervous lesions, beyond 21 days in a long‐term experiment. To this end, clinical signs of PrV‐∆UL21/US3∆kin‐infected mice were recorded until day 168 pi. For this experiment, 47 PrV‐∆UL21/US3∆kin‐infected mice and six control mice were used. Starting from day 28 pi, five infected animals and one mock‐infected mouse each were sacrificed at 28, 35, 42, 49, 84, and 168 days pi for neurohistopathological examination. As observed previously , at day 5 pi few mice (6%) started to show clinical signs typical for PrV‐∆UL21/US3∆kin infection. Subsequently, the incidence of clinical signs increased and reached almost 50% on day 8 pi. On day 10 pi, 87% of animals showed clinical signs, which was the highest incidence detected in this experiment. On day 19, it decreased to 54%. Thereafter, the number of mice showing clinical signs increased again to 77% on day 24 pi, before decreasing continuously to 25% on day 46 pi. This was followed by a slight increase to 48% on day 53, followed by another decrease to ca. 30% on day 96 pi. The incidence from then on was low, at a maximum of around 24%. The incidence slightly increased again to around 30% from day 133 pi and decreased from day 146 pi to almost 18%. However, starting at day 153 pi, a new wave of clinical signs was noted which reached almost 77% at day 165 pi (Figure ). In summary, we detected an essentially biphasic course of infection. The acute first phase of the infection, with two peaks at around days 10 and 20, slowly subsided about 3 weeks after the infection. The disease rate then remained at a low level, but increased again after about 6 months, defined as the second phase of disease. Early after infection, animals showed nonspecific clinical signs including ruffled fur or hunching, while two out of 47 animals remained clinically inapparent until day 28 pi. Several mice had mild pruritus and conjunctivitis and developed a nasal bridge edema. Thirty‐four out of 47 mice (72%) developed alopecic skin erosions in various body regions, including the head, limbs, abdomen, or back, mainly occurring between days 8 and 14 pi. Some animals even developed multiple or recurring skin lesions. Typically, lesions were not hemorrhagic and healed within a week of onset.
Out of 47 mice (32%), 15 animals started to show behavioral alterations including reduced activity levels and “star gazing” within the first two weeks pi while further 19 animals (40%) started to show clinical signs in the fourth week pi. Rather mild nervous signs were observed in 12 animals (25%) and included slight facial fasciculations mainly occurring between day 10 and 13 pi. Four mice showed mild ataxia mainly between day 12 and 14 pi. In total, three mice had to be euthanized because of severe convulsions and excitations at 11, 12, and 13 days pi. Beyond 28 days pi, 29 out of the remaining 39 mice (74%) revealed recurring clinical signs mainly characterized by behavioral abnormalities, but also ruffled fur, hunched back or alopecia. Notably, not until day 159 pi, two mice (designated as mouse 1 and 2) developed seizures interrupted by phases with normal behavior or reduced activity levels. These two animals did not show any particular abnormalities in the acute phase of the infection. More specifically, mouse 1 showed first clinical signs as early as 5 days pi such as nasal bridge edema, ruffled fur, blepharospasm, and reduced activity. In the following, but only for a short time, the animal developed photophobia and signs of nervousness. Until the end of the study period, the mouse showed alternating phases of staring, nervousness, and hunching. Mouse 2 showed sickness from day 8 pi on with hunching, ruffled fur, and mild itch. From day 23 onwards, the animal had reduced activity levels which improved on day 40 pi. The animal was then clinically normal until day 156. 3.2 Long‐term CNS lesions after PrV‐∆UL21/US3∆kin infection In the previous study, all mice investigated at 21 days pi showed marked meningoencephalitis , therefore we intended to investigate neuropathological changes also at later time points of infection. For this, five infected animals each were sacrificed at 28, 35, 42, 49, 84, and 168 days pi. The severity and location of inflammation was determined on hematoxylin and eosin stained tissue sections as described earlier . Inflammatory cell infiltrates and reactive changes were differentiated by immunohistochemistry targeting CD3, Iba‐1, and GFAP on three animals each at each time point as illustrated in Figure . On day 28 pi, four out of five animals revealed inflammation confined to the FL and TL. Immunohistochemistry identified mainly CD3 + and Iba‐1 + infiltrates while CD3 + T cells predominated in two mice. One mouse had a severe temporal meningoencephalitis with extensive necrosis of hippocampal neurons, perivascular infiltration, and mild edema. Mild astrogliosis was present in all animals. Results of hematoxylin and eosin, CD3, Iba‐1, and GFAP immunohistochemistry staining are demonstrated in Figure . At 35 days pi, meningoencephalitis composed of CD3 + and Iba‐1 + cells was found in all animals similar to those investigated 28 days pi. One animal showed moderate spongy changes in the BS. Likewise, all animals sacrificed on day 42 pi showed inflammatory reaction in the CNS. In one mouse multifocal gliosis was observed, while all other animals revealed mainly mild lymphohistiocytic meningoencephalitis in the TL and FL, confirmed by CD3 + and Iba‐1 + immunohistochemistry. As observed earlier, mild spongiform changes were located to the TL as well as to the BS in two out of five animals. 
In mice investigated on day 49 pi, mild lymphohistiocytic temporofrontal inflammation and gliosis were found in three out of five animals, which were comparable to samples of 42 days pi. Later on day 84 pi, four out five animals suffered from CNS inflammation. However, in contrast to earlier time points mild, but mixed cellular inflammatory response was found consisting of moderate numbers of lymphocytes and histiocytes, admixed with few neutrophils mainly affecting the TL and FL. At the end of the experiment, on day 168 pi, two out of five animals that were scheduled to be sacrificed that day as well as one mouse which was further euthanized because of seizures showed histopathological abnormalities including mild mixed‐cellular meningoencephalitis, as seen on day 84 pi, as well as gliosis. Severe and widespread spongiform changes, but no inflammatory reactions were present in the mouse with seizures, particularly in the TL. Despite of long‐term CNS inflammation detected in majority of animals, immunohistochemistry against PrV gB was negative except for one animal examined 49 days pi, showing few positive neurons located in the PL and TL as well as another animal 84 days pi with a single positive neuronal signal in TL (SI ). 3.3 Kinetics and distribution of the CNS inflammatory response in the acute phase The first term of our long‐term study reflected the results of the previous experiment during the first 21 days . Both indicate a biphasic course of disease with an acute phase of infection with two peaks. Mice developed meningoencephalitis from day 9. As the degree of inflammation progresses, the animals developed predominantly moderate clinical signs, including behavioral alterations, but mostly survived the infection. Only six mice died during this critical phase in three independent experiments (mean time to death and kinetic study and long‐term experiment [this study]). We therefore aimed to analyze this acute and critical phase in more histopathological detail. Eight coronal head sections were stained with hematoxylin–eosin. Different parts of the brain including the BS, MES, DI, TL, PL, and FL of three animals each sacrificed at 2, 8, 12, 15, and 21 days pi were evaluated for meningeal/perivascular and parenchymal inflammatory infiltration based on a score from 0 to 3 (Table ). Figure illustrates the dynamics and score per animal and brain area at the indicated time points. While infected animals showed no histopathological changes at 2 days pi, meningeal/perivascular and parenchymal infiltration, respectively, affecting the BS and TL were mildly present in single PrV‐ΔUL21/US3Δkin‐infected mice sacrificed at 8 days pi. However, severe meningoencephalitis was observed in all animals sacrificed 12 days pi mainly affecting the TL. While moderate inflammation was present in BS, moderate to only mild inflammatory response was observed in DI, PL, FL, and MES. However, up to severe inflammation was found 15 days pi in the TL. Whereas at this time meningoencephalitis slightly receded in the BS and DI, inflammation moderately increased in the PL at 15 days pi. Inflammatory reaction in the FL and MES remained constantly mild to moderate. At 21 days pi, all animals still showed CNS inflammation especially in TL. Compared with the relatively mild inflammatory response in FL at 15 days pi, this brain region was more severely affected in single individuals at 21 days pi. 
Other brain areas such as the BS, DI, and PL showed largely reduced inflammation, while no infiltrates were detectable in the MES at 21 days pi. Thus, the TL was identified as the primary site of sustained inflammation which was first detectable at day 8 pi, peaked at 12 days pi, and remained almost consistently severe until 21 days pi (Figure ). Representative hematoxylin and eosin‐stained sections of the TL are given in Figure . The degree of neuronal degeneration and necrosis in the TL was semiquantitatively scored at the indicated time points. Although no degeneration of neurons was observed 2 and 8 days pi, loss of neurons ranged from mild to severe at 12 days pi, when inflammation increased. With ongoing inflammation 15 and 21 days pi, all animals revealed moderate neuronal degeneration (SI ). No hemorrhage, mineralization, or demyelination was detected in any brain section at any time point of infection. 3.4 Identification and spatial distribution of inflammatory effector cells Tissue sections of the TL, which was most severely affected by inflammation, were further analyzed for identification of infiltrating or CNS resident immune cells, using immunohistochemistry against CD3 + T cells, CD79 + positive B cells, and Iba‐1 + monocytes/macrophages, respectively. Sections were semiquantitatively screened for the number of CD3, CD79, and Iba‐1 + meningeal/perivascular cell layers and neuroparenchymal infiltrates per HPF (Table ). The respective scores are given in Figure . At 8 days pi, very mild meningeal infiltration of CD3 + T cells was detectable (Figure ). High numbers of CD3 + T cells were present 12 days pi within the parenchyma and to a lesser extent in the meninges or perivascular spaces. While meningeal and perivascular numbers of T cells remained constantly low, parenchymal infiltrating cells varied from low to high scores at 15 days pi and slightly decreased at 21 days pi. Representative tissue sections of CD3 + immunohistochemistry are depicted in Figure . Few Iba‐1 + cells were present at 8 days pi, but markedly increased within the meninges and perivascular spaces and neuroparenchyma at 12 days pi (Figure ). Iba‐1 + cells reached maximum levels in the neuroparenchyma whereas within the meninges moderate numbers were found at 12 and 15 days pi. Still considerable, but lower numbers of both parenchymal and meningeal/perivascular infiltrates were present at 21 days pi. Iba‐1 + immunohistochemistry stains are shown in Figure . Only few CD79 + B lymphocytes were detectable over the study period (Figure ). At 12 and 15 days pi, only one mouse each showed few CD79 + B lymphocytes within the brain parenchyma and meninges, respectively. However, all mice sacrificed at 21 days pi revealed neuroparenchymal infiltration of CD79 + cells as illustrated in SI . Astrogliosis was found starting 12 days pi and was present until the end of the short‐term study (SI ). 3.5 Kinetics of immune cell infiltration toward the brain during the acute phase To assay the type of infiltrating immune cells toward the CNS, brain homogenates of PrV‐∆UL21/US3∆kin‐infected mice were investigated by flow cytometry at 2, 8, 12, 15, and 21 days pi. As shown in Figure infiltrating leukocytes, defined as CD45 hi cells, were detectable starting 8 days pi, reaching average frequencies of up to 70% 12 and 15 days pi and declining at the end of the study (21 days pi) to 40%. 
In contrast, the resident leukocyte population, putative microglia defined as CD45 lo CD11b + cells (Figure ), was proportionally higher in control animals as well as in infected animals investigated on day 2 pi with about 60% of total cells. Nevertheless, from day 8 onwards, the fraction decreases in favor of the infiltrating CD45 hi leukocytes. In uninfected animals, CD45 hi cells consisted of approximately 10% T cells, 30% CD3 − CD11b + monocyte/macrophages (Figure ), 5% B cells (Figure ), and 10%–15% granulocytes (Figure ). Natural killer (NK) cell frequency was highly variable (Figure ). After infection, higher numbers of CD3 + T cells were found starting at 8 days pi (Figure ). Whereas at day 12 pi, CD3 + T cell frequencies were comparable to CD3 − CD11b + monocytes/macrophages, T cells markedly increased 15 days pi at the expense of monocytes/macrophages (Figure ). The number of T cells reached almost 70% at 21 days pi, whereas CD3 − CD11b + monocytes/macrophages decreased by two‐thirds in total until 21 days pi. In addition, B cells were found 15 days pi reaching frequencies of around 10% until the end of the study. The number of granulocytes was highest on day 2 pi with about 10% and then decreased continuously until 21 days (Figure ). The population of NK cells only accounted for a small proportion of the inflammatory infiltrate, and their frequency ranged around 5% at 2, 8, and 12 days pi while it decreased at later time points (Figure ). To further characterize T cell infiltration, CD4 + and CD8 + subpopulations of T cells were determined (Figure ). As the population of cytotoxic CD8 + T cells was rather stable over the first 12 days with approximately 40% (Figure ), the frequency of CD4 + helper T cells (Figure ) was already higher at 8 and 12 days pi reaching 60%. However, at day 15 pi the ratio of CD4 + to CD8 + T cells was reversed, and the number of cytotoxic T cells increased and remained high until the end of the study reaching in average 60% while CD4 + T cells decreased to 40%. In summary, leukocyte infiltration toward the brain was detectable starting on day 8 pi (Figure ). As measured based on the total cell count, the proportion of monocytes/macrophages increased up to day 12 and reached almost identical values compared with infiltrating T cells. From day 15 pi, the number of T cells markedly increased whereas monocytes and macrophages declined. Compared with day 12 and 15 pi, the overall number of infiltrating cells slightly decreased on day 21 pi. 3.5.1 Kinetics of chemokine and cytokine expression during encephalitis Chemokine and cytokine expression in brain homogenate was investigated by flow cytometry using a commercial LegendPlex™ Mouse Anti‐Virus Response Panel (BioLegend, Germany). Analysis revealed five cytokines and chemokines including CXCL10, CCL2, CXCL1, CCL5, and IFN‐γ to be significantly elevated in PrV‐∆UL21/US3∆kin‐infected animals (Figure ) at 12 days pi. Levels were slightly higher at 15 day pi, but reached base line 21 days pi. Cytokines IFN‐α, IFN‐β, interleukins IL‐1β, IL‐6, IL‐10, and IL‐12 as well as TNF‐α and GM‐CSF were not elevated at the investigated time points. 3.6 Neurohistopathology of severely diseased PrV‐∆UL21/US3∆kin‐infected animals From all experiments (mean time to death and kinetic study and long‐term trial [this study]), seven animals in total presented with severe clinical condition and seizures, and were either euthanized ( n = 6) or found dead ( n = 1). 
In the first trial to determine the mean time to death, one mouse was sacrificed around 10 days pi, whereas two animals were euthanized or found dead at 10 and 13 days pi in the kinetic study as reported earlier . Four mice were euthanized during the long‐term experiment described in the present study at days 11, 12, 14, and 168 pi because of seizures and excitations, while one animal was euthanized because of marked dermatitis at day 27 pi. This animal was therefore excluded from the following histopathological examination. In order to better understand this rare but severe clinical course, which mimicked that observed in HSV‐1‐infected human patients, we analyzed the brains of these mice more closely. Brain sections of all animals were investigated using hematoxylin and eosin staining as well as immunohistochemistry for PrV antigen detection and identification of immune cells. Histopathologically, in all animals sacrificed or found dead between days 10 and 14 pi, severe intra‐ and perilesional spongy changes, defined as intra‐ and extracellular edema of variable severity, mainly confined to the TL, were diagnosed (Figure ).
Extensive neuronal degeneration and necrosis but rather mild lymphohistiocytic meningoencephalitis was present, which is in contrast to clinically less affected mice showing rather severe inflammation. Variable loss of axons and myelin was present (Figure ). Astrocytosis was only sparsely detectable. In all six animals analyzed between days 10 and 14 pi, PrV anti‐gB immunohistochemistry revealed abundant infected neurons mainly in the TL, but also in the PL and FL (Figure ). Notably, in an animal sacrificed 168 days pi (end of the long‐term experiment), widespread edema, especially in the FL and TL as well as in the DI, was found without any inflammatory reaction. Details of the histopathological investigation are summarized in Table .

DISCUSSION

In this study, we further characterized our mouse model which more accurately reflects human herpesviral encephalitis . Mice intranasally infected with a PrV mutant lacking the tegument protein pUL21 and the kinase function of pUS3 develop disease with striking analogies to human HSE, including temporofrontal lobe-associated inflammation and concomitant behavioral alterations. Despite extensive meningoencephalitis, the majority of animals survive, which prompted us to further investigate the dynamics of the inflammatory response and CNS lesions in a long‐term trial until 6 months after infection (168 days pi). We demonstrate that infection with PrV‐∆UL21/US3∆kin resulted in an essentially biphasic disease pattern with slight multi‐wave dynamics. Compared with our first study, clinical data obtained during the first 21 days in this study were highly reproducible . In the first, acute phase within the first week of infection, most of the animals developed clinical signs, which led to the first peak on day 9. While mice started to recover in the second week, the number of diseased animals increased again by the end of the third week, leading to a second peak at day 21. At around day 50 pi, a third increase of clinical signs was detected, followed by a fourth mild increase after 4 to 5 months (130 days pi). A fifth rapid increase of disease was detected close to the end of the experiment at 168 days pi, which is regarded as the second phase of the biphasic course. Long‐term PrV‐∆UL21/US3∆kin‐infected mice showed lymphohistiocytic inflammation at days 28, 35, 42, and 49 pi, while lymphocytes and macrophages admixed with low numbers of scattered neutrophilic infiltrates were detected in mice at days 84 and 168 pi. Neutrophils are usually present at very early time points of infection when viral antigen is detectable . However, viral antigen could be shown in single animals at 49 and 84 days pi, which may lead to recurrent encephalitis as reported in humans . Moreover, these data point to the establishment of reactivation from latency in this PrV animal model because viral antigen was absent at all other time points investigated. However, this requires further investigation. In addition to possible reactivation, chronic CNS inflammation, which has been reported in mouse models , but rarely in humans , should be considered based on the clinical and histopathological findings. Autoimmune encephalitis, which occurs relatively frequently after HSE , should also be part of further investigations. In this context, seizures which appeared approximately 6 months after PrV‐∆UL21/US3∆kin infection may result from either chronic or autoimmune encephalitis.
Epilepsy has been described in chronic herpes encephalitis, but autoimmunity may also play an important role in the development of seizures . Behavioral alterations such as star gazing and alternating phases of reduced and normal activity in PrV‐∆UL21/US3∆kin‐infected mice have not been described in any of the animal models for HSE so far. Interestingly, the majority of PrV‐∆UL21/US3∆kin‐infected animals reproducibly developed behavioral impairment, 32% in the first 2 weeks after infection, and further 40% 3 weeks after infection. In 74% of recovered mice, clinical disease returned which became obvious by behavioral changes and to a lesser extent by non‐specific clinical signs recurring beyond 28 days pi. In human HSE patients, similar long‐term symptoms are frequently reported such as memory impairment, personality and behavioral abnormalities, and epilepsy . In line with our findings, mice infected with HSV‐1 strain 17 syn + exhibited severe spatial memory deficits in a long‐term trial associated with axonal degeneration and secondary demyelination in affected brain regions, and later cortical atrophy with still moderate lymphohistiocytic inflammation at day 30 and 60 pi . Based on the present data, we showed that PrV‐∆UL21/US3∆kin infection leads to a biphasic course of infection. In order to better understand the inflammatory dynamics in the acute phase of infection (first 21 days), we performed detailed histopathomorphological studies. Based on the data obtained from the previous trial we decided for five key time points at day 2, 8, 12, 15, and 21 pi for the in‐depth investigation. In general, meningoencephalitis was confined to the temporal and later also to the frontal lobe, consisting of varying numbers of meningeal/perivascular and neuroparenchymal inflammatory cell infiltrates as well as neuronal necrosis. Although no inflammatory changes were detectable 2 days after infection, the number of brain inflammatory cells consisting of differing proportions of CD3 + T lymphocytes, Iba‐1 + macrophages, and later CD79 + B lymphocytes were detectable at day 8 pi, peaked between day 12 and 15 pi and showed slightly lower levels at 21 days pi. Flow cytometric analysis confirmed the dynamics and kinetics of the infiltrating immune cells, revealing that the cells in the brain were composed of up to 70% of immune cells which was also observed in an animal model for HSE . At 12‐ and 15‐days pi, the concentration of CCL2 in brain suspension was elevated contributing to increased permeability of the blood brain barrier, thus enabling increased influx of immune cells including monocytes . Detailed characterization of the infiltrate showed the classic course of an antiviral immune response. Although the infiltrate until day 12 was still largely composed of myeloid cells, the ratio shifted toward lymphocytes during later times. Subsequently, T cells invaded the CNS, which is further promoted by the release of CCL5, a chemokine known to specifically attract T cells . Increased expression of CXCR3 on NK, CD4 + , and CD8 + cells and the following interaction with increased levels of CXCL10, which is derived from various cell types including monocytes, promote a robust immune response, which was shown to prevent mortality in a murine HSV‐1 infection model of HSE . Elevated levels of CCL2, CCL3, CCL5, and CXCL8 were also found in the CSF of HSV‐1‐infected humans . 
Looking at the composition of the infiltrate at later time points (from day 12), as well as the chemokine and cytokine levels, it is striking that infection with PrV-∆UL21/US3∆kin led to a polarization of the brain infiltrate from a mainly CD4+ T cell response toward a Th1 chemokine profile (CXCL10) and a subsequent CD8+ T cell infiltration. This matches well with the two clinical peaks at 10 and 20 days pi observed in the acute phase of the disease during the first 21 days. To what extent this proinflammatory response contributes to the rarely observed fatal pathology in mice remains to be elucidated, but as all measured cyto- and chemokines returned to baseline levels at the end of the study, it seems that this polarization rather prevents a fatal outcome by enhanced viral clearance. However, regarding the long-term experiment and the presence of CD3+ T cells in the temporal lobe until 6 months after infection, the immune status of these T cells should be discussed and tested in future studies. On the one hand, prolonged persistence of CD4+ and CD8+ T cells is described after acute HSV-1 infection, which successfully inhibits viral reactivation . On the other hand, these cells may consist of exhausted CD8+ T cells, which have lost their functionality because of chronic or persistent infection, leading to recurring clinical disease at later time points . We also further characterized the primary involvement of the TL in necrotizing, lymphohistiocytic herpetic encephalitis as it occurs in human HSE . Primary involvement of the TL in human HSE has been shown by detailed histopathological investigation of human brains obtained from patients at different time points of infection relative to the onset of clinical signs . In human patients who died within the first week after onset of disease, relatively mild inflammation was present, whereas in the second and third weeks, inflammation was more severe, mainly affecting the meninges and cortex of the TL. If this is compared with our initial kinetic trial and our present investigations, the results are largely congruent, at least as judged by appearance after onset of clinical signs. Compared with HSE mouse models, only a few studies report on inflammatory lesions in the CNS in detail. After infection with HSV-1 strain 17 syn+, few foci of necrosis and mild neutrophilic to lymphocytic inflammation were present in the trigeminal tract of mice during the first week after inoculation, as found in PrV-∆UL21/US3∆kin-infected mice , but also in the olfactory bulb. Half of the mice infected with HSV-1 syn+ succumbed between days 7 and 10 pi, showing multifocal necrosis of the piriform, entorhinal, and occipital cortices, thalamus, and cerebellum, and lymphohistiocytic inflammation, while the other animals survived . Although in our model the critical phase occurs from day 9 to 14 pi, only a few PrV-∆UL21/US3∆kin-infected animals were seriously affected, while the majority of mice survived (and this study). Within this critical phase, six mice out of a total of 122 animals used in different approaches had to be euthanized because of seizures and a generally bad condition. These mice differed histopathologically from all other animals, especially by extensive viral replication, concomitant with massive intra- and perilesional spongiform changes consistent with widespread intracellular and extracellular edema, and neuronal necrosis primarily found in the TL. However, these animals showed a weaker inflammatory reaction to infection.
In HSE patients, two different forms of edema have been reported: extracellular (vasogenic) edema and intracellular (cytotoxic) edema . Patients suffering from cytotoxic edema were generally in worse condition compared with those with vasogenic edema, which mirrors our findings. PrV-∆UL21/US3∆kin-infected animals with severe edema further showed extensive demyelination and loss of axons, mainly in the TL. It has been proposed that increased tissue pressure caused by edema may lead to demyelination . Demyelination linked to alphaherpesviral infection has been reported in HSV-1-infected cotton rats and several mouse strains , and has even been associated with multiple sclerosis in humans . However, the functional role of herpesviruses in demyelinating diseases is still unclear, underscoring the need for further research . Severe forms of HSE occur only sporadically, although approx. 67% of the world's population aged between 0 and 49 years were estimated in 2016 to be infected with HSV-1 . However, subclinical, milder forms of HSE are possibly underdiagnosed . Mice intranasally infected with PrV-∆UL21/US3∆kin generally show mild-to-moderate clinical signs or even remain asymptomatic, while only a few animals show fatal disease progression. Animals that survive the critical period either recover completely or experience recurrent disease, which may indicate reactivation from latent infection, chronic inflammation, or an autoimmune reaction toward PrV-∆UL21/US3∆kin infection. As suggested earlier , an ideal animal model for herpesviral encephalitis should include (i) infection via the mucocutaneous route, (ii) a small proportion of animals showing severe disease, and (iii) a large proportion of individuals developing an immune response that protects from severe disease. In summary, although further investigations are still needed, our present findings strongly support the PrV-∆UL21/US3∆kin mouse as a model well suited to investigate the mechanisms involved in alphaherpesviral infection of the nervous system and its consequences in humans. As a long-term goal, this animal model might guide research toward an effective HSE therapy. The authors have no conflicts of interest to declare that are relevant to the content of this article. Julia Sehl-Ewert, Ulrike Blohm, and Thomas C. Mettenleiter were involved in the conceptualization of the study. Julia Sehl-Ewert, Theresa Schwaiger, and Julia E. Hölper performed the animal experiments. Julia Sehl-Ewert, Theresa Schwaiger, Alexander Schäfer, and Ulrike Blohm performed the data analysis, interpreted the results, and prepared the figures. Julia Sehl-Ewert, Theresa Schwaiger, and Alexander Schäfer wrote the original draft. All authors critically reviewed the manuscript. Thomas C. Mettenleiter, Jens P. Teifke, Barbara G. Klupp, and Ulrike Blohm provided resources. Thomas C. Mettenleiter and Ulrike Blohm supervised the study. FIGURE S1 Gating strategy for the identification of cellular infiltration by flow cytometry. Single cells were identified by consecutive FSC-A versus FSC-H and SSC-A versus SSC-H gating, followed by excluding cellular debris via FSC-A versus SSC-A gating. Cells were subdivided into CD45hi and CD45lo/CD11b+ cells (1). From CD45hi cells, cells were further analyzed based on CD11b and CD3 expression. Granulocytes were identified as CD3−/CD11b+/Ly6G+ (2) and monocytes/macrophages were identified as CD3−/CD11b+/Ly6G− (3). T lymphocytes expressed CD3+/CD11b− and were subdivided into CD8+ cytotoxic T cells (4) and CD4+ T helper cells (5).
NK cells were identified as CD3−/CD11b−/NK1.1+ (6), whereas B lymphocytes were distinguished by CD3−/CD11b−/B220+ (7) expression. FIGURE S2 Viral antigen detection in the temporal lobe of a mouse 49 days pi. The low number of viral antigen-positive neurons is indicated (arrow); immunohistochemistry, polyclonal rabbit antibody against PrV glycoprotein B, ABC method, magnification 20x. FIGURE S3 Neuronal necrosis in the temporal lobe. (A) Semiquantitative scoring of neuronal necrosis at different time points post infection. (B and C) Representative temporal lobe section of a mouse 12 days pi showing lymphohistiocytic meningoencephalitis with multifocal necrotic neurons (arrows) as well as mild perivascular edema (arrowhead); hematoxylin and eosin stain, magnification 20x (B) and 40x (C). FIGURE S4 CD79+ B lymphocytic infiltration of the temporal lobe (TL) 21 days pi; immunohistochemistry, monoclonal mouse anti-human CD79+ antibody, ABC method, magnification 20x. FIGURE S5 GFAP+ astrocyte immunostaining of the temporal lobe (TL). In contrast to a mock-infected animal, mild astrocytosis is present in infected mice at day 21 pi, bordering parenchymal and perivascular lesions; immunohistochemistry, polyclonal rabbit anti-bovine antibody, ABC method, magnification 10x.
The Role of Sex in Post‐Mortem Neuropathology and Cognitive Decline
10fee1d8-e91b-4fb6-9fcf-2a98e7aa3f4e
11714066
Forensic Medicine[mh]
A respiro-fermentative strategy to survive nanoxia in Acidobacterium capsulatum
e63a4f40-36e2-4c76-b595-84d2d26a5db1
11636273
Microbiology[mh]
Microorganisms face a multitude of fluctuating and often limiting conditions across various environments, such as soils, the human gut, and aquatic environments. The availability of carbon, electron acceptors (such as oxygen, O2), and/or nutrients can vary over space and time. As such, microorganisms need to compensate and employ strategies to survive during these potentially growth-restricting conditions. One such strategy is respiratory flexibility. The utilization of both high- and low-affinity terminal oxidases enables exploitation of the full range of O2 concentrations for oxidative phosphorylation and energy conservation, providing a great benefit under the ever-changing O2 concentrations across environments. This can be attained by inducing branched respiratory chains that terminate in multiple oxidases with different affinities for O2 (Bueno et al. ), as recently shown in members of the ubiquitous soil bacteria, the Acidobacteriota (Eichorst et al. , Trojan et al. ). Other strategies by which cells respond to limitations include, e.g., modifying enzyme synthesis to take up growth-limiting nutrients or modulating uptake rates for nutrients available in excess (Roszak and Colwell ). Alternatively, they can reroute metabolic fluxes, which enables them to shift to alternative sources of energy and building blocks while avoiding possible blockages due to specific nutrient limitations (Roszak and Colwell , Bergkessel et al. ). Catabolism and ATP production are often incongruent during these periods of limitation (Stouthamer ). As a result of this incongruence, a trade-off can occur between catabolic rate and ATP yield, whereby bacteria utilize the pathways with the most efficient molar ATP yield (YATP: mole of ATP per mole of oxidized substrate). For example, when catabolic rates are high but O2 is limiting, fermentative pathways (when available) are employed together with respiratory pathways, commonly referred to as respiro-fermentative physiology (Pfeiffer et al. , Vemuri et al. ), allowing bacteria to maximize ATP production during electron acceptor limitation. This respiro-fermentative physiology has been observed in Escherichia coli, Bacillus subtilis, and Saccharomyces cerevisiae , yet the evolution and regulation of this metabolism are still under debate (Molenaar et al. ). Presumably, bacteria have evolved to harbor greater metabolic flexibility for ATP production, rather than pathways yielding optimal growth yields (Stouthamer ). Members of the phylum Acidobacteriota are ubiquitous across numerous soils (Fierer , Delgado-Baquerizo et al. ), with a central role in carbon mineralization and plant decomposition (Fierer , Crowther et al. ). Still, very little is known about the factors controlling their abundance in the environment or their effects on biogeochemical cycles under changing environmental conditions. In this study, we investigated the capability to adapt to O2-limited conditions in a model member of the phylum Acidobacteriota, Acidobacterium capsulatum 161. It is a member of the family Acidobacteriaceae that is commonly found across many environments, such as soils. Acidobacterium capsulatum 161 was originally documented to be capable of microaerophilic growth and only later of weak fermentative growth as well (Pankratov et al. , Myers and King ). Recently, its capacity for respiratory flexibility, due to the presence and functionality of high- and low-affinity terminal oxidases, was demonstrated (Trojan et al. ).
Here, we expanded our investigation of this strain to ascertain whether it has additional abilities to alter its metabolism, such as the rerouting of metabolic fluxes, by profiling the whole-transcriptome response of A. capsulatum 161 to decreasing O2 concentrations in the micro- and nanomolar range, the latter referred to as nanoxic (<1 µmol O2 l−1) (Berg et al. ). To date, no reports have closely documented the catabolic routes of carbon and energy metabolism in Acidobacteriota or evaluated their global transcriptomic response to O2 deprivation. By combining genomics and hypoxic culture incubations using highly sensitive optical O2 sensors (Lehner et al. ), we were able to investigate transcription patterns at the oxic–anoxic interface and could observe a transition from respiratory to respiro-fermentative metabolism in A. capsulatum 161. Growth conditions and experimental setup As previously described (Trojan et al. ), A. capsulatum 161 (ATCC 51196, DSM 11244) was grown in biological quadruplicates in a vitamins and salts base medium (Eichorst et al. , ) amended with 10 mM glucose as the sole carbon source, at pH 5, under fully aerated conditions at room temperature. The setup of the 225-min microoxic incubations, using two LUMOS systems (Lehner et al. ) for O2 concentration monitoring and sample collection at discrete, declining O2 concentrations (10 µmol O2 l−1, 0.1 µmol O2 l−1, and 0.001 µmol O2 l−1) down to anoxia (0 µmol O2 l−1, operationally defined as <0.0005 µmol O2 l−1), was previously described in Trojan et al. . Briefly, at select time points , 30 ml of culture were collected by syringe for RNA extraction and transcriptome sequencing. For immediate inactivation, the syringes were prefilled with an acidic phenol-stop solution (Kits et al. ) and precooled at 4°C. After centrifugation, cell pellets were snap frozen in liquid nitrogen and then stored at –80°C. RNA extraction and purification RNA was extracted from frozen cell pellets using an acidic phenol/chloroform/isoamyl alcohol protocol (Griffiths et al. ) with mechanical disruption (30 s, 4 m s−1, FastPrep-24 bead beater, MP Biomedicals, Heidelberg, Germany) (Trojan et al. ). Purification of RNA and verification of complete DNA removal were described previously (Trojan et al. ). Transcriptome sequencing Triplicate RNA samples from selected O2 concentrations and time points were sent to the Vienna BioCenter Core Facilities for sequencing. rRNA was depleted using the NEB Ribo-Zero rRNA removal kit for bacteria. Sequencing was performed on an Illumina NextSeq 550 system, resulting in a total of 8.2–18.2 million 75-nucleotide reads per sample; more details can be found in Trojan et al. . Data processing and statistical analyses Raw reads were trimmed of sequencing adapters and low-quality 3′ ends using BBduk (BBtools v37.61, https://jgi.doe.gov/data-and-tools/bbtools/ ) with default parameters and error-corrected using the BayesHammer module of the SPAdes assembler version 3.13.0 (Nikolenko et al. ). Any reads mapping to either the SILVA SSU or LSU releases 132 (Quast et al. ) or the 5S rRNA database (Szymanski et al. ) with a sequence identity >70% (performed with BBmap, BBtools, https://jgi.doe.gov/data-and-tools/bbtools/ ) were removed from the dataset . The remaining reads were mapped to the publicly available genome of Acidobacterium capsulatum 161 (Eichorst et al. ). The RNA reads per gene were summarized using the featureCounts tool from the Subread package v1.6.2 (Liao et al. ).
Based on the generated read count tables, transcripts per million (TPMs) were calculated in R v3.6.0. Differential expression analysis, including the calculation of log2-fold changes of relative transcript abundance and the significance of these changes, was performed in DESeq2 v1.26.0 using default parameters and a P-value cutoff of .05 (Love et al. ).
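To make the count-to-TPM step concrete, the following is a minimal sketch of the arithmetic only, written in Python for illustration; the study itself performed these calculations in R and used DESeq2 for the differential-expression testing, and all gene names, counts, and sample labels below are hypothetical placeholders rather than data from this work.

import numpy as np
import pandas as pd

# Hypothetical featureCounts-style output: raw read counts (genes x samples)
# and the corresponding gene lengths in base pairs.
counts = pd.DataFrame(
    {"oxic_10uM": [1500, 30, 820], "nanoxic_0p1uM": [300, 25, 2400]},
    index=["nuoA", "gnl", "poxB"],
)
gene_length_bp = pd.Series([1200, 900, 1700], index=counts.index)

def tpm(raw_counts: pd.DataFrame, lengths_bp: pd.Series) -> pd.DataFrame:
    """Transcripts per million: length-normalize counts, then scale each sample to 1e6."""
    reads_per_kb = raw_counts.div(lengths_bp / 1_000, axis=0)
    return reads_per_kb.div(reads_per_kb.sum(axis=0), axis=1) * 1_000_000

tpms = tpm(counts, gene_length_bp)

# Naive log2 fold change with a pseudocount; DESeq2 instead models the raw counts
# with shrinkage estimators and Wald tests, which is what the Results report.
log2fc = np.log2((tpms["nanoxic_0p1uM"] + 1) / (tpms["oxic_10uM"] + 1))
print(tpms.round(1))
print(log2fc.round(2))

Note that TPM values of this kind serve a descriptive purpose; the significance calls quoted throughout the Results come from DESeq2's count-based model, as stated above.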
Global transcriptomic response under decreasing O2 concentrations Our analyses revealed that the decrease from 10 to 0.1 µmol O2 l−1 had the greatest impact on gene expression, with the highest number of significantly differentially expressed genes observed (Fig. ). Subsequent transitions to 0.001 and further to 0 µmol O2 l−1 invoked only a few to no significant expression changes (Fig. ). At 0.001 µmol O2 l−1, oxygen was still being supplied at 10.1 µmol O2 l−1 but could no longer be accurately determined, and this condition is therefore defined as "apparent anoxia." Ninety-seven percent of all annotated genes (n = 3321 genes) were transcribed at at least one time point across the O2 concentrations. A significant difference in transcript numbers between 10 and 0.1 µmol O2 l−1 was detected for 2677 (∼81%) of the transcribed genes. Of these genes, 41% were upregulated and 40% were downregulated at 0.1 µmol O2 l−1 (Fig. ). Genes encoding hypothetical proteins with no further functional annotation accounted for ∼25% of the differentially transcribed genes, whereas ∼26% and ∼30% of significantly upregulated and downregulated genes, respectively, were assigned to protein-coding genes with annotated function (Fig. ). Differential response of function-based categories to decreasing O2 concentrations Genes that exhibited significant differential expression in response to the O2 decrease were classified into clusters of orthologous groups (COGs) (Fig. and ). The COG categories of energy production and conversion (C), amino acid transport and metabolism (E), nucleotide transport and metabolism (F), and lipid transport and metabolism (I) were the categories most altered by the decrease from 10 to 0.1 µmol O2 l−1, as more than 80% of the total genes assigned to these categories exhibited significant differential gene expression (Fig. ). More specifically, the transition from 10 to 0.1 µmol O2 l−1 reduced the number of TPMs in COGs related to translation, ribosomal structure, and biogenesis (J); transcription (K); carbohydrate transport and metabolism (G); cell wall, membrane, and envelope biogenesis (M); and intracellular trafficking, secretion, and vesicular transport (U) (Fig. ), and many of these categories had a higher proportion of significantly downregulated genes due to reduced O2 (Fig. ). Interestingly, the number of transcripts per million (TPMs) in COG categories pertaining to energy production and conversion (C), secondary metabolite biosynthesis, transport, and catabolism (Q), and signal transduction mechanisms (T) increased with decreasing O2 concentrations (Fig. ), and some of these categories also had a higher proportion of significantly upregulated genes due to reduced O2 (Fig. ). Universal and reactive oxygen stress response to decreasing O2 The transcription of genes encoding proteins involved in the general stress response (COG category T, signal transduction mechanisms) was significantly stimulated upon the decrease of O2 (Fig. and ). In particular, a two-component sensor histidine kinase exhibited an 84-fold increase upon the shift to 0.1 µmol O2 l−1 (P ≤ .001) (Fig. , ). Furthermore, several homologs of the universal stress response gene uspA were significantly upregulated upon the decrease in oxygenation from 10 to 0.1 µmol O2 l−1. Four of those uspA genes had already been highly expressed at 10 µmol O2 l−1, whereas the expression of two of them, those with the highest fold increase (7- and 56-fold, respectively), seemed to be specifically stimulated by the drop to 0.1 µmol O2 l−1 (Fig. , ). The transcription levels of genes coding for various chaperone proteins differed between 10 and 0.1 µmol O2 l−1, but with no discernible trend of regulation (Fig. , ).
The transcriptome data at 10, 0.1, and 0.001 µmol O2 l−1 of the continuously decreasing O2 incubations exhibited the downregulation of key genes (ahpCD, trxR, katG) involved in oxidative stress defense (Fig. , ). Surprisingly, several other key genes coding for oxidative stress defense enzymes, catalase C (katE), non-heme manganese-containing catalase (kat), manganese superoxide dismutase (sodA), heme-dependent chlorite dismutase (cld), rubrerythrin (rbr), thioredoxin (trx), ferroxidase (bfr), the organic hydroperoxide resistance protein ohrB, and various homologs of alkyl hydroperoxide reductase subunits C and D (ahpCD), did not follow this trend and were transcribed at significantly higher levels at diminishing O2 concentrations (Fig. , ). Above all, rbr, sodA, trx, bfr, and one ahpC homolog (ahpC-2) were transcribed at high levels, with an up to 13-fold upregulation upon the shift from 10 to 0.1 µmol O2 l−1. Expression of electron transport chain and oxidative phosphorylation We detected gene expression for all complexes I–IV of the electron transport chain (ETC) and the ATP synthase (complex V) . Acidobacterium capsulatum 161 harbors several complexes IV of the respiratory chain, and the transcriptional responses to the decrease in O2 from 10 to 0.1 µmol O2 l−1 (Fig. , ) were published and discussed recently (Trojan et al. ). The genes encoding the ATP synthase (complex V) were expressed under all O2-limiting conditions but showed a continuous decrease in transcription level with diminishing O2 availability (Fig. , ). Acidobacterium capsulatum 161 expressed two proton-translocating NADH dehydrogenases (NDH-I, complex I). Out of the whole NDH-I nuoA-N operon, the nuoAC genes were continuously expressed at high transcription levels, whereas the transcription levels of the other subunits (nuoE-N) decreased significantly with decreasing O2 (P ≤ .001; Fig. , ). The other NADH dehydrogenase was an unusual complex I (Chadwick et al. ), as it had a duplicated nuo-M gene. It had expression patterns similar to those of the NDH-I; the nuoABC transcripts were detected in significantly higher numbers at diminishing O2 concentrations (Fig. , ). Two homologs of the type II NADH dehydrogenase (NDH-II), which do not translocate protons across the cell membrane (Blaza et al. ), were transcribed. One homolog was detected at a very low level, with a maximum TPM value of 10 at 10 µmol O2 l−1, whereas the second homolog was expressed at significantly higher levels (P ≤ .001) upon the shift from 10 to 0.1 µmol O2 l−1 (21-fold), and this high expression was maintained at all subsequent O2 conditions, with an average TPM value of 740 (Fig. , ). The operon (sdhABC) encoding the succinate dehydrogenase (SDH), complex II of the ETC, did not show any discernible pattern of regulation; e.g., the membrane subunit was upregulated, whereas the catalytic subunit (sdhB) was downregulated at 0.1 µmol O2 l−1 (Fig. , ). Decreasing O2 concentrations significantly altered the central metabolism With glucose in excess, decreasing O2 concentrations resulted in significant changes in gene expression between 10 and 0.1 µmol O2 l−1 for the Embden–Meyerhof–Parnas (EMP) pathway, the Entner–Doudoroff (ED) pathway, the pentose phosphate (PP) pathway, genes of pyruvate metabolism and the tricarboxylic acid (TCA) cycle, as well as acetate, glycogen, and gluconate metabolism (Fig. , , ). More details are discussed below.
Glucose transport and pyruvate production were downregulated at low-nanomolar O2 concentrations The overall expression of genes in the glycolytic pathways decreased from 10 to 0.1 µmol O2 l−1 (Fig. , , ). The transcription level of the symporter gene responsible for transporting glucose from the periplasm to the cytoplasm, galP, was significantly decreased (3.5-fold, P ≤ .001) upon the shift of oxygenation from 10 to 0.1 µmol O2 l−1 (Fig. , , ). The production of pyruvate was downregulated across various routes. Pyruvate stemming from 2-keto-3-deoxy-6-phosphogluconate (KDPG) in the ED pathway via KDPG aldolase (eda gene) (Bennett et al. , Flamholz et al. ) was expressed at significantly lower levels (Fig. , , ). The pfkA gene, which is unique to the EMP pathway in the catabolic direction, was significantly downregulated at 0.1 µmol O2 l−1 (1.3-fold, , ). The conversion of PEP to pyruvate, which is coupled to the synthesis of ATP, is catalyzed by pyruvate kinase (pyk), which exhibited lower expression at diminishing O2 concentrations . Apart from three genes, all genes involved in the PP pathway were significantly downregulated at 0.1 µmol O2 l−1 (Fig. , , ). Pyruvate metabolism was highly transcribed at low-nanomolar O2 concentrations Pyruvate occupies a key position in central carbon metabolism and is an important branch point between catabolic and biosynthetic pathways (Fig. ). Genes encoding proteins involved in pyruvate metabolism (such as poxB, aceEF, lpdA, por, and maeA) were transcribed at higher levels at lower O2 concentrations (Fig. , , ). Upregulation of glycogen and downregulation of gluconate metabolism at lower O2 concentrations Extracellular glucose can be oxidized to gluconate in the periplasm instead of being converted to G6P (Fig. ). The membrane-bound glucose 1-dehydrogenase (gdh), which oxidizes glucose to glucono-1,5-lactone (Fig. ), was one of the top upregulated genes (45-fold; log2-fold change of 5.5) and was transcribed at significantly higher levels at 0.1 µmol O2 l−1. Yet gluconolactonase (gnl) had very low transcript levels at low O2 concentrations (Fig. , ). Another alternative route to the glycolytic pathways is glycogen metabolism. The genes that convert G6P to glycogen were upregulated in A. capsulatum 161. These glucose polymers can be stored and mobilized upon future demand. We found genes for both biosynthesis and degradation of glycogen in the genome of A. capsulatum 161, and genes encoding proteins involved in glycogen metabolism were all upregulated upon the shift of O2 from 10 to 0.1 µmol O2 l−1 (Fig. , , ). Differential transcriptional response of TCA cycle genes to O2 concentrations Genes of the TCA cycle responded differently to the decrease of O2 (Fig. , , ). The oxidative decarboxylation of acetyl-CoA is one of the main catalytic functions of the TCA cycle and provides reducing equivalents to the respiratory complexes. The first of its four oxidative steps is a key rate-limiting step of the TCA cycle and is catalyzed by isocitrate dehydrogenase (icd). The multiple homologs of icd exhibited high transcription levels (high TPM values) but were significantly downregulated at low-nanomolar O2 concentrations . In addition, the TCA cycle flux can be constrained by the availability of oxaloacetate (OAA). The gene encoding phosphoenolpyruvate carboxykinase (pckA) was transcribed at significantly higher levels (7.5-fold) at 0.1 µmol O2 l−1, making OAA less available for the TCA cycle (Fig. ).
However, the TCA cycle also functions in a biosynthetic capacity, primarily in the synthesis of amino acids, heme, and glucose. Glutamate and aspartate are synthesized from 2-oxoglutarate and OAA, respectively, via transamination, and both genes, gdhA and aspB, exhibited increased expression at 0.1 µmol O2 l−1. We detected neither the genes aceA and aceB, encoding the glyoxylate-shunt enzymes isocitrate lyase and malate synthase, nor the gene glcB, encoding malate synthase G, in either the genome or the transcriptome of A. capsulatum 161. In addition to the oxidative conversion of pyruvate into acetyl-CoA, acetyl-CoA can also be synthesized through the catabolism of branched-chain amino acids by the branched-chain alpha-ketoacid dehydrogenase (BCKDC) complex (Fig. ). The decrease in oxygenation to 0.1 µmol O2 l−1 increased the expression of this BCKDC complex up to 3.2-fold (Fig. , , ). Significant transcriptional response of genes involved in production of acetate and ethanol due to decreasing O2 concentrations Genes involved in various pathways for the production of acetate, such as phosphotransacetylase (pta), exhibited significantly higher expression levels at low-nanomolar O2 concentrations (Fig. , , ). The conversion of acetate to acetyl-CoA via acetyl-CoA synthetase (acs) was downregulated at lower O2 concentrations (Fig. , , ). The peripheral membrane protein pyruvate oxidase PoxB (also sometimes referred to as pyruvate dehydrogenase [ubiquinone]), which oxidatively decarboxylates pyruvate to form acetate and is directly coupled to the respiratory chain (Gennis and Hager , Koland et al. , Abdel-Hamid et al. ) (Fig. ), was encoded by one of the most strongly upregulated genes of central carbon and energy metabolism, with a 45-fold increase in transcription at 0.1 µmol O2 l−1 (Fig. , , ). Apart from acetate production, ethanol production was significantly altered by decreasing O2 concentrations, and the adhP homologs, encoding alcohol dehydrogenases, exhibited expression levels up to 14.9-fold higher at 0.1 µmol O2 l−1 (Fig. , , ).
Adaptations and fast responses to changes in environmental conditions often occur at the metabolic level, and in this work we gained new insights into the transcriptional response of A. capsulatum 161 to diminishing O2 concentrations at low micro- and nanomolar levels. Our data indicate that diminishing O2 played a pivotal role in regulating the expression of genes involved in central metabolism under carbon (C) excess conditions (Figs and ). To counter the toxic accumulation of respiration byproducts building up from the lack of O2, the strain shifted its metabolism and rerouted fluxes from an energetically favorable respiratory state (Fig. ) to a respiro-fermentative condition, in which acetate together with ethanol seemed to be the major end-products (Figs and ). Glucose transport, the PP pathway, and pyruvate production were downregulated at low-nanomolar O2 concentrations (Fig. ), presumably to reduce the NADH/NAD+ redox ratio, which is a critical regulator of cell metabolism that ultimately controls the onset of respiro-fermentative metabolism (Shen and Atkinson , Szenk et al. ). As A. capsulatum 161 transitioned from oxic to nanoxic conditions under C excess, the transcripts of many genes encoding NADH-generating enzymes related to oxidative respiration were reduced (Fig. ). The glucose import gene (galP) exhibited reduced expression from 10 to 0.1 µmol O2 l−1, presumably as a means to limit the amount of available glucose. Yet glucose 1-dehydrogenase (gdh) was overexpressed, potentially channelling a large part of the carbon flow through the gluconate bypass and thus reducing the glucose concentration in the cell (Fig. ). However, it appears that the cell did not use gluconate, as gluconolactonase had very low transcript levels at low O2 concentrations (<10 µmol O2 l−1). This could suggest that the enzyme requires a certain oxygen concentration to function. Pyruvate oxidase (PoxB), which catalyzes the decarboxylation of pyruvate to acetate and CO2 (Figs and ), was upregulated in A. capsulatum 161, suggesting that pyruvate catabolism is the major switch point between the respiratory and fermentative responses. The glycolytic flux was redirected toward the production of the fermentation products acetate (upregulation of pta and acyP) and ethanol (upregulation of adhP) (Figs and , , ), preventing carbon intermediates from entering the TCA cycle (El-Mansi and Holms ). Cells can then convert acetyl-CoA through the Pta-AckA pathway, producing and excreting acetate while generating ATP (El-Mansi and Holms ). Since the flux from acetyl-CoA to acetate does not generate any NADH (while the flux from acetyl-CoA through the TCA cycle generates 8 NAD(P)H and 2 FADH2), the diversion of carbon flow to acetate could be viewed as a means for A. capsulatum 161 to reduce or prevent further NADH accumulation (El-Mansi and Holms , Holms ). This is in congruence with previous work on Staphylococcus aureus, where acetate production was enhanced under low-O2 and glucose-excess conditions (Ferreira et al. ). In addition, the conversion of acetaldehyde to ethanol via adhP was upregulated; this reaction consumes NADH and hence offers a way to counteract the NADH/NAD+ imbalance (Figs and ).
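As a rough bookkeeping sketch of this redox argument, using approximate textbook stoichiometries per glucose rather than values measured in this study, the alternative fates of pyruvate differ sharply in how much reduced cofactor they leave behind:

\[
\begin{aligned}
&\text{glycolysis:} && \text{glucose} \rightarrow 2\,\text{pyruvate} && +2\ \text{ATP},\ +2\ \text{NADH}\\
&\text{respiratory route (PDH + TCA):} && 2\,\text{pyruvate} \rightarrow 6\,\text{CO}_2 && +2\ \text{ATP},\ \approx +8\ \text{NAD(P)H},\ +2\ \text{FADH}_2\\
&\text{PoxB overflow:} && 2\,\text{pyruvate} \rightarrow 2\,\text{acetate} + 2\,\text{CO}_2 && +0\ \text{NADH (electrons to the quinone pool)}\\
&\text{Pta--AckA overflow:} && 2\,\text{acetyl-CoA} \rightarrow 2\,\text{acetate} && +2\ \text{ATP},\ +0\ \text{NADH}\\
&\text{AdhP:} && \text{acetaldehyde} \rightarrow \text{ethanol} && -1\ \text{NADH per reaction}
\end{aligned}
\]

Under O2 limitation, every NADH generated must compete for a scarce terminal electron acceptor, so routing pyruvate toward acetate keeps the NADH burden close to the two molecules produced by glycolysis, ethanol formation consumes NADH and lowers it further, and the overflow routes still return some ATP by substrate-level phosphorylation.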
Taken together, we hypothesize that the concomitant rise in NADH levels from glucose excess and low-O2 conditions drove the onset of fermentative metabolism (acetate and ethanol production) to avoid toxic levels of NADH in the cell. Acetate and ethanol production stemming from pyruvate bypasses any energy-conserving steps associated with NADH, allowing a fast oxidation of pyruvate and efficient shuttling of protons/electrons to the ETC. Various studies have shown that in concentrated glucose environments, E. coli and other organisms switch to acetate fermentation and obtain some of their energy anaerobically, even when O2 is plentiful, if the rate of glucose consumption is greater than the capacity to reoxidize the reducing equivalents generated (Farmer and Jones , Hollywood and Doelle , Andersen and Meyenburg , Meyer et al. , Farmer and Liao , Kayser et al. , Vemuri et al. , Vazquez et al. , Molenaar et al. , Nahku et al. , Valgepea et al. , , Zhuang et al. , Basan et al. , Peebo et al. , Schütze et al. ). Although a major role of NADH is to supply electrons to the ETC, thereby fueling the production of ATP, the strategy of A. capsulatum 161 was to reduce the NADH production stemming from respiratory pathways to avoid NADH imbalance while still generating ATP (Szenk et al. ). The concomitant use of alternative pathways for NAD+ regeneration was also reported in other facultative anaerobes such as E. coli (Vemuri et al. , Farhana et al. , Martínez-Gómez et al. , Szenk et al. ) and members of the genera Salmonella and Shigella (Gray et al. , Wolfe ). This metabolic flexibility would allow these bacteria to cope with varying concentrations of carbon and O2 in environments such as soils. This respiro-fermentative strategy might extend across the Acidobacteriota, as many genomes harbor this potential, as evidenced by the presence of acetate kinase and alcohol dehydrogenase genes (Eichorst et al. ). Our experimental conditions also invoked a significant upregulation of glycogen metabolism, suggesting that cells transform excess glucose into the storage compound glycogen (Fig. , , ). The accumulation of glycogen provides a metabolic reserve for A. capsulatum 161 under potential future carbon-limited conditions, which could be an important strategy in environments such as soils, allowing cells to cope with transiently limiting conditions. The stress response to O2 and reactive oxygen species (ROS) is crucial for the ability to exist in habitats that are characterized by fluctuating O2 concentrations. Whether microbes can occupy such a habitat, or a microniche within it, partly depends on whether they are able to withstand locally high or low O2 concentrations. Several universal stress proteins in A. capsulatum 161 were significantly upregulated by decreasing O2 concentrations approaching anoxia; in particular, two of these usp genes were affected by the drop of O2 to 0.1 µmol O2 l−1 (Fig. , ). Furthermore, the drop of O2 invoked a significant increase in the transcription of a sensor histidine kinase (Fig. , ), which presumably allowed A. capsulatum 161 to sense environmental stimuli and manage various environmental changes by coupling environmental cues to gene expression (Stock et al. , Mascher et al. , Kaczmarczyk et al. ). Cellular stress can further lead to protein denaturation (Hightower ), and proteolytic removal of non-functional proteins is crucial for optimal metabolic activities (Porankiewicz et al. ).
We detected an upregulation of the ATP-dependent Clp proteases in the transcriptomic response of A. capsulatum 161 to diminishing O2 concentrations (Fig. , ), suggesting that they are important in removing irreversibly damaged polypeptides that may interfere with metabolic pathways under O2-limited stress conditions. In A. capsulatum 161, a clear differential upregulation of genes involved in counteracting oxidative stress was observed upon the decrease of oxygenation from 10 to 0.1 µmol O2 l−1 (Fig. , ), indicating that it is capable of adapting to different redox states. Oxidative stress defense genes such as manganese superoxide dismutase, thioredoxins, and glutaredoxins were highly expressed and stimulated at low O2 (Fig. , ), as seen previously in Nitrosomonas europaea (Sedlacek et al. ). The increased demand for proteins involved in ROS defense could be caused by NADH/NAD+ redox ratio imbalances, as NADH accumulates and becomes toxic. Under O2-limiting conditions, an increased level of NADH builds up, as it is less efficiently reoxidized to NAD+ as a result of reduced aerobic respiration. The high-affinity bd-type oxidase (cydAB) and the NADH dehydrogenase (ndh-II) were upregulated upon the drop of oxygenation to 0.1 µmol O2 l−1 (Fig. , ). We previously hypothesized that the upregulation of the bd-type oxidase could indicate a contribution to respiratory activity under trace-O2 conditions, or a faster electron flux than through the cbb3-type oxidases, permitting more rapid turnover of the reducing potential generated from the carbon surplus (Trojan et al. ). The uncoupled NADH dehydrogenase NDH-II was highly upregulated at 0.1 µmol O2 l−1 (Fig. , ), presumably to compensate for the slow regeneration of NAD+ due to the low O2 availability. NDH-II only catalyzes the oxidation of NADH and the reduction of quinones, without the ability to pump protons, which, based on our data, seemed to be beneficial under micro- and nanoxic conditions. Alternative electron-transfer routes seem to allow A. capsulatum 161 to adjust its energy transduction efficiency to its needs and to substrate availability. In A. capsulatum 161, electrons can flow from the NADH dehydrogenases and SDH (complex II of the ETC) to the quinone/quinol pool, from where the electrons may either bypass the cytochrome bc1 complex (complex III) and flow directly to the bd-type quinol terminal oxidase, or flow via a cytochrome c either to the low-affinity caa3-type or the high-affinity cbb3-type cytochrome c terminal oxidase. Under aerobic or carbon-limiting conditions, the proton-coupled NDH-I might be the main driver of NADH oxidation and maintain the proton motive force required for ATP synthesis (Fig. ). Under the reducing conditions in our incubations, when the reduction state of the quinone/quinol pool increases, flux through NDH-II and the bd-type quinol oxidase appeared to increase, with NDH-II then being the dominant dehydrogenase oxidizing excess NADH and supporting redox balance (Figs and ). This strategy has been observed for E. coli, which may use NDH-II to counteract an increasing NADH/NAD+ ratio triggered by faster metabolism due to increased glucose uptake, in order to support fast growth (Vemuri et al. , Liu et al. ). In this study, we examined the transcriptional response of A. capsulatum 161 to diminishing O2 concentrations in the low nanomolar range. Overall, O2-limiting conditions invoked a significant stress response in A. capsulatum 161. Our data indicate that A.
capsulatum 161 has the genomic potential for multiple routes for the early steps of glucose catabolism. Under O2-limited but glucose-unlimited conditions, A. capsulatum 161 reroutes fluxes through its central metabolism from glycolysis to fermentative end products to counteract NADH/NAD+ imbalances that build up due to the loss of respiratory capacity under electron acceptor-limiting conditions. Understanding these capacities advances our knowledge of the metabolic responses that allow A. capsulatum 161 to successfully thrive and persist under fluctuating substrate availabilities in terrestrial environments. The investigated oxygen range (10 to 0.1 µmol O2 l−1, or 3.6–0.036% of the present atmospheric pO2) is environmentally relevant (Sexstone et al. ) and is presumably encountered in various soil niches. Coping with dynamic O2 tensions is therefore vital for aerobic bacteria dwelling in (temporarily) O2-deprived habitats. During "spring snowmelt" or in the rhizosphere, catabolism and ATP yields can be uncoupled due to O2 limitation and carbon availability. To survive reductive stress during O2 deprivation, soil bacteria depend on metabolic strategies to maintain a proton motive force and redox balance. Modifications in metabolic routes at trace O2 levels, however, extend beyond soils; these findings have implications for other environments, such as oxygen minimum zones (OMZs) in the Earth's oceans. OMZs are large water masses with low oxygen concentrations, thus favoring anaerobic metabolism (Kalvelage et al. ). Interestingly, aerobic metabolism was previously detected in regions of apparently anoxic conditions ("anoxic" OMZs) (Garcia-Robledo et al. ), along with the presence of terminal oxidases (Kalvelage et al. , Tsementzi et al. ) and the production of O2 at trace levels (Canfield and Kraft ). These regions could provide a niche where bacteria transition from a respiratory to a respiro-fermentative metabolism to maximize energy yield, prior to using less favorable electron acceptors, such as nitrate. Taken together, the transition from aerobic respiration to a respiro-fermentative metabolism could provide bacteria with the flexibility to generate energy during periods of limiting O2 in fluctuating environments, for the maintenance and survival of their populations.
Review of Methods for Studying Viruses in the Environment and Organisms
9ff23238-910c-4120-b31a-6d457f196639
11769461
Microbiology[mh]
1.1. Role of Viruses in the Environment and Organisms Viruses are widely found in a variety of environments, where they play an important role. They form the foundation of the Earth's ecological pyramid and significantly impact bacterial diversity and population structure. By infecting and killing bacteria, viruses participate in and influence the material and energy cycles within ecosystems. For example, viruses in oceans, such as those from the Caudovirales and Microviridae phage families, harbour genes that enable them to manipulate infected bacteria to process sulphur-containing compounds and participate in various cellular processes, such as photosynthesis . The unique structural characteristics and small size of viruses allow them to survive in a wide range of soils , facilitating virus transmission between plants and soil. For example, the cucumber green mottled mosaic virus (CGMMV) of the genus Tobamovirus infects grafted watermelons and spreads through pruning, irrigation, and other agricultural practices . In addition to habitat-based viruses, there are also viruses that are transmitted through living organisms, such as the zoonotic snowshoe hare virus (SSHV), an arbovirus that is transmitted between small mammals and mosquitoes . Rabies is another of the world's deadliest zoonotic diseases, with bats serving as the primary host and source of transmission to humans and livestock . Detecting virus–host (VH) interactions is important for in-depth studies of virus transmission between habitats. Since most cellular functions are carried out by proteins, studying VH interactions allows us to more quickly identify antiviral targets that play important roles in the viral life cycle . Some convenient and fast mathematical modelling methods for the analysis of VH interactions have also emerged, such as dynamic optimisation, evolutionary game theory, and the modelling of spatial phenomena, among other computational methods . However, the presence of viruses is not solely destructive to habitats; they also have positive effects. For example, viruses contribute to the release and cycling of carbon in the oceans by infecting and killing bacteria, which plays a role in maintaining the productivity of marine ecosystems and regulating global temperatures . Additionally, viral infections promote inter-gene exchange, whereby viruses transfer genes between different hosts, facilitating horizontal gene transfer, an important mechanism in the evolution of organisms. For example, adenoviruses have been used as vectors for gene transfer and as therapeutic tools for cancer, owing to their unique envelope-free DNA structure. Adenoviral vectors can be modified to make them replication defective for use in gene therapy . Similarly, viruses have a profound impact on human society, not only by causing human diseases but also through their applications in the fields of medicine and biotechnology. Currently, pathogens are often used as tools for vaccine development and gene therapy, aiding humans in combating certain diseases . 1.2. Current Status of Virus Research in the Environment and Organisms According to the Baltimore virus classification system, the viruses discovered can be categorised into DNA viruses and RNA viruses.
1.2. Current Status of Virus Research in the Environment and Organisms

According to the Baltimore virus classification system, the viruses discovered so far can be categorised into DNA viruses and RNA viruses. DNA viruses are further classified into single-stranded DNA viruses, double-stranded DNA viruses, and double-stranded DNA reverse-transcribing viruses, while RNA viruses are divided into double-stranded RNA viruses, positive-sense single-stranded RNA viruses, negative-sense single-stranded RNA viruses, and single-stranded RNA reverse-transcribing viruses . Previously, research on RNA viruses was relatively limited because of the inherent instability of RNA, which makes these viruses highly susceptible to degradation by contaminants during the research process and has therefore contributed to fewer studies. A shows the number of articles published on different virus types between 2010 and 2025. As can be seen, the emergence of a novel coronavirus (a positive-sense, single-stranded RNA virus) in 2019 has led to increasing interest in RNA viruses , and the literature on RNA virus research has grown exponentially from 2020 onwards. Current research on viruses, whether in the environment or in organisms, primarily focuses on virus–host interactions; the distribution of viruses in natural environments; and their evolution, survival patterns, and modes of transmission. The outbreak of the novel coronavirus (SARS-CoV-2, the agent of COVID-19) has sparked widespread interest in the virus–host–environment triad, driving research advancements in viral ecology . As illustrated in A–D, in the three years following the onset of the global COVID-19 outbreak in late 2020, through the lifting of full control measures at the beginning of 2023, the number of articles on various virus classes worldwide rose sharply, with increases of as much as three- to four-fold compared to previous years. Although the number of related articles has since decreased, the volume of publications in the past two years remains high. This indicates sustained interest in viruses, especially in the study of viruses affecting plants and animals. Polar, glacial, and permafrost environments, which are significantly affected by climate change, are research hotspots but still have comparatively few viral studies. As shown in D, fewer than 200 articles were published on viruses in glaciers between 2010 and 2024. This limited research is partly due to the challenges of sampling in glacial environments and partly due to the relatively low biomass compared to other environments, which renders conventional detection techniques less effective. This also suggests significant potential for future research on viruses in glacial environments. E illustrates the changing trend in the number of virus-related articles as a percentage of all articles published in the biological community from 2010 to 2024, with the same upward trend evident from 2019 onwards, suggesting heightened concern regarding the presence of viruses in habitats following the COVID-19 outbreak.
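Year-by-year publication counts such as those summarised above can be retrieved programmatically. The sketch below is a minimal, hypothetical example using Biopython's Entrez interface to count PubMed records per year for an illustrative search term; the query string, contact email, and date handling are assumptions, not the search strategy behind the figures discussed here.

```python
# Minimal sketch: count PubMed records per publication year for a search term.
# The query, email, and year range are illustrative assumptions.
from Bio import Entrez

Entrez.email = "your.name@example.org"  # NCBI requires a contact address

def yearly_counts(term: str, start: int, end: int) -> dict[int, int]:
    counts = {}
    for year in range(start, end + 1):
        handle = Entrez.esearch(db="pubmed", term=term, retmax=0,
                                datetype="pdat",
                                mindate=str(year), maxdate=str(year))
        record = Entrez.read(handle)
        handle.close()
        counts[year] = int(record["Count"])
    return counts

if __name__ == "__main__":
    hits = yearly_counts('"RNA virus"[Title/Abstract]', 2010, 2024)
    for year, n in hits.items():
        print(year, n)
```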
This review provides a summary of the pre-treatment methods used in previous virus studies, as shown in . There is little difference in the principles of the methods used to study viruses in the various matrices, which involve extracting the virus particles from the samples and then analysing their genomes. However, the methods for collecting and processing different types of samples differ. For instance, the extraction of viral particles from water samples requires filtration and concentration prior to nucleic acid extraction, while solid samples must be resuspended in PBS (phosphate-buffered saline) buffer, followed by filtration and concentration for further analysis. During the sample collection step, the operational methods may vary, even for the same type of sample. For example, some surface water samples can be collected directly using sterile bags or bottles, whereas deep water samples necessitate specialised water samplers. This review aims to summarise and analyse the differences in viral research methods across various sample types.

1.3. Methodological Issues in Virus Research in the Current Environment

In the past, a number of traditional viral particle assays have been used, such as the plaque-forming unit (PFU) method , the 50% tissue culture infective dose (TCID50) method , and the real-time fluorescence quantitative PCR (qPCR) method . Of these three methods, the qPCR method is rapid and sensitive but does not ensure that only infectious virus particles are quantified. However, there is a proliferation of emerging methods for viral titration. For example, the fluorescence focus assay (FFA) can detect and titrate viral infectivity based on the binding of specific antibodies to viral antigens, especially for viruses that fail to form plaques with classical crystal violet or neutral red staining . There is also the enzyme-linked immunosorbent assay (ELISA), which is fast, simple, specific, and efficient, with accuracy and sensitivity similar to traditional methods . Droplet digital PCR (ddPCR) is also now widely used; it is highly accurate and allows for absolute quantification of target virus particles in samples without the need to prepare a standard curve . Currently, there are still major constraints in the enrichment and pre-processing of viruses for research. By summarising and comparing different processing methods, this review aims to help researchers choose more efficient methods for virus research and make improvements.
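Since the TCID50 endpoint is mentioned above, the sketch below shows one common way such titres are computed, the Reed–Muench method. The dilution scheme, well counts, and inoculum volume are invented for illustration and do not come from any study cited here.

```python
# Reed-Muench calculation of a TCID50 endpoint from an endpoint-dilution assay.
# The dilution scheme and well counts below are invented for illustration.

def reed_muench_log10_endpoint(log10_dilutions, infected, total):
    """log10 of the dilution infecting 50% of wells (dilutions ordered
    from least to most dilute, e.g. -1, -2, -3, ...)."""
    uninfected = [t - i for i, t in zip(infected, total)]
    cum_inf = [sum(infected[i:]) for i in range(len(infected))]
    cum_uninf = [sum(uninfected[:i + 1]) for i in range(len(uninfected))]
    pct = [100.0 * ci / (ci + cu) for ci, cu in zip(cum_inf, cum_uninf)]

    above = max(i for i, p in enumerate(pct) if p >= 50.0)  # last dilution >= 50%
    below = above + 1
    pd = (pct[above] - 50.0) / (pct[above] - pct[below])     # proportionate distance
    step = log10_dilutions[above] - log10_dilutions[below]   # log10 of dilution factor
    return log10_dilutions[above] - pd * step

# Hypothetical assay: ten-fold dilutions, 8 wells each, 0.1 mL inoculum per well.
endpoint = reed_muench_log10_endpoint(
    log10_dilutions=[-1, -2, -3, -4, -5],
    infected=[8, 8, 6, 2, 0],
    total=[8, 8, 8, 8, 8],
)
titre_per_ml = 10 ** (-endpoint) / 0.1
print(f"50% endpoint: 10^{endpoint:.2f}; titre ~ {titre_per_ml:.2e} TCID50/mL")
```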
2.1. Methods for Studying Viruses in Water Samples

During the COVID-19 pandemic, viral particle levels in water were reported to reach 4.60 ± 2.50 × 10⁶ GC/μL . The concentration of viruses in different water bodies can vary widely, necessitating pre-experimentation to determine the appropriate sampling volume prior to analysis, especially in water bodies where viruses exist in diverse species and in large quantities. For example, viruses that infect humans are commonly found in domestic water and municipal wastewater , and fish-associated viruses are often detected in aquatic environments where fish are present .

2.1.1. Water Sampling Methods

The process of collecting surface water is relatively straightforward. To enrich the water samples for virus particles, it is necessary to filter the samples to obtain a sediment containing microorganisms; the genome is then extracted from this sediment. Water samples are generally collected in sterile bags or sterile bottles at appropriate depths below the water surface, preserved at appropriate temperatures, and transported to the laboratory in a timely manner. The methods for collecting water samples vary depending on the depth of the source. As shown in , deep water from environments such as oceans and lakes can be collected using a CTD (conductivity–temperature–depth) water collection system . In addition, the amount of sample used for viral research varies. Generally, everyday water sources contain fewer viral particles and therefore require a larger sample volume. In contrast, sewage contains a large number of viral particles, so a smaller sample suffices; approximately 100 mL is enough to detect the abundance of viruses and their genomes .

2.1.2. Water Sample Handling and Testing Methods

The collected water samples are transported to the laboratory at 4 °C and stored at −20 °C until processing. Owing to the large volume of the water samples, the virus particles are dispersed and present in low concentrations, making it necessary to filter and concentrate the water samples to obtain a more concentrated virus suspension before extracting and quantifying the genome. summarises the commonly used membrane pore sizes and concentration methods for filtering water samples. Because viral particles vary in size, the pore sizes of the filter membranes must also differ. Initially, membranes with pore sizes of 1 μm or 2 μm are used to remove large particles, such as sand and gravel, from the water samples. Subsequently, filters with pore sizes of 0.22 μm or 0.45 μm are used to capture viral particles, and ultrafiltration membranes with molecular weight cutoffs of 50 kDa or 100 kDa are often used at the final stage to capture even smaller virus particles . It is worth noting that, when viruses are filtered using membranes with a pore size of 0.22 µm or 0.45 µm, the filters actually capture the virus particles mainly by electrostatic interactions, since the pore size of these membranes is much larger than the virus particles . Concentration is typically achieved using the tangential flow filtration (TFF) method , FeCl₃ flocculation , or the PEG (polyethylene glycol) precipitation method . Among these, the TFF method is often used in experiments with large sample volumes and allows for continuous operation.
However, it has the disadvantage of being expensive, and for some sensitive biomolecules the high shear of the flow-through concentration process may affect their activity. The FeCl₃ method is primarily used for flocculation processes in water treatment, relying on charge neutralisation between the chemical flocculant and the particles. The disadvantages of this method are that it is mainly applicable to water samples, making it unsuitable for all sample types, and that it may introduce additional ions into the sample, which could affect sample purity if not removed in subsequent steps. The PEG method is currently the most commonly used concentration method in the laboratory. It is simple and effective at concentrating viruses, although additional treatment may be required to remove excess PEG and salts to minimise their impact on sample quality. The hollow fibre ultrafiltration (HFUF) method can effectively capture and concentrate viruses from water samples, with viral recoveries reaching 70–80% . A celite-based secondary concentration step after HFUF can further increase the recovered virus concentration, with recoveries of about 60% . It has been previously shown that some effluent analyses are conducted after creating microcosms; for example, oil contamination has been simulated by adding 0.5%, 1%, and 5% diesel oil filtered through a 0.22 μm filter, representing three levels of contamination that mimic real-world conditions. These microcosms are incubated at room temperature and manually shaken at regular intervals to simulate tidal fluctuations in surface water . Among the different sources of water samples, glacial samples are treated differently because of their physical form. For example, for phage DNA enrichment from artificial ice cores from the Tibetan Plateau , it is necessary to first melt the ice samples before passing them through filter membranes of different pore sizes. The appropriate concentration method is then selected, with the FeCl₃ enrichment technique commonly employed for snow and ice samples . Subsequently, DNase I should be added to the viral concentrate to remove any free DNA. The viral genome is then extracted using a commercial kit, followed by amplification and quantification through PCR, qPCR, qRT-PCR, nested PCR, and library construction . Since qPCR and RT-qPCR also directly detect nucleic acids that are not enclosed in virions, the amount of virus detected with these techniques can exceed the actual viral abundance . To address this problem, Farkas et al. developed a SYBR Green-based RT-qPCR method to accurately quantify viral particles, with a detection range of 5 to 5 × 10⁴ viral genome copies . Droplet digital PCR (ddPCR) also has excellent sensitivity, with detection limits of 0.05 pg/μL for viral nucleic acid . For the determination of viral particle abundance in samples, a more standardised method is employed, in which a 1–2 mL sub-sample is taken and fixed with methanol/pentanediol . However, some literature suggests that methanol/pentanediol fixation may affect the apparent abundance of viruses in the sample . Therefore, viral abundance is often determined directly after SYBR Green I staining using either flow cytometry or drop-shot fluorescence microscopy.
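Counts obtained after SYBR Green I staining are usually converted to virus-like particles (VLPs) per millilitre using the filtration geometry. The sketch below shows that routine conversion; the filter and field dimensions, counts, and filtered volume are placeholders, not values from the studies cited here.

```python
# Convert epifluorescence microscopy counts of SYBR Green I-stained particles
# into virus-like particles (VLPs) per mL. All numbers are illustrative.
import math
import statistics

counts_per_field = [42, 55, 47, 61, 38, 50, 44, 58, 49, 53]  # VLPs per field of view
filter_diameter_mm = 17.0      # effective (stained) diameter of the filter
field_side_um = 100.0          # side length of a square counting field
volume_filtered_ml = 1.0       # sample volume drawn through the filter

filter_area_um2 = math.pi * (filter_diameter_mm * 1000 / 2) ** 2
field_area_um2 = field_side_um ** 2

mean_count = statistics.mean(counts_per_field)
vlp_per_ml = mean_count * (filter_area_um2 / field_area_um2) / volume_filtered_ml
print(f"Estimated abundance: {vlp_per_ml:.2e} VLPs/mL")
```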
2.2. Methods for the Study of Viruses in Soil Samples

Current research on soil samples is primarily focused on three types: soil, sediment, and beach sand. The amount of sample required for analysis varies depending on the collection environment, which may affect the level of viral content in the samples. For example, samples collected from beaches with frequent human activity typically require smaller quantities, while those from soil environments with minimal biological activity necessitate larger sample sizes. Viral levels in soil samples typically range from 2.7 × 10⁵ to 4.2 × 10⁹ virus particles/g .

2.2.1. Sampling Methods

Soil samples are generally easier to collect than water samples and can be obtained directly from the surface down to a depth of 45 cm using a sterile spoon . Some beach sand samples are collected using a core sampler . To collect sediments from the deep sea, specialised samplers, such as gravity corers and box corers, are required . The collected soil samples must be sieved through a 2 mm sieve to remove stones and plant roots before further experiments are conducted .

2.2.2. Soil Sample Processing and Testing Methods

As shown in , there is little variation in the methods used for the filtration, concentration, and determination of virus particle abundance in soil samples. However, the amount of sample required differs because microbial content varies across sample types. For solid samples such as soil, it is usually necessary to recover the virus particles in the supernatant after resuspension and homogenisation in PBS, and then to collect them using different concentration and precipitation methods; the TFF method is commonly used to concentrate the supernatant and the PEG method to precipitate the virus particles . This step is followed by DNase treatment to ensure effective purification of the virus particles before extracting the viral DNA and using a kit for whole-genome amplification and sequencing on the Illumina system. Alternatively, the PBS resuspension step can be omitted and the entire soil metagenome extracted directly using a soil sample kit; viral sequences can then be identified post-assembly using tools such as VirSorter and DeepVirFinder . Phages that may be present in soil samples are processed by mixing and resuspending the samples in buffer (commonly PBS or sodium citrate buffer ) and filtering the samples through a 0.22 μm membrane. The filtered sample solution is then plated onto appropriate media, such as lysogeny broth (LB) medium or beef digest-based (BD) medium, for incubation and screening of phage plaques . Phages recovered from the media are usually observed by negative-staining transmission electron microscopy . For example, a method based on magnetic separation and chemiluminescence can be used for rapid detection of Pseudomonas aeruginosa phage, followed by observation of phage particles by scanning electron microscopy (SEM) and transmission electron microscopy (TEM) . The method for detecting virus particle abundance in soil is similar to that used for water samples: a sub-sample is stained with SYBR Green I and virus abundance is determined using droplet fluorescence microscopy . Quantification of virus particles in soil samples is typically performed using qPCR or the droplet digital PCR (ddPCR) method . The lowest detection limit can reach 5 fg/μL for ddPCR and 10 fg/μL for qPCR .
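Plaque screening as described above is typically reported as plaque-forming units per millilitre. The short sketch below shows the standard back-calculation from plate counts; the plate counts, dilution, and plated volume are invented for illustration.

```python
# Back-calculate a phage titre (PFU/mL) from plaque counts on dilution plates.
# Plate counts, dilution, and plated volume are illustrative placeholders.

def pfu_per_ml(plaque_counts, dilution, volume_plated_ml):
    """Titre of the undiluted sample from replicate plates of one dilution."""
    mean_plaques = sum(plaque_counts) / len(plaque_counts)
    return mean_plaques / (dilution * volume_plated_ml)

# Example: duplicate plates of the 10^-6 dilution, 0.1 mL plated per plate.
titre = pfu_per_ml(plaque_counts=[63, 71], dilution=1e-6, volume_plated_ml=0.1)
print(f"Titre ~ {titre:.2e} PFU/mL")   # ~ 6.7e8 PFU/mL
```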
2.3. Methods for Studying Viruses in Aerosol Samples

Aerosol samples have been studied less extensively, and it was only after the COVID-19 outbreak that greater attention was given to them. Under everyday conditions, the concentration of viral particles in the air is approximately 10³–10⁴ viral gene copies/m³ . In indoor environments, the concentration of viral particles in aerosols can reach 300–700 viral gene copies/m³ .

2.3.1. Sampling Methods

Aerosol samples are typically collected using either active or passive sampling methods . Passive sampling is mainly based on natural deposition and generally requires prolonged exposure to the environment to collect indoor air . This method has the advantages of ease of use and low cost; its disadvantages are the need for a clean sampling environment and poor stability . If particles rebound out of the collector because few particles are present in the air, the data from the passive sampler become inaccurate . Active sampling methods can be classified into solid impaction, liquid impingement, centrifugal and cyclonic, electrostatic precipitation, and filtration methods. Among these, the solid impaction method is highly sensitive and widely used, though it is complex to operate and places stringent requirements on the sampling media . The liquid impingement method, which minimises microbial damage and operates at a high flow rate, is not suitable for low temperatures or short-duration sampling . Centrifugal and cyclonic methods are easy to operate, compact, portable, controllable, and low-cost, but they may lead to microbial loss . Electrostatic precipitation methods help maintain the biostability and viability of samples while enabling the sampling of particles with a wide range of diameters, but they have a limited collection range and a notable impact on the sampling environment . Filtration methods, while involving easy-to-carry and low-cost devices, are highly influenced by the filter materials used and can be complicated to operate . The collection of aerosol samples typically requires specialised aerosol collectors , which are often equipped with filters of various diameters (typically 25 mm and 47 mm) to collect aerosol particles of different sizes .

2.3.2. Aerosol Sample Handling and Detection Methods

For the enrichment of viral particles, the filter membrane is typically resuspended in PBS buffer and the suspension is clarified by filtration and centrifugation. Since viral particles in aerosols are relatively small in diameter, the solution is usually passed through a 0.22 μm filter membrane to remove macromolecules and debris other than viral particles . Viral particles are concentrated using tangential flow filtration (TFF), and virus concentrations in the samples are determined by RT-PCR, quantitative fluorescence PCR, and other methods after extracting the viral genome with a kit . Currently, for the detection of pathogenic microorganisms in aerosols, simple amplification-based assays (SAMBA) and LAMP (loop-mediated isothermal amplification) assays are commonly used; they complement the accuracy and sensitivity of the traditional RT-PCR method and reduce the assay time . RT-qPCR has been used for the detection of SARS-CoV-2; although it can be very sensitive to RNA in small sample volumes, it is not widely used in this setting because of the high risk of handling infectious RNA viruses and the high cost of the detection instruments . In addition, the emergence of electrowetting-on-dielectric (EWOD)-based digital microfluidics (DMF) allows for rapid and efficient detection of specific viruses in a shorter time and with a lower sample volume; the lowest detection limit reported for phages is about 10⁶ PFU/mL .
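Aerosol concentrations such as the gene copies per cubic metre quoted above are derived from the copies recovered off the filter and the air volume drawn through the sampler. The sketch below shows that conversion with assumed sampler settings and an assumed qPCR result.

```python
# Convert copies recovered from an aerosol sampler filter into copies per m^3 of air.
# Flow rate, sampling time, eluate volume, and qPCR result are illustrative.

def copies_per_m3(copies_per_ul_eluate, eluate_volume_ul,
                  flow_rate_l_per_min, sampling_minutes):
    """Gene copies per cubic metre of sampled air."""
    total_copies = copies_per_ul_eluate * eluate_volume_ul
    air_volume_m3 = flow_rate_l_per_min * sampling_minutes / 1000.0  # L -> m^3
    return total_copies / air_volume_m3

# Example: filter eluted in 1 mL (1000 uL), qPCR reports 12 copies/uL,
# sampler ran at 100 L/min for 240 min.
conc = copies_per_m3(12, 1000, 100, 240)
print(f"Airborne concentration ~ {conc:.0f} gene copies/m^3")   # ~ 500 copies/m^3
```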
2.4. Methods for the Study of Viruses in Faeces

Research on faecal samples has also increased in recent years, and their processing and detection methods are largely similar to those used for soil samples. Viral nucleic acid in faecal samples is typically quantified by Qubit, with concentrations ranging from 80.14 to 98.91 ng/mL . Additionally, viral gene copies in the range of 10⁸ to 10¹³ per gram can be detected in the faeces of patients with gastrointestinal diseases, such as diarrhoea .

2.4.1. Sampling Methods

Faecal samples are collected in a straightforward manner, either with a specialised faecal collector or directly into a sterile bag, with careful measures taken to avoid contamination during collection. The sampling method does not differ notably between human and animal faeces. Generally, a sample size of 100 mg is required for the study of viral particles in faeces .

2.4.2. Faecal Sample Processing and Testing Methods

Faecal samples are typically first resuspended in PBS solution , and in some studies the viral genome is extracted from the samples using kits after pre-treatment with antibiotics in PBS solution . Additionally, the resuspension is filtered through 0.8 μm and 0.45 μm membranes after the antibiotic treatment, effectively removing excess macromolecules . RNA viruses are more commonly studied in faecal samples, and viral RNA extraction kits are generally used to extract the viral RNA, which is then analysed by methods such as RT-qPCR, real-time RT-PCR, and nested RT-PCR and sequenced on the Illumina platform . DNA viruses are generally extracted directly from samples using viral DNA extraction kits and subjected to PCR analysis and sequencing. The difference between the analysis of DNA and RNA viruses is that RNA viruses undergo reverse transcription to cDNA before PCR analysis. A novel real-time RT-PCR method (P-sg-QPCR) has also emerged for the rapid diagnosis and quantification of coronaviruses in the faeces of various hosts (e.g., human, cat, canine, porcine, bovine, murine, and avian); it incorporates the sensitivity of the primer–probe energy transfer (PriProET) technique to allow faster detection and quantification of viral mutants, with a detection limit of 3.7 × 10⁷ copies/μL .

2.5. Methods for the Study of Viruses in Plant and Animal Tissues

Current research on plant viruses is relatively straightforward, and the methods for sample collection are the same, involving the direct collection of diseased plant parts for analysis. The viral content in these samples generally ranges from 30 to 3000 ng/mL . The study of viruses in animals has gained increasing attention, especially following the COVID-19 outbreak, which heightened concerns about the presence of viruses in animals. However, the handling of animal samples presents challenges, requiring compliance with regulations such as the Laboratory Animal Welfare Act, which mandates that laboratory animals not be abused or killed indiscriminately. In general, virus infection in animals is often assessed by directly quantifying antibodies in plasma samples , while viral gene copy abundance can reach up to 5 × 10⁶ copies/μL . However, not all measurements of viral abundance can be replaced by measurements of antibody content. When the concentration of antibodies is not sufficient to neutralise the virus, it can instead increase the pathogen load, a phenomenon known as antibody-dependent enhancement (ADE) .
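Antibody levels in plasma, mentioned above, are commonly read off an ELISA standard curve. The sketch below fits a four-parameter logistic (4PL) curve with SciPy and back-calculates sample concentrations; the standard concentrations and optical densities are fabricated for illustration only.

```python
# Fit a four-parameter logistic (4PL) ELISA standard curve and interpolate
# sample concentrations. Standards and optical densities are invented numbers.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, d, c, b):
    """a: response at zero analyte, d: maximal response, c: EC50, b: slope."""
    return d + (a - d) / (1.0 + (x / c) ** b)

def inverse_four_pl(y, a, d, c, b):
    """Back-calculate concentration from a measured response."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

# Hypothetical antibody standards (ng/mL) and their mean optical densities.
conc = np.array([1.0, 3.0, 10.0, 30.0, 100.0, 300.0])
od = np.array([0.08, 0.15, 0.40, 0.95, 1.70, 2.10])

popt, _ = curve_fit(four_pl, conc, od, p0=[0.05, 2.3, 30.0, 1.0], maxfev=10000)

sample_od = np.array([0.55, 1.20])               # unknown samples
estimates = inverse_four_pl(sample_od, *popt)
for od_val, est in zip(sample_od, estimates):
    print(f"OD {od_val:.2f} -> ~ {est:.1f} ng/mL")
```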
2.5.1. Sampling Methods

Because plant virus symptoms are usually highly visible, collection is typically straightforward and can be conducted directly at the site of disease. Generally, a sample of 2–40 g is required for extracting the viral genome . In contrast, the collection of animal samples is typically more complex and requires careful management to prevent cross-contamination among experimental animals kept together during breeding. The sample types most often collected from animals include serum, nasal swabs, and oral swabs .

2.5.2. Animal and Plant Sample Handling and Testing Methods

Before extracting viral particles from plant and animal tissues, the samples must be ground and homogenised, after which the supernatant is collected by centrifugation and filtration. In particular, certain viruses extracted from animal tissues such as fish require the samples to be cultured before homogenisation; the samples are then filtered and screened for viruses. For example, the RNA virus nervous necrosis virus (NNV) extracted from marine fish and shellfish , influenza A viruses (IAVs) extracted from bronchus-associated lymphoid tissue in severely affected areas of seal lungs , and viral haemorrhagic septicaemia virus (VHSV) extracted from European rainbow trout were isolated directly using cell culture . However, not all viral particle extractions from animals require culturing the tissues. For example, Redondoviridae from human bronchoalveolar lavage (BAL) can be sequenced by extracting the viral genome directly from the sample using genome amplification kits . Most viruses present in plant samples are dsRNA viruses; dsRNA is a stable form of RNA that can be easily extracted from plants. Typically, 10–40 g or 2–5 g of plant tissue is homogenised, and dsRNA extraction is performed using either reagents prepared as described in the literature or commercially available kits. The extracted dsRNA is subsequently analysed by RT-PCR, qPCR, real-time PCR, and multiplex PCR and sequenced on the Illumina platform . High-throughput sequencing is then used to detect and characterise known and unknown plant viruses . In addition to this approach of identifying viral genomes after extracting all nucleic acids in a sample, some earlier studies employed the double-antibody sandwich enzyme-linked immunosorbent assay (DAS-ELISA) , followed by RT-PCR of selected positive samples. This approach involves resuspending samples in PBS buffer with sodium azide as a preservative, followed by extraction of the plant viral genome and purification using common methods. The sample solution is then subjected to rate-zonal sucrose density gradient centrifugation to produce antiserum and measure viral content . The cetyltrimethylammonium bromide (CTAB) method is also commonly used to extract viral nucleic acid from plants. Typically, this process involves homogenising 100 mg of sample in CTAB buffer, followed by centrifugation and RNA extraction. Viral abundance is then assessed using methods such as RT-PCR, LAMP, or RT-ddPCR . For animal samples, suitable tissue sections or serum samples (200 μL) are usually collected. Before extracting the genome from virus-infected tissues, the samples must be homogenised. The genome is then extracted directly using viral extraction kits and analysed by methods such as PCR and RT-PCR . Common quantification methods for viruses in these samples include quantification kits, ELISA, chromatography, RT-PCR, and ddPCR .
The LAMP assay, which is commonly used for virus detection in plant and animal samples, can detect viral RNA in samples diluted to a concentration of 1 × 10⁻⁷ ng/μL, while the detection limit of the RT-PCR assay is about 1 × 10⁻⁴ ng/μL .
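Detection limits quoted as mass concentrations, as above, can be converted into genome copies per microlitre once a genome length is assumed. The sketch below does this for a hypothetical 10 kb single-stranded RNA genome using an average residue mass of roughly 340 g/mol per nucleotide; both values are illustrative assumptions, not figures from the cited studies.

```python
# Convert a nucleic acid mass concentration (ng/uL) into genome copies per uL.
# Genome length and average nucleotide mass are illustrative assumptions.
AVOGADRO = 6.022e23          # molecules per mole
AVG_NT_MASS = 340.0          # g/mol per nucleotide, approximate for ssRNA

def copies_per_ul(conc_ng_per_ul: float, genome_length_nt: int) -> float:
    grams_per_ul = conc_ng_per_ul * 1e-9
    grams_per_genome = genome_length_nt * AVG_NT_MASS / AVOGADRO
    return grams_per_ul / grams_per_genome

# Example: a 10 kb ssRNA genome at the LAMP detection limit quoted above.
print(f"{copies_per_ul(1e-7, 10_000):.1f} copies/uL")   # ~ 18 copies/uL
```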
However, it has the disadvantage of being expensive, and for some sensitive biomolecules, the high-speed flow concentration method may affect their activity. The FeCl 3 concentration method is primarily used for flocculation processes in water treatment, relying on charge neutralisation between the chemical flocculant and the particles. The disadvantages of this method are that it is mainly applicable to water samples, making it unsuitable for all types of samples, and that it may introduce additional ions into the sample, which could affect the purity of the sample if not removed in subsequent steps. The PEG method is currently the most commonly used concentration method in the laboratory. It is simple and effective at concentrating viruses. However, it may require additional treatment to remove excess PEG and salts to minimise their impact on the quality of the sample. The hollow fibre ultrafiltration (HFUF) method can effectively capture and concentrate viruses from water samples, with viral recoveries reaching 70–80% . The celite secondary concentration method after HFUF can effectively increase virus recovery concentration, and the recovered virus concentration can reach about 60% . It has been previously shown that some effluent analyses are conducted following the creation of microcosms, such as the detection of oil contamination using 0.5%, 1%, and 5% diesel oil filtered through a 0.22 μm filter, representing three levels of oil contamination to simulate real-world conditions. These microcosms are incubated at room temperature and manually shaken at regular intervals to simulate tidal fluctuations in surface water . Among the different sources of water samples, the difference in the treatment of glacial samples arises from differences in their form. For example, in the case of some phage DNA enrichment from artificial ice cores from the Tibetan Plateau , it is necessary to first melt the ice samples before passing them through filter membranes of different pore sizes. The appropriate method for concentration is then selected, with the FeCl 3 enrichment technique commonly employed for snow and ice samples . Subsequently, DNase I should be added to the viral concentrate to remove any free DNA. The viral genome is then extracted using a commercial kit, followed by amplification and quantification of the viral genome through PCR, qPCR, qRT-PCR, nested-PCR, and the creation of libraries . Since qPCR and RT-qPCR will also directly detect membrane-free nucleic acids, this will make the amount of virus detected under this technique higher than the actual viral abundance . To solve this problem, Farkas et al. found a SYBR Green-based RT-qPCR method to accurately quantify viral particles, which could detect 5–5 × 10 4 viral particle copies . Droplet digital PCR (ddPCR) also has excellent sensitivity, with detection limits of 0.05 pg/μL for viral particle copies . For the detection of the abundance of viral particles in samples, a more standardised method is employed, where a 1–2 mL sub-sample is taken and fixed with methanol/pentanediol . However, some literature suggests that methanol/pentanediol fixation may affect the abundance of viruses in the sample . Therefore, viral abundance is determined directly after SYBR Green I staining using either flow cytometry or drop-shot fluorescence microscopy. The process of collecting surface water is relatively straightforward. 
To enrich the water samples for virus particles, it is necessary to filter the samples to obtain a sediment containing microorganisms. Subsequently, the genome is extracted from the obtained sediment. Water samples are generally collected in sterile bags or sterile bottles at appropriate depths below the water surface. The samples are then preserved at appropriate temperatures and transported to the laboratory in a timely manner. The methods for collecting water samples vary depending on the depth of the source. As shown in , deep water from environments such as oceans and lakes can be collected using the CTD water collection system . In addition, the amount of sample used for viral research varies. Generally, everyday water contains fewer viral particles, requiring a larger sample for research. In contrast, sewage contains a large number of viral particles, and thus, a smaller sample size is sufficient; approximately 100 mL is enough to detect the abundance of viruses and their genomes . The various water samples collected are transported to the laboratory at 4 °C and stored at −20 °C until processing. Owing to the large volume of the water samples, the virus particles are dispersed and present in low concentrations, making it necessary to filter and concentrate the water samples to obtain a higher concentration of viruses in suspension before extracting and quantifying the genome. summarises the commonly used pore sizes of membranes and concentration methods for filtering water samples. Because viral particles vary in size, the pore sizes of the filter membranes must also differ. Initially, membranes with pore sizes of 1 μm or 2 μm are used to remove large particles, such as sand and gravel, from the water samples. Subsequently, filters with pore sizes of 0.22 μm or 0.45 μm are used to capture viral particles, and membranes with pore sizes of 50 kD or 100 kD are often used at the final stage to capture even smaller virus particles . It is worth noting that, when viruses are filtered using membranes with a pore size of 0.22 µm or 0.45 µm, the filters actually capture the virus particles mainly by electrostatic action, since the pore size of these membranes is much larger than the virus particles . Concentration is typically achieved using the tangential flow filtration (TFF) method , FeCl3 concentration , or the PEG (polyethylene glycol) method . Among these, the TFF method is often used in experiments with large sample volumes and allows for continuous operation. However, it has the disadvantage of being expensive, and for some sensitive biomolecules, the high-speed flow concentration method may affect their activity. The FeCl 3 concentration method is primarily used for flocculation processes in water treatment, relying on charge neutralisation between the chemical flocculant and the particles. The disadvantages of this method are that it is mainly applicable to water samples, making it unsuitable for all types of samples, and that it may introduce additional ions into the sample, which could affect the purity of the sample if not removed in subsequent steps. The PEG method is currently the most commonly used concentration method in the laboratory. It is simple and effective at concentrating viruses. However, it may require additional treatment to remove excess PEG and salts to minimise their impact on the quality of the sample. The hollow fibre ultrafiltration (HFUF) method can effectively capture and concentrate viruses from water samples, with viral recoveries reaching 70–80% . 
The celite secondary concentration method after HFUF can effectively increase virus recovery concentration, and the recovered virus concentration can reach about 60% . It has been previously shown that some effluent analyses are conducted following the creation of microcosms, such as the detection of oil contamination using 0.5%, 1%, and 5% diesel oil filtered through a 0.22 μm filter, representing three levels of oil contamination to simulate real-world conditions. These microcosms are incubated at room temperature and manually shaken at regular intervals to simulate tidal fluctuations in surface water . Among the different sources of water samples, the difference in the treatment of glacial samples arises from differences in their form. For example, in the case of some phage DNA enrichment from artificial ice cores from the Tibetan Plateau , it is necessary to first melt the ice samples before passing them through filter membranes of different pore sizes. The appropriate method for concentration is then selected, with the FeCl 3 enrichment technique commonly employed for snow and ice samples . Subsequently, DNase I should be added to the viral concentrate to remove any free DNA. The viral genome is then extracted using a commercial kit, followed by amplification and quantification of the viral genome through PCR, qPCR, qRT-PCR, nested-PCR, and the creation of libraries . Since qPCR and RT-qPCR will also directly detect membrane-free nucleic acids, this will make the amount of virus detected under this technique higher than the actual viral abundance . To solve this problem, Farkas et al. found a SYBR Green-based RT-qPCR method to accurately quantify viral particles, which could detect 5–5 × 10 4 viral particle copies . Droplet digital PCR (ddPCR) also has excellent sensitivity, with detection limits of 0.05 pg/μL for viral particle copies . For the detection of the abundance of viral particles in samples, a more standardised method is employed, where a 1–2 mL sub-sample is taken and fixed with methanol/pentanediol . However, some literature suggests that methanol/pentanediol fixation may affect the abundance of viruses in the sample . Therefore, viral abundance is determined directly after SYBR Green I staining using either flow cytometry or drop-shot fluorescence microscopy. Current research on soil samples is primarily focused on three types: soil, sediment, and beach sand. The amount of sample required for analysis varies depending on the collection environment, which may affect the level of viral content in the samples. For example, samples collected from beaches with frequent human activity typically require smaller quantities, while those from soil environments with minimal biological activity necessitate larger sample sizes. Viral levels in soil samples typically range from 2.7 × 10 5 to 4.2 × 10 9 virus particles/g . 2.2.1. Sampling Methods Soil samples are generally easier to collect than water samples and can be obtained directly from the surface to 45 cm by a sterile spoon . Some beach sand samples are collected using a core sampler . To collect sediments from the deep sea, specialised samplers, such as gravity corers and box corers, are required . The collected soil samples must be sieved using a 2 mm sieve to remove stones and grassroots before conducting further experiments . 2.2.2. 
Soil Sample Processing and Testing Methods As shown in , there is little variation in the methods used for the filtration, concentration, and determination of the abundance of virus particles in soil samples. However, differences arise in the amount of sample required due to variations in microbial content across different types of samples. For solid samples such as soil, it is usually necessary to recover the supernatant virus particles after resuspension and homogenisation using PBS and then collect the virus particles using different concentration and precipitation methods, among which the TFF method is commonly used to concentrate the supernatant and the PEG method is used to precipitate the virus particles . This step is followed by DNase treatment to ensure effective purification of the virus particles before extracting the viral DNA and using a kit for whole-genome amplification and sequencing with the Illumina system. Alternatively, the PBS resuspension step can be omitted, and the entire soil genome can be directly extracted using a soil sample kit. The viral sequence can then be identified post-assembly using VirSorter and DeepVirFinder . Phages that may be present in soil samples are processed by mixing and resuspending the samples in buffer—commonly PBS buffer or sodium citrate buffer —and filtering the samples using a 0.22 μm membrane. The filtered sample solution is then poured into appropriate media, such as lysogeny broth medium (LB medium) or beef digest-based medium (BD medium), for incubation and screening of phage plaques . Phages on the media are usually observed through negative staining transmission electron microscopy . For example, a method based on magnetic separation and chemiluminescence can be used for rapid detection of Pseudomonas aeruginosa phage followed by observation of phage particles by scanning electron microscopy (SEM) and transmission electron microscopy (TEM) . The method for detecting the abundance of virus particles in soil is similar to that used for water samples. This involves staining a sub-sample with SYBR Green I and determining virus abundance using droplet fluorescence microscopy . Quantification of virus particles in soil samples is typically performed using qPCR or the droplet digital PCR (ddPCR) method . The lowest detection limit can reach 5 fg/μL for ddPCR and 10 fg/μL for qPCR . Soil samples are generally easier to collect than water samples and can be obtained directly from the surface to 45 cm by a sterile spoon . Some beach sand samples are collected using a core sampler . To collect sediments from the deep sea, specialised samplers, such as gravity corers and box corers, are required . The collected soil samples must be sieved using a 2 mm sieve to remove stones and grassroots before conducting further experiments . As shown in , there is little variation in the methods used for the filtration, concentration, and determination of the abundance of virus particles in soil samples. However, differences arise in the amount of sample required due to variations in microbial content across different types of samples. For solid samples such as soil, it is usually necessary to recover the supernatant virus particles after resuspension and homogenisation using PBS and then collect the virus particles using different concentration and precipitation methods, among which the TFF method is commonly used to concentrate the supernatant and the PEG method is used to precipitate the virus particles . 
This step is followed by DNase treatment to ensure effective purification of the virus particles before extracting the viral DNA and using a kit for whole-genome amplification and sequencing with the Illumina system. Alternatively, the PBS resuspension step can be omitted, and the entire soil genome can be directly extracted using a soil sample kit. The viral sequence can then be identified post-assembly using VirSorter and DeepVirFinder . Phages that may be present in soil samples are processed by mixing and resuspending the samples in buffer—commonly PBS buffer or sodium citrate buffer —and filtering the samples using a 0.22 μm membrane. The filtered sample solution is then poured into appropriate media, such as lysogeny broth medium (LB medium) or beef digest-based medium (BD medium), for incubation and screening of phage plaques . Phages on the media are usually observed through negative staining transmission electron microscopy . For example, a method based on magnetic separation and chemiluminescence can be used for rapid detection of Pseudomonas aeruginosa phage followed by observation of phage particles by scanning electron microscopy (SEM) and transmission electron microscopy (TEM) . The method for detecting the abundance of virus particles in soil is similar to that used for water samples. This involves staining a sub-sample with SYBR Green I and determining virus abundance using droplet fluorescence microscopy . Quantification of virus particles in soil samples is typically performed using qPCR or the droplet digital PCR (ddPCR) method . The lowest detection limit can reach 5 fg/μL for ddPCR and 10 fg/μL for qPCR . Aerosol-based samples have been less extensively studied, and it was only after the COVID-19 outbreak that greater attention was given to aerosol samples. Under daily conditions, the concentration of viral particles in the air is approximately 10 3 –10 4 viral gene copies/m 3 . In indoor environments, the concentration of viral particles in aerosols can reach 300–700 viral gene copies/m 3 . 2.3.1. Sampling Methods Aerosol samples are typically collected using either active or passive sampling methods . Passive sampling is mainly based on the natural deposition method, which generally requires prolonged exposure to the environment to collect indoor air . This method has the advantages of ease of use and low cost. However, its disadvantages are the need for a clean sampling environment and poor stability . If fewer particles in the air cause the particles in the collector to bounce back, the data from the passive sampler are inaccurate . Active sampling methods can be classified into solid impact methods, liquid impact methods, centrifugal and cyclone methods, electrostatic precipitation methods, and filtration methods. Among these, the solid impact method is highly sensitive and widely used, though it is complex to operate and requires stringent conditions for the sampling media . The liquid impaction method, which minimises microbial damage and operates at a high flow rate, is not suitable for low temperatures and short-duration sampling . Centrifugation and cyclone methods are easy to operate, compact, portable, controllable, and low-cost, but they may lead to microbial loss . Electrostatic precipitation methods help maintain the biostability and viability of samples while enabling the sampling of particles with a wide range of diameters, but they have a limited collection range and a notable impact on the sampling environment . 
Filtration methods, while involving easy-to-carry and low-cost devices, are highly influenced by the materials used and can be complicated to operate . The collection of aerosol samples typically requires specialised aerosol collectors , which are often equipped with filters of various pore sizes (typically 25 mm and 47 mm) to collect aerosol particles of different sizes . 2.3.2. Aerosol Sample Handling and Detection Methods For the enrichment of viral particles, the filter membrane is typically resuspended in PBS buffer and centrifuged by filtration. Since viral particles in aerosols are relatively small in diameter, the solution is usually passed through a 0.22 μm filter membrane to remove macromolecules other than viral particles . Viral particles are concentrated using tangential flow filtration (TFF), and the virus concentrations in the samples are detected using RT-PCR, quantitative fluorescent PCR, and other methods after extracting the viral genome with a kit . Currently, for the detection of pathogenic microorganisms in aerosols, simple amplification-based detection (SAMBA) and LAMP ASSAYS are commonly used, which complement the accuracy and sensitivity of the traditional RT-PCR method and reduce the assay time . RT-qPCR was used for the detection of COVID-19. Although RT-qPCR can be very sensitive to RNA in small sample sizes, this method is not widely used due to the high risk of RNA virus infection and the high cost of detection instruments . In addition, the emergence of electrowetting-on-dielectric (EWOD)-based digital microfluidics (DMF) allows for rapid and efficient specific virus detection in a shorter time and with a lower sample volume, and the lowest detection limit for the detection of phage viruses can reach 106 pfu/ml . Aerosol samples are typically collected using either active or passive sampling methods . Passive sampling is mainly based on the natural deposition method, which generally requires prolonged exposure to the environment to collect indoor air . This method has the advantages of ease of use and low cost. However, its disadvantages are the need for a clean sampling environment and poor stability . If fewer particles in the air cause the particles in the collector to bounce back, the data from the passive sampler are inaccurate . Active sampling methods can be classified into solid impact methods, liquid impact methods, centrifugal and cyclone methods, electrostatic precipitation methods, and filtration methods. Among these, the solid impact method is highly sensitive and widely used, though it is complex to operate and requires stringent conditions for the sampling media . The liquid impaction method, which minimises microbial damage and operates at a high flow rate, is not suitable for low temperatures and short-duration sampling . Centrifugation and cyclone methods are easy to operate, compact, portable, controllable, and low-cost, but they may lead to microbial loss . Electrostatic precipitation methods help maintain the biostability and viability of samples while enabling the sampling of particles with a wide range of diameters, but they have a limited collection range and a notable impact on the sampling environment . Filtration methods, while involving easy-to-carry and low-cost devices, are highly influenced by the materials used and can be complicated to operate . 
Research on faecal samples has also increased in recent years, but their processing and detection methods are largely similar to those used for soil samples. Viral nucleic acid in faecal samples is typically quantified via Qubit, with concentrations ranging from 80.14 to 98.91 ng/mL. Additionally, viral gene copies in the range of 10^8 to 10^13 per gram can be detected in the faeces of patients with gastrointestinal diseases, such as diarrhoea.

2.4.1. Sampling Methods

Faecal samples are collected by a single method, either with a specialised faecal collector or directly into a sterile bag, with careful measures taken to avoid contamination during collection. The sampling method does not notably differ between human and animal faeces. Generally, a sample size of 100 mg is required for the study of viral particles in faeces.

2.4.2. Faecal Sample Processing and Testing Methods

Faecal samples are typically first resuspended in PBS solution, and in some studies the viral genome is extracted from the samples using kits following pre-treatment with antibiotics in PBS solution. Additionally, the resuspension solution is filtered through 0.8 μm and 0.45 μm membranes after the addition of antibiotics and treatment, effectively removing excess macromolecules. RNA viruses are more commonly studied in faecal samples; viral RNA extraction kits are commonly used to extract the viral RNA, which is then analysed using methods such as RT-qPCR, real-time RT-PCR, and nested RT-PCR and sequenced on the Illumina platform. DNA viruses are generally extracted directly from samples using viral DNA extraction kits and subjected to PCR analysis and sequencing. The difference between the analysis of DNA and RNA viruses is that RNA viruses undergo reverse transcription to cDNA before PCR analysis. A novel real-time RT-PCR method (P-sg-QPCR) has also emerged for the rapid diagnosis and quantification of coronaviruses in faeces from a range of host species (e.g., human, cat, canine, porcine, bovine, murine, and avian); it combines the sensitivity of the primer-probe-energy transfer (PriProET) technique to allow for faster detection and quantification of viral mutants, with a detection limit of 3.7 × 10^7 copies/μL.
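The per-gram loads quoted above relate to the qPCR or ddPCR readout on the extract through the elution volume and the 100 mg input mass. This is a minimal sketch with hypothetical values for the extract concentration and elution volume, again ignoring extraction losses.

```python
def copies_per_gram(extract_copies_per_ul: float, elution_volume_ul: float,
                    input_mass_g: float) -> float:
    """Scale a qPCR/ddPCR result on the extract back to copies per gram of faeces."""
    return extract_copies_per_ul * elution_volume_ul / input_mass_g

# Hypothetical: 1e5 copies/uL in a 100 uL elution from a 100 mg faecal input.
print(f"{copies_per_gram(1e5, 100, 0.1):.1e} copies/g")
```

The illustrative numbers give 1.0e+08 copies/g, the lower end of the range reported for patients with gastrointestinal disease.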
Current research on plant viruses is relatively straightforward, and the methods for sample collection are the same, involving the direct collection of diseased plant parts for analysis. The viral content in these samples generally ranges from 30 to 3000 ng/mL. The study of viruses in animals has gained increasing attention, especially following the COVID-19 outbreak, which heightened concerns about the presence of viruses in animals. However, the handling of animal samples presents challenges, requiring compliance with regulations such as the Laboratory Animal Welfare Act, which mandates that laboratory animals not be abused or killed indiscriminately. In general, virus quantification in animals is typically conducted by directly quantifying antibodies in plasma samples, with viral gene copy abundance reaching up to 5 × 10^6 copies/μL. However, not all measurements of viral abundance can be replaced by measurements of antibody content: when the concentration of antibodies is not sufficient to neutralise the virus, the pathogen load in the host can instead increase, a phenomenon known as antibody-dependent enhancement (ADE).

2.5.1. Sampling Methods

Because plant viruses are highly visible, their collection is typically straightforward and can be conducted directly at the site of the disease. Generally, a sample size of 2–40 g is required for extracting the viral genome.
In contrast, the collection of animal samples is typically more complex. It requires careful management to prevent cross-contamination among experimental animals that are bred and housed together. The types of samples often collected from animals include serum, nasal swabs, and oral swabs.

2.5.2. Animal and Plant Sample Handling and Testing Methods

Before extracting viral particles from plant and animal tissues, the samples must be ground and homogenised, after which the supernatant is collected through centrifugation and filtration. Notably, for certain viruses extracted from animal tissues such as fish, the samples need to be cultured before homogenisation. Following this, the samples are filtered and screened for viruses. For example, the RNA virus nervous necrosis virus (NNV) extracted from marine fish and shellfish, influenza A viruses (IAVs) extracted from bronchial-associated lymphoid tissues in severely affected areas of seal lungs, and viral haemorrhagic septicaemia virus (VHSV) extracted from European rainbow trout were isolated directly using cell culture. However, not all viral particle extractions from animals require culturing the tissues. For example, Redondoviridae from human bronchoalveolar lavage (BAL) fluid can be directly extracted from the sample using genome amplification kits to obtain the virus's genome sequence. Most viruses present in plant samples are dsRNA, a stable form of RNA that can be easily extracted from plants. Typically, 10–40 g or 2–5 g of plant tissue is taken for homogenisation. dsRNA extraction is performed using either reagents prepared as described in the literature or commercially available kits. The extracted dsRNA is subsequently analysed by RT-PCR, qPCR, real-time PCR, and multiplexed PCR and sequenced on the Illumina platform. High-throughput sequencing is subsequently used to detect and characterise known and unknown plant viruses. In addition to this approach of identifying viral genomes after extracting all genomes in a sample, some earlier studies employed the double antibody sandwich enzyme-linked immunosorbent assay (DAS-ELISA), followed by RT-PCR of selected positive samples. This approach involves resuspending samples in PBS buffer with sodium azide as a preservative, followed by extraction of the plant viral genome and purification using commonly used methods. The sample solution is then subjected to rate-zonal sucrose density gradient centrifugation to produce antiserum and measure viral content. The cetyltrimethylammonium bromide (CTAB) method is also commonly used to extract viruses from plants. Typically, this process involves homogenising 100 mg of sample in CTAB buffer, followed by centrifugation and RNA extraction. Viral abundance is then assessed using methods such as RT-PCR, LAMP, or RT-ddPCR. For animal samples, suitable tissue sections or serum samples (200 μL) are usually collected. Before extracting the genome from virus-infected tissues, the samples must be homogenised. The tissues are then extracted directly using virus kits, and the genome is analysed through various methods such as PCR and RT-PCR. Common quantification methods for viruses in these samples include quantification kits, ELISA, chromatography, RT-PCR, and ddPCR. The LAMP assay, which is commonly used for virus detection in plant and animal samples, is capable of detecting viral RNA in samples diluted to a concentration of 1 × 10^−7 ng/μL, and the detection limit of the RT-PCR assay can reach 1 × 10^−4 ng/μL.
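The roughly thousand-fold gap between those two detection limits is easiest to appreciate in terms of how far a sample can be diluted and still score positive. The sketch below uses a hypothetical starting concentration for a plant RNA extract; it simply compares the quoted limits and is not drawn from any cited protocol.

```python
import math

lods_ng_per_ul = {"LAMP": 1e-7, "RT-PCR": 1e-4}   # detection limits quoted above
start_ng_per_ul = 10.0                             # hypothetical viral RNA concentration of an extract

for assay, lod in lods_ng_per_ul.items():
    max_fold_dilution = start_ng_per_ul / lod
    ten_fold_steps = round(math.log10(max_fold_dilution))
    print(f"{assay}: still detectable after about {ten_fold_steps} ten-fold dilutions "
          f"(a {max_fold_dilution:.0e}-fold dilution in total)")
```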
Early studies showed that concentration and enrichment methods for viral particles in suspension were usually small-scale techniques suited to small sample volumes, such as aqueous polymer two-phase separation, hydroextraction, soluble alginate ultrafilter membranes, and ultracentrifugation. Experiments with large sample volumes were instead treated with precipitable salts, Fe oxides, polyelectrolytes, and cotton gauze fibres. Although these methods seem helpful for experiments with different sample size requirements, the concentration of viral particles is often poorer than expected; many of the methods lose viral particles from the samples or, in some water samples, enrich other toxic substances instead of accurately recovering the specific viral particles required for the experiments. Since these methods are not applicable to all virus types, they were subsequently improved, and positive and negative electrosorption-elution methods with different pore sizes and materials came into use. However, these methods are susceptible to the effects of pH, charged ions, and similar factors, which lead to clogging of the filter and affect the enrichment of virus particles. To avoid the problems caused by electrostatic interactions of virus particles, researchers switched to ultrafiltration for concentration, which relies on the size of the virus particles relative to the pore size of the filter membrane and includes the tangential flow filtration (TFF) method commonly used to mitigate membrane blockage. In recent years, a combination of ultracentrifugation and density gradient centrifugation has become the most common method of concentration. This secondary concentration method can greatly increase the recovery of virus particles compared with other concentration methods. As can be seen from , ultracentrifugation is one of the most commonly used methods for concentrating virus particles, is less restrictive to use, and achieves a high recovery rate. Although the two-step concentration method can also obtain a highly concentrated virus recovery solution, its steps are complicated. The greatest advantage of the two-step concentration method over ultracentrifugation is that it can be used at outdoor sampling sites where electricity is available and does not have to be confined to a laboratory environment. Currently, different collection, processing, and quantification methods are employed for the various types of samples studied across habitats. In general, water samples are collected directly using sterile bags, bottles, or a CTD water collection system. Snow, ice, and soil samples are collected directly with sterile shovels and then placed into sterile sampling bags or bottles. Faecal samples and aerosols require specialised sampling equipment. Plant samples are collected by selecting representative specimens from uncontaminated areas at the site of investigation, using sterile tools and placing them into sampling bags. Animal samples are typically collected from appropriate tissues or body fluids.
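Whichever workflow is chosen, the recovery comparisons discussed above come down to the same bookkeeping: the percentage of input virus particles that survive the concentration steps. Below is a minimal sketch with entirely hypothetical titres and volumes for a two-step (ultracentrifugation plus density gradient) workflow.

```python
def recovery_percent(input_conc_per_ml: float, input_volume_ml: float,
                     output_conc_per_ml: float, output_volume_ml: float) -> float:
    """Percentage of input virus particles recovered after a concentration workflow."""
    recovered = output_conc_per_ml * output_volume_ml
    loaded = input_conc_per_ml * input_volume_ml
    return 100.0 * recovered / loaded

# Hypothetical example: 1 L of water at 1e4 particles/mL concentrated to 2 mL at 3.5e6 particles/mL.
print(f"Recovery: {recovery_percent(1e4, 1000, 3.5e6, 2):.0f}%")
```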
The detection of virus abundance using flow cytometry is widely applied to water samples, though the method has some limitations. Flow cytometry may not remove enough interfering particles compared with droplet fluorescence microscopy, resulting in inflated virus particle abundance results. The quantitative detection of virus particles in soil samples typically employs qPCR or ddPCR technology. ddPCR is generally more sensitive to virus particles at lower target concentrations, and some studies have shown that ddPCR offers superior accuracy, reproducibility, sensitivity, and stability for the quantification of bacteria and fungi compared with qPCR, resulting in more accurate outcomes. However, the ddPCR method also has drawbacks, such as high cost, complex protocols, and a limited detection range; thus, it can be used as a complementary method to qPCR. Stool samples, which usually contain a high concentration of viral particles, generally do not require a highly sensitive detection method; as a result, qPCR, RT-PCR, or RT-qPCR is commonly selected for quantification. For studies on human clinical samples, methods such as the TaqMan Array Card (TAC) assay and the LAMP assay are used for the simultaneous detection of multiple enteric pathogens. Virus quantification in plant samples frequently utilises serological assays, such as DAS-ELISA, as well as qPCR, RT-PCR, RT-qPCR, and RT-ddPCR. DAS-ELISA is commonly used to detect tomato spotted wilt orthotospovirus (TSWV) in peanut leaves. However, the TSWV load detected by DAS-ELISA in the inter-root of asymptomatic peanut leaves has been found to be significantly higher than that detected by other techniques, suggesting that DAS-ELISA may overestimate TSWV viral loads. For the quantification of citrus tatter leaf virus (CTLV), the RT-ddPCR method is used, as it can be up to 10-fold more sensitive than RT-PCR. Furthermore, the LAMP assay has been demonstrated to be more sensitive than RT-PCR and specific for the detection of tomato mosaic virus (ToMV). RT-PCR is often used in the quantification of viruses in animal serum samples, although differences exist within this assay. The RT-PCR method can be divided into one-step and two-step protocols, with the one-step protocol offering a lower detection limit for viral load and greater sensitivity, whereas the two-step RT-PCR reaction provides greater flexibility and allows for better optimisation. lists the currently used methods for quantitative detection of virus particles. It can be seen that RT-qPCR has the lowest detection limit and is also the most widely used method at present. There are also some improved methods based on RT-qPCR and ddPCR that can quantify the viral copy number more accurately. Currently, there is also a complex PCR method, epicPCR, in which targets are usually amplified with specific primers and then quantified using common PCR methods such as qPCR. epicPCR can rapidly identify specific viral targets in situ and reveal virus-host relationships in a culture-independent manner. The method is highly efficient with low equipment requirements but suffers from some primer bias. The study of viruses in environmental habitats currently faces several significant challenges, including the low concentration of viral content in samples, high detection limits, and operational complexity. These challenges limit high-throughput processing and rapid response capabilities in virus detection, both for outbreak surveillance and for environmental virology studies. Although genome sequencing technology has advanced rapidly, the pre-processing steps for samples remain complex.
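One reason ddPCR stays accurate at the low target concentrations mentioned above is that it derives an absolute copy number from the fraction of negative droplets via Poisson statistics. The sketch below shows that calculation; the droplet counts are hypothetical and the 0.85 nL droplet volume is an assumed typical value rather than a figure taken from this text.

```python
import math

def ddpcr_copies_per_ul(total_droplets: int, negative_droplets: int,
                        droplet_volume_nl: float = 0.85, dilution_factor: float = 1.0) -> float:
    """Estimate target copies per microlitre of reaction from droplet counts (Poisson correction)."""
    mean_copies_per_droplet = -math.log(negative_droplets / total_droplets)
    copies_per_ul_reaction = mean_copies_per_droplet / (droplet_volume_nl / 1000.0)  # nL -> uL
    return copies_per_ul_reaction * dilution_factor

# Hypothetical run: 20,000 accepted droplets, 18,000 of them negative.
print(f"{ddpcr_copies_per_ul(20_000, 18_000):.0f} copies/uL of reaction")
```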
In future research, the enrichment methods for virus particles can be optimised to improve the recovery rate of the virus and the sensitivity of detection. Additionally, the development of automated equipment for sample processing could reduce manual labour, increase processing speed, and at the same time reduce human contamination. Interdisciplinary approaches incorporating molecular biology and bioinformatics could also help create comprehensive virus detection platforms and enable real-time monitoring of virus particles in environmental samples, facilitating rapid responses to public health emergencies. The experimental challenges that have emerged in the past two years, together with objective limitations such as the nature of the experimental samples and the unavoidable risk of human contamination, have made the study of pathogenic microorganisms complex. Additionally, the ultrafiltration, concentration, and other steps involved in enriching pathogenic microorganisms require extended experimental time and stringent experimental conditions, further complicating such studies. For example, during field sampling there is a high probability that samples will be contaminated with environmental pollutants that may affect the enrichment of the target virus particles. When researchers handle the samples in the laboratory, the samples may be contaminated through operator error, or errors in the selection of methods and quantification instruments may lead to inaccuracies in the study of viral particles. Currently, there is no effective and convenient method to enrich and study pathogenic microorganisms at high quality and high concentration. Summarising the advantages of the current emerging methods, including high throughput, automation, low cost, objectivity, reproducibility, high precision and accuracy, and broad applicability, points to the direction in which more advanced viral titre assays are likely to develop. This review summarises the methods used in the literature to enrich and extract pathogenic microorganisms from various environments. Despite the differences in these environments, the methods for the enrichment and extraction of pathogenic microorganisms are broadly similar, with differences occurring only in the processing of the samples. For the detection of pathogenic microorganisms, the microorganisms must first be concentrated, followed by the extraction of genetic material using kits. Although the methods are relatively uniform, several challenges remain in the study of pathogenic microorganisms. These include the low concentration of pathogenic microorganisms in environmental samples, contamination introduced during sample enrichment and concentration due to the experimental environment and procedures, and the difficulty of extracting pathogenic gene sequences owing to the low abundance of microorganisms in the samples. Additionally, environmental factors, such as differences in sample collection conditions and experimental temperatures, can induce morphological changes in the samples, complicating the analysis further. These morphological changes can also influence both the quantity and survival rate of pathogenic microorganisms in the samples.
Bringing theory to life: integrating case-based learning in applied physiology for undergraduate physiotherapy education
The fundamental medical sciences, including applied physiology, constitute a foundational element of medical curricula, providing essential understanding of the human body's biological underpinnings, various diseases, and associated therapies. Physiotherapy students rely on this basic knowledge as they develop their clinical expertise. However, there is growing concern that traditional teaching methods (TTMs) in physiotherapy education fail to yield optimal learning outcomes. Integrating basic sciences with clinical relevance from the outset of education is believed to enhance information retention and facilitate its application in clinical settings. In the traditional framework of physiotherapy education, basic sciences, including medical physiology, are typically taught in the initial two years of undergraduate studies with limited interdisciplinary interaction. This approach may detrimentally impact students' perception and long-term retention of foundational scientific knowledge. The inability to connect foundational knowledge with clinical contexts may result in graduates lacking the critical thinking and problem-solving skills essential for effective clinical practice. Moreover, senior undergraduate physiotherapy students frequently express informal dissatisfaction with their memory of basic medical sciences and struggle to correlate this content with later clinical curricula. As physiotherapy students progress through their education, their perceptions of foundational courses often become increasingly negative, highlighting a potential flaw in the educational system where acquired knowledge risks becoming inaccessible and inert. Research indicates that basic science knowledge acquired within a clinical context is more readily applied and comprehended by students. Despite significant efforts over decades, the practical implementation of integration remains challenging. Case-based learning (CBL) emerges as a promising alternative, characterized by interactivity and student autonomy, potentially fostering greater enthusiasm for learning. Recent studies have explored the potential benefits of combining CBL with TTMs in physiotherapy education, contrasting with traditional didactic lectures and practical classes that are often teacher-centered with minimal student engagement. Monitoring student perception throughout undergraduate courses may inform recommendations for better integration of basic sciences within clinical subjects, facilitating the unified application of foundational knowledge to clinical scenarios. Integrating a CBL approach alongside TTMs in applied physiology for physiotherapy students may enhance their ability to apply theoretical knowledge to clinical scenarios, thereby improving their academic performance and perception. Hence, the current study aims to integrate CBL with TTMs in teaching applied physiology to undergraduate physiotherapy students and to evaluate the impact of this combined hybrid approach on student perceptions and academic performance, comparing it to the application of TTMs alone.

Study design
This is an interventional study that was conducted at the Faculty of Physiotherapy, AlSalam University, during the period of January to May 2023, on undergraduate physiotherapy students during the neuroscience course.

Ethical considerations
This study was conducted in accordance with the Declaration of Helsinki.
Setting and participants

Study participants and eligibility
A cohort of 244 undergraduate physiotherapy students in their fourth semester, who were enrolled in a neuroscience course, was recruited for the present study. There were no exclusion criteria.

Sample size
The study included all fourth-semester students (n = 244), with an expected dropout rate of 5%. A formal sample size calculation was not performed, as the objective was to include all eligible students.

The facilitators
Nine volunteer physiologists with expertise in applied physiology served as facilitators for the CBL sessions. Most had prior experience with interactive teaching methodologies, including problem-based learning (PBL) and small-group discussions.

Facilitators training
To ensure effective CBL implementation, the facilitators underwent a comprehensive orientation and training program tailored for teaching applied physiology through CBL. The training program spanned two weeks and covered the following key dimensions:

Theoretical training and orientation sessions
These sessions included an in-depth understanding of the CBL methodology and focused on clarifying CBL principles in applied physiology. Facilitators received guidance on their roles and responsibilities during CBL sessions, including facilitating knowledge delivery through case scenarios and discussions, and were encouraged to act as guides rather than teachers. The facilitators were trained to actively monitor group dynamics, check their progress, supervise their discussions, encourage the active participation of students, and provide guidance as needed. Facilitators were also trained in the critical analysis of clinical cases, formulating relevant questions that trigger critical thinking, and guiding discussions to maximize students' learning.

Practical training
Hands-on workshops simulating CBL sessions were arranged, where facilitators practiced moderating discussions, monitoring group dynamics, and guiding students through case analyses. Additionally, facilitators received training in formulating relevant questions that trigger critical thinking and guide discussions.

Assessment of preparedness
Facilitators participated in mock sessions, where their skills in engaging students, encouraging participation, and managing discussions were evaluated and refined.

Ongoing professional support for facilitators
Throughout the semester, facilitators benefited from a comprehensive support system that included regular mentoring sessions and weekly debriefing meetings to address emerging challenges. A robust peer support network was established among facilitators, fostering collaborative problem-solving and shared learning experiences. Additionally, facilitators had continuous access to curated educational resources and materials to enhance their teaching effectiveness and maintain high-quality CBL implementation.
Facilitator selection process
All facilitators were selected from the physiology department faculty members who met the following criteria:
▪ Expertise in applied physiology: a minimum of 5 years of teaching experience in applied physiology
▪ Prior experience with interactive teaching methods, including PBL or case-based teaching methods
▪ Active involvement in physiotherapy education
▪ Demonstrated interest in innovative teaching methodologies

Expert-led comprehensive facilitator training program
The intensive CBL facilitator training program was conducted by multidisciplinary education experts as follows:
▪ A senior professor with more than fifteen years of CBL implementation experience
▪ An educational expert specializing in medical education methodologies
▪ A curriculum development specialist

Effective CBL session

Aligning goals: building the learning objectives of an effective CBL session
The learning objectives for each CBL session were collaboratively developed by a committee of physiologists and neurologists, together with the facilitators, based on the overall course objectives and the intended learning outcomes (ILOs) of the neuroscience module. To ensure that students gained both theoretical knowledge and practical skills, the ILOs of each CBL session were aligned with the course learning outcomes (CLOs), including knowledge- and skill-based objectives related to neurophysiology. This alignment ensured that the cases addressed both theoretical understanding and practical application to clinical scenarios. These objectives were clearly defined for the facilitators after sharing the case study scenario and all relevant patient information for each CBL. The learning objectives were shared with students at the beginning of each CBL session to guide discussions and ensure alignment with the session's goals. During the discussions, students were encouraged to ask questions and explore related topics, fostering an environment of active learning and critical thinking.

CBL as an integral part of the physiology course
CBL sessions were integrated as a mandatory component of the applied physiology course in the 4th semester of the undergraduate physiotherapy program. This course is part of the foundational phase of the program, which spans the first four semesters and focuses on basic and applied sciences. The entire undergraduate physiotherapy program consists of 10 semesters over five academic years, with an additional internship year comprising 36 h per week for 12 months. An overview of case-related physiology topics was delivered to students in traditional interactive didactic lectures, which were used to deliver foundational knowledge in applied physiology. The CBL sessions were then implemented during the laboratory component of the course, allowing students to apply theoretical concepts from lectures to clinical case scenarios. These CBL sessions were conducted weekly and designed to provide hands-on, case-based problem-solving experiences aligned with the topics covered in the neuroscience course. Furthermore, the evaluation of students' understanding of applied physiology, including their participation in CBL sessions, was carried out through Objective Structured Practical Examinations (OSPE). The interactive didactic lectures and practical labs were retained in the 4th semester and were consistent with the TTMs used in the first semester.
The frequency of interactive lectures was not reduced; instead, CBL was integrated into the weekly lab sessions, which were already part of the curriculum.

CBL formatting
Nine cases were selected through an online search to match the specific objectives and contents of the neuroscience physiology course. Each case was thoroughly reviewed, focusing on relevance, comprehensiveness, and the potential to stimulate critical thinking and discussion among students. The ILOs of each CBL case were explicitly designed to address CLOs, including knowledge- and skill-based objectives (Table ).

The students' preparation for CBL sessions
To ensure student preparedness, all case-related materials and recommended reading resources were made available on Microsoft Teams at the beginning of the semester. Students were expected to review these materials, engage in pre-reading, and come to the session ready for active participation and meaningful discussion. This preparatory phase was vital for fostering effective group analysis and ensuring all students had a foundational understanding of the case content.

Optimizing CBL in physiology labs: small-group organization and facilitators' guidance
In the physiology labs, the total number of students was organized into small groups, each comprising 25 to 30 students. These groups were further divided into approximately five smaller batches, each with six students. This setup allowed for increased interaction and personalized attention. Nine trained physiologists facilitated the CBL sessions, rotating among the different lab groups to provide students with a variety of expert insights and to ensure consistent guidance and support during the CBL activities. Each facilitator remained with a small group for the entire duration of the lab session, actively supervising and moderating discussions. By rotating across various lab groups at different time slots throughout the week, facilitators ensured that all students received consistent, high-quality guidance. This rotational approach effectively accommodated the large number of participants while preserving the small-group dynamic critical for the success of CBL implementation.

CBL implementation: the Kaddoura approach in action
The facilitator guided the learning process during the CBL sessions using the Kaddoura approach (Supplementary Figure 1). The Kaddoura method includes five sequential steps: case presentation, presentation of triggering questions by the facilitators, creation of a comfortable and safe atmosphere for learners, active participation of all students in discussions, and finally case summarization by the facilitator. Each session started with an interactive introduction to the physiology topic. All students were encouraged to participate actively in the discussions.

Tools of data collection: the data were collected through the following tools

The students' perception questionnaire
To better understand the students' acceptance and perception of the newly implemented CBL approach, they were encouraged to complete a web-based questionnaire using Google Forms. It was delivered at the end of the fourth semester to the physiotherapy students enrolled in the neuroscience course. The questionnaire was administered in English.
The questionnaire consisted of two sections: the first section included demographic information, while the second section was specially designed to evaluate students' perception of the CBL approach, as follows:

Socio-demographic (SD) section of the questionnaire
An SD part was prepared to ask about the participants' gender and age.

The perception (P) section of the questionnaire
This part was composed of 20 items and aimed to gather insights on perception levels and experiences with the combined CBL and TTMs delivered during the fourth semester, using a five-point Likert scale (strongly agree, agree, neither agree nor disagree, disagree, and strongly disagree). The face validity index (FVI) of the P questionnaire was evaluated to assess both the clarity and the comprehensibility of its items, achieving an overall score of 0.95. This was performed by 20 users. The students were asked to rate each item with a score from 1 (not clear, not understandable) to 4 (clear and understandable). Scores of 3 and 4 were categorized as 1 (clear and understandable) and scores of 1 and 2 were categorized as 0 (not clear or not understandable). The FVI for each item was then computed, and the average was taken to obtain the scale's average FVI score. Moreover, the content validity index (CVI) and content validity ratio (CVR) of the P questionnaire were estimated at 0.81 and 0.80, respectively, both confirmed as being above 0.60. Cronbach's alpha of the P questionnaire was 0.9. Finally, a question was included to rate the overall perception level of the CBL approach on a scale of 0 to 10. The students were asked to rate each instructional approach on a five-point Likert-type scale regarding the compatibility of each statement with the combined CBL and TTMs approach. A total score for each participant was obtained from the P questionnaire and divided by 20 in order to calculate a mean perception score out of 5 to be applied in the statistical analysis.

Facilitators' feedback questionnaire
The facilitators' feedback questionnaire had both open-ended and structured questions. The open-ended questions were developed to support better CBL implementation in the future by collecting facilitators' insights on challenges and recommendations for improvement, while the closed-ended questions were based on nine parameters and scored using a five-point Likert scale ranging from strongly disagree to strongly agree. The facilitators' feedback questionnaire was validated through a pilot test with a small group of facilitators and experts in the field, achieving an FVI score of 0.92, indicating high face validity. Additionally, the CVI and CVR were estimated at 0.85 and 0.79, respectively, indicating good content validity of the feedback questionnaire.
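As a concrete illustration of the scoring rules described above (clarity ratings on a 1 to 4 scale dichotomised for the FVI, and 20 Likert items averaged into a mean perception score out of 5), the following minimal sketch runs on entirely hypothetical ratings; it is not the study's analysis code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical clarity ratings: 20 raters x 20 items, each on the 1-4 scale described above.
clarity = rng.integers(1, 5, size=(20, 20))
dichotomised = (clarity >= 3).astype(int)     # 3 or 4 -> 1 (clear), 1 or 2 -> 0
item_fvi = dichotomised.mean(axis=0)          # proportion of raters judging each item clear
scale_fvi = item_fvi.mean()                   # average FVI across the 20 items
print(f"Scale-level FVI: {scale_fvi:.2f}")

# Hypothetical Likert responses: 244 students x 20 perception items scored 5 (strongly agree) to 1.
likert = rng.integers(1, 6, size=(244, 20))
mean_perception = likert.sum(axis=1) / 20     # mean perception score out of 5 per student
print(f"Median perception score: {np.median(mean_perception):.2f}")
```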
Indicators for academic achievement
To compare academic performance between TTMs alone and the combined case-based and traditional education in applied physiology, examination scores obtained by the physiotherapy students at the end of the first and fourth semesters were used as benchmarks.

Statistical methods
All data were tabulated and analyzed with the Statistical Package for the Social Sciences, IBM SPSS Statistics for Windows, version 26 (IBM Corp., Armonk, N.Y., USA). The total perception score was calculated by assigning 5 to strongly agree, 4 to agree, 3 to neutral, 2 to disagree, and 1 to strongly disagree responses. A perception score of 3 or above on the 5-point Likert scale was considered positive, while scores below 3 were considered negative. Categorical data were represented as frequencies and percentages. Possible associations between categorical variables were analyzed using Pearson's Chi-Square test or Fisher's exact test, as appropriate. Continuous variables were reported as median with interquartile range (IQR: 25th–75th percentiles) and were compared using the Mann–Whitney U test because they were not normally distributed. Furthermore, the Wilcoxon signed-rank test was applied to compare the students' grades before and after the CBL intervention. Open-ended responses from the facilitators' feedback questionnaire were analyzed using thematic analysis. Data familiarization was followed by coding to identify recurring patterns and unique insights. Thematic analysis was conducted to group related codes into broader themes, which were categorized under relevant domains. A p-value of < 0.05 was considered statistically significant.
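To make the analysis plan concrete, here is a minimal sketch of the two non-parametric comparisons named above, run on hypothetical data rather than the study's actual grades; SciPy stands in for SPSS purely for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical paired exam scores (percentages) for the same 244 students:
# end of semester 1 (TTMs alone) vs end of semester 4 (CBL combined with TTMs).
sem1 = rng.normal(70, 10, 244).clip(0, 100)
sem4 = (sem1 + rng.normal(5, 8, 244)).clip(0, 100)
w_stat, w_p = stats.wilcoxon(sem1, sem4)          # paired, non-parametric comparison
print(f"Wilcoxon signed-rank: W={w_stat:.1f}, p={w_p:.4f}")

# Hypothetical perception scores for two independent groups (e.g., by gender),
# compared with the Mann-Whitney U test as described in the statistical methods.
group_a = rng.normal(4.0, 0.5, 120).clip(1, 5)
group_b = rng.normal(3.9, 0.5, 124).clip(1, 5)
u_stat, u_p = stats.mannwhitneyu(group_a, group_b)
print(f"Mann-Whitney U: U={u_stat:.1f}, p={u_p:.4f}")
```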
Additionally, facilitators have received training in formulating relevant questions that trigger critical thinking and guide discussions. Assessment of preparedness Facilitators participated in mock sessions, where their skills in engaging students, encouraging participation, and managing discussions were evaluated and refined. Ongoing professional support for facilitators Throughout the semester, facilitators benefited from a comprehensive support system that included regular mentoring sessions and weekly debriefing meetings to address emerging challenges. A robust peer support network was established among facilitators, fostering collaborative problem-solving and shared learning experiences. Additionally, facilitators had continuous access to curated educational resources and materials to enhance their teaching effectiveness and maintain high-quality CBL implementation. Facilitator selection process All facilitators were selected from the department of physiology faculty members who met the following criteria: ▪ Expertise in applied physiology: Minimum of 5 years of teaching experience in applied physiology ▪ Prior experience with interactive teaching methods including PBL or case-based teaching methods ▪ Active involvement in physiotherapy education ▪ Demonstrated interest in innovative teaching methodologies Expert-led comprehensive facilitator training program The intensive CBL facilitator training program was conducted by multidisciplinary education experts as follow: ▪ A senior professor with more than fifteen years of CBL implementation experience ▪ An educational expert specializing in medical education methodologies ▪ A curriculum development specialist To ensure effective CBL implementation, the facilitators underwent a comprehensive orientation and training program selectively tailored for applied physiology teaching through CBL. The training program spanned over two weeks and covered the following key dimensions: Theoretical training and orientation sessions Included an in-depth understanding of the CBL methodology. The training focused on clarification of the CBL principles in applied physiology. They received guidance on their roles and responsibilities through CBL sessions, including facilitating knowledge delivery through case scenarios and discussions. They were encouraged to act as guides rather than teachers. The facilitators were trained to actively monitor group dynamics, check their progress, supervise their discussions, encourage the active participation of students, and provide guidance as needed. Facilitators were trained for the critical analysis of clinical cases, formulating relevant questions that trigger critical thinking, and how to guide discussions to maximize students’ learning. Practical training Hands-on workshops simulating CBL sessions were arranged, where facilitators practiced moderating discussions, monitoring group dynamics, and guiding students through case analyses. Additionally, facilitators have received training in formulating relevant questions that trigger critical thinking and guide discussions. Assessment of preparedness Facilitators participated in mock sessions, where their skills in engaging students, encouraging participation, and managing discussions were evaluated and refined. Ongoing professional support for facilitators Throughout the semester, facilitators benefited from a comprehensive support system that included regular mentoring sessions and weekly debriefing meetings to address emerging challenges. 
A robust peer support network was established among facilitators, fostering collaborative problem-solving and shared learning experiences. Additionally, facilitators had continuous access to curated educational resources and materials to enhance their teaching effectiveness and maintain high-quality CBL implementation. Facilitator selection process All facilitators were selected from the department of physiology faculty members who met the following criteria: ▪ Expertise in applied physiology: Minimum of 5 years of teaching experience in applied physiology ▪ Prior experience with interactive teaching methods including PBL or case-based teaching methods ▪ Active involvement in physiotherapy education ▪ Demonstrated interest in innovative teaching methodologies Expert-led comprehensive facilitator training program The intensive CBL facilitator training program was conducted by multidisciplinary education experts as follow: ▪ A senior professor with more than fifteen years of CBL implementation experience ▪ An educational expert specializing in medical education methodologies ▪ A curriculum development specialist Included an in-depth understanding of the CBL methodology. The training focused on clarification of the CBL principles in applied physiology. They received guidance on their roles and responsibilities through CBL sessions, including facilitating knowledge delivery through case scenarios and discussions. They were encouraged to act as guides rather than teachers. The facilitators were trained to actively monitor group dynamics, check their progress, supervise their discussions, encourage the active participation of students, and provide guidance as needed. Facilitators were trained for the critical analysis of clinical cases, formulating relevant questions that trigger critical thinking, and how to guide discussions to maximize students’ learning. Hands-on workshops simulating CBL sessions were arranged, where facilitators practiced moderating discussions, monitoring group dynamics, and guiding students through case analyses. Additionally, facilitators have received training in formulating relevant questions that trigger critical thinking and guide discussions. Facilitators participated in mock sessions, where their skills in engaging students, encouraging participation, and managing discussions were evaluated and refined. Throughout the semester, facilitators benefited from a comprehensive support system that included regular mentoring sessions and weekly debriefing meetings to address emerging challenges. A robust peer support network was established among facilitators, fostering collaborative problem-solving and shared learning experiences. Additionally, facilitators had continuous access to curated educational resources and materials to enhance their teaching effectiveness and maintain high-quality CBL implementation. 
All facilitators were selected from the department of physiology faculty members who met the following criteria: ▪ Expertise in applied physiology: Minimum of 5 years of teaching experience in applied physiology ▪ Prior experience with interactive teaching methods including PBL or case-based teaching methods ▪ Active involvement in physiotherapy education ▪ Demonstrated interest in innovative teaching methodologies The intensive CBL facilitator training program was conducted by multidisciplinary education experts as follow: ▪ A senior professor with more than fifteen years of CBL implementation experience ▪ An educational expert specializing in medical education methodologies ▪ A curriculum development specialist Aligning goals: building the learning objectives of effective CBL session The learning objectives for each CBL session were collaboratively developed by a committee of physiologists and neurologists, together with the facilitators, based on the overall course objectives and the intended outcomes (ILOs) of the neuroscience module. To ensure that students gained both theoretical knowledge and practical skills, ILOs of each CBL session were aligned with the course learning outcomes (CLOs), including knowledge and skill-based objectives related to neurophysiology. This alignment ensured that the cases addressed both theoretical understanding and practical application to clinical scenarios. These objectives were clearly defined to the facilitators after sharing the case study scenario and all relevant patient information in each CBL. The learning objectives were shared with students at the beginning of each CBL session to guide discussions and ensure alignment with the session’s goals. During the discussions, students were encouraged to ask questions and explore related topics, fostering an environment of active learning and critical thinking . CBL as an integral part of physiology course CBL sessions were integrated as a mandatory component of the applied physiology course in the 4 th semester of the undergraduate physiotherapy program. This course is part of the foundational phase of the program, which spans the first four semesters and focuses on basic and applied sciences. The entire undergraduate physiotherapy program consists of 10 semesters over five academic years, with an additional internship year comprising 36 h per week for 12 months. An overview of case-related physiology topics was delivered to students in their traditional interactive dictated lectures. These traditional lectures were used to deliver foundational knowledge in applied physiology. The CBL sessions were then implemented during the laboratory component of the course, allowing students to apply theoretical concepts from lectures to clinical case scenarios. These CBL sessions were conducted weekly and designed to provide hands-on, case-based problem-solving experiences aligned with the topics covered in the neuroscience course. Furthermore, the evaluation of students’ understanding of applied physiology, including their participation in CBL sessions, is carried out through Objective Structured Practical Examinations (OSPE). The interactive didactic lectures and practical labs were retained in the 4 th semester and were consistent with TTMs in the first semester. The frequency of interactive lectures was not reduced; instead, CBL was integrated into the weekly lab sessions, which were already part of the curriculum. 
CBL formatting Nine cases were selected through online search to match the specific objectives and contents of the neuroscience physiology course. Each case was thoroughly reviewed, focusing on relevance, comprehensiveness, and the potential to stimulate critical thinking and discussion among students. ILOs of each CBL case were explicitly designed to address CLOs, including knowledge and skill-based objectives (Table ). The students’ preparation for CBL sessions To ensure student preparedness, all cases- related materials and recommended reading resources were made available on Microsoft Teams at the beginning of the semester. Students were expected to review these materials, engage in pre-reading, and come to the session ready for active participation and meaningful discussion. This preparatory phase was vital for fostering effective group analysis and ensuring all students had a foundational understanding of the case content. Optimizing CBL in physiology labs: small groups’ organization and facilitators’ guidance In the physiology labs, the total number of students was organized into small groups, each comprising 25 to 30 students. These groups were further divided into approximately five smaller batches, each with six students. This setup allowed for increased interaction and personalized attention. Nine trained physiologists facilitated the CBL sessions, rotating among the different lab groups to provide students with a variety of expert insights and to ensure consistent guidance and support during the CBL activities. Each facilitator remained with a small group for the entire duration of the lab session, actively supervising and moderating discussions. By rotating across various lab groups at different time slots throughout the week, facilitators ensured that all students received consistent, high-quality guidance. This rotational approach effectively accommodated the large number of participants while preserving the small-group dynamic critical for the success of CBL implementation. CBL implementation: the Kaddoura approach in action The facilitator guided the learning process during the CBL sessions using the Kaddoura approach (Supplementary Figure 1). The Kaddoura method includes five sequential steps: case presentation, presentation of triggering questions by the facilitators, creating a comfortable and safe atmosphere for learners, active participation of all students in discussions, and finally case summarization by the facilitator. Each session started with an interactive introduction to the physiology topic. All students were encouraged to participate actively in the discussions . The learning objectives for each CBL session were collaboratively developed by a committee of physiologists and neurologists, together with the facilitators, based on the overall course objectives and the intended outcomes (ILOs) of the neuroscience module. To ensure that students gained both theoretical knowledge and practical skills, ILOs of each CBL session were aligned with the course learning outcomes (CLOs), including knowledge and skill-based objectives related to neurophysiology. This alignment ensured that the cases addressed both theoretical understanding and practical application to clinical scenarios. These objectives were clearly defined to the facilitators after sharing the case study scenario and all relevant patient information in each CBL. 
The students' perception questionnaire To better understand the students' acceptance and perception of the newly implemented CBL approach, they were encouraged to complete a web-based questionnaire using Google Forms. It was delivered at the end of the fourth semester to the physiotherapy students enrolled in the neuroscience course. The questionnaire was administered in English. The questionnaire consisted of two sections: the first section included demographic information, while the second section was specially designed to evaluate students' perception of the CBL approach, as follows: Socio-demographic (SD) section of the questionnaire An SD part was prepared to ask about the participants' gender and age. The perception (P) section of the questionnaire This part was composed of 20 items and aimed to gather insights on perception levels and experiences with the combined CBL and TTMs approach conducted during the fourth semester, using a five-point Likert scale (strongly agree, agree, neither agree nor disagree, disagree, and strongly disagree). The face validity index (FVI) of the P questionnaire was evaluated to assess both the clarity and the comprehensibility of its items, achieving an overall score of 0.95. This was performed by 20 users. The students were asked to rate each item with a score from 1 (not clear, not understandable) to 4 (clear and understandable). Scores of 3 and 4 were categorized as 1 (clear and understandable), and scores of 1 and 2 were categorized as 0 (not clear or not understandable). The FVI for each item was then computed, and the average was taken to obtain the scale's average FVI score . Moreover, the content validity index (CVI) and content validity ratio (CVR) of the P questionnaire were estimated at 0.81 and 0.80, respectively. Both were confirmed as being above 0.60. Cronbach's alpha of the P questionnaire was 0.9 . Finally, a question was included to rate the overall perception level of the CBL approach on a scale of 0 to 10 . The students were asked to rate each statement on a five-point Likert-type scale regarding its compatibility with the combined CBL and TTMs approach. A total score for each participant was obtained from the P questionnaire and divided by 20 in order to calculate a mean perception score out of 5 to be used in the statistical analysis .
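To make the scoring explicit, the short sketch below shows how the item-level FVI and the mean perception score described above could be computed. It is an illustrative example only: the data structures, item names, and rating values are assumptions, not study data or the authors' code.

```python
# Illustrative sketch of the FVI and perception-score calculations described
# above (not the authors' actual analysis code); all values are placeholders.

# Clarity ratings from the 20 reviewers for each questionnaire item,
# on the 1-4 scale described in the text.
clarity_ratings = {
    "item_01": [4, 3, 4, 4, 2, 4, 3, 4, 4, 4, 3, 4, 4, 4, 3, 4, 4, 4, 4, 3],
    "item_02": [4, 4, 3, 4, 4, 4, 4, 3, 4, 4, 4, 4, 3, 4, 4, 4, 4, 4, 3, 4],
    # ... remaining items ...
}

def item_fvi(ratings):
    """Proportion of raters scoring the item 3 or 4 (i.e., clear and understandable)."""
    return sum(1 for r in ratings if r >= 3) / len(ratings)

# Scale-level FVI is the average of the item-level FVIs.
scale_fvi = sum(item_fvi(r) for r in clarity_ratings.values()) / len(clarity_ratings)

# Mean perception score per student: 20 Likert items, 5 = strongly agree ... 1 = strongly disagree.
likert = {"strongly agree": 5, "agree": 4, "neither agree nor disagree": 3,
          "disagree": 2, "strongly disagree": 1}

def mean_perception(responses):
    """Sum of the 20 item scores divided by 20, giving a mean score out of 5."""
    return sum(likert[r] for r in responses) / 20

print(f"Average scale FVI: {scale_fvi:.2f}")
```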
Facilitators' feedback questionnaire The facilitators' feedback questionnaire had both open-ended and structured questions. The open-ended questions were developed to collect the facilitators' insights on challenges and recommendations for improving CBL implementation, in order to support better CBL implementation in the future. The close-ended questions were based on nine parameters and scored using a five-point Likert scale ranging from strongly disagree to strongly agree . The facilitators' feedback questionnaire was validated through a pilot test with a small group of facilitators and experts in the field, achieving an FVI score of 0.92, indicating high face validity. Additionally, the CVI and CVR were estimated at 0.85 and 0.79, respectively, indicating good content validity of the feedback questionnaire. Indicators for academic achievement To compare academic performance between TTMs alone and the combined case-based and traditional education in applied physiology, examination scores obtained by the physiotherapy students at the end of the first and fourth semesters were used as benchmarks.
All data were tabulated and analyzed using IBM SPSS Statistics for Windows, version 26 (IBM Corp., Armonk, NY, USA). The total perception score was calculated by giving 5 to strongly agree, 4 to agree, 3 to neutral, 2 to disagree, and 1 to strongly disagree responses. A perception score of 3 or above on the 5-point Likert scale was considered positive, while scores below 3 were considered negative. Categorical data were represented as frequencies and percentages. Possible associations between categorical variables were analyzed using Pearson's Chi-square test or Fisher's exact test, as appropriate. Continuous variables were reported as medians with interquartile ranges (IQR: 25th–75th percentiles) and were compared using the Mann–Whitney U test because they were not normally distributed. Furthermore, the Wilcoxon signed-rank test was applied to compare students' grades before and after the CBL intervention.
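As an illustration only (the study's analysis was run in SPSS), the non-parametric comparisons described above could be reproduced in Python roughly as follows; the numbers are placeholders, not study data.

```python
# Rough Python equivalent of the non-parametric tests described above
# (the study itself used SPSS); all values below are placeholders.
from scipy import stats

# Mean perception scores (out of 5) split by gender -- hypothetical values.
female_scores = [4.9, 5.0, 4.7, 4.8, 5.0, 4.6]
male_scores = [4.5, 4.8, 4.4, 4.9, 4.3]

# Mann-Whitney U test for two independent, non-normally distributed groups.
u_stat, p_gender = stats.mannwhitneyu(female_scores, male_scores, alternative="two-sided")

# Exam grades (out of 10) for the same students under TTMs alone (semester 1)
# and under combined CBL + TTMs (semester 4) -- hypothetical paired values.
grades_ttm = [7.0, 8.5, 6.5, 9.0, 7.5]
grades_cbl = [9.5, 10.0, 8.5, 10.0, 9.0]

# Wilcoxon signed-rank test for the paired before/after comparison.
w_stat, p_grades = stats.wilcoxon(grades_ttm, grades_cbl)

print(f"Gender comparison: U = {u_stat:.1f}, p = {p_gender:.3f}")
print(f"Grade comparison: W = {w_stat:.1f}, p = {p_grades:.3f}")
```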
Open-ended responses from the facilitators' feedback questionnaire were analyzed using thematic analysis. Data familiarization was followed by coding to identify recurring patterns and unique insights. Related codes were then grouped into broader themes, which were categorized under relevant domains. A p-value of < 0.05 was considered statistically significant. Demographic data Two hundred thirty-eight out of 244 undergraduate physiotherapy students completed the survey following the combined CBL and TTMs approach during the neuroscience course in the fourth semester. Of the participants, 72.3% were female and 27.7% were male; 90.8% were 18–20 years old, while 9.2% were 21–23 years old (Figure S2). Students' perception level of the different features of hybrid learning (combined CBL with TTMs) (Table ) From the perspective of the enrolled students, the most significant advantages of integrating CBL with TTMs as a hybrid teaching tool, compared to TTMs alone, were its ability to comprehensively cover course objectives and effectively evaluate students' knowledge (98.3%). Additionally, most participants (97.9%) believed that the CBL approach allowed for greater engagement through more questions, and 97.5% stated that CBL felt closer to real-life scenarios. A majority of students (97.1%) described CBL as a motivating, efficient, and engaging teaching tool in clinical practice that effectively summarizes content. Additionally, 96.6% of participants stated that CBL enhances self-confidence and reduces class monotony. Among the 238 undergraduate students, 229 highlighted that CBL significantly facilitates learning, promotes deep thinking, and improves retention of topics. Furthermore, 95.8% noted that CBL organizes information well and supports comprehension, while 95.4% believed it enhances visualization skills. Cooperation and participation were also cited as benefits by 93.7%, and 93.3% agreed that CBL is a more practical tool. Association between gender and perception level regarding individual features of incorporating CBL into the traditional learning methods (Table ) Although female students constituted the majority of the participants (72.3%) compared to males (27.7%), no significant gender-based differences were observed in perception levels regarding the various features of CBL, as indicated by each question ( P > 0.05 ). Overall perception level of the incorporation of CBL into the traditional teaching framework of applied physiology and its association with gender and age groups Overall, 232 students (97.5%) affirmed that the combined CBL with TTMs approach is superior to TTMs alone (Table ). Figure S3 illustrates the overall perception levels of the enrolled students after integrating CBL with TTMs in teaching applied physiology during the neuroscience course for undergraduate physiotherapy students. The perception scores ranged from 25.0 to 100.0, with a median of 99.0 and an IQR of 88.0–100.0. Table indicates a significantly higher perception level among female participants compared to males ( P < 0.05 ), with a median perception score of 100.0 for females compared to 96.5 for males. Similarly, the mean rank of the total perception score was 124.65 for females and 106.08 for males. In contrast, no significant differences ( P > 0.05 ) were observed in total perception scores across different age groups. The median and mean rank values were 99.0 and 121.52, respectively, for the 18–20 age group, compared to 96.5 and 99.68 for the 21–23 age group.
Students' academic achievement Integrating CBL with TTMs in teaching applied physiology was apparently associated with better academic achievement. Although the maximum grade was the same for the two methods of teaching (10), the minimum grade was 2 out of 10 when applied physiology was taught through TTMs in the first semester, compared to a minimum grade of 7.5 out of 10.0 with the integration of CBL with TTMs in the fourth semester. Similarly, the median grade was 8.5 with TTMs alone compared to 10 when CBL was combined with traditional methods. The better achievement of the students was also apparent at the 25th and 75th percentiles, where the hybrid approach integrating CBL with TTMs scored 10.0 at both percentiles, compared to 7.0 and 9.5, respectively, with TTMs (Table ). Facilitators' feedback CBL was well received by the facilitators, with 85% agreeing that combining CBL with TTMs enhances students' communication skills, fosters a better and safer relationship between facilitators and students, and helps in understanding group dynamics. Additionally, 80% of the facilitators supported incorporating CBL into the timetable as a regular learning tool along with TTMs (Figure S4). A substantial proportion (80%) of the facilitators believed that integrating CBL with TTMs is a better teaching strategy, as it enhances students' problem-solving and self-directed learning abilities while facilitating the integration of knowledge across different subjects. Additionally, 80% of facilitators emphasized that this approach supports better knowledge retention among students (Figure S4). The majority also agreed that combining CBL with TTMs represents a concerted effort to bridge existing gaps between physiologists and students in a clinical context. While most facilitators welcomed CBL as a complementary tool to TTMs, aiding in the achievement of objectives, some challenges were identified by the facilitators and are summarized in Table .
Despite the benefits of TTMs as a learning approach in undergraduate medical teaching, they have been questioned for doing little to develop reasoning skills and critical thinking , which are key elements of any physiotherapy career.
CBL offers a dynamic and interactive learning strategy that integrates both guided and structured learning to cultivate these crucial abilities . In the current study, CBL was applied concomitantly with TTMs in applied physiology for undergraduate physiotherapy students as an exploratory step; this combination encouraged active student learning and yielded a more productive outcome, reflecting the potential benefits of CBL in fostering engagement and comprehension. By discussing different real clinical case scenarios related to the topics taught in the neuroscience module, physiotherapy students evaluated their own perception level while their academic achievement was assessed through their grades. The higher overall perception level observed with the integration of CBL within TTMs in the current study underscores the importance of CBL in preparing students for the clinical demands of their medical careers while fostering essential skills critical for their performance and competency . The analysis of student perceptions demonstrates strong support for integrating CBL with TTMs across all 20 measured items. Remarkably, 97.5% of students expressed a positive perception of the combined approach compared to TTMs alone, with a median overall perception score of 99.0 out of 100. Students particularly appreciated CBL's ability to facilitate deeper thinking and its effectiveness in reducing class monotony. These findings suggest that CBL, when integrated with TTMs, may be particularly beneficial for maintaining student interest and motivation throughout the course. The strongly positive perception of CBL's ability to organize information and facilitate assessment indicates that this method may also improve students' learning outcomes. Furthermore, the perception that the combined learning approach increases students' self-confidence aligns with previous research suggesting that active learning approaches can enhance student confidence . Interestingly, students perceived the combination of CBL and TTMs as more efficient for clinical practice. This suggests that integrating CBL with TTMs may offer benefits beyond academic settings, potentially enhancing clinical decision-making skills . The positive perceptions observed in this study can be attributed to the student-centered nature of CBL, which fosters active participation, critical thinking, and real-world applicability . These features address many limitations of TTMs, such as passive learning and lack of engagement . Additionally, the structured design of CBL cases, which align closely with course objectives, ensures clarity and focus in learning . A similarly high student perception of CBL in teaching physiology has been reported previously, where students clearly enjoyed their experience with CBL, perceived it as valuable, and rated the overall CBL program as good to excellent on a five-point Likert scale . The current findings are consistent with the preliminary results reported by Brown et al. (2012) , who implemented a CBL approach for undergraduate health sciences students at the University of Ottawa. In their pilot project, 144 students participated and achieved an average score of 4.13 out of 5 on a quiz designed to evaluate their mastery of the concepts covered in the CBL sessions. Furthermore, the students rated the overall learning benefit of the program as 3.82 out of 4 on a nominal scale, highlighting its perceived educational value .
These results reinforce the potential of CBL to significantly enhance the learning experience for undergraduate students, especially when combined with TTMs. The positive findings from the current study are further bolstered by the recent work of Saini et al. (2024), who highlighted CBL as a student-centric, self-directed learning approach that fosters collaboration and critical thinking among 134 final-year physiotherapy, medical, and nursing students . Using pre- and post-test assessments, they reported significantly higher post-test scores following the CBL approach, indicating higher knowledge acquisition ( p < 0.05 ) . Students also reported enhanced learning experiences, highlighting the role of CBL in consolidating and integrating knowledge and in applying learned concepts to real-world scenarios. To investigate potential gender-based differences in the effectiveness of CBL integration with TTMs, we assessed the association between gender and perceptions. Our findings showed no significant gender-based differences in perception levels regarding the various features of CBL (Table ). These results are consistent with previous studies that also reported no significant gender differences in the perception of active learning methods . In contrast, other educational research has highlighted the presence of gender-based differences in perceptions across broader educational contexts . CBL emphasizes the active role of students in creating their own knowledge (discovery learning) while engaging with the designed clinical cases and building their own understanding of medical procedures and concepts . It has been stated that CBL provides better opportunities for students to formulate diagnoses and delineate appropriate management solutions, as well as to relate the possible underlying mechanisms to the proposed diagnosis and treatment . CBL was reported by students to enhance their motivation and interest in learning, based on their experiences during the course. Interestingly, CBL-related student-centered features, such as self-confidence, learning, and critical thinking, were all rated higher with CBL (Table ). These findings support the idea that these skills could be improved through CBL. Additionally, the implementation of CBL as a complement to TTMs in applied physiology was linked to greater cooperation between students and a better teacher-student relationship, creating a trusting and collaborative classroom culture. Prior lines of evidence pointed out that CBL encouraged students to share their knowledge during discussion and to resolve the clinical cases . Indeed, inter-peer collaboration and interaction are considered fundamental skills required to work efficiently in multidisciplinary clinical teams. The results displayed in this study are consistent with prior findings that scored CBL as an effective learning tool for developing critical thinking skills, allowing students to link what they have learned with real-world scenarios . The combined application of CBL with TTMs was associated with significantly improved academic performance (Table ). In alignment with our results, the application of CBL in teaching endocrine physiology was associated with enhanced student learning and better knowledge assimilation. The improved retention of knowledge with CBL could be attributed to the fact that students are required to study the same topic simultaneously across subjects and to integrate this knowledge when making decisions about the problem posed in the case scenario .
The facilitators believed that integrating CBL with TTMs would better prepare physiotherapy students for future clinical practice by challenging them with realistic clinical cases. Additionally, they anticipated that students would collaboratively apply their previously acquired theoretical knowledge to gradually make appropriate decisions and propose solutions to the assigned clinical scenarios while identifying the key relevant characteristics. The facilitators needed relatively little effort to provide detailed information about the clinical case while stimulating a gradual discussion among students. The positive feedback reported by the facilitators in the current study matched that previously recorded in CBL implementation studies . The facilitators reported that students became more actively engaged and collaborative during the discussion of clinical cases in CBL sessions. This observation was supported by the thematic analysis of the open-ended questions. The qualitative feedback highlighted that students were more motivated to participate, ask questions, and share their insights during the discussion of clinical cases. This could be explained by the way CBL implementation depends on open questions, which makes students more confident and promotes their participation in the clinical discussion . This study brings several unique strengths to the field of physiotherapy education. First, it is the first to integrate CBL along with TTMs in applied physiology specifically for physiotherapy students, providing novel insights into the effectiveness of this combined approach in this context; it also focuses on neurophysiology, a critical area in physiotherapy, enhancing students' skills in managing clinical cases with a neurological basis. Second, the careful formulation of course objectives and the selection of real and previously published cases by a committee of physiologists and neurologists, in consultation with students, ensured that the CBL approach was tailored to the specific needs and interests of the target students. Additionally, the relatively large sample size ( n = 238) and high response rate (97.5%) of the student survey provide robust evidence for the effectiveness of integrating CBL with TTMs. The structured weekly implementation in lab sessions with OSPE-based assessment underscores a replicable and adaptable model, presenting a practical framework for other institutions aiming to enhance applied physiology education. The current study has some limitations, including the involvement of one cohort, convenience sampling, and the single-institution setting, which may limit the generalizability of the findings. Although this study primarily focused on short-term outcomes at the end of the semester, the strong academic performance observed provides a foundation for hypothesizing that the active learning environment created by CBL can support long-term knowledge retention. To address this limitation, future research could include longitudinal assessments, such as follow-up exams or evaluations of clinical performance, to measure the durability of knowledge retention. Additionally, incorporating spaced repetition or periodic review sessions into the CBL framework may further enhance long-term retention. Moreover, it would be beneficial to practice CBL over a longer period of time, in a wider range of labs equipped with appropriate infrastructure, and with the incorporation of a newly developed online learning environment.
Finally, this study is limited by the content differences between the first and fourth semesters, which might have influenced outcomes and potentially confounded the comparison of academic performance. Additionally, the absence of a control group may be considered another limitation. Future research should consider incorporating a control group to enable a more robust and direct comparison. While CBL holds immense potential to transform applied physiology education within a clinical context, its implementation could present unique challenges. These potential challenges of implementing CBL in applied physiology could be addressed through proposed strategies based on our experience, illustrated in Table . These suggestions could help to construct a better CBL framework and empower students to become active participants for better engagement in CBL. By addressing these challenges, students could acquire the broad range of critical thinking abilities and collaboration skills vital to adequately preparing them for the complexities of their future clinical practice. CBL can be broadly implemented as a more interactive teaching tool not only in applied physiology but also in other health sciences to overcome the limitations of TTMs and ensure better outcomes. While the current study focuses on short-term academic performance as an indicator of the effectiveness of concurrent TTMs and CBL, we recommend implementing follow-up assessments in subsequent semesters or at the end of the program to capture the long-term impact of CBL on knowledge retention. These assessments should test foundational concepts gained during physiology courses to evaluate the longevity of knowledge retention. Additionally, future studies could explore the perception and effectiveness of CBL when implemented independently of TTMs, enabling a clearer understanding of its isolated impact on student learning outcomes. Based on the positive results of the current study, we recommend integrating CBL into other courses within the physiotherapy program. This consistent application of active learning strategies can reinforce knowledge retention through greater engagement and motivation of students, leading to better encoding and retrieval of information and promoting better academic outcomes. Additionally, a longitudinal study should be conducted to track student performance and knowledge retention throughout the physiotherapy program; this would provide valuable insights into the long-term effects of CBL. Such studies would also be instrumental in assessing clinical competence and patient outcomes. The incorporation of CBL into the existing TTMs framework for teaching applied physiology was advantageous for physiotherapy students as a preliminary step for their entry into clinical practice and, ultimately, for successfully managing patients, as it encourages students to pursue self-directed learning and to develop both analytical and problem-solving skills. This hybrid teaching tool, with the integration of CBL in applied physiology, encourages active learning, helps physiotherapy students gain the requisite knowledge, and enhances their analytical and communication skills. The interactive and contextually relevant nature of CBL accommodates different learning styles, catering to visual, auditory, and kinesthetic learners alike. By engaging students in real-world scenarios, CBL fosters critical thinking, problem-solving, and clinical reasoning skills, all of which are essential for professional practice. Supplementary Material 1.
Determining the Suitability of MinION’s Direct RNA and DNA Amplicon Sequencing for Viral Subtype Identification
The MinION (Oxford Nanopore Technologies Ltd., Oxford, UK, hereafter ONT) has shown the potential to revolutionize diagnostic protocols and pathogen surveillance. This is thanks to the device's portability, low cost, and short sequencing time relative to other high-throughput sequencers . Initially utilized in high-profile human disease outbreaks (e.g., Ebola, ; Salmonella, ), the MinION was shown to support rapid in situ pathogen detection and disease surveillance. More recently, the MinION has been used for smaller scale outbreaks and to detect non-human pathogens . In a clear example of the device's full potential for routine diagnostics, harmful DNA viruses of Cassava were confirmed within a crop in less than three hours by MinION sequencing. Amazingly, all steps leading to this diagnosis could be successfully conducted in the field . Nevertheless, the error rate present in MinION reads remains significantly higher than that of other high-throughput sequencers (95% modal accuracy for MinION R9 reported by ONT in 2020), and this likely prevents its use in routine diagnostics. For the detection of biologically and commercially important RNA viruses, ONT's newly available direct RNA sequencing protocol could be a significant diagnostic advance by circumventing the need for error-prone and time-consuming cDNA synthesis and PCR amplification . In this Special Issue, direct RNA sequencing has been shown to be suitable for generating a near full-length consensus sequence of the agricultural pathogen Porcine Reproductive and Respiratory Syndrome Virus (PRRSV) and was shown to produce sufficiently accurate data to distinguish viral strains with 20 to 40% sequence divergence . However, direct RNA sequencing is still in its infancy, and further explorations of the error rate and capabilities of this new technology are necessary (though, see the recent work ). In particular, comparisons with more established DNA-based sequencing methods, such as amplicon sequencing, are needed, as are comparisons between more closely related viruses, where the high error rate of the MinION may overwhelm biological differences. Chestnut blight ( Cryphonectria parasitica ) is an invasive cosmopolitan fungus from Asia . Introduced into North America and Europe in the early 20th century, Chestnut blight has had devastating effects on the American Chestnut ( Castanea dentata ) and more moderate effects within Europe ( Castanea sativa ) . The reduced impact of the fungus in Europe is due to natural biocontrol from a fortuitously co-introduced RNA virus: Cryphonectria hypovirus 1 (CHV-1) . CHV-1 is a natural hyperparasite of the Chestnut blight fungus, belonging to a small clade of C. parasitica mycoviruses that includes the closely related CHV-2, CHV-3, and CHV-4 . These viruses fall within the expanding viral family, Hypoviridae , e.g., . Six different CHV-1 subtypes have been identified across Europe. All are represented in culture collections at the Swiss Federal Research Institute WSL (Birmensdorf, Switzerland). CHV-1 subtype I is the most widespread and is found along the Eastern Mediterranean . The remaining subtypes are present in more localized populations: subtypes F1 and F2 are found in France, E within Spain, D in Germany, and G within Georgia . While the six subtypes have varying impacts on their fungal hosts and differing biocontrol potential, e.g., , all subtypes mitigate infection severity and reduce host tree mortality .
CHV-1 monitoring currently uses fungal culture phenotyping and Sanger sequencing of short DNA amplicons, e.g., . While this approach is reliable, MinION’s direct RNA sequencing or sequencing of viral DNA could provide diagnostic information more quickly and in greater detail than currently available. In this study, we aimed to understand the advantages and disadvantages of ONT’s direct RNA and DNA amplicon sequencing for diagnostics using CHV-1 as a model system. We began our evaluation by confirming if we could identify CHV-1 presence from sequencing reads. While ONT direct RNA sequencing data have recently been shown to be sufficient to identify viral strains 20–40% divergent , CHV-1 subtypes within Europe range from 12% to only 2% divergent across the entire genome (see Results). Therefore, we also examined if it was possible to distinguish between the six closely related CHV-1 European subtypes at the sequencing read and consensus sequence level. Finally, we examined the reliability and repeatability of variant calls within each library because intra-host information is often considered an advantage of using high-throughput sequencing methods over traditional approaches . Throughout our analysis, we chose the most rapid and simple analytical tools to mirror what is likely to occur in diagnostic laboratories with time sensitive analysis and limited bioinformatics expertise. 2.1. Isolation of Double Stranded CHV-1 RNA Cryphonectria parasitica isolates infected with one of six CHV-1 focal subtypes were grown for five days at 25 °C in 100 mL of liquid medium (16 g D-Glc, 4 g yeast extract, FeCl3 1% 8 drops, Knop’s solution (10×) 80 mL, 800 mL H 2 O). Fungal mycelium was then harvested with a suction filter, lyophilized overnight, and frozen at −20 °C for storage. Before extraction, the dried frozen mycelium was ground in a swing mill (Retsch, MM400, Haan, Germany) using a 2 mm acid-cleaned metal bead. The replicative double stranded form of CHV-1 RNA was then extracted from 8–10 mg of the ground mycelium with the Double-RNA Viral dsRNA Extraction Mini Kit (iNtRON Biotechnology, Seongnam-Si, South Korea). Extractions followed the manufacturer’s protocol. To facilitate the lysis of the fungal cells, the mycelium powder was dissolved in a 1.5 mL tube and larger fragments were broken up by a micro pestle after the addition of the iNtRON pre-buffer. Final concentrations were measured with a Qubit RNA Assay Kit (v3.0 Thermo Fisher Scientific, Loughborough, UK). Presence of CHV-1 was then confirmed by gel electrophoresis and dsRNA stored at −20 °C. 2.2. Direct RNA Sequencing Library Construction A total of ~500 ng of RNA in a volume of 9 µL was used for each sample for the construction of the RNA sequencing library. To meet this requirement, a few samples were concentrated using an isopropanol precipitation with sodium chloride (described in Thermo Fisher’s RLM RACE protocol). The library was prepared following the direct RNA sequencing protocol from ONT for MinION (SQK-RNA002 ONT, Oxford, UK). Since CHV-1 is an RNA virus with a double strand replicative form, before beginning the protocol, all samples were denatured by heating for two minutes at 100 °C, then, snap cooled on ice for two minutes. A minor modification was made to the ONT protocol to help with RNA recovery during the bead purification steps: tubes were mixed gently by flicking only and freshly made 80% EtOH was used for bead washing. 
A positive control was added during library preparation (Yeast Enolase II 1.3 kilobase (kb) transcript). 2.3. DNA Amplicon Sequencing Library Construction The DNA amplicon sequencing library was prepared following ONT’s cDNA sequencing kit protocol (SQK-PCS108 ONT, Oxford, UK). Before cDNA synthesis, extracted RNA was denatured as above, for two minutes at 100 °C, then, snap cooled on ice for two minutes. After cooling, we immediately began first strand cDNA synthesis using the Maxima H Minus Reverse Transcriptase (Thermo Fisher Scientific, Loughborough, UK) and Oligo(dT) 12–18 primers (Thermo Fisher Scientific, Loughborough, UK). The standard MinION protocol for second strand cDNA synthesis was unsuccessful despite several attempts, and the single strand cDNA was used directly for PCR. The full CHV-1 genome length could not be amplified due to PCR limitations and primer design constraints. Instead, we amplified three and five kilobase (kb) amplicons targeting ORFA using the high fidelity PrimeSTAR GXL DNA Polymerase (Takara, Japan). The forward primer sequences used were identical (5′-ATC YGG AGA ARG TGA TTT GC-3′), but the reverse primers targeted different genome regions (3 kb amplicon 5′-AGA YGA YGC TGG TAA ATG AAG-3′; 5 kb amplicon 5′-YTT RTT GAT GTA GCT GCG AGG-3′). The two amplicons were used to provide a technical replicate library for each sample. In total, 30× PCR cycles were used for each primer pair. The two PCR reactions of each CHV-1 strain were then pooled and cleaned with Agencourt RNAClean XP beads (Beckman Coulter, Brea, CA, USA). MinION’s end-prep, barcoding, and adapter ligation were performed on the pooled products with the barcode expansion EXP-NBD103. Further modifications to ONT’s protocol were made during bead purification. Binding of DNA to the RNAClean XP beads was elongated to 10 min. Beads were also incubated at 37 °C for 15 min during the elution of the purified DNA to increase yield. The final amount of (pooled) dsDNA in the library was between 300–650 ng. It should be noted that the PCR primers failed to amplify one of our more divergent CHV-1 subtypes, G. This prevented us from sequencing this subtype with a DNA amplicon library. 2.4. Sequencing Conditions for the MinION Sequencing was performed in-house at WSL (Phytopathology, Birmensdorf, Switzerland). For RNA sequencing, each library was loaded onto a MinION R9.4 flow cell on a MinION Mk1B device (ONT) and sequenced for 8–12 h. Failed runs were identified and excluded at this point. DNA libraries were also sequenced using a MinION R9.4 flow cell on a MinION Mk1B device (ONT) for 18 h. The MinKNOW software v.2.0 (ONT) was adjusted according to ONT’s sequencing protocol with live basecalling disabled. The DNA amplicon library flow cell was used at least two times, with the 5 kb library run first, followed by the 3 kb library (voltage was adjusted according to ONT’s washing protocol). Basecalling was performed with Guppy (v2.3.5 ONT). 2.5. Direct RNA Sequencing Read Processing Direct RNA sequencing reads were filtered to remove reads belonging to the kit positive control with NanoLyse (v1.1.0 , reference accession number NC_001140.6). Reads were then quality filtered with NanoFilt (v2.2.0 ). Only reads above 2 kb and quality score ( q ) of q ≥ 8 were retained for downstream analysis. 2.6. DNA Sequencing Read Processing Amplicon reads were demultiplexed using qcat (v1.1.0 ONT), the entire read was searched for barcodes, and all barcodes trimmed out. 
Reads below a minimum q score of 10 were then filtered with NanoFilt for both libraries. For the 3 kb amplicon DNA library, reads shorter than 2000 basepairs (bp) and longer than 4 kb were also excluded. For the 5 kb amplicon DNA library, reads shorter than 4 kb and longer than 6 kb were excluded. 2.7. CHV-1 Subtype Identification from Filtered Reads Three different CHV-1 subtypes were successfully sequenced using direct RNA sequencing (runs for two subtypes failed to produce data and were excluded from our analysis) and five different subtypes with DNA amplicon sequencing . Though the subtype was known a priori, it is important to evaluate the diagnostic potential of this technology and confirm whether CHV-1 presence and subtype could be accurately inferred from sequencing reads alone. To test this, filtered RNA and DNA amplicon reads were submitted to a local BLAST search against a custom database containing CHV-1 subtype genome sequences (I, F1, F2, E, D, G), and full-length sequences of the closely related viruses CHV-2/3/4. The option max_target_seqs was set to simplify the BLAST output; this returns the first N ‘good’ hits in the BLAST catalogue and is sensitive to the order of sequences . The order of reference sequences in the database was: I, F1, F2, D, E, G, CHV-2, CHV-3, and CHV-4. The top BLAST hit for each sequencing read was then extracted and the proportion of correctly identified reads estimated. Because we expected a high error rate per read, we did not exclude hits with low BLAST alignment quality scores, though the mean read percentage identity for each hit was recorded. To gain an understanding of the sequence-level divergence of subtypes and help interpret the BLAST analysis, we also compared the available full genome reference sequences, as well as the corresponding 3 and 5 kb amplicon regions, in CLC workbench 7 (v10.1.1 Hilden, Germany, QIAGEN) using the pairwise comparison tool. 2.8. CHV-1 Consensus Generation Consensus sequences have been used for pathogen identification in diagnostic studies and we explored if they offered greater accuracy than read-based methods. To generate a consensus sequence from each library type, we began by assembling de novo with Wtdbg2 (v2.5 ). A de novo assembly approach was taken because it is assumption-free about the pathogen present and will not generate any bias towards the initial reference sequence used. Furthermore, it enables us to examine the accuracy of species and subtype identification when the virus or subtype present is unclear. For both the DNA amplicon and the direct RNA sequencing reads, Wtdbg2 was run on all uncorrected reads to reduce processing time, assuming a genome size of 12.7 kb and the ‘ont’ setting suitable for error-prone ONT reads. We included all reads, including those that did not have CHV-1 as a top BLAST hit. The DNA libraries were PCR amplicons and, therefore, additional Wtdbg2 parameter adjustment was necessary. Accordingly, our repetitive sequence (‘K’) filter was increased to 100 thousand reads. Furthermore, to ensure an output consensus was generated for low-quality samples, the minimum number of nodes allowed in a contig was reduced to two and the minimum length of a contig reduced to 1 kb. For the 3 kb libraries, it was also necessary to reduce the read length filter to 1 kb and to increase the maximum node depth to 500 reads. These parameter changes may reduce consensus quality , but were necessary for a consensus to be produced and may still improve our pathogen identification accuracy.
The longest consensus fragment was taken for downstream processing. These were used to identify subtype following the BLAST search approach detailed above. In addition, consensus sequences from the RNA libraries were imported into MegaX (v10.1.2 ) and aligned with Muscle (default settings with 50 replicates) . A phylogeny was then constructed as a Maximum Likelihood tree with 500 bootstrap replicates, using the consensus sequences and pre-existing reference sequences. This allowed us to evaluate whether subtypes sequenced with direct RNA sequencing could be correctly identified through a very simple and rapidly produced phylogeny. Only RNA consensus sequences were used for this analysis as they are genome-wide, while the amplicon libraries were not. 2.9. Repeatability of Variant Calls To call variants, sequencing reads were mapped to a CHV-1 reference genome listed in . Reads were mapped using Minimap2 (v2.17 ) with the recommended settings for MinION reads. For the RNA libraries, Canu-corrected reads were used for this analysis because of the lower q score filter needed to obtain sufficient data. The SNP callers used included: AssociVar (v1 ), iVar (v1.0.1 ), Ococo (v0.1.2.7 ), and FreeBayes (v1.3.1 ). iVar was run without base quality alignment and with the following filters: a minimum base and mapping quality of 20, a minimum variant quality score of 30, and a frequency of 0.2. Ococo and AssociVar were run using the default settings. FreeBayes was run assuming a ploidy of 1, with the flags pooled discrete and pooled continuous set. To reduce memory requirements, variants were only evaluated by FreeBayes if they had a minimum base quality of 30, a maximum of 2 alleles per site, and were seen on 20 or more reads.
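As a rough illustration of the read-level identification step described above (extracting the top BLAST hit per read and estimating the proportion of reads assigned to each reference), the sketch below parses standard tabular BLAST output (blastn -outfmt 6). It is not the study's own script; the file name and the assumption that hits for each query are listed best-first are illustrative.

```python
# Illustrative sketch (not the study's own script) of tallying the top BLAST
# hit per read from tabular output such as:
#   blastn -query reads.fasta -db chv_refs -outfmt 6 -out reads_vs_chv_refs.tsv
# Standard outfmt 6 columns begin with: qseqid, sseqid, pident, ...
from collections import Counter

top_hits = {}  # read id -> (reference id of first/best hit, percent identity)
with open("reads_vs_chv_refs.tsv") as blast_tab:
    for line in blast_tab:
        fields = line.rstrip("\n").split("\t")
        qseqid, sseqid, pident = fields[0], fields[1], float(fields[2])
        # Hits for a query are normally reported best-first, so keep only the first one seen.
        if qseqid not in top_hits:
            top_hits[qseqid] = (sseqid, pident)

counts = Counter(ref for ref, _ in top_hits.values())
total = sum(counts.values())
for ref, n in counts.most_common():
    mean_ident = sum(p for r, p in top_hits.values() if r == ref) / n
    print(f"{ref}: {n} reads ({n / total:.1%} of assigned reads), mean identity {mean_ident:.1f}%")
```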
A minor modification was made to the ONT protocol to help with RNA recovery during the bead purification steps: tubes were mixed gently by flicking only and freshly made 80% EtOH was used for bead washing. A positive control was added during library preparation (Yeast Enolase II 1.3 kilobase (kb) transcript). The DNA amplicon sequencing library was prepared following ONT’s cDNA sequencing kit protocol (SQK-PCS108 ONT, Oxford, UK). Before cDNA synthesis, extracted RNA was denatured as above, for two minutes at 100 °C, then, snap cooled on ice for two minutes. After cooling, we immediately began first strand cDNA synthesis using the Maxima H Minus Reverse Transcriptase (Thermo Fisher Scientific, Loughborough, UK) and Oligo(dT) 12–18 primers (Thermo Fisher Scientific, Loughborough, UK). The standard MinION protocol for second strand cDNA synthesis was unsuccessful despite several attempts, and the single strand cDNA was used directly for PCR. The full CHV-1 genome length could not be amplified due to PCR limitations and primer design constraints. Instead, we amplified three and five kilobase (kb) amplicons targeting ORFA using the high fidelity PrimeSTAR GXL DNA Polymerase (Takara, Japan). The forward primer sequences used were identical (5′-ATC YGG AGA ARG TGA TTT GC-3′), but the reverse primers targeted different genome regions (3 kb amplicon 5′-AGA YGA YGC TGG TAA ATG AAG-3′; 5 kb amplicon 5′-YTT RTT GAT GTA GCT GCG AGG-3′). The two amplicons were used to provide a technical replicate library for each sample. In total, 30× PCR cycles were used for each primer pair. The two PCR reactions of each CHV-1 strain were then pooled and cleaned with Agencourt RNAClean XP beads (Beckman Coulter, Brea, CA, USA). MinION’s end-prep, barcoding, and adapter ligation were performed on the pooled products with the barcode expansion EXP-NBD103. Further modifications to ONT’s protocol were made during bead purification. Binding of DNA to the RNAClean XP beads was elongated to 10 min. Beads were also incubated at 37 °C for 15 min during the elution of the purified DNA to increase yield. The final amount of (pooled) dsDNA in the library was between 300–650 ng. It should be noted that the PCR primers failed to amplify one of our more divergent CHV-1 subtypes, G. This prevented us from sequencing this subtype with a DNA amplicon library. Sequencing was performed in-house at WSL (Phytopathology, Birmensdorf, Switzerland). For RNA sequencing, each library was loaded onto a MinION R9.4 flow cell on a MinION Mk1B device (ONT) and sequenced for 8–12 h. Failed runs were identified and excluded at this point. DNA libraries were also sequenced using a MinION R9.4 flow cell on a MinION Mk1B device (ONT) for 18 h. The MinKNOW software v.2.0 (ONT) was adjusted according to ONT’s sequencing protocol with live basecalling disabled. The DNA amplicon library flow cell was used at least two times, with the 5 kb library run first, followed by the 3 kb library (voltage was adjusted according to ONT’s washing protocol). Basecalling was performed with Guppy (v2.3.5 ONT). Direct RNA sequencing reads were filtered to remove reads belonging to the kit positive control with NanoLyse (v1.1.0 , reference accession number NC_001140.6). Reads were then quality filtered with NanoFilt (v2.2.0 ). Only reads above 2 kb and quality score ( q ) of q ≥ 8 were retained for downstream analysis. Amplicon reads were demultiplexed using qcat (v1.1.0 ONT), the entire read was searched for barcodes, and all barcodes trimmed out. 
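For the consensus-based subtype check described at the start of this section, a lightweight analogue is sketched below. It is not the MEGA X Maximum Likelihood and bootstrap workflow used here; it builds a quick neighbor-joining tree with Biopython as a sanity check that each consensus clusters with its expected subtype. It assumes the consensus and reference sequences have already been aligned (for example with Muscle) and saved to the placeholder FASTA file shown.

```python
"""Quick distance-based tree from aligned consensus and reference sequences.
This is a neighbor-joining sanity check, not the MEGA X ML/bootstrap analysis
described in the text; the alignment file name is a placeholder."""
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Pre-aligned FASTA containing the consensus sequences plus subtype references.
alignment = AlignIO.read("consensus_plus_references.aln.fasta", "fasta")

calculator = DistanceCalculator("identity")              # simple identity-based distances
constructor = DistanceTreeConstructor(calculator, "nj")  # neighbor joining
tree = constructor.build_tree(alignment)

Phylo.draw_ascii(tree)                                   # print the tree to the terminal
Phylo.write(tree, "consensus_nj_tree.nwk", "newick")     # save for later inspection
```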
3.1. Run Statistics
For the direct RNA sequencing libraries: 15,358 reads were produced for subtype G, with a mean q score of 9.1 and mean read length of 4.6 kb. Subtype I had 6283 reads produced, with a mean q score of 9.1 and a mean read length of 3.4 kb. For subtype F1, 7251 reads were produced with a mean q score of 9 and a mean read length of 2.9 kb. For the DNA amplicon libraries, over 2 million reads were produced for the 5 kb library with a mean read q score of 9.5 and mean read length of 4.5 kb. For the 3 kb library, 2.8 million reads were produced, with an average length of 2.7 kb and an average read quality of 9.6.
3.2. CHV-1 Subtype Identification
Reads from the direct RNA and DNA amplicon sequencing libraries were submitted to a BLAST search against the CHV custom reference database. The largest proportion of reads had a top hit belonging to any CHV-1 subtype, indicating that identification of viral species was possible. However, identification of the correct viral subtype had more varied success. Subtypes with lower pairwise sequence divergence, E and D, could not be distinguished in the 3 kb amplicon library. The pairwise percentage identity and sequence divergence for subtypes are shown in . E and D are the most closely related subtypes.
3.3. Consensus Sequence Accuracy
Consensus sequences for each sequencing library were also subject to a BLAST search in the same manner detailed above; this yielded identical results to the sequencing reads. The length of the longest consensus sequence for each library ranged from 2995 to 4265 bp for the 3 kb amplicons (23–33% genome-wide coverage), and 4905–6757 bp for the 5 kb amplicons (38–53% genome-wide coverage). All consensus sequences were close to their expected sizes of 3 and 5 kb; some exceeded this due to reads present from outside the target region.
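As an illustration of how the read-level assignments summarized here can be tabulated, the snippet below counts the top BLAST hit per read from tabular (-outfmt 6) output and reports the proportion of reads assigned to each reference. The input file name is a placeholder, and it assumes the subject sequence IDs in the custom database are named after the subtype they represent.

```python
"""Tally the top BLAST hit per read from tabular (-outfmt 6) output.
Assumes subject IDs name the reference subtype (e.g. "CHV1_subtype_D");
the input file name is a placeholder."""
from collections import Counter, defaultdict

best_hit = {}  # read id -> (bitscore, subject id, percent identity)

with open("reads_vs_chv_db.blastn.tsv") as handle:
    for line in handle:
        fields = line.rstrip("\n").split("\t")
        read_id, subject_id = fields[0], fields[1]
        pident, bitscore = float(fields[2]), float(fields[11])
        # Keep only the highest-scoring hit for each read.
        if read_id not in best_hit or bitscore > best_hit[read_id][0]:
            best_hit[read_id] = (bitscore, subject_id, pident)

assignments = Counter(subject for _, subject, _ in best_hit.values())
identities = defaultdict(list)
for _, subject, pident in best_hit.values():
    identities[subject].append(pident)

total = sum(assignments.values())
for subject, n in assignments.most_common():
    mean_pident = sum(identities[subject]) / len(identities[subject])
    print(f"{subject}\t{n}\t{n / total:.1%} of reads\tmean identity {mean_pident:.1f}%")
```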
For the direct RNA sequencing libraries, the longest consensus sequences were 7921 bp (F1, 63% genome-wide coverage), 12,018 bp (I, 94% genome-wide coverage), and 12,144 bp (G, 97% genome-wide coverage). The phylogeny drawn in MegaX using only the full-length consensus sequences from our direct RNA reads matched biological expectations (see ). This could be used to identify subtypes by looking at sister species.
3.4. Repeatability of Variant Calls
To detect within-subtype mutations, four variant callers were applied to the filtered and aligned ONT reads. FreeBayes repeatedly failed due to high memory requirements (>100 GB). Furthermore, the MinION-specific variant caller AssociVar failed to produce an output after two weeks. Both programs were excluded from further analysis. The number of variants called by Ococo and iVar is shown in . Each sequencing library had a high number of private variants. A low proportion of the variants identified using Ococo and iVar were consistent across the RNA and DNA libraries. Furthermore, an average overlap of only 5% (±9% standard deviation) was found when variants called from the same library were compared across the two programs.
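To illustrate how the overlap between call sets can be quantified, the snippet below intersects two sets of variants by position and allele. The VCF file names are placeholders, and it assumes both call sets have first been exported to simple single-sample VCFs (iVar, for example, writes a tab-separated table that would need converting first).

```python
"""Compare two variant call sets by position, reference and alternate allele.
File names are placeholders; only simple single-sample VCF records are handled."""

def load_variants(vcf_path):
    """Return the set of (position, ref, alt) tuples in a VCF file."""
    variants = set()
    with open(vcf_path) as handle:
        for line in handle:
            if line.startswith("#"):
                continue
            chrom, pos, _id, ref, alt = line.split("\t")[:5]
            variants.add((int(pos), ref, alt))
    return variants

calls_a = load_variants("calls_caller_a.vcf")
calls_b = load_variants("calls_caller_b.vcf")

shared = calls_a & calls_b
union = calls_a | calls_b
print(f"caller A: {len(calls_a)}, caller B: {len(calls_b)}, shared: {len(shared)}")
print(f"overlap: {len(shared) / len(union):.1%} of all called variants")
```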
In this study, we explored the suitability of ONT's direct RNA sequencing and DNA amplicon sequencing for detecting CHV-1 presence within Chestnut blight fungal cultures. Viral presence and species could be correctly identified using BLAST searches of raw sequences and consensus sequences. Subtype was more difficult to infer and required longer (>3 kb) consensus fragments to be identified correctly. Intra-host variant calls were not repeatable across libraries. Importantly, two direct RNA sequencing runs failed entirely due to the difficulties in applying this method.
4.1. Identifying Species with Different ONT Read Types
The fundamental objective of any diagnostic study is confirming the presence of a pathogen within a sample. In this study, a BLAST search of filtered reads from direct RNA and DNA amplicon sequencing libraries correctly identified CHV-1 within a sample. Despite the high technical error rate expected from ONT data, CHV-1 could be distinguished from closely related mycoviruses that can also occur in Chestnut blight cankers. However, direct RNA sequencing reads were misassigned more frequently than DNA sequencing reads. Only a small difference in percentage error rate was expected between RNA and DNA reads based on the q score filters (q score 8 vs. 10, <5% difference based on ). Consequently, differences in library structure, i.e., amplicon vs. whole genome, may be driving the increased misassignment probability. Nevertheless, due to the non-negligible misassignment rate of direct RNA reads, they should not be used by methods requiring species identification from individual reads, such as characterizing a virome (e.g., ), because of the risk of species misidentification. Though good species assignment accuracy was possible for DNA amplicon reads, the intrinsically high MinION error rate also makes individual read-based virus identification unsuitable for diagnostics. Threshold read numbers or proportions, similar to Ct cutoffs, should be used to confirm viral presence with DNA reads. Consensus sequences were extremely reliable for species identification and were always correctly identified as CHV-1 through BLAST, even though the DNA sequences were based on amplicon libraries. These results add to the growing body of evidence that viral species can be accurately identified using consensus sequences from DNA and direct RNA sequencing reads (influenza, ; PRRSV, ).
4.2. Identifying Closely Related Subtypes with Different ONT Read Types
Distinguishing between closely related CHV-1 subtypes had variable success and was closely linked to the biological distances of subtypes. Subtype could be correctly identified across reads and consensus sequences for all but two DNA amplicon libraries. The 3 kb libraries from subtypes E and D were misassigned to each other; however, the correct focal subtype could be identified using the 5 kb libraries. Subtype D is a putative recent recombinant of subtypes E and I .
Previous studies have also struggled to split subtypes E and D based on a small fragment of ORFA, and required additional sequences from ORFB to do so . Due to primer constraints, neither amplicon includes the region of ORFB used previously. However, the 3 kb amplicon does include 400 bp from ORFB and the 5 kb amplicon covers nearly 2.7 kb of ORFB. For the 3 kb amplicon, this was likely an insufficient portion of ORFB, or an insufficiently divergent section of ORFB, to distinguish E and D. Due to D's recombinant origin, the two subtypes are very closely related at the sequence level, with close to 2% sequence divergence genome-wide. Comparisons of the amplicon regions show the same estimate of 2% across the 5 kb amplicon, but less than 1% for the 3 kb amplicon. Consequently, there is very limited biological variation in the 3 kb amplicon to distinguish these two subtypes. This variation is likely insufficient, when coupled with MinION's error rate, to distinguish between subtypes. This result highlights that longer amplicon targets, or more divergent targets (>2%), are necessary for studies seeking to distinguish between closely related subtypes with MinION DNA amplicon data. For the direct RNA sequencing libraries, all three subtypes were correctly identified through the BLAST analyses of the reads and consensus sequences. The subtypes sequenced were between 4–12% divergent from other subtypes in the reference database. It must be noted that subtype identification accuracy was dependent on a full reference sequence being available a priori. Many reads were misassigned to closely related subtypes; thus, care must be taken when working with RNA sequencing data from new strains or subtypes to ensure that they are not misidentified as close relatives in the catalogue. For this reason, we recommend that researchers couple a BLAST search with a phylogeny. This will confirm whether consensus sequences follow our biological expectations and may help identify subtypes where only genome fragments are available a priori. Furthermore, BLAST catalogues must be examined thoroughly before performing a read or consensus BLAST analysis to ensure sequences are correctly labeled. For CHV-1, the full-length sequence of CHV-1 subtype G is present in NCBI but is incorrectly classed as subtype F2 . This is a historical misidentification that arose because subtype G is a recombinant of subtypes F2 and D . This misidentification could have easily led to incorrect read assignment and highlights the importance of database curation. Two additional RNA libraries were sequenced within this study and failed to produce sufficient data for analysis. The challenges associated with applying a new technology should not be ignored by future studies seeking to use direct RNA sequencing. Failed sequencing runs and delays must be incorporated into study designs. This may limit direct RNA sequencing's suitability for studies requiring a rapid result, when in-house protocols have not been established.
4.3. Repeatability of Variant Calls from MinION Data
In this study, we reconfirmed that MinION data are currently too error prone for accurate variant calling. We found a low repeatability of variant calls across both sequencing techniques and across the 3 and 5 kb amplicon libraries. This result is in line with many previous studies (e.g., ), and MinION data should not be used for variant calling until the sequencing error rate is reduced .
In this study, we showed that MinION's direct RNA and DNA sequencing reads and consensus sequences can both be used to identify viral species and distinguish between subtypes. However, direct RNA sequencing reads show a high species misassignment rate when examined independently and should not be used to characterize complex samples with several viral species present. Furthermore, a long read length and sufficient biological differences relative to the expected error rate were needed to distinguish closely related subtypes. Consequently, MinION reads or consensus sequences alone will likely be insufficient to definitively confirm the presence of viruses with many close relatives or with limited biological information a priori. Furthermore, reliable intra-host variants could not be called with either sequencing technique, and MinION data should not be used for this purpose until the error rate is reduced. While the diagnostic potential is promising, many challenges remain when using MinION sequences and cannot be ignored in diagnostics, where accuracy is essential.
Supporting the Advancement of a National Agenda for Pediatric Healthcare Reform: A multi-year Evaluation of a Leadership Education in Neurodevelopmental and Related Disabilities Program
04d471da-d567-49ea-b21b-292617188b11
11821710
Pediatrics[mh]
Children and youth with special health care needs (CYSHCN) comprise approximately 20% of the pediatric population in the United States and account for approximately 50% of pediatric healthcare expenditures ( Children with Special Health Care Needs: NSCH Data Brief , July 2020 , n.d.; Coller et al., ; Kuo et al., ; Warren et al., ). While CYSHCN disproportionally rely on the healthcare system, approximately 85% of CYSHCN nationwide do not receive services in a well-functioning healthcare system, experiencing persistent unmet health needs and increased family burden (Caicedo, ; Children with Special Health Care Needs: NSCH Data Brief , July 2020 , n.d.; Coller et al., ; Hoover et al., ; Pilapil et al., ; Van Cleave et al., ). Critical services gaps have led the Maternal Child Health Bureau (MCHB) to support CYSHCN and their families’ involvement in research and clinical partnerships (Warren et al., ). Cross-sector collaborations between CYSHCN, their families, providers, and advocates are crucial to improving population health among CYSHCN (Franz et al., ; Hoover et al., ). MCHB leverages the American Academy of Pediatrics’ Blueprint for Change as a new lens of healthcare for CYSHCN to recognize professional-family partnerships as fundamental to large-scale change, advocating for training curricula that involve patient and family partners (Brown et al., ; Coleman et al., ; McLellan et al., ). The Leadership Education in Neurodevelopmental and Related Disabilities (LEND) program is an interdisciplinary training program, funded by MCHB, to improve the health of CYSHCN and their families through a statewide network of healthcare providers, legal professionals, educators, individuals with disabilities, and their families (Sharma et al., ). LEND is implemented at the graduate-level to address education-related gaps among healthcare providers who work with CYSHCN, with 60 training sites nationwide (Bishop et al., , ; Leadership Education in Neurodevelopmental and Related Disabilities (LEND) Fact Sheet , ; McLellan et al., ; Rosenberg et al., ). The purpose of this study is to explore the relationship between LEND program implementation and sustainability over time on practice- and patient-level outcomes. State-level evaluations of unmet needs among CYSHCN and their families may provide regional context and improve family-centered services in achieving MCHB objectives (Camelo Castillo et al., ). There are several LEND training tracks, with long-term trainees participating in a 1-year, > 300-hour curriculum. However, this study’s LEND program has not been rigorously evaluated since its inception in 2011, with gaps in implementation and sustainability of educational outcomes on interdisciplinary providers (Bishop et al., , ; Edwards et al., ; Leadership Education in Neurodevelopmental and Related Disabilities (LEND) Fact Sheet , ). Therefore, with a southeastern state’s LEND program serving as a case study to improve the rigor of evaluating LEND programs nationwide, this study aims to (1) compare perceived sustainability of effect of LEND training, as measured by comparisons in CYSHCN outcomes and their site delivery, among long-term trainees, and (2) identify factors that may affect the implementation of LEND training, as aligned with national priorities among CYSHCN (Brown et al., ). 
Theoretical Constructs
The life course perspective theorizes that the sequence of age-sensitive events a child experiences influences their longitudinal health outcomes, including relational and physical environments (Bengtson & Allen, ; Edwards et al., ). With guidance from MCHB, the LEND curriculum follows the life course perspective, highlighting the potential for providers to implement evidence-based strategies into their service delivery and enhance their patients' long-term health outcomes because of their training. Likewise, this study adopts the life course perspective (Edwards et al., ).
Evaluation Framework
This study applies the Exploration, Preparation, Implementation, Sustainment (EPIS) framework to rigorously evaluate this LEND program. The EPIS framework is a dynamic, cyclical evaluation framework that contains well-defined phases of (1) Exploration, (2) Preparation, (3) Implementation, and (4) Sustainability and examines the implementation of evidence-based practices across a variety of settings, including community and allied health sectors (Moullin et al., ). In this study, EPIS guided conceptualization, interview questions, and a preliminary codebook for thematic analysis to understand how LEND concepts are applied in practice among those who have completed long-term training and to understand barriers and facilitators to sustainable changes in service delivery.
Operationalization of the Blueprint for Change
This study focuses on Critical Areas 2 and 3 of the Blueprint for Change— "family and child well-being and quality of life" and "access to services" (McLellan et al., ). Priority areas guided interview question development and were coded to drive interpretation of LEND's role in advancing national agendas among interdisciplinary healthcare providers (Appendices A and B). These priorities align with the LEND mission statement and are well-supported by LEND training curriculum (Bishop et al., ; Edwards et al., ; Leadership Education in Neurodevelopmental and Related Disabilities (LEND) Fact Sheet , ; McLellan et al., ).
Study Design
This study utilizes a qualitative retrospective, longitudinal design to engage LEND long-term trainees from the past five cohorts.
Sample Population
Of statewide applicants, LEND invites two healthcare providers per discipline to participate as long-term trainees annually. Disciplines include developmental-behavioral pediatricians, occupational and physical therapists, speech language pathologists, audiologists, mental health counselors, social workers, and genetic counselors. Long-term trainees complete a 1-year, > 300-hour training track. LEND Leadership hold senior-level faculty, advocacy, and clinical positions statewide and select a subset of qualified applicants to participate as long-term trainees. For this retrospective study, eligible participants (1) currently practice within their discipline and (2) successfully completed this state's LEND program as a long-term trainee from 2018 to 2022. LEND Leadership facilitated recruitment efforts by providing eligible graduates' email addresses. In total, 79 participants were contacted, with a 31% response rate. A systematic, convenience sampling method was used to recruit from each training year and discipline for a representative sample (Table ).
Data Collection and Analysis
Upon agreement, participants were scheduled for virtual interviews via Zoom, due to LEND's statewide reach (Archibald et al., ). Interviews ( N = 24) were conducted from February 23 to July 5, 2023.
Participants provided verbal consent prior to the interview, and each interview was recorded and transcribed by an independent transcription service. Interviews lasted between 13 min and 44 min, with an average duration of 23 min. Incentives of $20 gift cards were provided. Qualitative data analysis was conducted using ATLAS.ti Web (Paulus & Lester, ). An initial codebook was developed based on literature review and clinical experience among this research team and was referenced throughout the process to ensure consistency among coders (Appendix B). Transcripts were analyzed by this research team, with four of 15 transcripts double coded and compared by independent coders. Coders identified and discussed discrepancies to reach mutual consensus. Analysis followed a deductive-inductive thematic analysis by adapting the initial codebook based on emergent findings (Bingham, A. J., & Witkowsky, P., ). Coders refined the existing codebook to best represent the data and reach final consensus on thematic findings. Illustrative quotes are presented in this analysis (Chun Tie et al., ; Vanover, Charles; Mihas, Paul; Saldana, Johnny, ). Preliminary findings were discussed with LEND Leadership to address potential discrepancies in data interpretation, reduce bias, and support member checking. Final themes were disseminated among LEND Leadership to inform future objectives and program implementation. Research ethical issues including informed consent, anonymity, and participant confidentiality were carefully addressed throughout the study process. This study was approved and deemed exempt by the Clemson University Institutional Review Board.
Results were consistent across LEND cohorts, regardless of provider type and/or practice setting.
Providers described discipline-specific examples of their application of LEND principles to their service delivery; however, providers' descriptions of learned concepts were consistent. Table summarizes findings by EPIS construct. Eight themes were constructed and are presented along the EPIS framework (Fig. ) (Bingham, A. J., & Witkowsky, P., ; Moullin et al., ).
Exploration
The "Exploration" phase identifies existing needs among CYSHCN and investigates LEND as an evidence-based practice to address the population's needs (Moullin et al., ). Trainees reported familiarity with CYSHCN and their families from coursework and clinical experience, prior to LEND (Table ). Trainees were encouraged to apply to LEND based on their knowledge of existing needs among CYSHCN. Some participants reported LEND exposure from their colleagues, as some organizations' graduate-level fellowships recommended LEND involvement and affected their decision to pursue long-term training. "I encourage individuals that work with families and children who have disabilities to become more familiar. So I say a great way to do this is to get knowledge using the LEND program." –FY22_MH
Trainees unanimously reported participating with some baseline knowledge of LEND and identified LEND as a mechanism for implementing evidence-based practice. Trainees explained intrinsic motivation and leadership capacity to advocate for unmet needs of CYSHCN at a patient- and practice-level.
Preparation
The "Preparation" phase involves planning to implement LEND principles into providers' service delivery, including reflection on past experiences and how they can improve healthcare quality for CYSHCN (Bengtson & Allen, ; Moullin et al., ). Most trainees cited multidisciplinary discussions and family panels as facilitators to their development (Table ). Regarding multidisciplinary discussions, trainees described the value in translating their graduate-level training within a collaborative learning environment, simulating their clinical settings. Many trainees credited the organizational characteristics, including support for interdisciplinary care and continuing education, of their clinical settings when discussing their ability to participate in LEND. Family panels within LEND curriculum were often cited as the most helpful in learning about family perspectives and informed meaningful changes in providers' service delivery. Trainees considered their service delivery and opportunities for provider-level change after hearing the experiences of parents of CYSHCN and adults with various disabilities who serve as self-advocates. "Sometimes, I can go through the motions too much…I look back to the family panels and remember the things that they shared that not necessarily negative, but just their constructive criticism of when they got their diagnosis or when the child received their diagnosis and what that felt like for them." -FY19_IP PSYCH. While providers discussed a more comprehensive understanding of CYSHCN and their families' daily lives, some providers considered their mindset adjustments and approaches to family-centered care—translating their new clinical training to service delivery.
Implementation
The "Implementation" phase occurs when LEND concepts are initiated within the healthcare system, following trainees' completion of the training (Moullin et al., ).
Facilitators and challenges to implementation were attributed to (1) implementing a collaborative approach to patient care , (2) emphasis on family-centered care , including social drivers of health (SDOH), and (3) systemic barriers (Table ). Trainees explained their improved understanding and ability to implement a multi-disciplinary approach across practice settings. While trainees reported didactic understanding of this team-based model from graduate school, LEND provided an opportunity to practice within a supportive learning environment and foster interdisciplinary relationships. “In my graduate program [multidisciplinary care] was an emphasis, but I don’t think I ever actually like got to know personally, as large a group of multidisciplinary providers as I did in LEND. That was huge…and that definitely helped me apply the multidisciplinary model a lot better.” -FY18_ PSYCH. Further, trainees consistently reported individual growth through their LEND training. Trainees most frequently reported improved confidence and communication with interdisciplinary providers, CYSHCN, and their families (Fig. ). As a result, trainees reportedly experienced improvements in their service delivery and their patient and family satisfaction, permeating into family-level outcomes. Over time, some trainees discussed acquiring leadership positions to instill confidence within their organizations and continue this cycle at an organizational level—translating individual change to the service environment and broader networks . As a result, physicians and other interdisciplinary providers described their reliance on other providers as part of a broader care team to support patient care plans and optimal health outcomes, since their LEND training. As a byproduct of individual-level growth, long-term trainees reported improvements in patient-level advocacy, including resource acquisition, referral appropriateness, and communication on behalf of patients and their families to providers who did not participate in LEND. Some trainees reported acquiring leadership and/or advocacy positions in collaborative settings, establishing themselves as leaders in their practice and local regions. One provider described her role in a national advocacy position for systemic change. However, some trainees identified the vast need for higher-level advocacy as an overwhelming barrier to encouraging large-scale change among providers. In providers, I think a lot of it is like, oh… that seems like a huge thing…But everybody knows how to do it and then nobody touches it, and it just continues to be the same system. –FY18_PSYCH. Still, trainees described the crucial effect of LEND training on their family-centered care practices. Per trainees, family-centered care includes concepts representative of humility , open-mindedness , and advocacy . Trainees reported adapting their service delivery through improved understanding of SDOH and their impact on family functioning and healthcare experiences. These perceptions were reported among trainees in inpatient, outpatient, and school-based settings alike. Some providers described specific changes to their clinical recommendations to better consider CYSHCN and their families’ burden of navigating SDOH and personal constraints to access referred services. In some cases, trainees partnered with families to address previously undetected concerns and tailored their treatment plans accordingly. 
“Just me gaining a little bit more humility of that these families and the child’s lives are very complicated, and there’s sometimes going to be competing interests for time, energy, financial capacity. That’s part of my job, is to help the family…not give them recommendations or suggestions that are not feasible or achievable.” -FY18_PSYCH. Despite strong awareness of SDOH and contextual factors that inhibit patients’ healthcare access, systemic barriers —related to financial and time constraints—persist across settings. These barriers prevented trainees from fully integrating concepts learned during LEND into their practices. “You’re seeing these families get turned away…That’s when it really starts, at least for me, eating you as a therapist. And then you start digging even deeper and realizing that system is so broken. It’s insane.” -FY19_PT. In these cases, trainees reported being unable to overcome systemic barriers despite best efforts. Still, trainees repeatedly described persistently advocating for patients within patient-provider relationships and clinics within their respective health systems to overcome barriers as able.
Sustainability
The “Sustainability” phase includes discussion of factors to support continued application of LEND concepts into daily service delivery, affecting children’s health and development across the lifespan (Edwards et al., ; Moullin et al., ). Trainees identified mindset shifts and statewide connections as drivers for sustained change, with suggestions for follow-up events and networking opportunities to enhance the effect of LEND training (Table ). One provider describes the long-term effect of LEND training as: “My competence and my ability to say I’ve been trained, I was very present in that training, I took copious notes, I’ve had all this experience…I’m prepared.” -FY21_SW. Many trainees sought interdisciplinary settings following LEND and described intermittently contacting fellow trainees for help with difficult cases. Those who moved from their residence as a LEND trainee—within or beyond state borders—reported difficulty maintaining and/or re-establishing multidisciplinary relationships. For some, COVID-19 hindered their ability to form lasting relationships within their cohorts. However, these individuals report anticipation of evolving relationships as a byproduct of their skills acquired during LEND training. Most trainees suggested opportunities to better sustain LEND connections. These suggestions were reported across school and healthcare environments, including invitations to attend LEND events with current cohorts, intermittent in-person events for live networking with past and current trainees, and annual events to connect across cohorts.
Interdisciplinary healthcare provider education is a well-documented means to promote collaborative, team-based care across pediatric settings—a “best practice” standard for optimal patient outcomes (Beebe et al., ; Elgen et al., ; Fair et al., ; Rosenberg et al., ). As a well-established training program nationwide, LEND provides an opportunity to support providers in developing evidence-based approaches to practice, research, leadership, and advocacy and to integrate national priorities into practice to address significant care gaps among CYSHCN (Edwards et al., ; Rosenberg et al., ; Weber et al., , ). Further, LEND partners with individuals with disabilities and their families, who are key to driving multi-level solutions for existing gaps (Kuo et al., ; McLellan et al., ). Still, this study’s findings suggest that LEND graduates are limited in implementing and maintaining the effect of their training due to time and financial constraints of the U.S. healthcare system ( outer contexts ) (Moullin et al., ). Aligned with the life course perspective, discrepancies in LEND trainees’ abilities to implement their training within the healthcare system may have lifelong implications for CYSHCN and their families (Bengtson & Allen, ; Edwards et al., ). Existing research agendas suggest that providers are often molded to fit systemic needs, rather than a healthcare system that is reflective of high quality care for CYSHCN and their families (Coller et al., ; Kuo et al., ; McLellan et al., ). While family-centered, multidisciplinary care within a medical home is well-documented as best practice, interviewees repeatedly reported that their graduate training was insufficient, compared to the translational skill development acquired through LEND (Beebe et al., ; Elgen et al., ; Fair et al., ; Rosenberg et al., ). Nationwide, LEND long-term trainees are often clinical fellows and early clinicians, who are eager to develop their leadership skills and set their career trajectories.
LEND trainees discussed significant improvements in their service delivery and patient outcomes, largely attributed to acquiring an adjusted posture of humility and empathy for patients and their families as well as improvements in communication, confidence, and leadership skills within their local networks. However, trainees repeatedly identified systemic barriers, including waitlists, insurance limitations, and allotted time-per-patient, that prevent trainees’ from fully integrating their LEND training into practice. The systemic barriers reported by providers are not unique to this state and identify an opportunity for trainees from 60 LEND programs nationwide to band together, implement evidence-based strategies, and advocate for systems-level change across the U.S. healthcare system. Trainees provided detailed examples of patient-level advocacy, from addressing parent concerns of a CYSHCN who identified as non-Hispanic Black and potential interactions with law enforcement to connecting families with community-based resources for well-rounded support. Still, trainees infrequently mentioned advocacy training or examples beyond the patient-provider relationship or local networks. One trainee in the sample described her involvement with a national-level advocacy board for transparent payment models, with hopes of making healthcare costs visible to patients before receiving treatment and adapting insurance-related restrictions for providers. With cohorts of passionate healthcare providers nationwide, LEND may consider building on leadership training and advocating for large-scale change through a massive provider network at the front lines of clinical care. The healthcare system has demonstrated pioneering resiliency in recent years to meet patient needs and LEND has a sensitive opportunity to emphasize advocacy training—driven by providers, policymakers, individuals with disabilities, and their families (Coleman et al., ; Geweniger et al., ; McLellan et al., ). The integration of a national framework with priority care gaps intended to serve a population these trainees are passionate about provides a key opportunity to shape the future research, practice, and policy of pediatric medicine in the US. Key Takeaways Related to EPIS Framework LEND trains healthcare professionals to become leaders within their disciplines and settings. With foundational training, healthcare providers experience a dynamic relationship between their practice setting and implementation of their training components with continual adaptations for inner and outer contexts. LEND offers an opportunity for healthcare providers to develop leadership skills ( inner context ) and is woven into organizational characteristics and staffing processes. Trainees’ provider-level interactions are influenced by leadership within their practice setting, systemic limitations ( service environment/policies ), funding of services ( insurance ), and advocacy opportunities for CYSHCN and their families to access high-quality healthcare services. With established community-academic partnerships and expert leadership among LEND faculty, LEND may consider emphasizing state- and national-level advocacy and further engaging legislators, policymakers, and lawmakers in their curriculum nationwide. Recommendations for LEND To facilitate sustainability of the effect of LEND training, providers requested increased follow-up from LEND faculty and opportunities for in-person networking. 
Several LEND programs transitioned to virtual programming following the onset of COVID-19; however, LEND faculty continue to seek balance in the hybrid format to maintain accessibility for working professionals while offering in-person events. Additionally, providers reported reliance on LEND resources since program completion to sustain the effect of their training. However, in this state, LEND trainees lose access to the virtual platform following program completion, and a few trainees requested sustained access to resources. National LEND programs may develop a semi-public, virtual platform for current and past LEND trainees to gather resources and promote evidence-based practices. As suggested by this study’s LEND faculty, long-term access to such resources may allow providers to continually engage in LEND beyond the duration of their enrollment year. Strengths and Limitations This study represents a homogeneous population due to the recruitment methods and demographic characteristics of this population. In addition, this study’s lead author is a LEND graduate from 2018 to 2022. However, member checking and multiple coders were employed to reduce these biases. This study intends to serve as a baseline for implementation of Blueprint for Change among interdisciplinary healthcare providers nationwide. The strengths of this study include its even distribution of participants by training year and by discipline, reducing bias in the results of this multi-year evaluation. Further, LEND leadership—including the director and a founding faculty member—participated in member checking to challenge and validate this study’s findings. These findings were shared with the LEND leadership team to inform program objectives.
While LEND has the potential to advance large-scale agendas for pediatric healthcare reform, trainees are not immune to systemic limitations of U.S. healthcare systems. To optimize the effect of LEND training, multi-level, systemic adaptations must allow providers to follow best practice guidelines, as evidenced in existing research and policy (Coller et al., ; Hoover et al., ; Kuo et al., ; McLellan et al., ). With service delivery improvements, as described in this study’s findings, CYSHCN and their families may experience improvements in health outcomes, well-being, and quality of life. Below is the link to the electronic supplementary material. Supplementary Material 1
Outcome of Er, Cr:YSGG laser and antioxidant pretreatments on bonding quality to caries-induced dentin
cf95b076-3931-47eb-8918-09f8109f9e3c
11734456
Dentistry[mh]
In the current literature there has been a paradigm shift towards a minimally invasive approach, which is associated with the development of efficient adhesive systems and effective bonding procedures. Such an approach aims to manage the carious dentin lesion conservatively and preserve sound dentin. Carious dentin comprises two layers: the inner affected dentin surrounded by the outer infected dentin. In the absence of bacteria, the remineralizable caries-affected dentin layer should be preserved in conservative cavity preparation. Caries-affected dentin has different physical and chemical features than sound dentin (SoD) and shows lower bond strength values . Bonding to caries-affected dentin exhibits some challenges during bonding procedures. Structural modifications in caries-affected dentin, such as collagen denaturation and reduced mineral content, may hinder dentin hybridization and jeopardize the mechanical characteristics of bonded restorations. Furthermore, obstructed dentinal tubules can inhibit resin diffusion and prevent resin tag formation. However, the low-mineralized inter-tubular dentin in caries-affected dentin permits more profound etching . In-vitro studies often struggle to replicate the complex features and structure of naturally occurring caries-affected dentin . Various approaches have been proposed to artificially induce caries-like dentin lesions (CID), but they may not fully replicate the long-term, natural caries process. Despite these differences, Joves et al. 2013 concluded that natural caries-affected dentin and artificial CID of permanent teeth were superficially analogous regarding intertubular nanohardness. Sodium hypochlorite (NaOCl) is a common endodontic irrigant in root canal treatments owing to its bactericidal impact. Moreover, it can deproteinize both mineralized and demineralized dentin substrates. It can improve the bond strength through its deproteinization effect and the removal of the weakly attached smear layer. However, depleted bond strength has also been reported due to the strong oxidizing potential of NaOCl , which could be owed to the release of NaOCl by-products that display an adverse influence on the polymerization of dental adhesives. Such depleted bond strength values of NaOCl-treated dentin can be reinstated by application of an antioxidant solution before the bonding procedures , as it is able to counteract the effect of NaOCl by-products and reverse the oxidizing effect of NaOCl on the dentin surface . Sodium ascorbate (SA) is an eminent antioxidant agent that can reduce free radicals. It was reported that SA can counteract the negative effect of NaOCl and peroxides on dentin bond strength through neutralization of their by-products. It was concluded that the depleted bond strength can return to normal by reduction of the oxidized dentin with a biocompatible and neutral antioxidant, such as SA, prior to the bonding procedures . Erbium, chromium: yttrium-scandium-gallium-garnet (Er, Cr:YSGG) is one of the recent technological advances in dentistry. During the last few years, there has been a dramatic increase in laser applications for soft and hard dental tissues. Alternative treatments such as laser therapy using Nd:YAG and Er, Cr:YSGG lasers demonstrated positive outcomes in bond strength enhancement, as both create a rough surface similar to acid-etching patterns and thus improve resin composite bonding to dentin .
Therefore, the objective of the current study is to assess the effect of different pretreatment approaches for SoD and caries-induced dentin (CID), using NaOCl and Er, Cr:YSGG laser prior to antioxidant agent application, on the SBS of a universal adhesive to the two dentin substrates. The null hypotheses tested were: 1- the different dentin substrates would have no effect on the bond strength under the tested pretreatment protocols; 2- NaOCl and laser application would not affect the bond strength to the different dentin substrates; 3- antioxidant agent application would have no influence on the bond strength to the different dentin substrates. Ethical approval This study was approved by the Research Ethics Committee of Oral and Dental Medicine, Future University (REC-FODM), New Cairo, Egypt; under the reference number: FUE.REC (13)/5-2-24. The authors declare that all conducted methods agreed with the guidelines and regulations of the World Medical Association Declaration of Helsinki (2013). The tested human teeth were extracted from anonymous participants for orthodontic purposes to be used for research objectives. The teeth were obtained from the outpatient clinic of the National Research Centre (NRC, Giza, Egypt). Informed consents were obtained from the participants for using their teeth samples in the study. Experimental design of the study Sample size was calculated based on a pilot study of five samples to compare between the different groups (means = 9, 9.5 and 5.5; within-subject SD = 2). The effect size f = 0.889 and α = 0.05 resulted in a minimum sample size of 8 per group at 95% power. For statistical analysis reliability, the sample size was increased to ten teeth in each group. A total of one hundred and twenty premolar teeth were collected. Teeth were divided into two main groups ( n =60 each) based on dentin substrate type: SoD and CID. Each main group was further divided into three subgroups according to dentin pretreatment: control without dentin pretreatment ( n =20), NaOCl-treated dentin ( n =20) and Er, Cr:YSGG laser-treated dentin ( n =20). Then, each subgroup was finally divided into two divisions according to antioxidant application: no SA application ( n =10) and 10% SA application ( n =10). Figure demonstrates specimen grouping, study design and the frequencies. Selected materials A universal adhesive, All-Bond Universal (ABU: BISCO Inc., Schaumburg, IL, USA); one cavity disinfectant, 6% NaOCl solution; one antioxidant agent, 10% SA solution; and a nanofilled resin composite (Filtek™ Supreme Ultra: 3M Oral Care, St. Paul, MN, USA) were used in this study. Materials brand names, descriptions, compositions, and their manufacturers are listed in Table . Teeth selection One hundred and twenty human posterior teeth were collected for the current study. Any remaining soft tissues or debris were removed under tap water using sharp hand scalers. A 25x magnifying lens was used to examine the selected teeth to eliminate any defective, fractured, or cracked teeth. Afterwards, the teeth were preserved in 0.1% thymol solution at 4°C for a maximum of three months post-extraction. The solution was changed once per week until use . Specimens’ preparation The roots of the selected teeth were removed 2 mm beyond the enamel-cementum junction using a low-speed handpiece with a mounted double-sided diamond cutting disc. Under wet conditions, the occlusal enamel was ground flat with 240-grit silicon carbide (SiC) paper, exposing the underlying dentin.
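To make the a priori calculation in the experimental design above explicit, the following minimal sketch reproduces the effect size and sample-size estimate from the pilot values (means 9, 9.5 and 5.5; within-subject SD = 2; α = 0.05; power = 95%). It is illustrative only: the study does not state which power-analysis software was used, and the statsmodels convention of returning the total number of observations for a one-way ANOVA is assumed here.

```python
# Minimal sketch of the a priori sample-size estimate (not the authors' actual calculation).
import numpy as np
from statsmodels.stats.power import FTestAnovaPower

pilot_means = np.array([9.0, 9.5, 5.5])
sd_within = 2.0
k_groups = len(pilot_means)

# Cohen's f = SD of the group means around the grand mean / within-group SD
effect_size_f = np.sqrt(np.mean((pilot_means - pilot_means.mean()) ** 2)) / sd_within
print(f"Cohen's f = {effect_size_f:.3f}")  # ~0.89, matching the reported f = 0.889

# Solve for the total number of observations at alpha = 0.05 and power = 0.95
total_n = FTestAnovaPower().solve_power(effect_size=effect_size_f,
                                        alpha=0.05, power=0.95,
                                        k_groups=k_groups)
per_group = int(np.ceil(total_n / k_groups))
print(f"Minimum specimens per group = {per_group}")  # ~8, as reported (raised to 10 in the study)
```

Rounding the total up and dividing by the three pretreatment groups gives roughly 8 specimens per group, consistent with the reported minimum before it was increased to 10 for reliability.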
Wet SiC paper of 600-grit was used to finish the exposed dentin surfaces for 60s in circular motion to develop uniform smear layer . Stereomicroscope (Olympus ® BX 60, Olympus Optical Co. LTD, Tokyo, Japan) was used to examine the specimens for enamel remains or additional flaws. Then the specimens were placed in blocks of auto cure acrylic resin . After complete polymerization of the acrylic resin, the prepared specimens were stored in distilled water . Development of the caries-induced dentin (CID) Artificially developed caries-induced dentinal lesions were produced through cariogenic challenge. Following the procedure proposed by Nicoloso et al. as follows; two layers of an acid-resistant nail polish was applied to the specimens’ surfaces except for the exposed dentin surfaces. Specimens were then separately immersed in a demineralizing solution of adjusted pH= 4.5 (0.05 M acetic acid, 2.2 mM NaH2PO 4 , 2.2 mM CaCl 2 ) for a duration of 8h, and then immersed in a remineralizing solution of adjusted pH= 7 (0.15 mM KCL, 0.9 mM NaH 2 PO 4 , 1.5 mM CaCl 2 ) for 16h duration. The solutions were changed with fresh solutions and the specimens were thoroughly rinsed using deionized water then blotted dry, at the end of each cycle. This cycle was performed for 14d where the solutions were inspected intermittently using a pH meter. Half of the prepared dentin specimens ( n =60) were exposed to a pH cycling protocol using prepared remineralizing and demineralizing solutions. NaOCl dentin pretreatment procedure The respective prepared specimens were immersed in 6% NaOCl solution for 30s, followed by through rinsing with distilled water for 1-min to remove any residues of the solution . Er, Cr:YSGG laser dentin pretreatment procedure According to Takada et. al , the respective specimens were treated with Er, Cr:YSGG laser system 2780nm (Biloase Technology Inc., San Clemente, CA, USA) using MZ8 tip of 800µm diameter, in a scanning motion on the occlusal surface for 30s with an output power of 2W, frequency of 20Hz and pulse duration of 140µs with 75% water coolant and 60% air coolant. Application of the antioxidant agent Following the pretreatment procedures for different dentin substrates, the pretreated specimens were immersed in 10% SA solution for 10-min then rinsed thoroughly with distilled water for 1-min to remove any residues of the solution . Bonding procedures The tested universal adhesive (ABU) was applied to SoD and CID specimens following SE bonding technique according to its manufacturers’ recommendations. ABU was actively applied in two coats with rubbing action to the pretreated dentin substrates using micro brushes for 10-15s per coat without light curing between the two coats . Air syringe was used to evaporate the excess solvent by air-drying for ten seconds till there no visible movement of the adhesive was detected . The tested universal adhesive was light-cured for 10s using LED light curing unit (=1000mW/cm 2 , Elipar S10, 3M ESPE, USA). The light curing unit was examined periodically using handheld radiometer (Demetron 100, Kerr Corporation, CA, USA). Resin composite application Filtek Supreme Ultra nanofilled resin composite was used for composite discs build-up in one increment with the help of split Teflon molds of 2mm internal diameter and 2-mm height fixed over the dentin surface. Transparent celluloid strips were positioned on the top of the composite restorations. 
Each composite disc was light cured for 10s using the LED light curing unit according to the manufacturer’s instructions. Then the celluloid strips were removed and any flashes extending past the base of the composite discs were removed using a sharp blade. The specimens were then stored in distilled water in tightly sealed plastic containers for 24 h at 37°C until the SBS was evaluated . Shear bond strength testing (SBS) and mode of failure assessment The prepared specimens were attached to the lower jig of a universal testing machine (Instron®, Model 3345, Instron Instruments, Buckinghamshire, UK). A chisel-bladed metallic attachment mounted on the upper jig of the machine was placed as close as possible to the resin composite/dentin interface, and the test was run at a 0.5 mm/min crosshead speed until failure with a 5 kN load cell. To calculate the SBS in MPa, the peak load at failure was divided by the specimen’s bonded surface area using the universal testing machine computer software (BlueHill® Universal, Instron Testing Software, Buckinghamshire, UK). The debonded specimens were examined using a stereomicroscope at x35 magnification, and modes of failure were classified as adhesive when the failure was located at the resin composite/dentin interface, cohesive if the failure was identified within the resin composite or dentin substrates, and mixed when adhesive and cohesive fractures were acknowledged simultaneously. Statistical analysis Shapiro-Wilk testing showed a normal distribution of the SBS, and a three-way ANOVA test was used to demonstrate the effect of dentin substrate [SoD vs. CID], dentin pretreatment [control (no pretreatment), NaOCl, and laser application], and antioxidant application [no antioxidant application vs 10% SA application] on the shear bond strength. Tukey HSD was used for multiple comparisons. Statistical analysis was performed with IBM SPSS Statistics Version 20 for Windows (IBM Documentation products, Armonk, NY, USA).
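As an illustration of the analysis pipeline described in the two preceding subsections, the sketch below converts each specimen's peak load to SBS for a 2 mm-diameter bonded disc and then fits the three-way ANOVA with Tukey HSD comparisons. This is not the authors' workflow (the study used IBM SPSS Statistics Version 20); the file name and column names are hypothetical.

```python
# Illustrative Python equivalent of the SBS calculation and three-way ANOVA described above.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.multicomp import pairwise_tukeyhsd

BOND_AREA_MM2 = np.pi * (2.0 / 2.0) ** 2             # pi * r^2 for a 2 mm-diameter composite disc

df = pd.read_csv("sbs_data.csv")                     # hypothetical columns: substrate, pretreatment, antioxidant, peak_load_N
df["sbs_mpa"] = df["peak_load_N"] / BOND_AREA_MM2    # 1 N/mm^2 = 1 MPa

# Three-way ANOVA: dentin substrate x pretreatment x antioxidant application
model = smf.ols("sbs_mpa ~ C(substrate) * C(pretreatment) * C(antioxidant)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Tukey HSD across the twelve substrate/pretreatment/antioxidant combinations
df["group"] = df["substrate"] + "_" + df["pretreatment"] + "_" + df["antioxidant"]
print(pairwise_tukeyhsd(df["sbs_mpa"], df["group"], alpha=0.05))
```

Here anova_lm reports the main effects and interaction terms for the three factors, while pairwise_tukeyhsd mirrors the multiple comparisons reported in the Results.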
Mean and standard deviation (SD) values [95% CI] for the SBS to the different dentin substrates are presented in Table . Three-way ANOVA revealed that dentin substrate, dentin pretreatment, and antioxidant application each had a significant effect on SBS at p<0.001. The interaction between the three variables had an insignificant effect on SBS at p=0.156. For the SoD substrate, 6% NaOCl resulted in a significant reduction in SBS compared to the control group and the laser group without antioxidant application. On the other hand, 10% SA application resulted in a significant increase in SBS for the 6% NaOCl group only.
For CID substrate, Laser application resulted in a significantly higher SBS compared to 6% NaOCl group without or with antioxidant application. Meanwhile, 10% SA application revealed a significant increase in SBS for control group only. Failure mode results are presented in Fig. . For SoD substrate, the control group without antioxidant application showed 100% adhesive failure, while after 10% SA application, the results showed 60% adhesive failure and 40% mixed failure. 6% NaOCl groups, showed 80% adhesive failure with and without antioxidant application. For the Laser group, adhesive failure showed 20% without antioxidant application and 40% after 10% SA application. For CID substrate, the control group without antioxidant application showed 60% adhesive failure, while after 10% SA application, a 100% adhesive failure resulted. The 6% NaOCl group, showed 60% adhesive failure without antioxidant application and 20% with 10% SA application. For the Laser group, adhesive failure was 60% without antioxidant application and 40% after 10% SA application. Representative images of failure mode are presented in (Fig. ). The primary objective of minimally invasive dentistry is to thoroughly remove infected dentin while safeguarding healthy dental structures and eradicating cariogenic bacteria and decayed tissues prior to restoration, with notable similarities in mineral loss between caries-affected dentin and artificially induced dentin caries, yet distinguished by their formation processes, as natural caries exhibit two layers: a soft, bacteria-laden layer and a partially demineralized layer retaining collagen and viable odontoblasts, facilitating potential remineralization . Lenzi et al. proposed that chemically induced caries (CID) share similarities with natural acute caries lesions. To replicate artificial caries in dentin, researchers frequently use pH-cycling models, which are particularly useful for assessing bond strength across different tooth types . Although chemical models can imitate tooth decay by alternating demineralization and remineralization phases, the exact duration of these cycles in the oral environment is not well established . Nonetheless, pH-cycling procedures have been found to produce surface hardness comparable to natural decay in primary teeth, with lesions extending up to 40 µm deep . Different dentin pretreatment regimes have been proposed such as applying cavity disinfectants such as NaOCl which can dissolve the collagen fibrils, promoting dentin deproteinization making the surface rich in apatite . In addition, NaOCl dentin pretreatment will produce surface microporosity and irregularities, creating more permeable dentin surface that enhance adhesive monomer infiltration during the bonding procedures. Likewise, NaOCl dentin pretreatment can modify its ultrastructural morphology, which will lead to a significant increase in dentin wettability, dentinal tubules diffusion length, and number of resin tags and lateral branching . However, cavity disinfectants may affect the hybrid layer, thus negatively influencing the restoration/dentin bond quality by changing or loss of such layer leading to evident reduction in the bond strength that might results in increased rates of restorations failure. Consequently, many alternative treatment approaches have been advocated, like application of different remineralizing agents, antioxidants and laser treatment . 
Dentin pretreatment with erbium laser enhances the morphological features of the dentin surface and improves the bonding quality of the dentin/restoration interface . Its mechanism contrasts with that of NaOCl, relying on the water and hydroxyapatite in dental hard tissues to absorb laser energy, resulting in "micro explosions" that facilitate the extraction of water from target tissues via energy absorption . The application of erbium laser treatment has been found to enhance the surface roughness of dentin and expand its surface area via the patent dentinal tubules, consequently facilitating the diffusion of adhesive monomers into these tubules while preventing smear layer formation, thereby providing ideal dentin surfaces for effective bonding with resin composites and refining bonding techniques . Universal adhesives are the newest generation of dental adhesives. They can be applied to the tooth substrate in either total-etch (TE) or SE mode. They require fewer clinical steps and are more user-friendly. Bonding to caries-affected dentin revealed decreased bond strength values for adhesive systems, but bonding of universal adhesives to different dentin substrates requires more research . In this context, the current study investigated the possible effect of different dentin pretreatment modalities, including laser and antioxidant application, on bond strength to two dentin substrates. The results of the current study revealed that dentin substrate (SoD and CID), dentin pretreatment (6% NaOCl and Er, Cr:YSGG laser application), and antioxidant application (no SA and 10% SA application) significantly impacted the SBS at p<0.001. Bonding to SoD revealed significantly higher SBS values than CID in all tested groups (with/without dentin pretreatment and antioxidant application). Therefore, the first null hypothesis was rejected. This outcome might be owed to the composition of CID, since it is partially demineralized around and within the collagen fibrils with a much lower crystalline content compared with normal SoD. That might lead to a much softer and more porous dentin surface than SoD. Moreover, CID has dentinal tubules occluded with acid-resistant minerals that might be impervious to resin infiltration. Consequently, the final SBS could be negatively altered . The findings of the current study agreed with Ekambaram et al. , who concluded that different resin adhesive systems develop lower bond strength values to caries-affected dentin substrates than to the SoD substrate. Treatment of the dentin substrate with potent oxidizing agents such as NaOCl has been suggested as an approach to reduce the technique sensitivity of hybridization. Furthermore, numerous dental procedures commonly depend on using NaOCl due to its non-specific deproteinization effect. NaOCl can remove the exposed collagen fibrils, thus facilitating the penetration of the resin adhesives within the treated dentin surface and producing more permeable surfaces. However, the impact of the removal of collagen fibrils on bond quality to dentin has demonstrated variable outcomes . The findings of the current study showed that both dentin pretreatment approaches had a significant effect on SBS to both dentin substrates. Accordingly, the second null hypothesis was partially rejected, as there was a significant difference between laser- and NaOCl-treated groups, though there was an insignificant difference between laser-treated groups and the control groups for both dentin substrates.
Regarding NaOCl application, the results revealed a statistical negative influence the SBS of both dentin substrates compared to the control group (no dentin pretreatment). This consequence might be related to the detrimental effect of NaOCl that is a potent oxidizing agent responsible for demineralized collagen fibrils elimination within the formed hybrid layer . The influence of NaOCl dentin pretreatment on the final bond strength to dentin is controversial. NaOCl was reported to improve dentin bond strength due to its deproteinizing effect providing a proper mineralized matrix that can be bonded directly to the adhesive monomer, as well as formation of an infrequent dentin bonding mechanism known as ‘reverse hybrid layer’ at which NaOCl dissolves the exposed collagen fibrils in mineralized matrix of the etched dentin and create surface microporosity to enable the penetration of the adhesive monomer to create this significant layer . On the other hand, NaOCl was reported in other research to deplete dentin bond strength. These results were owed to its deproteinizing effect which created dentin surface that is less amenable for bonding . Dentin pretreatment with NaOCl can remove the collagen fibrils thus preventing a continuous hybrid layer creation, and the remnants of NaOCl on the treated dentin surface was reported to prevent the infiltration of adhesive monomers . Moreover, NaOCl oxidizing action can interfere with the polymerization adhesive monomers polymerization . Another report from Montagner et al. demonstrated that deproteinization pretreatment using NaOCl has shown comparable bonding performance to conventional adhesive procedures, where dentin regions play a significant role in bond strength values. In this context, the finding of the current study agreed with de Almeida et al. who demonstrated that NaOCl can directly hinder the free-radical additional polymerization reaction of the resin monomers, due to the remnants of super-oxide radicals it releases that inhibit the polymerization of the resin, producing a significant depletion of the final bond strength . Moreover, NaOCl retention inside the demineralized dentin might have negative impact on the resin/dentin interface . In contradiction to these finding, the effect of NaOCl was found in some literature to improve the bond strength or enhance the mechanical and physical properties of the resin/dentin interface . Such literature owed their results to the ability of NaOCl to dissolve the majority of organic content, thus, forming a mineral-rich layer that would be simply penetrated by the resin monomers. Though, it was concluded in another literature that its effect is adhesive-dependent . However, such contradiction could be owed to the difference in NaOCl concentration, time of application, solution temperature and type of the substrate. Moreover, Kunawarote et al. reported that dentin pretreatment using 6% NaOCl solution for 5- and 15-s showed an insignificant impact on µTBS of Clearfil SE Bond, whereas a 30-s time of application has a substantial adverse influence on µTBS to dentin. These findings agreed with the current study and previous research that concluded that smear layer-covered dentin pretreatment with NaOCl solution for 30-s duration or more had an adverse effect on dentin bonding quality . 
They owed such lower µTBS values to NaOCl-pretreated dentin to its oxidizing effect that causes production of chloramine-derived free radicals , which might compete with the free radicals produced during adhesive monomer activation, causing premature termination of the chain reaction and probably inadequate polymerization . Furthermore, dentin bond strength might be affected by residual NaOCl trapped within the porosity of mineralized dentin . These results could be explained by the predominant adhesive failure (80%) that was recorded for 6% NaOCl-treated/SoD groups regardless of antioxidant application. While, 6% NaOCl-treated/CID groups demonstrated 20% and 60% adhesive failure with and without antioxidant application respectively. These findings agreed with Gönülol et al. and Dikmen and Tarim who revealed that adhesive failure demonstrated with NaOCl-treated groups. Ascorbic acid and its Na-salts are distinguished antioxidants that can reduce various oxidative compounds. They counter the depleting influence of NaOCl on the bond strength to dentin through reinstating the changed redox reaction of the oxidized dentin substrate. Consequently, using SA before the bonding procedures can restore the depleted bond strength to NaOCl-treated dentin . The results of the present study revealed an improved SBS values after different dentin pretreatments of both dentin substrates following 10% SA application. Thus, the third null hypothesis was partially rejected as a comparable bond strength value was recorded for the two tested dentin pretreatments for both dentin substrates with and without antioxidant application. The application of 10% SA showed a significant increase for NaOCl-treated groups in SoD. This consequence could be attributed to the effect of antioxidant application that might have counteract the potent oxidizing effect of NaOCl. It was concluded that, antioxidant agents like SA can effectively nullify the reactive oxygen produced during the oxidation process of NaOCl with dentin . These findings were in accordance with Prasansuttiporn et al. who concluded that further application of different antioxidant agents following deprotenization of the smear layer by NaOCl could enhance the final bond strength values of a SE adhesive to caries-affected dentin. Moreover, Delgado et al. stated that without using an antioxidant agent for NaOCl-treated dentin, the outcomes cannot be validated, due to poor bond strength outcomes that could be accredited to NaOCl oxidizing effect that adversely influences resin polymerization instead of the treatment approach itself, and the depleted bond strength could be retrieved through application of antioxidant agents to NaOCl-treated dentin. Likewise, Dikmen and Tarim demonstrated that using 10% SA for NaOCl-treated dentin surface had significantly enhanced dentin bond strength. Gönülol et al. demonstrated the capability of SA to restore the depleted bond strength of oxidized dentin by allowing the additional polymerization of the resin adhesive to progress without early termination, thus, reversing the jeopardized bonding in NaOCl-treated dentin. Due to the recent advances in modern dentistry, different types of laser devices have been introduced to the market. Amid the diverse laser devices, Er,Cr:YSGG that is operated at 2780 nm wavelength. It is properly absorbed by different biological tissues such as enamel and dentin. 
The primary objective of the diverse types of laser devices is alteration of light energy of laser devices into heat leading to a substantial increase in laser energy absorption by the substrate. The degree of absorption of laser energy is affected by several surface characteristics that may include the extent of the irradiated surface pigmentation and its content of water . The results of the current study displayed that the Er, Cr:YSGG laser-treated groups exhibited a significantly higher SBS values compared to NaOCl-treated groups, however, their results were comparable to the control groups (no dentin pretreatment) regardless the dentin substates and antioxidant application. This outcome could be generally related to the effect of Er, Cr:YSGG laser irradiation on the different tested dentin substrates, through the micro-explosions produced by evaporation of water and other moist organic components of dentin resulting in removal of the smear layer and irregularities development on the dentin surface and further opening of dentinal tubules. Thus, making the dentin surface more permeable and ideal for bonding . Regarding this result, it was concluded that the removal of the laser-modified layer by etching has restored the depleted bond strength back to normal . This outcome agreed with Celik et al. and Ferreira et al. who concluded that SE adhesives can enhance the bond strength of laser-irradiated dentin than etch-and-rinse adhesives. Moreover, this finding was in agreement with Alkhudhairy and Neiva who concluded that using low-power Cr:YSGG laser improved the bond strength to CID substrate, by enabling the laser energy to interact with the water molecules of the dentin surface leading to their evaporation, which might lead to collagen fibrils shrinkage and dehydration . This could result in decreasing the surface area of the dentin, thus permitting enhanced monomer infiltration and better dentin adhesion. Furthermore, such dehydration could alter the surface features of the dentin by changing it from hydrophilic to hydrophobic substrate with more enhanced bonding . Generally, laser treatment showed no adverse effects on adhesion performance. The variance in outcomes among different the studies in laser-treated tooth surfaces, can be owed to numerous factors such as; the type of applied laser, the parameters of the used laser device including; distance of application, frequency, energy, and application mode as well as the applied adhesive system type . On the other hand, these findings were contradicted by those attained by Al Habdan et al. who reported a significant decrease in dentin bond strength after laser irradiation. This might be to using Er, Cr:YSGG laser with 4.5-W power output, that can be considered as a high value for laser pretreatment causing serious surface alterations, thus, preventing resin penetration and causing a substantial reduction in the final bond strength . Vermelho et al. reported that the application of laser has modified the bonding mechanism of the adhesives to dentin substrate decreasing the bond strength for SE adhesives, through integration of small-size and few particles of dentin developed during laser ablation into SE adhesive layer. Moreover, the tested laser settings had no impact on dentin SBS, regardless aging time and type of adhesive. In addition, Comba et al. reported decreased bond strength values of Er:YAG laser-treated dentin regardless adhesive type. 
Such contradiction could be related to the use of Er:YAG laser instead of low-power Er, Cr:YSGG laser (2W) as in the current study. In this context, a few studies reported that erbium laser application has reduced the overall dentin bond strength to resin composite materials . Shirani et al. concluded that erbium laser irradiation had decreased the overall SBS. They owed such finding to the irradiation distance which significantly affected the SBS values as decreasing the distance increased the adverse effects of laser irradiation. They demonstrated that irradiation distance presented an imperative parameter that it is directly associated with the laser ablation-ability, morphological features of the lased surface and the subsequent achievement of the bonding process . While, Ceballos et al. suggested that the collagen fibrils have been fused together following laser ablation of dentin, that might result in absence of interfibrillar space. Thus, adhesive monomer infiltration within the subsurface inter-tubular dentin would be hindered, lowering the final bond strengths regardless dentin substrate type and adhesive type. The inconsistent outcomes regarding the quality of bond strength to irradiated caries affected dentin substrates could be owed to the impact of the wide variation in the parameters of laser irradiation, such as frequency, duration, output power and distance. Additionally, such contradicting findings could be associated with using different adhesive systems and lack of longitudinal study designs . Consequently, the results of the study revealed that the application of SA after Er, Cr:YSGG laser irradiation for both dentin substrates has the potential to improve the bond strength. This could be explained by the combined positive effect of SA application and Er, Cr:YSGG laser irradiation on bond strength to dentin. As the Er, Cr: YSGG laser has the ability to change the characteristics of dentin surface through removal of smear layer with patent dentinal tubules, while SA can eliminate the oxygen at the dentin surface, so that oxygen absence at the bonding area might enhance the adhesive polymerization and thus, improving the final bond strength . This was in accordance with Rezaei et al. who concluded that antioxidant agents’ application can even improve the bond without bleaching. Although, Er, Cr:YSGG laser and SA have different mode of action and characteristic potent effect on dentin, it seems a promising adhesive strategy to use laser treatment followed by antioxidant application for better dentin bonding. As the combined application of laser irradiation and antioxidant application on bond strength to different dentin substrates is not properly discussed in current literature, therefore, more research is required to validate these results. Consequently, such findings could be related to the fracture mode analysis results that showed adhesive mode of failure with 40% for antioxidant application and 20% without antioxidant application for SoD groups, while adhesive failure was 60% without antioxidant application and 40% with antioxidant application for CID groups. These findings could be related to the thermo-mechanical ablation effect of Er, Cr:YSGG laser irradiation on both dentin substates, indicating that the SBS test was well-conducted, and no undesirable stresses were generated at the resin-dentin interface. These findings were in accordance with Ribeiro et al. 
who demonstrated the absence of cohesive failure and predominance of adhesive failure followed by mixed type of failure. Moreover, Alrahlah showed that adhesive failure comprised the majority of resultant failure types within the tested groups, denoting that the adhesive failure is considered favorable as it avoids any impractical harm or loss of the tested dental substrate. However, it was concluded that the debonded specimens after SBS testing showed that CID irradiated with Er, Cr:YSGG laser showed cohesive failures that are commonly related to high bond strength values that might be due to several external factors such as and microporosities within the adhesive layer, anatomy of dentinal tubules, level of tubular occlusion, degree of dentin remineralization and the dentin-binding ability of the adhesive . The current study has some limitations, including the use of the shear bond strength (SBS) method instead of the micro-tensile bond strength (µTBS) method. While µTBS offers better control of regional differences and economic tooth use, it is more technique-sensitive and can be challenging to perform, especially with small specimens. Additionally, µTBS requires specimen trimming, which can lead to tooth cracking and premature failure if not done carefully . The SBS method was chosen for its simplicity and speed, making it popular in research settings. Although SBS may not always detect cohesive failure, it remains the most common method for evaluating new adhesive systems . In this study, the SBS test was conducted using an Instron universal testing machine with a chisel-bladed metallic attachment, providing a more feasible and affordable approach in the research lab. There are other limitations of the study include the use of only universal adhesive system in SE mode and application of laser irradiation at a short duration. Also, employing other dentin substrates such as eroded and sclerosed dentin at different depths would be of more value. Hybrid layer assessment in aged specimens under the same conditions of the current study presents another limitation of this in vitro study. In this context, one can recommend to assess the effect of Er, Cr:YSGG laser irradiation at different parameters and NaOCl among other dentin pretreatments with different antioxidants application, on different cariogenic bacterial strains experimentally grown on dentin surface. Additionally, it would be worthy to assess diverse types of adhesive systems including SE and TE on different dentin substrates, and to evaluate their effect on the interfacial surface morphology and the hybrid layer using scanning electron microscope (SEM) to augment and validate the consequences and to overcome the confines of the present study. Likewise, investigating the durability and longevity of CID and SoD aged specimens bonded to different restorative systems would be of great value for further research. Under the limitations of the current study, it can be concluded that; pretreatment of different dentin substrates using Er, Cr: YSGG laser irradiation followed by antioxidant application has the potential to enhance the bonding quality of both tested dentin substrates. Nevertheless, using NaOCl for dentin pretreatment has significantly compromised the bonding to SoD and CID substrates regardless SA application. Moreover, restored SBS is a far-reach consequence of antioxidants application. SA application can improve the bond strength to different dentin substrates following different pretreatment protocols.
Immunohistochemistry for Immunoglobulin G4 on Paraffin Sections as a Diagnostic Test for Pemphigus
e40407d8-4516-4ed0-b7ae-bd1ee590196e
11761055
Anatomy[mh]
Pemphigus is a group of life-threatening autoimmune bullous diseases that affect the skin and mucous membranes. The main forms of pemphigus diseases are pemphigus vulgaris (PV) and pemphigus foliaceus (PF). PV is the most common variant. Pemphigus most often affects individuals aged 45–65 years, but the disease can also occur in younger people. Control of the disease is achieved with corticosteroids with or without other immunosuppressants and, in recent years, rituximab. An accurate diagnosis is necessary in patients with pemphigus before initiating treatment. The clinical diagnosis of pemphigus should always be confirmed by histopathologic examination and immunofluorescent assay. The histopathologic hallmark in pemphigus is acantholysis and formation of bullae caused by the loss of keratinocyte adhesion. Direct immunofluorescence (DIF) is the gold standard in the diagnosis of pemphigus. DIF has a crucial role in the accurate diagnosis of pemphigus when treatment is started without a proven diagnosis and the clinical and histopathologic findings are not characteristic. Pemphigus is mediated by autoantibodies most often of the immunoglobulin G (IgG) class, subclasses immunoglobulin G1 (IgG1), and immunoglobulin G4 (IgG4), directed against desmosomal adhesion proteins. Several studies investigated the isotype profile of circulating antibodies in sera from patients with pemphigus and in tissue through DIF. IgG4 was found to be pathogenic in patients with endemic PF (fogo selvagem). , Among IgG subclasses, IgG4 was predominant and found in all PV and PF tested sera, followed by IgG1. Elevated levels of IgG4 in PV patients with active disease, and of IgG1 in patients in remission were found. Based on fluorescence intensity in DIF assay, it was concluded that IgG4 is prevailing over IgG1. IgG4 was detected in the sera of 62% of patients with pemphigus, in only 1 of the relatives, and was absent in the controls, but there was no significant difference between IgG4 in active and remissive pemphigus. Other studies did not confirm that IgG4 is the predominant pathogenic antibody in PV. It was found that subclass switching between IgG1 and IgG4 has no significant effect on epitope specificity, antigen binding affinity, or pathogenicity in 3 PV monoclonal antibodies isolated from 3 different patients with PV. A study detected only a weak correlation between antidesmoglein IgG4 levels and PV disease severity. Highest serum concentration of IgE and IgG4 and intercellular IgE deposits was shown in acute onset patients with PV. The tested serum samples of patients with PV in another study did not show any sample positive for IgE. Studies on the detection of autoantibodies in pemphigus on formalin-fixed paraffin-embedded specimens by immunohistochemistry (IHC) are few. The aim of the study was to evaluate IgG4 immunoreactivity on paraffin sections using IHC in patients with pemphigus as a diagnostic test. Fifty formalin-fixed paraffin-embedded specimens from patients with pemphigus were selected. Patients were previously diagnosed by DIF and histopathologic examination. Fifty formalin-fixed paraffin-embedded specimens from 50 patients with bullous pemphigoid, dermatitis herpetiformis, linear IgA dermatosis, and erythema multiforme were used as controls. Biopsies were performed from the edge of the bulla or oral erosion in newly diagnosed patients with active disease without previous treatment. Five-millimeter cutaneous biopsies and 4-mm oral mucosa biopsies were performed. 
All specimens were from the dermatopathology archives of the Section of Dermatopathology of the Department of Dermatology and Venereology where the patients were diagnosed and treated. IHC was performed on 4 μm-thick paraffin sections. The DAKO pre-treatment module Link was used for the pretreatment process of deparaffinization, rehydration, and antigen retrieval. Immunohistochemical examination was performed on the DAKO Autostainer Link 48 platform—an automated system for immunohistochemical staining. For antigen retrieval, the sections were treated with target retrieval solution, pH 9, for 20 minutes at 97°C. The sections were treated with peroxidase blocking reagent for 5 minutes. Anti-IgG4 antibody (Recombinant Anti-IgG4 antibody [EP4420], ab109493, Abcam Plc.) was applied at 1:100 dilution for 60 minutes, followed by horseradish peroxidase for 20 minutes, 3,3′-diaminobenzidine for 10 minutes, and counterstaining with hematoxylin for 20 seconds. Positivity was defined as distinctive, uninterrupted immunoreactivity localized to the intercellular junctions of keratinocytes. A finding that did not meet these criteria was defined as negative. Each patient provided written informed consent before participation. The study was conducted in accordance with the Declaration of Helsinki and was approved by the University Scientific Ethics Committee. Statistical Analysis Categorical variables are given as percentages. Statistical analyses were performed with the data analysis software IBM SPSS Statistics version 26.0. Sensitivity and specificity were calculated. Forty-three (86.0%) of the examined patients had PV and 7 (14.0%) patients had PF. A dermatopathologist and a general pathologist independently assessed the immunoreactivity for IgG4 using the same criteria and achieved 100% agreement for all immunohistochemically stained slides. The pathologists were masked to whether a specimen was a study or a control specimen. Forty-nine (98.0%) patients with pemphigus were immunoreactive for IgG4 (Fig. ). One upper back specimen from a patient with oral pemphigus showed negative immunoreactivity for IgG4. IHC was performed on 50 specimens from 50 controls, of which 43 (86%) were patients with bullous pemphigoid, 2 (4.0%) patients with dermatitis herpetiformis, 3 (6.0%) patients with linear IgA dermatosis, and 2 (4.0%) patients with erythema multiforme. Negative immunoreactivity for IgG4 was found in 45 (90%) controls; in 5 (10%) controls with bullous pemphigoid, IgG4 immunoreactivity positive for pemphigus was established. The sensitivity of IHC for IgG4 was estimated to be 98% and the specificity 90%. In this study, only 1 upper back specimen from a patient with severe oral pemphigus showed a negative result for IgG4. DIF findings from an adjacent upper back specimen from the same patient were positive. In 5 cases of bullous pemphigoid (10% of controls), IHC revealed distinctive, uninterrupted immunoreactivity localized to the intercellular junctions of keratinocytes, and the IHC findings were positive for pemphigus. These cases of bullous pemphigoid were false positive. IHC results for IgG4 should be interpreted depending on the clinical manifestation and histopathologic findings. 
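The sensitivity and specificity reported above follow directly from the counts in the two groups. The short Python sketch below reproduces the arithmetic; the variable names are illustrative only, and the original analysis was performed in SPSS.

```python
# Worked check of the reported diagnostic accuracy of IHC for IgG4.
# Counts taken from the text: 49 of 50 pemphigus specimens were positive
# (1 false negative); 45 of 50 control specimens were negative (5 false
# positives among bullous pemphigoid controls).
true_positives = 49
false_negatives = 1
true_negatives = 45
false_positives = 5

sensitivity = true_positives / (true_positives + false_negatives)  # 49/50
specificity = true_negatives / (true_negatives + false_positives)  # 45/50

print(f"Sensitivity: {sensitivity:.0%}")  # 98%
print(f"Specificity: {specificity:.0%}")  # 90%
```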
The results of this study are consistent with the results of other studies. Several studies on IHC for IgG4 in the diagnosis of pemphigus were found in the literature. Zhang et al performed IHC examination for IgG4 on paraffin sections and found that 9 of 12 PV cases and 4 of 6 PF cases were positive for IgG4. Specimens of 4 normal skin and 32 nonpemphigus vesiculobullous diseases were used as controls, and 1 bullous pemphigoid case showed IgG4 positivity. The established overall sensitivity was 72.2%. It was found that IHC for IgG4 had a higher sensitivity in specimens with active acantholysis or active disease status, and the positive IgG4 immunoreactivity was often concentrated at the acantholytic sites. The authors concluded that these findings were consistent with the concept that IgG4 is a pathogenic antibody associated with active pemphigus. It was suggested that the intercellular staining in the bullous pemphigoid case was not real IgG4 immunoreactivity but resulted from nonspecifically stained exudate permeating into the spongiotic cellular junctions. In IHC examination for IgG4, Al-Shenawy detected IgG4 expression in 28 out of 30 PV cases and in 6 out of 10 PF cases. Eight out of 10 bullous pemphigoid cases showed positive expression for IgG4, and cases of other vesiculobullous diseases and controls of normal skin were negative for IgG4. The sensitivity of IgG4 for pemphigus was 85%. According to data published by Heidarpour et al, 21 of 29 PV specimens and 5 of 6 PF specimens of acantholytic lesions were immunoreactive for IgG4. Higher sensitivity was found in specimens with acantholysis. Twenty-nine out of 35 control samples were negative for IgG4. Six of the 35 control specimens (which comprised 31 specimens of safe margins of basal cell carcinoma and 4 specimens of normal skin) were positive for IgG4. The overall sensitivity and specificity of IHC for IgG4 for the diagnosis of pemphigus were 74.2% and 82.8%, respectively. Garcia-Lechuga et al confirmed the suggestive PV diagnosis in 4 patients examined using the IHC IgG4 technique. Shaji et al found that 16 out of 18 PV specimens and 2 out of 3 PF specimens were immunoreactive for IgG4. Among 27 control specimens (25 of bullous pemphigoid, 1 of epidermolysis bullosa acquisita, and 1 of dermatitis herpetiformis), 1 specimen from a patient with bullous pemphigoid was positive. The sensitivity of IgG4 was 85.7% and the specificity was 96.3%. The results of the above studies are summarized in Table . In this study, all specimens were obtained from patients with active disease. The IgG4 immunoreactivity in pemphigus found in this study was higher than in other studies. The higher immunoreactivity for IgG4 is likely because of the use of specimens of active lesions in newly diagnosed patients with untreated pemphigus. The higher immunoreactivity for IgG4 found in active pemphigus lesions and the negative immunoreactivity for IgG4 in healthy skin of the patient with oral pemphigus support the concept of the pathogenic role of IgG4 in active pemphigus discussed by Zhang et al and Heidarpour et al. This study cannot conclude that IgG4 antibodies are the main pathogenic antibodies in pemphigus, but these antibodies probably have an important role, and IHC detection of IgG4 on paraffin sections can be used as a diagnostic test in pemphigus. A limitation of the study is the absence of IHC for IgG4 of perilesional specimens and comparison of results for active lesions and perilesional tissue. Another limitation is that specimens from patients with Grover's disease, inflammatory diseases such as psoriasis and dermatitis, and healthy skin were not included. 
To the best of our knowledge, this is the largest study comparing results of IHC for IgG4 in pemphigus and other bullous diseases. In conclusion, immunohistochemical examination for IgG4 can be applied for the diagnosis of pemphigus when DIF examination is unavailable. Specimens for IgG4 immunoreactivity should be obtained from active pemphigus lesions. The advantage of this method is that it does not require special equipment, and the histology slides are permanent.
Crisis and Emergency Risk Communication and Emotional Appeals in COVID-19 Public Health Messaging: Quantitative Content Analysis
c3eff081-5c7f-496f-9304-e27c9b2e77c3
11445630
Health Communication[mh]
Background Singapore effectively managed COVID-19, which is evident from the World Health Organization lauding its “all-of-government” approach. This approach entails collaboration among different government agencies. While COVID-19 is no longer a global health emergency, Singapore continues to experience periodic infection waves. During the pandemic, the Singaporean government charted its response to COVID-19 in stages, as detailed in a white paper. Avenues for public health communication in Singapore include government websites and Facebook (Meta Platforms Inc) pages. These websites serve as a one-stop communications channel, and Facebook is one of Singapore’s most widely used social-networking platforms. However, studies on the government’s use of Facebook for public health communication during the pandemic are limited. Singapore’s success in managing the pandemic can be attributed to its small population, concentrated political authority, high political trust, state-supported media, and the 2003 SARS outbreak experience. Despite this, Singapore faced criticism for the high number of COVID-19 cases in dormitories of migrant workers, due to the lack of communication. Studies have shown that media messages can shape public knowledge, attitudes, and preventive behaviors during pandemics in Singapore. It is worthwhile to study Singapore’s public health communication during COVID-19 as it can highlight areas of improvement and offer insights for other countries in future crises. This study had 4 objectives. First, it aimed to characterize the themes of public messages during the COVID-19 pandemic using the crisis and emergency risk communication (CERC) framework. Second, it aimed to examine how these message themes changed across different pandemic phases. Third, it aimed to identify the types of emotional appeals used. Fourth, it aimed to analyze how emotional appeals changed across the COVID-19 phases. CERC Framework CERC is well-suited for evaluating Singapore’s public communication strategies during the COVID-19 pandemic. This is because CERC evolved in stages and involves both risk and crisis communications. CERC consists of 5 stages: precrisis, initial, maintenance, resolution, and evaluation. Communication during the precrisis stage focuses on educating the public about potential adverse events and risks to prepare them for the subsequent stages. 
During the initial stage, communication messages focus on reducing uncertainty, conveying empathy, and imparting a general understanding of the crisis. The maintenance stage addresses misinformation, ongoing risks, and mitigation strategies. The resolution stage involves communicating how the emergency was handled, while the evaluation stage assesses response effectiveness. The CERC framework assumes that crises develop in a linear way. However, due to the variability of diseases, crises may not follow the sequence of the outlined stages. Although CERC suggests 5 stages, the precrisis stage did not apply to COVID-19 because it was not a known disease. The length of each stage may also vary, as a prolonged crisis state may occur. For example, COVID-19 had a prolonged CERC maintenance stage as the virus mutated several times during the pandemic. This has resulted in repeated tightening and easing of COVID-19 measures in Singapore. CERC Themes Drawing on the existing literature, this study categorized the CERC message themes into 4 categories: risk and crisis information, self-efficacy and sense-making, preparations and uncertainty reduction, and advisories and alerts. Risk and crisis information refers to information that educates the public about potential threats. This category consists of a subtheme, pandemic intelligence. It refers to messages containing basic information about the pandemic, including case numbers, to raise awareness of the current situation. The category self-efficacy and sense-making involves messages that help people to understand the situation and reflect their ability to change their behaviors. This category includes 3 subthemes: personal preventive measures and mitigation, social and common responsibility, and inquisitive messaging. Personal preventive measures and mitigation refers to messages about measures or precautions that can be taken to protect the public from COVID-19. Social and common responsibility includes messages on measures or precautions that can be taken at the community level to prevent the spread of COVID-19 or to show care. Inquisitive messaging addresses the public’s questions to better understand the situation. The category preparations and uncertainty reduction includes messages on how to act appropriately during the pandemic. Drawing reference to Malik et al, preparations and uncertainty reduction comprises 4 subthemes: clarification; events, campaigns, and activities; showing gratitude; and reassurance. Clarification refers to messages addressing misunderstandings and untrue claims about the pandemic. Events, campaigns, and activities include messages promoting communication campaigns for awareness, relief, or treatment. Showing gratitude refers to expressing appreciation to those involved in managing the virus, such as frontline workers. Reassurance consists of messages that allay the public’s fears. The category advisories and alerts refers to messages that provide crucial warnings and specific advice about diseases. There are 2 subthemes: risk groups and general advisories and vigilance. The subtheme risk groups refers to messages targeting susceptible groups such as people with preexisting conditions and older adults who are at greater risk of contracting COVID-19. Messages on general advisories and vigilance include information on what to do in certain situations, such as returning to the workplace. COVID-19 Phases and Social Media Use in Singapore The Singapore government segmented the COVID-19 pandemic into 4 phases: early days of fog, fighting a pandemic, rocky transition, and learning to live with COVID-19, which correspond to the CERC stages. However, empirical investigation is needed to examine whether the message themes were conveyed appropriately across these stages, especially on social media. The CERC framework has been used to evaluate public health communications on social media such as Facebook. Vijaykumar et al found that information disseminated by Singapore-based public health institutions on Facebook was similar in content but differed in focus. The Ministry of Health (MOH) focused on situational updates and the National Environment Agency (NEA) elaborated on preventive measures. However, the study only focused on public communication by these 2 agencies. To gain a broader understanding of crisis communications in Singapore, this study examined public communication by multiple government agencies in Singapore. 
Hence, we ask the following research questions (RQs): RQ1: To what extent are the CERC message themes present in Singapore’s online public health messaging during the COVID-19 pandemic? RQ2: How do the CERC message themes change across different phases during the COVID-19 pandemic? While CERC is extensively studied, there is limited research linking it with emotional appeals, a gap scholars find crucial to address. Meadows et al argued that investigating the emotional tones of the public during different outbreak phases aids in formulating effective public health messages. This is echoed by Xie et al, who found that emotional appeals effectively engaged audiences. In addition to analyzing CERC message themes, this study also aimed to examine the use of emotional appeals in public health communication during COVID-19. Emotional Appeals Emotional appeals can persuade people to perform an intended behavior by evoking specific emotions. They are widely used in health communications; each type elicits varying responses. For example, people are divided on humor appeals; some think they undermine the seriousness of the subject, while others find them useful. The choice of emotional appeals depends on the context and the target audience. During the COVID-19 pandemic, key emotional appeals included hope, humor, fear, anger, guilt, and nurturance. Hope appeals emphasize efficacy and can be empowering when paired with actionable advice. During health crises, transparent communication about uncertainties and hopeful messages can enhance support for the measures implemented. The World Health Organization recommends using hope appeals to combat pandemic fatigue. Hope appeals are an effective communication strategy across different cultures. In collectivist countries such as Singapore, hope appeals can focus on emerging stronger from COVID-19 as a community. Humor appeals use techniques such as clownish humor, irony, and satire, aimed at reducing negative emotions and promoting positivity. However, they are also noted for potentially reducing social responsibility and perceived crisis risk. Humor appeals should be used tactfully, especially during critical phases where increased perceived risk and social responsibility are crucial. Fear appeals are the most widely studied emotional appeals. A message with fear appeals induces fear when a situation is seen as threatening to one’s physical or mental health and is perceived as uncontrollable. It evokes fear about the harm that will befall the audience if they do not adopt the recommended behavior. The arousal triggered would create a desire to avoid the perceived threat and to adopt the suggested behavior, such as mask wearing and vaccination. Upon encountering the message, the audience would evaluate the severity and susceptibility of the threat, and their ability to overcome the threat, and subsequently take the recommended action. Anger appeals motivate people to carry out actions requiring more effort and commitment. The anger activism model suggests that when coupled with a sense of efficacy, a person made to feel anger would feel motivated to perform a behavior. Anger was one of the least used appeals in organizational YouTube videos during COVID-19. Guilt appeals consist of 2 components—material to evoke guilt and an action to reduce guilt. The material can highlight discrepancies between the audience’s standards and their behavior, which could effectively influence health-related attitudes. 
However, excessive guilt can be counterproductive and less persuasive, as shown in the study by Matkovic et al, where guilt appeals failed to influence handwashing intention during the pandemic. Nurturance appeals are defined as appeals that evoke a sense of caretaking, which effectively targets parents. Nurturance appeals were the most dominant emotional appeal in advertising materials using COVID-19 as a theme. Given the dynamic nature of a crisis, it is important to use suitable emotional appeals at appropriate times for effective management of the situation. Only a few studies have focused on how emotional appeals were used in communication messages during the COVID-19 pandemic (eg, a study by Mello et al). Hence, we asked the following RQs: RQ3: What are the types of emotional appeals used in Singapore’s online public health messaging during the COVID-19 pandemic? RQ4: How does the use of emotional appeals in Singapore’s online public health messaging change across different CERC phases during the COVID-19 pandemic? 
Overview To answer our research questions, we conducted a quantitative content analysis of public Facebook posts and publicly accessible website articles from key Singapore government institutions involved in public health communication during the COVID-19 pandemic. Specifically, we compiled and analyzed content from Gov.sg, representing the Singapore government, as well as institutions such as the MOH, the Ministry of Sustainability and the Environment, the NEA, and the Health Promotion Board. Ethical Considerations Before commencing data collection for content analysis, we sought approval from the Nanyang Technological University’s Institutional Review Board (IRB-2022-725) in exempt category 4. This category pertained to secondary research using existing or publicly accessible data sets such as those found on social media. The exemption criteria included sources of individually identifiable information that were already in existence or that were publicly available. Obtaining IRB approval ensured that the research adhered to ethical standards, protecting the privacy and rights of individuals whose data were being analyzed. This step was crucial in maintaining the integrity and ethical compliance of the research project. Data Collection and Sampling Upon receiving IRB approval, we used a Python script to crawl Facebook posts containing specified keywords related to COVID-19 from January 1, 2020, to September 30, 2022. 
Concurrently, we manually compiled relevant website articles from the same timeframe through keyword searches on the institutions’ websites. These keywords included “2019-nCoV,” “SARS-CoV-2,” “Sars-CoV-2,” “Wuhan Coronavirus,” “Wuhan coronavirus,” “wuhan coronavirus,” “Wuhan virus,” “wuhan virus,” “Wuhan Virus,” “Covid-19,” “covid-19,” “novel coronavirus,” “COVID,” “Covid,” and “covid.” Articles and posts that were not related to public health communication about COVID-19 (such as Facebook posts and website articles that solely focused on situational updates such as the number of cases and clusters, call-outs to subscribe for updates, mentions of COVID-19 as a time frame where other activities or programs were the major topics, posts that did not focus on COVID-19, speeches by public figures, and press releases) were excluded. This initial screening yielded a total of 1114 Facebook posts and 85 relevant website articles. The data were then randomly sampled with a confidence level of 99% and a 3% margin of error, resulting in the final 696 Facebook posts and 83 website articles selected for detailed analysis. Codebook and Coding Scheme We developed the codebook on the basis of the CERC message themes adapted from previous literature. These themes encompassed (1) pandemic intelligence, (2) personal preventive measures and mitigation, (3) social and common responsibility, (4) inquisitive messaging, (5) clarification, (6) events, campaigns, and activities, (7) request for contributions, (8) showing gratitude, (9) reassurance, (10) risk groups, and (11) general advisories and vigilance. In addition, 6 emotional appeals adapted from previous studies were included in the codebook. These emotional appeals included (1) fear appeals, (2) guilt appeals, (3) anger appeals, (4) hope appeals, (5) humor appeals, and (6) nurturance appeals. Each Facebook post, including all text and visual elements, and everything visible on the webpages were coded as a single unit of analysis. Intercoder Reliability We recruited 3 coders to code the posts and articles. Before conducting the actual coding, the coders undertook 2 rounds of training, practice coding sessions, intercoder reliability testing, and discussions to refine the codebook. During practice sessions, coders coded the same units of analysis to ensure a common understanding of the codebook. The units of analysis (n=60) for the training and practice sessions comprised materials that had not been sampled. After achieving consensus, the coders coded 10% of the data, and intercoder reliability was tested. The process was repeated until we achieved an average Krippendorff α value of 0.78, ranging from 0.70 to 1.00. As this exceeded the 0.70 standard established in the literature, the intercoder reliability was deemed acceptable. Subsequently, the data were split equally and coded by the coders. Statistical Analyses To answer RQ1 and RQ3, a series of descriptive statistics were conducted using SPSS (version 29; IBM Corp). For RQ2 and RQ4, chi-square tests were performed to examine the relationships among CERC themes, emotional appeals, and COVID-19 phases. Notably, 24 website articles lacking publication dates were excluded from the chi-square tests as we could not classify them into any COVID-19 phases. 
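The sampling step above (99% confidence level, 3% margin of error) is consistent with the standard finite-population sample-size formula. The sketch below is a minimal illustration of that formula, assuming p=0.5 and a z-value of 2.576; the authors do not state which calculator or rounding convention they used, so the results may differ from the reported counts by a unit or two.

```python
# Illustrative sample-size calculation with a finite-population correction,
# assuming the conventional formula n0 = z^2 * p(1-p) / e^2 with p = 0.5.
# Exact rounding may differ slightly from the reported 696 posts and 83 articles.
import math

def required_sample(population: int, confidence_z: float = 2.576, margin: float = 0.03) -> int:
    n0 = (confidence_z ** 2) * 0.25 / (margin ** 2)  # infinite-population estimate
    n = n0 / (1 + (n0 - 1) / population)             # finite-population correction
    return math.ceil(n)

print(required_sample(1114))  # ~695 Facebook posts (reported: 696)
print(required_sample(85))    # ~82 website articles (reported: 83)
```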
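For the intercoder reliability check described above, Krippendorff α can be computed with the third-party krippendorff Python package. The sketch below is purely illustrative: the ratings are invented and only show the form of the call for one binary (theme present/absent) code; they are not the study's data.

```python
# Minimal sketch of a Krippendorff's alpha check for one binary theme
# (1 = theme present, 0 = absent), using the third-party "krippendorff"
# package (pip install krippendorff). The ratings below are invented and
# serve only to illustrate the call.
import krippendorff
import numpy as np

# Rows = coders, columns = units of analysis; np.nan marks units a coder skipped.
ratings = np.array([
    [1, 0, 1, 1, 0, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0, 1],
    [1, 0, 1, 1, np.nan, 1, 0, 0],
])

alpha = krippendorff.alpha(reliability_data=ratings, level_of_measurement="nominal")
print(f"Krippendorff's alpha: {alpha:.2f}")  # acceptable if >= 0.70
```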
Our sample showed that most of the messages about COVID-19 were communicated by Gov.sg (394/779, 50.6%), followed by the MOH (261/779, 33.5%), NEA (90/779, 11.5%), Ministry of Sustainability and the Environment (18/779, 2.3%), and Health Promotion Board (16/779, 2.1%; ). RQ1 asked about the CERC message themes used by the Singaporean government during the COVID-19 pandemic. Our sample showed that most of the messages disseminated during the pandemic were about personal preventive measures and mitigation (522/779, 67%), followed by general advisories and vigilance (445/779, 57.1%); pandemic intelligence (266/779, 34.1%); social and common responsibility (131/779, 16.8%); risk groups (118/779, 15.1%); and events, campaigns, and activities (105/779, 13.5%). A small number of messages involved showing gratitude (54/779, 6.9%), inquisitive messaging (31/779, 4%), clarification (31/779, 4%), and reassurance (31/779, 4%). Request for contributions (5/779, 0.6%) was communicated the least. RQ2 asked how the CERC message themes changed across different phases during the COVID-19 pandemic. As shown in , the communication message themes changed across the COVID-19 phases. Chi-square tests revealed substantial changes in message themes across the phases, including pandemic intelligence (χ²₃=18.1; P<.001). Specifically, messages on pandemic intelligence were more frequently posted during the maintenance stages—fighting a pandemic and rocky transition—compared with other phases ( and ). Similarly, the results showed that message themes such as personal preventive measures and mitigation (χ²₃=29.1; P<.001); events, campaigns, and activities (χ²₃=27.9; P<.001); and general advisories and vigilance (χ²₃=15.5; P<.001) changed significantly across different COVID-19 phases. These message themes were frequently used in Singapore’s online public health messaging during the fighting a pandemic phase and rocky transition phase (ie, the maintenance stage). Chi-square tests showed that message themes on social and common responsibility (χ²₃=29.9; P<.001) and showing gratitude (χ²₃=21.0; P<.001) changed across different COVID-19 phases. Messages on social and common responsibility were frequently communicated to the public during the fighting a pandemic period (ie, the maintenance stage), while messages that focused on expressing gratitude were often communicated during the early days of fog (ie, the initial stage) and the fighting a pandemic period (ie, the maintenance stage). The message theme on risk groups (χ²₃=17.7; P<.001) also changed across different COVID-19 phases; messages about risk groups were frequently mentioned during the rocky transition period (ie, the maintenance stage). 
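Each of the chi-square statistics above tests whether the frequency of one message theme is independent of the 4 pandemic phases, giving 3 degrees of freedom. The sketch below shows such a test on a hypothetical 2×4 contingency table (theme present or absent, by phase) using scipy rather than SPSS; the counts are invented for illustration and are not the study's data.

```python
# Illustrative chi-square test of independence between one CERC theme and
# the 4 COVID-19 phases, reported in the text as chi2(3) statistics with P values.
# The contingency table (rows: theme present/absent; columns: phases) is made up.
from scipy.stats import chi2_contingency

observed = [
    [20, 180, 150, 90],   # posts containing the theme, per phase
    [60, 170, 120, 110],  # posts without the theme, per phase
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2({dof}) = {chi2:.1f}, P = {p:.3f}")  # dof = (2-1)*(4-1) = 3
```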
RQ3 asked about the types of emotional appeals used in the messages communicated by the Singaporean government to the public during the COVID-19 pandemic. Our data showed that hope (37/97, 38%) and humor (36/97, 37%) appeals were most frequently used in the communication messages during the COVID-19 pandemic, followed by nurturance appeals (17/97, 18%). Anger appeals (4/97, 4%), fear appeals (2/97, 2%), and guilt appeals (1/97, 1%) were used in the messaging strategies with a very low frequency. RQ4 asked how the use of emotional appeals in messages communicated by the Singaporean government changed across different phases of the COVID-19 pandemic. Chi-square tests showed that emotional appeals—fear, anger, humor, and nurturance appeals—changed across phases. Messages containing fear appeals were only disseminated during the learning to live with COVID-19 period (χ²₃=17.4; P<.001). Messages containing anger appeals were used during the fighting a pandemic period and the learning to live with COVID-19 period (χ²₃=8.4; P=.04). Humor appeals were used across all the phases of COVID-19 at different levels of frequency (χ²₃=8.3; P=.04). Messages containing nurturance appeals were mostly communicated to the public during the learning to live with COVID-19 period (χ²₃=49.8; P<.001). Principal Findings This study examined public health communication strategies in Singapore during the COVID-19 pandemic by applying the CERC framework and emotional appeals. We found that the communication strategies used by the Singaporean public health institutions are aligned with the CERC framework. However, our analysis suggested that CERC message themes, such as inquisitive messaging and clarification, can be conveyed more frequently, particularly at the earliest stage of the crisis. This is in line with CERC recommendations; it also helps in verifying the abundance of information available when there is an infodemic. The COVID-19 phases in Singapore outlined by the government are also aligned with the CERC stages. We found that different emotional appeals were used at various COVID-19 phases in differing situations, which is evident in how nurturance appeals were used to encourage child vaccination, aligned with literature showing that nurturance appeals can effectively target parents. Despite this, certain emotional appeals can be used more frequently at various COVID-19 phases. We observed that Singapore’s communication strategy is aligned with the frameworks of CERC and emotional appeals, with a few areas for improvement as discussed below. Consistent with the study by Malik et al, the findings of this study revealed that Singapore-based public health institutions’ communication themes focused more on personal preventive measures and mitigation as well as general advisories and vigilance. For example, tele-befriending and telecounseling services, such as the Seniors Helpline, were established to help older citizens who faced mental distress during the lockdown period. Overall, the Singapore government effectively communicated the message themes recommended by the CERC framework. This is evident from how the framework recommends informing the public about what they can do to protect themselves, the risks of the disease, and the actions that the public health institutions are taking to manage the situation. 
Meanwhile, the request for contributions theme was communicated the least, likely because the Singapore-based public health agencies had sufficient resources to tide over the pandemic. To protect individuals and businesses in the country, the Singapore government had issued multiple budgets and grants since the onset of COVID-19. These monetary payouts included one-off as well as recurring cash grants for individuals whose livelihoods were affected by the pandemic. Assistance was also offered to lower-income households. Examples of this include the COVID-19 Recovery Grant, which ensured that citizens of Singapore or permanent residents could receive up to SG $700 (US $535) for 3 months if they faced an income loss of at least 50%. The grants were successful in reducing inequality in Singapore. A shortcoming of the public health institutions’ communication strategies was that messages on clarification were communicated less frequently. This was in line with the existing literature, which shows that health care organizations may have insufficient posts addressing misinformation. While steps were taken to clarify misinformation and address the public’s questions, there could have been more such messaging, as COVID-19 was also accompanied by an infodemic. Infodemics occur when a large amount of information is rampant, including information that might be inaccurate or confusing. Aligned with Reynolds and Seeger’s argument that communication during the initial phase should aim to reduce uncertainty, the Singapore-based public health institutions can enhance messaging on clarification and inquisitive messaging at the earliest stage of the crisis to prevent outrage and confusion in times of emergency. This is especially important considering that the health institutions would be communicating new information, in the form of pandemic intelligence and general advisories and vigilance, which might lead to increased uncertainty. Separately, the frequency of reassurance messaging can be increased, with the CERC framework encouraging such messaging to be conveyed during the initial and maintenance stages. This can help to assure the public that the health institutions are handling the situation and managing the public’s emotions in times of uncertainty. We found that the communication message themes used by the public health institutions changed across different phases of COVID-19 in Singapore. This finding supported the CERC framework, which suggests that different message themes should be communicated to the public at different stages of a pandemic. For example, we observed that messages on pandemic intelligence were communicated less frequently at the initial stage (ie, early days of fog: January 2020 to March 2020) of the COVID-19 pandemic; during this time, there was limited knowledge about the disease. As COVID-19 test kits became available, the Singaporean government could trace the number of cases on a daily basis and better understand the spread of the virus. This enabled them to learn and develop mitigation strategies to control the disease. Hence, there was an increased focus on communicating messages on pandemic intelligence (eg, messages on the kick-off of COVID-19 vaccination) at the maintenance stage (ie, fighting a pandemic: April 2020 to April 2021; rocky transition: May 2021 to November 2021) compared with other stages. 
Similarly, as scientists gradually gained more information about the virus, personal preventive measures and mitigation strategies were implemented by the public health institutions and more frequently communicated to the public at the maintenance stage (ie, fighting a pandemic: April 2020 to April 2021; rocky transition: May 2021 to November 2021). This is in line with CERC’s recommendations to provide more explanations about preventive measures and mitigation strategies during the maintenance stage. Our results showed that positive emotional appeals (eg, hope and humor appeals) were more frequently used in COVID-19 communication strategies. This is in line with the study by Xie et al, which found that positive emotions, such as hope, were commonly used in videos on COVID-19. They also posited that positive emotions can be beneficial to public engagement at the start of a pandemic to balance out the public’s negative emotions. Hence, Singapore-based public health institutions may have taken this approach to neutralize the public’s uncertainty. While other studies acknowledge that positive emotional appeals should be leveraged, they also suggested that negative emotional appeals be used, as both types of messages can engage the public in taking up preventive behaviors. Positive emotional appeals, such as humor appeals, if overused or applied at inopportune times, can backfire, possibly lowering perceived risk and social responsibility; this may also result in the public not internalizing the intended message or not taking it seriously. In addition, emotional appeals have different effectiveness for different demographics. For example, when compared with younger populations, older populations prefer emotional appeals that avoid negative emotional outcomes. Hence, health institutions can consider integrating a mix of emotional appeals for more effective messaging in future public health crises or pandemics. This study found that the emotional appeals used varied with time, with their use being context-specific, depending on the situation and state of the disease. For example, nurturance appeals were not used at the early stage of COVID-19 communication but were frequently used during the period of learning to live with COVID-19. This coincided with the first shipment of pediatric vaccine doses during the third week of December 2021, when the government started encouraging parents to bring their children for vaccination. Humor appeals were used with different frequencies across the stages, which could be due to the fluctuating severity of the crisis. Our study revealed that humor appeals were used in less-pressing messages, such as those encouraging the public to take up preventive behaviors, and in ways that were culturally appropriate during the stressful pandemic. For example, a sitcom character most Singapore residents are familiar with, Phua Chu Kang, was used in COVID-19 campaign videos that dealt with responsible behavior during the pandemic and, later, to boost the local vaccination drive. While humor appeals were used in the communication messages across different stages of the COVID-19 pandemic in Singapore, it is recommended that other countries use the same strategy tactfully. This is because there are many factors, such as relevance and timeliness, that could influence the effectiveness of humor appeals. Hence, humor appeals need to be applied with good judgment to avoid unintended outcomes. 
By contrast, fear and guilt appeals were less frequently applied in communication messages during the COVID-19 pandemic in Singapore. This demonstrates the Singapore-based health institutions’ careful use of negative emotional appeals in a tense pandemic situation where most people were confined at home during the “circuit breaker” period. Such negative appeals could lead to higher mental stress and compromise social cohesion if overused. This also explains why fear appeals were used in the later phase of the pandemic (ie, during the “learning to live with COVID-19” phase), when the situation was more relaxed and most management measures had been eased. Hence, public health authorities should consider the political and cultural landscape as well as the appropriate junctures when applying emotional appeals in their communication strategies in the future. Implications and Limitations Theoretically, this study contributes to the existing literature on both the CERC model and emotional appeals. Apart from exploring how the CERC model and emotional appeals were applied in Singapore’s public health communication, this study is one of the few examining the relationship between CERC stages and the use of emotional appeals, especially in the context of COVID-19. This study provides insight on how to use a balanced mix of communication strategies for effective public health communications. The practical implication of this study is twofold. First, in the local context, the findings of this study could inform Singaporean public health practitioners in developing more comprehensive messages during an emerging health crisis. Understanding how CERC message themes and emotional appeals were used in the public communication strategies during the COVID-19 pandemic could help the relevant authorities identify their strengths and shortcomings. For example, our finding on the lack of clarification messages is a pointer for public communication during the pandemic, especially during the period when misinformation about COVID-19 vaccination for children aged 5 to 11 years in Singapore was widespread. Consequently, the local health authorities can learn from our findings to be better equipped to formulate communication strategies for handling unpredictable and emerging health pandemics in the future. Second, for other nations, especially those with high population density, the health authorities can emulate Singapore’s communication strategies during the COVID-19 pandemic to structure their communication strategies during a health crisis. In particular, Singapore’s “all-of-government” approach, which involves the collaboration of various government agencies in communicating key messages during crises, is a useful communication strategy. Drawing from Singapore’s approach, other countries could chart their responses in stages during a crisis and formulate timely public health messaging by incorporating CERC message themes together with the appropriate emotional appeals. However, as this study considers CERC message themes and phases and emotional appeals in the context of Singapore, the approach should be adapted with care—given the differences in local governance and culture of each country—because the messages may be received differently, thus affecting communication strategies. The “all-of-government” approach may also need to be tailored as a result. This study has several limitations. 
First, this study did not collate Facebook posts and website articles from all the public health institutions and only focused on those that provided pressing information about COVID-19 that was applicable to all members of the public. We did not analyze content from government institutions with more targeted messaging because of the large volume of content for analysis. We also did not analyze other media sources, such as television, radio, newspapers, online news, and other social media content beyond Facebook, because of cross-posting of content. As this study might not provide a complete picture of COVID-19 messaging in Singapore, future research should examine social media posts by various government institutions. Second, website articles without publication dates were excluded from the analyses for RQ2 and RQ4, as we were unable to categorize the data into any of the COVID-19 phases in Singapore. Third, we did not analyze social media responses (ie, likes, shares, and comments) because such information was unavailable for website articles. Future research could examine social media responses for a greater understanding of CERC themes and emotional appeals in the context of COVID-19. Fourth, the findings of this study might not be generalizable to countries that are very different from Singapore because of the country’s specific sociopolitical traits, such as its high population density and strong central government. Nonetheless, given its exemplary management of COVID-19, it is worth documenting Singapore’s practice to offer useful insights into future pandemic management. While other countries can learn from Singapore’s approach, there may be a need to tailor the communication strategies according to their characteristics. Fifth, this study did not specifically focus on messages containing severity and susceptibility because neither theme was encompassed in the CERC model used in this study. Given that severity and susceptibility are important aspects of risk perception, future research should examine these message themes in relation to the CERC model. In addition, this study did not examine the extent to which messaging conveyed acute risks from COVID-19 (eg, hospitalization and death) and chronic risks from COVID-19 (eg, postacute sequelae of COVID-19). Further studies should be conducted to delve into these differences, as they may have impacted public willingness to engage in prevention and mitigation behaviors. Finally, while this study examined CERC themes and emotional appeals used across CERC phases, we did not examine the interaction between CERC themes and emotional appeals. This is a possible area for future studies. Conclusion This study examined public health messaging during the COVID-19 pandemic in Singapore. The public health authorities in Singapore took a strategic and systematic approach to public health communication, coupled with the use of emotional appeals, to encourage the public to engage in protective behaviors.
This is in line with CERC recommendations; it also helps in verifying the abundance of information available when there is an infodemic. The COVID-19 phases in Singapore outlined by the government are also aligned with the CERC stages. We found that different emotional appeals were used at various COVID-19 phases in differing situations, which is evident in how nurturance appeals were used to encourage child vaccination, aligned with literature showing that nurturance appeals can effectively target parents. Despite this, certain emotional appeals can be used more frequently at various COVID-19 phases.
Performance of generative pre-trained transformers (GPTs) in Certification Examination of the College of Family Physicians of Canada
9b3397ed-4755-4d81-ab79-b3386383f6a1
11138270
Family Medicine[mh]
Prior to this study, there was an understanding of the general capabilities of artificial intelligence (AI) models like GPTs in various applications and some exams. However, ChatGPT is not specifically designed for medical purposes, and there were no specific insights into its performance in open-ended, complex medical examinations like the Certification Examination of the College of Family Physicians of Canada (CFPC). The need for this study stemmed from the growing integration of AI in medical education and the potential of large language models such as GPTs in preparing for this complex exam. This study demonstrates that the latest iteration of ChatGPT, particularly GPT-4, can accurately respond to a significant portion of CFPC examination questions. It reveals that GPT-4 notably outperforms its predecessor, GPT-3.5, in both the accuracy and efficiency of responses. Moreover, the study indicates that the timing and conditions under which ChatGPT is queried, along with the regeneration of answers and the strategic use of prompts for debriefing, might not significantly impact the accuracy and consistency of the responses. The findings from this study could influence future research directions, focusing on incorporating advanced AI models such as GPTs in medical education and examination preparation. It suggests a new, innovative method for medical students and professionals to prepare for examinations. Policy-wise, it could open discussions on the role of AI in formal medical education and certification processes, potentially leading to the integration of AI as a standard tool in medical learning and assessment. ChatGPT, released by OpenAI (San Francisco, California, USA) in November 2022, is an advanced large language model (LLM) that generates human-like conversational responses to text inquiries. ChatGPT, by default, uses the generative pre-trained transformer (GPT) 3.5 model, which is specifically designed for conversational applications and is the freely accessible version. In contrast, ChatGPT Plus uses GPT-4.0, which is claimed to be a more accurate and efficient tool with improved and safer responses to complex problems. Several potential implications have been described for ChatGPT in medical education. These include the creation of clinical vignettes to help with the training and evaluation of healthcare professionals; answering specific questions related to various medical encounters, including diagnoses or treatments; generating exercises and quizzes for teaching purposes; generating lists of differential diagnoses; and facilitating self-directed learning by creating helpful mnemonics. However, while AI-based chatbots offer valuable contributions to medical learning, ethical concerns exist about their use in education and research. For instance, data privacy and security are essential considerations when employing chatbots. ChatGPT has demonstrated promising outcomes in reputable medical examinations, suggesting its potential utility in medical exam preparation. In an official multiple-choice progress test, GPT-3.5's performance was comparable to that of family medicine residents from the University of Toronto, while GPT-4 outperformed both groups. Moreover, ChatGPT has also been used to answer the different steps of the United States Medical Licensing Examination (USMLE), the Membership of the Royal College of General Practitioners Applied Knowledge Test (AKT), and ophthalmology, neurology and radiology specialty exams.
ChatGPT’s performance has been acceptable not only in English-based medical exams but also in tests conducted in other languages; for instance, the Japanese medical licensing examination, the Chinese National Medical Licensing Examination (NMLE) and the Iranian Medical Residency Examination. The Certification Examination in Family Medicine conducted by the College of Family Physicians of Canada (CFPC) is a comprehensive assessment of broad clinical knowledge in the field of family medicine in Canada. This exam consists of the oral component, the simulated office oral exam, and the written component, consisting of short-answer management problems (SAMPs). Typically, SAMPs include around 40 clinical scenarios, with two to seven questions for each scenario. The rapid expansion and widespread accessibility of LLM-based AIs have increased their use in medical education and medical exam preparation. Nevertheless, ChatGPT is not specifically designed for medical purposes and may not be accurate in this domain. Therefore, it is unclear if it could be employed to help candidates find potential answers to SAMP questions for CFPC exam preparation. Huang et al compared the performance of GPT-3.5 and GPT-4 with that of family medicine residents at the University of Toronto, employing an official multiple-choice medical knowledge test sourced from their university, designed for preparation for the SAMPs exam. However, to our knowledge, no studies have assessed LLMs’ capacity to assist candidates in preparing for the open-ended questions. Furthermore, it remains uncertain whether factors such as questioning ChatGPT at different times, regenerating answers or employing (different) prompts for debriefing could influence the accuracy of responses. Therefore, we conducted this study to assess the performance of both GPT-3.5 and GPT-4, in addressing a series of sample open-ended SAMPs questions. Additionally, we examined the consistency and accuracy of ChatGPT and ChatGPT Plus responses in various rounds with different contexts. Dataset We conducted this study using all the questions from a sample set of SAMPs obtained from the official website of the CFPC. This sample set comprises 19 clinical scenarios, accompanied by a total of 77 questions related to these scenarios. Each scenario has between two and seven associated questions designed to simulate the format of the actual computer-based examination. These questions require brief, concise responses and typically, answers should consist of no more than 10 words per line, with each question necessitating 1 to 5 lines of response. The clinical scenarios spanned various domains within family medicine, such as cardiology, neurology and emergency, except for dermatology . Data collection We employed GPT-3.5 (ChatGPT, August 3, Version 2023, OpenAI, San Francisco, California, USA) and GPT-4 (ChatGPT Plus, August 3, version 2023, OpenAI, San Francisco, California, USA) from 8 August to 24 August 2023, to respond to these sample SAMPs-CFPC questions. Initially, we experimented with one or two scenarios and found that while ChatGPT’s answers were highly informative and valuable for learning, they were also textually rich (average of 150 words per answer), while the amount of text that is required in the real test is the short answers. As a result, we introduced a prompt before each run to limit the answers to fewer than 10 words per line to simulate the actual real exam format. 
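The authors queried the models through the ChatGPT web interface, and the exact prompt wording is not reproduced in the paper; purely as an illustration, a comparable length-constrained query could be issued programmatically through the OpenAI Python client as sketched below, where the prompt text, model name, and helper function are hypothetical rather than the authors' procedure.

```python
# Illustrative only: the study used the ChatGPT web interface, not the API.
# Prompt wording, model name, and this helper are hypothetical.
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

SHORT_ANSWER_PROMPT = (
    "Answer the following clinical questions briefly, "
    "using no more than 10 words per line."
)

def ask(scenario: str, question: str, model: str = "gpt-4") -> str:
    """Send one clinical scenario plus one question and return the short answer."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": SHORT_ANSWER_PROMPT},
            {"role": "user", "content": f"{scenario}\n\n{question}"},
        ],
    )
    return response.choices[0].message.content

# Hypothetical usage:
# print(ask("A 58-year-old man presents with exertional chest pain...",
#           "List three differential diagnoses."))
```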
However, in the final round (ie, the fifth round), we included a session without a prompt for comparison purposes. In each instance, we presented ChatGPT with the scenario, followed by its related questions, without repeating the clinical scenario. To eliminate the potential impact of memory retention bias (ie, the tendency of GPT to remember the responses from the previous round of questions and answers), we copied all the responses into a Word document. Subsequently, we completely erased all the conversations of that round from the ChatGPT window before initiating a new session for the subsequent round. summarises the various rounds in which both GPT-3.5 and GPT-4 were used for our study. Scoring and review of responses Two experienced CFPC-certified practicing family physicians independently reviewed and scored the AI-generated responses and explanations. Both reviewers, MM and SS, possess over 2 years of Canadian family medicine practice experience. However, they are International Medical Graduates with over 15 years of extensive professional backgrounds. MM has also a background of over 16 years of experience in medical education and currently serves as a faculty member in the Department of Family Medicine at a Canadian university. First, the reviewers strictly adhered to the answer key provided on the CCFP website for scoring, which we refer to as ‘CFPC Scoring’. Initially, the two reviewing physicians scored the answers independently, blinded to each other. Responses that were entirely incorrect for each line received a score of zero during the evaluation process, while those deemed accurate were assigned a score of one per line. Following their initial evaluations, the two reviewers observed that 71 out of 77 CFPC Score Percentages (92.2%) were identical. Subsequently, after a collaborative discussion, they reached a consensus on the final score for all questions (100%). A fractional scoring system of 0.5 was employed in certain instances, deviating from the binary scale of zero or one for a line of answers. Subsequently, the total score for each question (comprising the total lines of correct answers) was divided by the maximum possible score for each question and then multiplied by 100 to derive the ‘CFPC Score Percentage’. However, the reviewers noted that ChatGPT mainly produced accurate and acceptable answers based on their expertise, although absent in the official answer key. Consequently, they did a second round of scoring. In this second scoring, the reviewers jointly reassessed the responses simultaneously, using the latest version of UpToDate (August 2023), and agreed completely on the ‘Reviewer’s Score’. Additionally, to assess the consistency of the answers between rounds, we used the ‘Percentage of Repeated Answers’ for each question. To compute this percentage, we compared each round with a selected reference round to determine the extent of repetition of the same concepts within the answers for each question. Finally, each question’s difficulty level was evaluated based on the reviewers' judgments. The questions were classified as difficult if they were textually dense and complex questions that required the responder to judiciously weigh multiple clinical indicators while eliminating various potential answers based on the cues provided in the question. Conversely, questions that did not exhibit these characteristics and mostly needed one-word answers were classified as easy. 
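As a worked illustration of the scoring arithmetic described above (correct lines summed, divided by the maximum possible score, and multiplied by 100), the following minimal sketch uses hypothetical per-line marks; the function name and example values are not taken from the study.

```python
# Worked illustration of the scoring arithmetic (hypothetical marks).
def cfpc_score_percentage(line_marks: list[float], max_score: float) -> float:
    """Sum per-line marks (0, 0.5 or 1) and express them as a percentage
    of the maximum possible score for the question."""
    return sum(line_marks) / max_score * 100

# A question with three expected answer lines, earning marks of 1, 0.5 and 0:
print(cfpc_score_percentage([1, 0.5, 0], max_score=3))  # 50.0
```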
Data analysis We conducted our statistical analyses using SPSS V.16.0 software (SPSS Inc, Chicago, Illinois, USA). We presented the results as median values with the (25th and 75th percentiles) for variables that did not follow a normal distribution and reported the mean (SD) only for comparative purposes. Categorical variables were reported as numbers (percentages). We examined differences in the scores assigned to each question by GPT-3.5 and GPT-4 using the Wilcoxon signed-rank non-parametric test. To compare the outcome of repeated measures across five rounds of GPT-4 and GPT-3.5 results, we used the ordinal logistic generalised estimating equation (GEE). The outcome variables were the CFPC score, or Reviewers' Score, categorised as 0, 33.3, 50, 66.67, 75, 80 and 100. The independent variable was the usage of GPT-3.5 vs GPT-4 to answer the questions across the five rounds. We employed an independent working correlation matrix structure in the GEE analysis with link function of cumulative logit. All the reported p values were two-sided, with a significance level of ≤0.05 considered statistically significant. Ethical considerations This study exclusively used and analysed publicly available data and did not involve human participants. Consequently, there was no requirement for approval from the Review Board of McGill University. The authors have no conflicts of interest to disclose. Patient and public involvement Patients and the public were not involved in the design, recruitment, conduct or any other stages of the research process in this study.
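The analyses were run in SPSS and no syntax is reported; as a rough, non-authoritative sketch, the Wilcoxon signed-rank comparison and the ordinal logistic GEE described in the data-analysis paragraph could be approximated in Python as below, assuming a hypothetical long-format table with one row per question, round, and model.

```python
# Rough sketch, not the authors' SPSS analysis. Assumes a hypothetical
# long-format table with columns: question_id, round, model, cfpc_score.
import pandas as pd
from scipy.stats import wilcoxon
import statsmodels.api as sm

df = pd.read_csv("cfpc_scores_long.csv")  # hypothetical file name

# Wilcoxon signed-rank test on paired per-question scores within one round.
round1 = df[df["round"] == 1].sort_values("question_id")
gpt35 = round1.loc[round1["model"] == "GPT-3.5", "cfpc_score"].to_numpy()
gpt4 = round1.loc[round1["model"] == "GPT-4", "cfpc_score"].to_numpy()
print(wilcoxon(gpt35, gpt4))

# Ordinal logistic GEE across all five rounds: score categories as the outcome,
# model version as the predictor, questions as clusters, and an independence
# working correlation structure (mirroring the analysis described above).
endog = df["cfpc_score"].astype("category").cat.codes  # ordered score categories
exog = (df["model"] == "GPT-4").astype(int).to_frame("gpt4")
gee = sm.OrdinalGEE(endog, exog, groups=df["question_id"],
                    cov_struct=sm.cov_struct.Independence())
print(gee.fit().summary())
```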
We evaluated 19 clinical scenarios, each with two to seven pertinent questions. These scenarios included 77 specific questions, generating 165 lines of answers. The possible responses to each question varied in length, ranging from 1 to 5 lines, with a median length of 2 (1, 3). The two reviewers categorised 28 questions (36.4%) as easy, and 49 (63.6%) as difficult. Both reviewers agreed that the answers given by ChatGPT in the fifth round without any prompts were very informative and valuable for education and better understanding. Over five rounds, out of 852 lines of answers, 607 (73.6%) provided by GPT-3.5 and 691 (81%) offered by GPT-4 were deemed correct based on the CFPC answer key. The mean CFPC score percentage for all five rounds was 76.0 for GPT-3.5 and 85.2 for GPT-4. The mean Reviewers' Scores for GPT-3.5 and GPT-4 were 86.1 and 93.4, respectively. The GEE analysis revealed that the likelihood of achieving a higher CFPC score percentage was significantly greater for GPT-4 compared with GPT-3.5, with GPT-4 being 2.31 times more likely to score higher (OR: 2.31; 95% CI: 1.53 to 3.47; p<0.001). Similarly, over five rounds, the Reviewers' Score percentage for responses provided by GPT-4 were found to be significantly higher, being 2.23 times more likely to exceed those of GPT-3.5 (OR: 2.23; 95% CI: 1.22 to 4.06; p=0.009). The results of five distinct rounds using GPT-3.5 and GPT-4 to respond to the sample CFPC questionnaire are presented in . Comparing the results of GPT-3.5 and GPT-4 showed that CFPC scores were significantly higher for GPT-4 as opposed to GPT-3.5 for rounds 1, 3, 4 and 5, and we noted a trend towards an increase in rounds 2 . The right side of the table represents the 'Reviewers' Score Percentage' for GPT-3.5 and GPT-4 answers to each question. Similar to the CFPC Score Percentages, the Reviewers' Score Percentages assigned by GPT-4 tended to be higher in round 5 and were significantly higher in rounds 1, 2, 3 and 4 . GPT-3.5 exhibited consistent repetition of the same concepts in the answers across all five rounds in 31 out of 77 questions (40.3%), whereas GPT-4 repeated the same concepts in 37 out of 77 questions (48.1%). compares GPT-3.5 and GPT-4 regarding the percentage of repeated answers for each question on the left side and the percentage of questions with no change to the CFPC and Reviewers' Score on the two right columns, respectively.
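For readers who want to relate the reported effect sizes to the underlying GEE coefficients, the short back-calculation below applies the standard relationships OR = exp(β) and 95% CI = exp(β ± 1.96 × SE) to the published OR of 2.31 (95% CI 1.53 to 3.47); the derived coefficient and standard error are illustrative reconstructions rather than values reported by the authors.

```python
import math

# Back-calculating the coefficient scale behind the reported odds ratio
# (OR 2.31, 95% CI 1.53 to 3.47); beta and se are illustrative reconstructions.
or_point, ci_low, ci_high = 2.31, 1.53, 3.47
beta = math.log(or_point)                                   # ~0.84 (log-odds)
se = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)    # ~0.21
print(round(beta, 3), round(se, 3))
# Recovering the interval from the reconstructed coefficient:
print(round(math.exp(beta - 1.96 * se), 2),
      round(math.exp(beta + 1.96 * se), 2))                 # ~1.53, ~3.48
```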
When comparing the responses to each question in rounds 1 and 2 (with an approximate 1 week interval, as shown in ), there was no significant change in the 'CFPC Score Percentage' for both GPT-3.5 and GPT-4 (p=0.79 for GPT-3.5 and p=0.26 for GPT-4 respectively, Wilcoxon signed-rank test). Both GPT-3.5 and GPT-4 consistently demonstrated a high percentage of repeated answers for each question, approximately 80% , with mean percentages of 82.0 and 88.7, respectively . However, the percentage of repeated answers was higher for GPT-4 (p=0.025, ). Among the answers that differed between rounds 1 and 2, the CFPC or Reviewers' Scores predominantly remained unchanged for both GPT-3.5 and GPT-4 . In round 4, we excluded the term 'CFPC exam' from 'Prompt 1', which was used in round 1 . The 'CFPC Score Percentage' was significantly higher for round 4 compared with round 1 (p=0.014) for GPT-3.5, but this trend was not significant for GPT-4 (p=0.089). The percentage of repeated answers was found to be higher for GPT-4 than for GPT-3.5 (p=0.002, ). Additionally, the scores remained largely unchanged, particularly for GPT-4 ( , last two columns on the right). Comparing round 5 (without any prompt) and round 1 (with prompt 1) showed no significant difference in 'CFPC Score Percentage' for both GPT-3.5 and GPT-4 (p=0.83 and p=0.72, respectively). However, GPT-4 showed a higher percentage of repeated answers than GPT-3.5 (p<0.001, ). Most of the scores remained unchanged, similar to previous comparisons ( , the two columns on the right side). Lastly, round 3 was a regeneration of responses from round 2. When comparing these two rounds, the 'CFPC Score Percentage' tended to increase for GPT-3.5 and GPT-4 (p=0.058 and p=0.098, respectively), while remaining unchanged for GPT-4. The percentages of repeated answers were not significantly different between GPT-3.5 and GPT-4 . Like other comparisons, most scores remained unchanged between these two rounds ( , the two columns on the right side). presents an illustrative CFPC sample question along with responses generated by GPT-3.5 and GPT-4 across multiple rounds. In this study, we used GPTs to answer the sample CFPC questions, and they responded satisfactorily to our complex sample questions. When the reviewers scored the questions using the fixed answer key provided by the CFPC website, the mean score for all five rounds was 76.0±27.7 for GPT-3.5 and 85.2±23.7 for GPT-4. Additionally, the authors found that most of the answers, although not explicitly stated in the answer key, were reasonable and acceptable, and only about 16% of the lines of answers provided by GPT-3.5 and 7% of the lines of answers provided by GPT-4 were deemed incorrect in the Reviewers' scoring. Although ChatGPT has been used to respond to medical examination questions, only one study has evaluated its efficacy in preparing for the Canadian family medicine exam. In this study, Huang and colleagues demonstrated that GPT-4 significantly outperformed the other test takers, achieving an impressive accuracy rate of 82.4%, whereas GPT-3.5 achieved 57.4% accuracy, and family medicine residents scored 56.9% correctly. In our study, the mean CFPC score across five rounds was 85.2 for GPT-4, which closely resembled their score, while GPT-3.5 scored lower at 76.0. However, it is important to note that Huang and his team's questionnaire comprised multiple-choice questions, differing from the open-ended format of the questions in the SAMPs exam.
Furthermore, their questionnaire was sourced from their university and specifically designed to prepare their family medicine residents for the exam, so it may lack standardisation. In contrast, our study employed a comprehensive and standardised set of questions sourced directly from the CFPC website. These questions were open-ended, mirroring the SAMPs structure, and included official answer keys approved by the CFPC, providing a more accurate representation of the CFPC exam format. Thirunavukarasu and coworkers used GPT-3.5 to answer the AKT, the examination for Membership of the Royal College of General Practitioners in the UK. It achieved a performance level of 60.17%, which was lower than our score and fell short of the 70.45% passing threshold in this primary care examination. Nevertheless, like the University of Toronto study, this study employed a multiple-choice questionnaire and was not specific to a Canadian family medicine exam. Other studies have reported similar scores for GPT-3.5 on various medical examinations at the undergraduate level. Kung and colleagues reported that ChatGPT achieved near-passing accuracy levels of around 60% for Step 1, Step 2 CK and Step 3 of the USMLE. Similarly, Gilson and colleagues observed an accuracy range of 44% to 64.4% for sample USMLE Step 1 and Step 2 questions. ChatGPT's performance on the Chinese NMLE lagged behind that of medical students and was below the passing threshold. Similar to our study, scores were higher when GPT-4 was used instead of GPT-3.5 in other studies. For instance, while GPT-3.5 fell short of the passing criteria for the Japanese medical licensing examination, GPT-4 met the threshold criteria. Nori et al used GPT-4 and observed that it exceeded the USMLE passing score by over 20 points. Finally, GPT-4 accurately answered 81.3% of the questions on the Iranian Medical Residency Examination. The combined analysis of five rounds using the GEE model revealed that the CFPC Score Percentages were significantly higher for GPT-4 than GPT-3.5 (p<0.001). Likewise, on re-evaluating the responses using their medical expertise, the reviewers found that the Reviewers' Score percentages over five rounds were significantly higher for GPT-4 compared with GPT-3.5 (p=0.009). This finding is probably because GPT-4 performs more efficiently on challenging questions involving complex situations. This trend has been shown previously through assessments of ChatGPT (GPT-3.5) and ChatGPT Plus (GPT-4) on various exams, including a sample of multiple-choice progress tests from the University of Toronto, two sets of official practice materials for the USMLE exam from the National Board of Medical Examiners, the Japanese Medical Licensing Examination, the StatPearls ophthalmology Question Bank and the 2022 SCE neurology examination. However, other studies primarily involved multiple-choice questions, were related to the undergraduate level, were conducted in different languages or focused on other specialties. Our study focused on the complex task of open-ended Canadian family medicine questions and demonstrated that GPT-4 can provide more accurate answers to complex Canadian SAMPs exam questions than GPT-3.5 (the free version). In the fifth round of our study, when the AI was not specifically instructed to offer brief responses, it consistently provided informative justifications and reasoning. These responses were highly instructive and aligned well with our educational objectives (see ).
Therefore, our study demonstrated that GPT-3.5 and GPT-4 can be used to suggest answers to complex tasks such as those outlined in this study, making them potentially helpful for CFPC exam preparation. However, using these technologies to learn family medicine and prepare for exams needs further study. Despite several benefits and potential roles of LLMs in medical education and research, they have several pitfalls. These pitfalls include the absence of up-to-date sources of literature (the current versions of ChatGPT were trained on data available up to September 2021), inaccurate data, an inability to distinguish between fake and reliable information, and the generation of incorrect answers known as hallucinations, which are potentially misleading or dangerous in a healthcare context. ChatGPT is still in an experimental phase and is not intended for medical application. Therefore, using ChatGPT in preparation for exams should serve as a prompt to reinforce existing knowledge derived from reliable sources. Responses generated by ChatGPT should undergo rigorous fact-checking by human experts before being considered a primary knowledge resource. Our testing comprised several rounds, including repeating identical prompts at intervals, modifying the prompts by eliminating the reference to 'CFPC exam', regenerating responses and removing prompts to evaluate outcomes. When comparing rounds 1 and 2, which used the same 'Prompt 1' but were separated by an approximately 1 week interval, both GPT-3.5 and GPT-4 demonstrated high consistency and accuracy. This observation suggests that the passage of time does not significantly impact the chatbot's performance. Instead, future improvements may arise through the AI's learning curve and the introduction of newer versions of LLMs trained on updated material, warranting further investigation. Removing the phrase 'CFPC exam' in round 4 led to an unexpected outcome. The accuracy, indicated by the 'CFPC Score Percentage', noticeably increased for GPT-3.5 and showed an upward trend for GPT-4, contrary to our initial hypothesis. We speculated that omitting the exam's name might limit GPT's access to the source questions, potentially reducing scores. However, the observed increase may be accidental or suggest other underlying factors, necessitating further investigation to understand these results. The comparison between rounds 1 and 5 aimed to determine whether prompting influenced responses and resulted in consistently accurate outcomes. The absence of a significant change in the 'CFPC Score Percentage' for both GPT-3.5 and GPT-4 may suggest that prompting did not significantly alter the accuracy of the responses. Also, in most of the questions, the CFPC score remained unchanged (67.5% for GPT-3.5 and 83.1% for GPT-4). This result suggests that running ChatGPT without any prompt could yield detailed, justified responses of similar accuracy, which could be valuable for candidates preparing for the CFPC exam. Finally, the regeneration of responses from round 2 in round 3 was conducted to assess whether response regeneration could enhance accuracy. We removed the output from each round except for the third run, a repetition of the second run, to minimise potential learning curve effects on the GPT's performance. With this approach, the 'CFPC Score Percentage' tended to increase for GPT-3.5, while remaining unchanged for GPT-4. This finding may further emphasise that regenerating responses may improve the results for GPT-3.5 but not GPT-4.
In summary, GPT-4 showed considerable consistency in our comparisons. This consistency was more impressive when the reviewers realised that changes in the answer choices made by GPT did not impact the scores. In most cases, GPT-4 repeated answers more frequently than GPT-3.5 or at least showed a trend of higher repetition. In a related study, Thirunavukarasu et al conducted two independent sessions of the AKT exam using ChatGPT over 10 days and observed consistent performance. Study limitation It is important to acknowledge that there is no established cut-off score for passing the SAMPs part of the CFPC exam. Instead, the minimal passing score is set based on the performance of a reference group of first-time test-takers who graduate from Canadian family medicine residency programmes in each exam. Consequently, whether ChatGPT's current performance would be sufficient to pass the exam remains inconclusive. Additionally, we lack access to the scores of candidates, making it impossible to compare ChatGPT's performance with that of human candidates. Comparing ChatGPT's performance in answering a sample question with that of candidates could potentially reveal whether ChatGPT outperforms or is not inferior to human candidates. It is necessary to emphasise that ChatGPT is not designed to practise family medicine or pass the related exam. Instead, we may propose that it could be used to assist candidates with exam preparation by helping them determine correct responses. A significant component of learning in family medicine involves the interpretation of images, such as ECGs, X-rays and skin conditions—capabilities that text-based models like ChatGPT lack. In our study, we encountered this limitation when one question included an ECG image, which we had to exclude. Interestingly, our two reviewers found that the absence of this image did not impact the accuracy or relevance of ChatGPT's answers to the associated clinical scenario question. In this study, we used GPT-3.5 and GPT-4 from OpenAI, which were trained on data up to September 2021 and were not specialised for medical purposes. It is important to note that other LLMs may use more recent sources of information, potentially yielding different results and warranting further investigation. Furthermore, even within the same OpenAI model version, GPT's performance can be influenced by the repetition of questions and the feedback provided over time, meaning that the performance of ChatGPT may evolve. To avoid the possibility of learning curve effects and memory retention bias impacting the AI's performance, we took the precaution of erasing the results of each round from the ChatGPT window before initiating a new session for the subsequent round. In an actual exam setting, residents typically read the clinical scenario once and then respond to the two to seven related questions, and the scenario is not repeated before each question. We adopted a similar approach and did not reiterate the clinical scenario before each related question. Nevertheless, ChatGPT's responses might differ if the clinical scenario were repeated before each question. Confirming this hypothesis would necessitate further investigation. In this study, we examined a sample of SAMP questions provided by the CFPC, which is very similar to the actual exam. These question sets comprised only 19 clinical scenarios and 77 questions. Expanding the number of questions examined could enhance the study's reliability.
However, it's important to note that many of the available sample questions from other sources on the market may not represent the actual examination, or their answer keys may not be reliable.
Given the high accuracy and consistency of the answers generated by ChatGPT—particularly GPT-4—our study suggests that these GPTs are promising as supplementary learning tools for candidates preparing for the CFPC exam. Future studies need to assess the long-term efficacy and reliability of these models in educational settings, especially in preparing candidates for exams like the CFPC. This would involve tracking performance over multiple years and across various curriculum updates, and studying how the use of these AI-enabled tools influences learning behaviours, including the understanding of complex concepts and critical thinking skills.
Optimization of Glibenclamide Loaded Thermoresponsive SNEDDS Using Design of Experiment Approach: Paving the Way to Enhance Pharmaceutical Applicability
85878693-4082-419c-865b-99e10d3184de
11547575
Pharmacology[mh]
Self-nano-emulsifying drug delivery systems (SNEDDS) have been widely used to boost the oral bioavailability of lipophilic drugs. They consist of a homogeneous blend of surfactant, co-surfactant, and oil that can form a nanoemulsion within the gastrointestinal tract (GIT) following exposure to agitation by peristaltic movement. The dispersed nanoemulsion facilitates the solubilization of drugs and enhances their oral bioavailability. However, liquid SNEDDS suffer from a propensity to leak from soft gelatin capsules, which limits their application as a pharmaceutical dosage form. Consequently, solid SNEDDS have been developed to overcome the limitations of liquid SNEDDS using various technologies, including lyophilization, hot melt extrusion, spray drying, and fluid bed coating. However, the high cost of these technologies, owing to the multistep production processes and expensive instruments, limits their application. Even though adsorption onto porous materials overcomes these limitations, drug trapping and a high total dosage hinder its application. Therefore, an alternative approach is still required to address the limitations of traditional forms of SNEDDS. Herein, a novel thermoresponsive SNEDDS (T-SNEDDS) formulation has been developed in response to this demand. This innovative formulation combines the advantages of the low dosage of liquid SNEDDS with the leak-prevention properties of solid SNEDDS. The incorporation of Poloxamer 188 plays an essential role, enabling the formulation to remain solid at room temperature, prevent leakage during storage, and convert to a liquid state at body temperature, facilitating optimal drug release and absorption. Propylene glycol is incorporated into the SNEDDS formulation as a cosurfactant for Poloxamer 188 and modulates the SNEDDS transition from a solid to a liquid state to achieve this thermoresponsive behavior. To investigate the ability of T-SNEDDS to remain solid at room temperature while retaining the advantage of enhancing drug dissolution, glibenclamide (GBC) was utilized as a model drug. It is widely prescribed by physicians owing to its reported potency and long duration of action. It belongs to the sulfonylureas, which are widely used to treat patients diagnosed with diabetes mellitus (DM), particularly type II. They reduce blood glucose levels following pancreatic beta cell stimulation, which results in a pronounced increment in insulin secretion. Even though GBC has shown promising outcomes in clinical settings during the treatment of diabetes, it exhibits poor oral bioavailability (approximately 45%) due to its low aqueous solubility. Previous studies were conducted to improve the poor aqueous solubility and dissolution associated with GBC. However, none of these studies addressed the limitation of formulation leakage. Therefore, GBC is a good candidate for preparing T-SNEDDS to enhance its poor aqueous dissolution and prevent formulation leakage during storage. To select the optimum T-SNEDDS formulation, Design of Experiments (DoE) software was utilized. This systematic approach simultaneously studies the interaction of multiple factors, which gives a precise indication of the impact of independent factors on measured response factors. Using DoE software, the influence of propylene glycol and Poloxamer 188 concentrations on the liquefying temperature, liquefying time, and GBC solubility of the prepared T-SNEDDS formulations was studied.
This study aims to prepare an optimized T-SNEDDS as a potential alternative to conventional SNEDDS formulations. The optimization process was performed in two stages. First, the oil was selected based on solubility to achieve maximum drug loading, while the surfactant was selected based on an emulsification study to ensure its ability to form nanoemulsion droplets with the selected oil. Second, the influence of propylene glycol and Poloxamer 188 concentrations on the thermoresponsive behavior of T-SNEDDS was investigated using Design-Expert ® software. The suggested optimized formulation was then prepared and subjected to pharmaceutical assessment, including particle size analysis and in vitro dissolution. 2.1. Selection of Oil The solubility of GBC within different types of oils was studied to select the lipid phase during the preparation of T-SNEDDS. The oils were carefully chosen to represent diverse chemical classes: long-chain triglycerides (soybean oil), medium-chain triglycerides (Captex 355), long-chain monoglycerides (Peceol), short-chain monoglycerides (Imwitor 308), and free fatty acids (oleic acid). This strategy aimed to assess the impact of oils with different esterification degrees and chain lengths on their solubilization capacity. The solubility data showed that Captex 355 and soybean oil (triglycerides) have a lower ability to solubilize GBC, with values of 0.15 ± 0.02 and 0.10 ± 0.00 mg/g, respectively. Oleic acid (a free fatty acid) increased GBC solubility to 0.49 ± 0.04 mg/g. Moreover, higher GBC solubility was detected in Peceol and Imwitor 308 (monoglycerides), with values of 0.84 ± 0.06 and 2.44 ± 0.08 mg/g, respectively. Consequently, Imwitor 308 was selected to prepare T-SNEDDS based on its high measured drug solubility. This should enable the incorporation of GBC within the dispersed nanoemulsion inside the gastrointestinal tract and avoid drug precipitation. Furthermore, solubilization of GBC will support high drug bioavailability by maintaining a substantial concentration-gradient driving force between the GIT and the systemic circulation. 2.2. Selection of Surfactant Transmittance measurement was performed to select the optimum surfactant during the preparation of T-SNEDDS. The prepared mixtures (surfactant and oil) were dispersed, and their physical appearance is shown in . In addition, transmittance percentages were measured to give a numerical value for each dispersion and are presented in . The images show that the mixtures containing Tween 85 and Labrasol ALF produced milky dispersion systems, in agreement with their low transmittance values of less than 5%. The dispersions of the mixtures comprising Tween 80 and Tween 20 produced pale white systems with transmittance values of about 60%. Finally, Kolliphor EL enhanced the dispersion of Imwitor 308 oil, and the dispersed system appeared clear with a transmittance value of about 99%. The high transmittance value measured for the Kolliphor EL and Imwitor 308 mixture confirms its dispersion in the nanosized range. 2.3. Selection of Cosurfactant The thermoresponsive polymer Poloxamer 188 was selected as a solidifying agent to prepare the T-SNEDDS formulation. Poloxamer 188 at a concentration of 10% w/w was mixed with Kolliphor EL and Imwitor 308 for one day, and the polymer failed to dissolve. On the other hand, Poloxamer 188 could dissolve in propylene glycol, which could be attributed to the formation of hydrogen bonds between them.
The detected insolubility of Poloxamer 188 in Kolliphor EL and Imwitor 308 could be ascribed to complex molecular interactions. Although Kolliphor EL contains hydroxyl groups, its complex structure sterically hinders the formation of hydrogen bonds with Poloxamer 188. Furthermore, Imwitor 308 has a free hydroxyl group but is considered a lipophilic molecule owing to the presence of the lipophilic fatty acid caprylic acid. A shows that the T-SNEDDS remains solid at room temperature (25 °C), while it transitions to a liquid state when exposed to body temperature (37 °C) ( B). This thermoresponsive property enhances the formulation's stability during storage and facilitates its conversion to a liquid form upon administration. Therefore, propylene glycol was selected as the cosurfactant to solubilize Poloxamer 188 within the T-SNEDDS. 2.4. Effect of Independent Variables on the Responses The measured responses of the prepared T-SNEDDS, including liquefying temperature, liquefying time, and GBC solubility, are shown in . The DoE software was utilized to examine the influence of Poloxamer 188 and propylene glycol concentrations on the measured responses using various mathematical models, including linear, 2FI, quadratic, and cubic. The best-fitting model was selected based on statistical parameters. The selected model showed no significant lack of fit ( p > 0.05), indicating good model adequacy. The selected models for each response, based on ANOVA, are summarized in , while the 3D surface plots for the measured responses are shown in . The impact of Poloxamer 188 and propylene glycol concentrations on the studied responses is discussed individually below. 2.5. Liquefying Temperature The prepared thermoresponsive SNEDDS had liquefying temperatures ranging from 29 to 36.5 °C. Statistical analysis showed that increasing the concentration of propylene glycol resulted in a significant ( p < 0.0001) reduction in the liquefying temperature of the thermoresponsive SNEDDS. On the contrary, increasing the concentration of Poloxamer 188 resulted in a significant ( p < 0.0001) increase in the liquefying temperature. The steepness of the lines ( I) aligns with the statistical analysis and indicates the sensitivity of this response to changes in each factor. In addition, the liquefying temperature of the thermoresponsive SNEDDS can be predicted using Equation (1): Liquefying temperature = 32.95 − 0.19 × Propylene glycol (% w/w) + 0.52 × Poloxamer 188 (% w/w) (1) The thermoresponsive behavior of the prepared formulations can be attributed to the complex interactions between propylene glycol and Poloxamer 188, which is in agreement with a previously reported study. At lower temperatures, the hydroxyl groups of propylene glycol could form hydrogen bonds with the terminal hydroxyl groups or the oxygen atoms in the intra-polyether part of Poloxamer 188. This agrees with previous studies, which showed that Poloxamer forms hydrogen bonds with the hydroxyl groups of polyacrylic acid and carboxymethyl pullulan. Moreover, Poloxamer can create two types of bonding between the polymer units: intermolecular hydrogen bonding and van der Waals forces. This further supports the proposal that intermolecular hydrogen bonds can be formed between the terminal hydroxyl groups of the long hydrocarbon chains. In addition, dipole interactions between Poloxamer units could be generated by the electron-withdrawing oxygen atoms within the polymer chains. The predicted complex crosslinking between propylene glycol and Poloxamer 188 could be the reason for solidification. Conversely, increasing temperature breaks these bonds and forms a solubilized micellar structure; therefore, the T-SNEDDS is converted from a solid to a liquid state. Increasing propylene glycol reduces the liquefying temperature by producing a looser crosslinked structure, which could be ascribed to propylene glycol's small molecular size: increasing its concentration separates Poloxamer 188 units from each other and prevents the formation of complex bridging bonds between polymer units. In contrast, increasing the Poloxamer concentration forms a rigid matrix with a crosslinked solid structure. This aligns with the principle that increased polymer concentration leads to the formation of rigid structures; therefore, more energy is required to break the matrix down and form a soluble micellar structure.
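To make the fitted relationship easier to inspect, the following minimal Python sketch (not part of the original study; the function name and grid points are illustrative only) evaluates Equation (1) across the studied factor ranges (propylene glycol 10–25% w/w, Poloxamer 188 2–10% w/w). The predicted values span roughly 29–36 °C, consistent with the measured range of 29–36.5 °C and with the opposing signs of the two coefficients.

```python
# Illustrative only: evaluate the fitted linear model in Equation (1)
# over the studied factor ranges.

def liquefying_temperature(pg, p188):
    """Predicted liquefying temperature (deg C) from Equation (1)."""
    return 32.95 - 0.19 * pg + 0.52 * p188

for pg in (10.0, 17.5, 25.0):        # propylene glycol, % w/w
    for p188 in (2.0, 6.0, 10.0):    # Poloxamer 188, % w/w
        print(f"PG {pg:4.1f}%  P188 {p188:4.1f}%  ->  "
              f"{liquefying_temperature(pg, p188):5.2f} deg C")
```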
2.6. Liquefying Time The prepared thermoresponsive SNEDDS had liquefying times ranging from 53 to 150 s. Statistical analysis showed that increasing the concentration of propylene glycol produced a statistically insignificant reduction in the liquefying time of the thermoresponsive SNEDDS ( p = 0.1188). In contrast, increasing the concentration of Poloxamer 188 resulted in a significant ( p < 0.0001) increase in the liquefying time. The slope of line (B) ( II) aligns with the statistical analysis and indicates that the liquefying time is sensitive to changes in Poloxamer 188 concentration alone. In addition, the liquefying time of the thermoresponsive SNEDDS can be predicted using Equation (2): Liquefying time = 60.29 − 1.00 × Propylene glycol (% w/w) + 8.46 × Poloxamer 188 (% w/w) (2) The obtained results are consistent with those for the liquefying temperature. The observed significant influence of Poloxamer 188 could result from the formation of a more rigid structure at higher Poloxamer concentrations, and it is consistent with the reported thermoresponsive behavior of poloxamer. On the other hand, the insignificant effect of propylene glycol is attributed to the measuring temperature of 37 °C, which is above the liquefying temperature of all formulations. Therefore, the micellization of poloxamer is mainly driven by its concentration rather than by the propylene glycol concentration. This agrees with a previously reported study by Alexandridis et al., who found that the conversion of poloxamer from the monomeric state to the micellar form is readily reached with increasing temperature. 2.7. GBC Solubility The prepared thermoresponsive SNEDDS had GBC solubilities ranging from 4.97 to 5.54 mg/g. Statistical analysis showed that the propylene glycol concentration had a statistically insignificant effect on the GBC solubility of the thermoresponsive SNEDDS ( p = 0.5792). In contrast, increasing the concentration of Poloxamer 188 resulted in a significant ( p < 0.0001) increase in GBC solubility.
The slope of line (B) ( III) aligns with the statistical analysis and indicates that GBC solubility is sensitive to changes in Poloxamer 188 concentration rather than to propylene glycol concentration. In addition, the GBC solubility of the thermoresponsive SNEDDS can be predicted using Equation (3): GBC solubility = 4.91 + 0.003 × Propylene glycol (% w/w) + 0.06 × Poloxamer 188 (% w/w) (3) The significant positive effect of Poloxamer 188 on GBC solubility demonstrates its effectiveness in enhancing drug solubilization. This effect could be attributed to its amphiphilic nature and its ability to solubilize hydrophobic drugs, which agrees with previously reported studies. However, the lack of a significant effect of propylene glycol on GBC solubility is noteworthy; a clear scientific rationale for this observation has not yet been established, and further investigation is required to address the complexity of bonding within the T-SNEDDS formulation. 2.8. Optimization of Thermoresponsive SNEDDS The optimized T-SNEDDS was chosen based on maximizing the liquefying temperature and GBC solubility while minimizing the liquefying time. The optimization suggested a thermoresponsive SNEDDS formulation comprising 13.7 and 7.9% w/w propylene glycol and Poloxamer 188, respectively. The proposed optimized formulation showed considerable desirability, as shown in . The liquefying temperature of 34.5 °C is far from room temperature (25 °C) and close to body temperature (37 °C); this is required to avoid premature liquefaction during storage before administration and to ensure the transition from solid to liquid upon administration. The liquefying time of 113 s will ensure rapid liquefaction, which promotes rapid drug dissolution in vivo. The GBC solubility of 5.38 mg/g indicates proper drug loading, which reduces the total dosage of the formulation. The suggested optimized thermoresponsive SNEDDS was then prepared to determine the actual values of the measured responses, validating the accuracy and reliability of the model suggested by the Design of Experiments software. The predicted mean values for the measured responses (liquefying temperature, liquefying time, and GBC solubility) are compared with the actual mean values in . The results showed that all actual mean values fall within the 95% prediction interval, indicating the strong predictive power of the developed model. 2.9. Particle Size Measurement The optimized T-SNEDDS formulation (drug-free) was prepared for pharmaceutical assessment, and GBC was mixed with the optimized T-SNEDDS formulation to prepare a drug-loaded formulation. Both drug-free and drug-loaded T-SNEDDS formulations were subjected to particle size analysis. The results revealed that both formulations dispersed in the nanosize range, with values of 23.8 ± 0.7 and 29.5 ± 1.2 nm, respectively. The observed increase in particle size could be attributed to the incorporation of GBC within the lipid core of the dispersed nanoemulsion. Moreover, the dispersion of the optimized formulation in the nanosize range indicates its potential to enhance drug bioavailability. 2.10. In Vitro Dissolution The optimized formulation was placed within a hard gelatin capsule, as shown in A. The images show that the prepared formulation is solidified, with no risk of formulation leakage. B shows the dissolution profiles of GBC from hard gelatin capsules filled with the raw drug and with the T-SNEDDS formulation. The obtained results revealed that the dissolution efficiency of pure GBC was 2.5%, whereas the optimized T-SNEDDS formulation increased the dissolution efficiency approximately 39-fold, to 98.8%. These results indicate that the prepared formulation could enhance the bioavailability of orally administered GBC owing to the observed enhancement in the drug dissolution profile.
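For readers unfamiliar with the dissolution-efficiency metric quoted above (2.5% for raw GBC versus 98.8% for the optimized T-SNEDDS), the short sketch below illustrates the standard calculation: the area under the cumulative-release curve relative to 100% release over the same period, estimated with the trapezoidal rule. The sampling times and release values in the sketch are entirely hypothetical and are not data from this study.

```python
import numpy as np

def dissolution_efficiency(t_min, released_pct):
    """DE (%) = AUC of the release profile / (100% x total time) x 100,
    with the AUC estimated by the trapezoidal rule."""
    auc = np.sum((released_pct[1:] + released_pct[:-1]) / 2.0 * np.diff(t_min))
    return auc / (100.0 * (t_min[-1] - t_min[0])) * 100.0

# Hypothetical profiles (time in minutes, cumulative % released).
t        = np.array([0, 5, 10, 15, 30, 45, 60], dtype=float)
raw_gbc  = np.array([0, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5])
t_snedds = np.array([0, 60.0, 85.0, 95.0, 99.0, 100.0, 100.0])

print(f"DE, raw drug (hypothetical profile):  {dissolution_efficiency(t, raw_gbc):5.1f} %")
print(f"DE, T-SNEDDS (hypothetical profile):  {dissolution_efficiency(t, t_snedds):5.1f} %")
```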
2.11. Future Prospective Even though the prepared optimized T-SNEDDS improved the dissolution of GBC and resolved the limitations of traditional forms of liquid SNEDDS, further studies are still required to examine drug permeability. It has been reported that SNEDDS formulations can improve drug permeability through solubilization and through the P-glycoprotein inhibition effect of their excipients. Moreover, Poloxamer has shown promising results in enhancing drug permeability through the modulation of tight junctions. Therefore, combining SNEDDS and Poloxamer is expected to augment drug permeability; however, further studies are required to confirm this. Another issue that should be addressed is the impact of different grades of Poloxamer on the T-SNEDDS, including Poloxamer 407, Poloxamer 237, and Poloxamer 338, which vary in molecular weight, hydrophilic-lipophilic balance (HLB) value, and critical micelle concentration (CMC). Such an investigation could help identify the optimal thermoresponsive polymer to be used during the preparation of T-SNEDDS for pharmaceutical applications.
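Before turning to the materials and methods, a quick plausibility check on the reported optimum can be made with the sketch below (ours, not the authors' code): it substitutes the optimized composition (13.7% w/w propylene glycol, 7.9% w/w Poloxamer 188) into the rounded coefficients of Equations (1)–(3). Small deviations from the reported predictions (34.5 °C, 113 s, 5.38 mg/g) are expected because the published coefficients are rounded and the software may have used a higher-order model for some responses.

```python
# Illustrative only: predictions of Equations (1)-(3) at the optimized
# composition suggested by the software.

PG, P188 = 13.7, 7.9  # % w/w propylene glycol and Poloxamer 188

liq_temp = 32.95 - 0.19 * PG + 0.52 * P188   # Equation (1), deg C
liq_time = 60.29 - 1.00 * PG + 8.46 * P188   # Equation (2), seconds
gbc_sol  = 4.91 + 0.003 * PG + 0.06 * P188   # Equation (3), mg/g

print(f"Liquefying temperature: {liq_temp:.1f} deg C  (reported ~34.5)")
print(f"Liquefying time:        {liq_time:.0f} s       (reported ~113)")
print(f"GBC solubility:         {gbc_sol:.2f} mg/g   (reported ~5.38)")
```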
3.1. Materials Glibenclamide was acquired from Saudi Pharmaceutical Industries and Medical Appliances Corp. (Qassim, Saudi Arabia). Oleic acid and Imwitor-308 oil were provided by Avonchem (Cheshire, UK) and Sasol Germany GmbH (Werk, Witten, Germany), respectively. Peceol (oil) was supplied by Gattefosse (Saint-Priest, France). Soybean oil and Captex 355 EP/NF were acquired from John L. Seaton & Co., Ltd., Croda International Plc. (East Yorkshire, UK) and Abitec Corporation (Janesville, WI, USA), respectively. Kolliphor EL, Tween-80, and Labrasol ALF (LB) were purchased from BASF (Ludwigshafen, Germany), Loba Chemie (Mumbai, India), and Gattefosse (Saint-Priest, France), respectively. In addition, Tween 20 and Tween 85 were donated by BDH (Poole, UK) and Merck-Schuchardt OHG (Hohenbrunn, Germany), respectively. Propylene glycol and polyethylene glycol 400 were purchased from Winlab Laboratory (Leicestershire, UK) and BASF (Ludwigshafen, Germany), respectively. Poloxamer 188 (average molecular weight ~7680–9510 g/mol) was obtained from Sigma Aldrich (St. Louis, MO, USA). 3.2. Ultra Performance Liquid Chromatography (UPLC) Method for Drug Analysis GBC concentrations in the samples were analyzed using an Ultimate 3000 UPLC system (Thermo Scientific, Bedford, MA, USA) that incorporated a quaternary pump, an automatic sampler, a column chamber, and a Photodiode Array (PDA) detector. The analysis employed an Acquity UPLC BEH C18 column (2.1 × 50 mm, 1.7 μm), through which a mobile phase flowed at 0.3 mL/min. This mobile phase comprised 46.9% acetonitrile and 53.1% of a 0.1% formic acid solution. The column was kept at a constant temperature of 38.8 °C. The concentration of GBC in the samples was determined using the PDA detector set at a wavelength of 228 nm. The system was controlled through Chromeleon software version 5 for data acquisition and analysis. 3.3. Selection of Oil GBC solubility within different types of oils was studied to select the lipid phase of the SNEDDS formulation. An excess amount of GBC was mixed with each oil using a magnetic stirrer for one day at 1000 rpm. Afterward, the mixture was centrifuged at 14,000 rpm for 5 min to precipitate the undissolved drug.
The concentration of GBC in the supernatant was determined utilizing UPLC following appropriate dilution with an organic solvent (acetonitrile). 3.4. Selection of Surfactant Various types of surfactants were subjected to an emulsification study to optimize the SNEDDS components. Briefly, Imwitor 308 and surfactant were mixed in equivalent amounts and then heated to facilitate the formation of a uniform system. The prepared mixture was diluted with distilled water (1:100) to promote the formation of nanoemulsion droplets. The transmittance of the dispersed system was determined using a UV-Vis spectrophotometer (Ultrospec 2100 Pro, Amersham Biosciences, Piscataway, NJ, USA) at 638 nm. Distilled water was used as the blank during absorbance measurement. 3.5. Selection of Cosurfactant The solubility of Poloxamer 188 in various cosurfactants (propylene glycol and polyethylene glycol 400) was studied to select the optimum cosurfactant during the preparation of T-SNEDDS. Poloxamer 188 was mixed with each cosurfactant to prepare a 10% w/w concentration. The mixture was stirred for one day at 1000 rpm. 3.6. Design of Experiments In the present study, the optimization process involved two stages. In the first stage, the oil (Imwitor 308) was selected based on a solubility study to achieve maximum drug loading within the formulation. Moreover, Kolliphor EL was chosen as the surfactant based on its emulsification efficiency, as indicated by the measured transmittance value. In the second stage, a cosurfactant (propylene glycol) and a polymer (Poloxamer 188) were selected to induce the thermoresponsive behavior of the T-SNEDDS formulation. Therefore, this study was designed to investigate the impact of their concentrations on the measured responses. Accordingly, the ratio of surfactant to oil (2:1) was kept constant for all formulations to avoid any possible influence on the measured responses. Design-Expert ® software (version 13, Stat-Ease Inc., Minneapolis, MN, USA) was used to achieve this purpose. Design of Experiments (DoE) software using response surface methodology (RSM) was utilized to optimize the T-SNEDDS formulation. The Central Composite Face-Centered (CCF) design was selected because it studies the impact of the selected factors at three levels, which provides good prediction capability and allows quadratic effects to be estimated efficiently with fewer experimental runs. It consists of four factorial points, four axial points, and five replicated center points, totaling 13 experimental runs. Two independent variables were studied: propylene glycol concentration (X1: 10–25% w/w) and Poloxamer 188 concentration (X2: 2–10% w/w). Three response variables were evaluated: Y1, liquefying temperature (°C); Y2, liquefying time (seconds); and Y3, GBC solubility (mg/g). The model for each response was selected based on the sum of squares, lack-of-fit tests, ANOVA ( p < 0.05 considered significant), adjusted and predicted R 2 values (R 2 > 0.9 considered acceptable), and adequate precision (signal-to-noise ratio > 4 desired).
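As an illustration of the design just described, the following sketch lays out a 13-run face-centred central composite design in actual units, assuming the standard CCF coding of the two factors; the run order and numbering are arbitrary and do not reproduce the study's randomization.

```python
import itertools

def decode(coded, low, high):
    """Map a coded CCF level (-1, 0, +1) to the actual factor value."""
    centre, half_range = (low + high) / 2.0, (high - low) / 2.0
    return centre + coded * half_range

factorial = list(itertools.product((-1, 1), repeat=2))   # 4 factorial points
axial     = [(-1, 0), (1, 0), (0, -1), (0, 1)]           # 4 face-centred axial points
centre    = [(0, 0)] * 5                                 # 5 replicated centre points

for run, (x1, x2) in enumerate(factorial + axial + centre, start=1):
    pg   = decode(x1, 10, 25)   # X1: propylene glycol, % w/w
    p188 = decode(x2, 2, 10)    # X2: Poloxamer 188, % w/w
    print(f"run {run:2d}:  PG = {pg:5.2f}% w/w   Poloxamer 188 = {p188:5.2f}% w/w")
```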
3.7. Preparation of SNEDDS Formulation The formulations suggested by the Design of Experiments software, presented in , were prepared as follows: a 2:1 mixture of surfactant (Kolliphor EL) and oil (Imwitor 308) was prepared. Then, propylene glycol and Poloxamer 188 were mixed with this mixture as per . The prepared formulations were kept in an incubator at 40 °C for two hours to facilitate the solubilization of Poloxamer 188. 3.8. Determination of Liquefying Temperature The 13 suggested formulations and the optimized T-SNEDDS formulation were kept in test tubes and left for an hour to facilitate the transition to a solid state. Later, the water bath temperature was set at 25 ± 0.5 °C, and racks holding the test tubes were placed in the bath for 3 min for equilibration. Then, the test tubes were inspected visually to determine whether liquefaction had occurred. After that, the temperature was raised by 0.5 ± 0.1 °C, and the formulations were subjected to the same procedure until the liquefying temperature of each formulation was determined. 3.9. Determination of Liquefying Time The water bath was set at body temperature (37 ± 0.1 °C), and the liquefying time for each formulation was determined separately. The test tube was placed in the water bath, and the liquefying time was recorded once the formulation had completely converted from a solid to a liquid state. 3.10. Determination of GBC Solubility The solubility of GBC within the prepared T-SNEDDS was determined by stirring an excess amount of GBC with each formulation at 1000 rpm at a controlled room temperature (23 ± 2 °C). After one day, the mixture was centrifuged at 14,000 rpm for 5 min to precipitate the undissolved drug. The concentration of GBC in the supernatant was determined utilizing UPLC following appropriate dilution with an organic solvent (acetonitrile). 3.11. Particle Size Measurement The particle size of the prepared drug-free and drug-loaded T-SNEDDS was measured using a Zetasizer instrument (Model ZEN3600, Malvern Instruments Co., Worcestershire, UK). A dispersed nanoemulsion system was attained by diluting the prepared formulation with distilled water (1:1000) and mixing for 5 min using a magnetic stirrer. After that, each sample was placed inside the Zetasizer instrument and allowed to equilibrate at 25 °C. 3.12. In Vitro Dissolution The in vitro dissolution study was performed using a dissolution apparatus Type II (LOGAN Inst. Corp., Somerset, NJ, USA). A drug-loaded formulation was prepared with 4 mg/g loading, based on approximately 80% of the drug solubility. Equivalent amounts of formulation containing GBC (2.5 mg) and of the raw drug were placed inside hard gelatin capsules. A sinker surrounded the capsules to prevent them from floating during the experiment. Before the experiment, 900 mL of dissolution medium (phosphate buffer, pH 6.8) was preheated to 37 ± 0.5 °C. During the experiment, the paddle speed was set at 50 rpm. At predetermined intervals, samples were withdrawn from the medium using a 10-micron filter connected to a syringe. Drug quantification in the samples was performed using the UPLC method described in Section 3.2.
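The solubility determinations in Sections 3.3 and 3.10 imply a simple back-calculation from the UPLC-measured concentration of the diluted supernatant to solubility in mg per g of vehicle. The sketch below shows that arithmetic with an entirely hypothetical dilution factor, sampled volume, and vehicle mass, since these values are not reported here.

```python
# All numeric inputs below are hypothetical and only illustrate the arithmetic.
measured_conc_ug_per_ml = 13.5   # GBC in the diluted supernatant (ug/mL)
dilution_factor         = 100    # e.g., supernatant diluted 1:100 in acetonitrile
sampled_volume_ml       = 0.5    # volume of supernatant analysed
vehicle_mass_g          = 0.52   # mass of vehicle corresponding to that volume

drug_in_sample_mg   = measured_conc_ug_per_ml * dilution_factor * sampled_volume_ml / 1000.0
solubility_mg_per_g = drug_in_sample_mg / vehicle_mass_g

print(f"Estimated solubility: {solubility_mg_per_g:.2f} mg/g of vehicle")
```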
The present study showed that the prepared optimized thermoresponsive formulation could enhance the pharmaceutical applicability of SNEDDS as a marketed dosage form. The results showed that propylene glycol and Poloxamer 188 concentrations significantly affected the liquefying temperature, while the Poloxamer 188 concentration also significantly affected the liquefying time and GBC solubility. The optimized formulation showed desirable characteristics, with a liquefying temperature close to body temperature, a rapid liquefying time, and enhanced GBC solubility. The in vitro dissolution study showed that the optimized T-SNEDDS significantly enhanced drug dissolution compared with raw GBC.
Personality predictors of dementia diagnosis and neuropathological burden: An individual participant data meta‐analysis
f6be88e7-87d9-43ee-b804-d022a1c9748c
10947984
Pathology[mh]
BACKGROUND The incidence of dementia due to neurodegenerative diseases has increased substantially over the past half‐century along with increases in life expectancy, contributing to an expansive economic burden and disability. , Identifying modifiable risk factors that influence individual differences in cognitive aging processes is critical to researchers, policymakers, and the public. While research suggests that the Big Five personality traits and subjective well‐being (SWB) are associated with dementia diagnosis, , , limited research has examined traits or SWB as predictors of underlying dementia neuropathology. Drawing data from eight independent studies (ie, a multistudy approach), the current study investigated whether the Big Five (extraversion, agreeableness, conscientiousness, neuroticism, and openness to experience) and SWB (life satisfaction, positive affect, and negative affect) differentially predict dementia diagnoses and neuropathological burden. This approach also permitted opportunities to explore evidence linking these psychological constructs to the cognitive resilience theoretical model. Several different neuropathologies cause dementia; the most well‐known type of dementia, Alzheimer's disease (AD), is defined by amyloid beta (Aβ) peptides and tau neurofibrillary tangles (NFTs), which subsequently results in loss of neuronal cells. Although AD is the leading cause of dementia, there are other types of dementia (eg, vascular, frontotemporal, Lewy body), and the majority of dementia cases are due to mixed pathologies. A large body of research demonstrates a disconnect between the degree of pathology in a person's brain and whether that neuropathology manifests clinically as cognitive impairment , , ; approximately one‐third of cognitively unimpaired older adults aged 75+ years have sufficient Aβ and NFTs to meet AD criteria. Numerous systematic reviews and meta‐analyses indicate that physical, social, and cognitive engagement contributes to healthier cognitive aging. , , , , , The Big Five personality traits capture consistent patterns in physical, social, and cognitive engagement and can be conceptualized as higher‐order predictors of factors contributing to cognitive aging. Indeed, the existing literature documents associations between cognitive functioning and dementia diagnosis with the Big Five, particularly neuroticism and conscientiousness. , , , , , , , , Multiple pathways linking personality traits and dementia have been proposed ; two likely accounts theorize that traits may (1) act as predispositions that subsequently influence brain health and/or (2) influence cognitive performance in the presence of neuropathological burden. For instance, individuals high in conscientiousness demonstrate healthier behavioral, emotional, and cognitive tendencies across the lifespan, which protect against development of neuropathology (ie, contributing to brain maintenance) and/or assist in maintaining better cognitive performance despite the development of neuropathology (ie, cognitive resilience ). Evidence supporting the predisposition theory finds links between traits and cortical amyloid deposition, tau pathology, and smaller brain volume assessed by in vivo biomarkers, brain imaging, and autopsy. , , , , Evidence supporting the cognitive resilience model finds that individuals high in conscientiousness or low in neuroticism are less likely to develop clinical dementia despite neuropathology at autopsy. 
RESEARCH IN CONTEXT Systematic review : We reviewed the literature within Web of Science, PubMed, and EBSCOhost electronic databases. Limited research has examined the relationships between personality or well‐being and neuropathology, though several publications examine the associations between personality or well‐being and dementia diagnosis. No research has systematically investigated the links between personality, well‐being, clinical manifestation of dementia, and neuropathology all together or using an individual participants meta‐analytic approach. We appropriately cite relevant research. Interpretation : Our findings, based on 44,531 participants from eight longitudinal samples spanning three continents and five countries, highlight clear differences in the associations between these psychosocial factors (ie, personality traits, well‐being) and clinical versus neuropathological manifestations of dementia. Conscientiousness, extraversion, and positive affect may improve, while neuroticism and negative affect may impede, performance on neuropsychological tests, leading to differential risk of receiving a dementia diagnosis. Future directions : Future research should prospectively investigate similar associations using in vivo markers of dementia. Research on personality and dementia rarely assesses neuropathological markers of neurodegenerative disease, making the distinction between these models impossible to test. Our multistudy approach permits evaluation of the replicability and robustness of prospective associations between traits and dementia diagnosis using large samples that span decades and continents, as well as exploration of the processes linking traits to the diagnosis of dementia and neuropathology. Specifically, we not only test whether personality traits are separately associated with clinical diagnoses and neuropathology (predisposition theory) but also test whether personality traits moderate the association between clinical diagnoses and neuropathology (cognitive resilience theory). Finally, additional psychological factors, such as SWB, may contribute to cognitive aging processes. SWB can be conceptualized as a tripartite construct (life satisfaction, negative affect, and positive affect). Some evidence suggests that certain aspects of well‐being are associated with cognitive resilience and that satisfaction with life is protective against dementia diagnoses, , but this literature is small and newly emerging. We address this gap in the literature by investigating SWB as an antecedent of incident dementia diagnoses and neuropathological burden.
METHOD This study makes two primary contributions to the literature. It (1) examines aspects of the processes that may underlie the association between psychological factors (the Big Five and SWB), incident dementia diagnosis, and post mortem neuropathology and (2) integrates across multiple samples simultaneously to better estimate robustness and generalizability using a one‐stage individual participant data meta‐analysis (IPD‐MA). First, while alternative or additional processes may underlie these associations, , , our design permitted exploration of foundational associations between both the disease burden itself (neuropathology) and the clinical manifestation (dementia risk). Second, prior research in this area is typically based on single studies or meta‐analyses of published studies. The use of individual participant data from multiple studies has a number of advantages, including the ability to directly control for key covariates and moderators and generally not being subject to choices made by researchers who worked with the raw data. With IPD‐MA, we are able to make identical data cleaning, harmonization, and analytic choices across studies. Thus, rather than statistically correcting for these different choices as in traditional meta‐analyses, IPD‐MA enables us to clearly and directly compare effect sizes across samples. Further, IPD‐MA is also not subject to publication bias. Investigating associations among personality traits, SWB, clinical dementia, and neuropathology in a multistudy format permits evaluation of associations across samples, measures, and time while preserving important heterogeneity across studies. Systematic investigation of the prospective relationships between personality or SWB with neuropsychological and neuropathological markers of dementia may provide important information regarding the mechanisms underlying these associations and the timing in which they unfold, potentially informing the development of interventions and screening assessments. Importantly, personality and well‐being assessments can be administered quickly and cost‐effectively, whereas neuropsychological batteries and biomarker collection can be time‐consuming, costly, and stress‐inducing for patients. , Integrating personality and well‐being assessments in clinical settings earlier in the lifespan can help to identify long‐term risk for a number of chronic illnesses and offer unique pathways for interventions before symptom onset. , We test three primary research questions. First, we ask whether the Big Five personality traits and aspects of SWB are associated with dementia diagnoses and neuropathology at autopsy. Second, we ask whether sociodemographic and baseline cognitive health factors (age, gender, education, and global cognition) moderate associations between the Big Five/SWB and diagnoses/neuropathology. Finally, we ask whether the Big Five and SWB moderate associations between dementia diagnoses and neuropathology at autopsy. This study was preregistered on the Open Science Framework ( https://osf.io/fmjv3 ).
In addition, all code, model objects, figures, and tables are available in the online materials on the OSF ( https://osf.io/dzty7/ ) and GitHub ( https://github.com/emoriebeck/personality-dementia-neuropath/tree/master/results ). Finally, rendered results are available as a standalone web page on GitHub ( https://emoriebeck.github.io/personality-dementia-neuropath/ ) and in an online R Shiny web app ( https://emoriebeck.shinyapps.io/personality-dementia-neuropath/ ).

2.1 Participants

Participants included 44,531 individuals from eight longitudinal samples, spanning two continents and four countries. We chose samples based on prior work examining personality predictors of cognitive decline, dementia diagnoses, and neuropathology. From these we identified six samples (Washington University School of Medicine Memory and Aging Project [WUSM-MAP], Rush Memory and Aging Project [Rush-MAP], Religious Orders Study [ROS], Einstein Aging Study [EAS], Baltimore Longitudinal Study of Aging [BLSA], and Health and Retirement Study [HRS]). One (BLSA) was eliminated because we were not granted access to the data. We identified three additional samples (German Socioeconomic Panel Study [GSOEP], Longitudinal Internet Studies for the Social Sciences [LISS], and Swedish Adoption/Twin Study of Aging [SATSA]) that had personality trait measures and dementia diagnoses. Across samples, we used the latest data release, and participants were included in all models in which they had requisite data (ie, participants within samples vary across combinations of personality, SWB, covariates, and moderators when necessary). Sample descriptions are available in the online materials.

2.2 Measures

To conduct IPD-MA, variables across studies must be harmonized, which involves pulling, recoding, and including measures that have exact (ie, measured and coded identically) or conceptual (ie, measured and coded differently, but recoded to the same scale) mappings across samples. A more in-depth discussion of this process was previously documented. In the present study, because measures were not identical across samples, we used conceptual harmonization, which is described in detail in subsequent sections. Descriptive statistics of all conceptually harmonized variables for each sample are presented in Table . Zero-order correlations among measures within samples are presented in the online materials and web app.

2.2.1 Psychosocial characteristics: The Big Five and SWB

Complete information on the scales used for each measure across samples is in Table , and which measures are available across samples is documented in Table . Many measures are on different scales, so all psychosocial indicators were transformed to Percentages Of the Maximum Possible score (POMP). z-transformations have a mean of zero and unit variance, which can be useful for interpreting effect sizes in standard deviation or correlational terms. However, when the underlying distribution is non-normal, such interpretations can be less clear. POMP, in contrast, allows for interpretation in relative percentiles. To aid convergence, we deviate from traditional POMP scoring and multiply the ratio by 10:

$$\mathrm{POMP} = \frac{\text{observed} - \min}{\max - \min} \times 10. \tag{1}$$
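The POMP transformation in Equation (1) is straightforward to apply in practice. The following is a minimal sketch in Python (an illustration only, not the authors' code); the example scale range and values are hypothetical.

```python
import numpy as np

def pomp(observed, scale_min, scale_max):
    """Percentage Of Maximum Possible score, rescaled to 0-10.

    The usual POMP ratio runs from 0 to 1; following the paper, the ratio is
    multiplied by 10 to aid model convergence, so one POMP unit corresponds
    to 10% of the possible scale range.
    """
    observed = np.asarray(observed, dtype=float)
    return (observed - scale_min) / (scale_max - scale_min) * 10

# Example: items scored on a hypothetical 1-5 agreement scale
raw_scores = np.array([1.0, 3.0, 4.5, 5.0])
print(pomp(raw_scores, scale_min=1, scale_max=5))  # [ 0.    5.    8.75 10.  ]
```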
2.2.2 Dementia diagnoses

The measurement of dementia diagnoses varied across samples, including clinician assessments (Rush-MAP, ROS, WUSM-MAP, EAS, SATSA), participant-reported clinical dementia diagnoses (ie, received a dementia diagnosis from a doctor ever or in the last year; HRS, LISS, GSOEP), and probability of dementia diagnosis based on cognitive testing (HRS). Each was recoded such that 0 = no clinical dementia and 1 = dementia diagnosis.

2.2.3 Neuropathology

We identified 10 indicators of post mortem neuropathology from four samples: Braak stage (a measure of AD severity capturing the degree and diffuseness of NFTs; 0 to 6), CERAD (a measure of AD capturing the presence of neuritic plaques; 1 to 4, reverse coded), Lewy body disease (0 = no, 1 = yes), gross cerebral infarcts (0 = none, 1 = one or more), gross cerebral microinfarcts (0 = none, 1 = one or more), cerebral atherosclerosis (0 to 3), cerebral amyloid angiopathy (0 to 3), arteriosclerosis (0 to 3), hippocampal sclerosis (0 = none, 1 = present), and TAR DNA-binding protein-43 (TDP-43; 0 = none or amygdala only, 1 = beyond amygdala). More details on the background and measurement of each can be found in the online materials. Three samples (WUSM-MAP, Rush-MAP, and ROS) used standard National Alzheimer's Coordinating Center (NACC) autopsy checklists and data preparation. For EAS, indicators were transformed or omitted as necessary to ensure exact comparability with NACC checklists (Table ).

2.2.4 Participant-level covariates and moderators

Covariates included age (years, centered at 60), gender (0 = male, 1 = female), and education (years, centered at 12). Cognitive function was POMP scored (POMP scores derived from similar tests across identical broad cognitive domains, see supplemental material for more details; 0 to 10). More details on each of the covariates can be found in the codebook in the online materials and rendered results in the web app. In what follows, we focus on results from the adjusted models. The full results for all models are available online.

2.3 Analytic plan

To test whether personality and SWB were prospectively associated with dementia diagnoses and neuropathology, we fit a series of Bayesian multilevel models (with a random intercept and a random slope for personality traits and SWB) predicting diagnosis and neuropathology (11) from each predictor (8) separately across each of the sets of covariates. Such multilevel models are an example of one-stage IPD-MA in which sample-level and meta-analytic effects are estimated in a single model. In these models, participants' observations (Level 1) were nested within study (Level 2). Covariates were included at Level 1 but were not modeled as random effects. The basic form of the unadjusted model is as follows:

$$Y_{ij} = \beta_{0j} + \beta_{1j} P_{ij} + \varepsilon_{ij}, \qquad \beta_{0j} = \gamma_{00} + u_{0j}, \qquad \beta_{1j} = \gamma_{10} + u_{1j},$$
$$\varepsilon_{ij} \sim N(0, \sigma^2), \qquad \begin{pmatrix} u_{0j} \\ u_{1j} \end{pmatrix} \sim N\!\left(\mathbf{0}, \begin{pmatrix} \tau_{00}^2 & \tau_{10} \\ \tau_{10} & \tau_{11}^2 \end{pmatrix}\right), \tag{2}$$

where $i$ indicates individual $i$ in sample $j$ and $P$ indicates levels on a personality trait or well-being characteristic (in POMP scores). The key terms are $\gamma_{10}$, which captures the meta-analytic association between personality trait/well-being levels and each outcome; $u_{1j}$, which captures the sample-specific deviation from the overall estimate; and $\beta_{1j}$, which is the linear combination of the overall estimate and the deviation that captures the sample-specific estimate.
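For a binary outcome such as dementia diagnosis, one unadjusted model of this form (Equation 2 with a logit link, using the regularizing priors described in the next paragraph) could be sketched roughly as follows in Python with PyMC. This is an illustration of the model family only, not the authors' code, and the arrays `trait`, `dementia`, and `study_idx` are hypothetical inputs.

```python
import numpy as np
import pymc as pm

def unadjusted_model(trait, dementia, study_idx, n_studies):
    """One-stage IPD-MA sketch: logistic regression of dementia (0/1) on a
    POMP-scored trait, with correlated random intercepts and slopes by study."""
    with pm.Model() as model:
        # Fixed (meta-analytic) intercept and slope with regularizing t priors
        gamma00 = pm.StudentT("gamma00", nu=3, mu=0, sigma=2.5)
        gamma10 = pm.StudentT("gamma10", nu=3, mu=0, sigma=2.5)

        # Random intercept and slope per study: half-Cauchy priors on the SDs,
        # LKJ prior on their correlation
        chol, corr, sds = pm.LKJCholeskyCov(
            "re_chol", n=2, eta=2.0,
            sd_dist=pm.HalfCauchy.dist(beta=1.0), compute_corr=True,
        )
        u = pm.MvNormal("u", mu=np.zeros(2), chol=chol, shape=(n_studies, 2))

        # Sample-specific coefficients: beta_1j = gamma_10 + u_1j, etc.
        beta0 = gamma00 + u[study_idx, 0]
        beta1 = gamma10 + u[study_idx, 1]

        # Logit link for the binary outcome
        p = pm.math.invlogit(beta0 + beta1 * trait)
        pm.Bernoulli("obs", p=p, observed=dementia)

        idata = pm.sample()

    # exp(gamma10) is the meta-analytic odds ratio per POMP unit
    # (ie, per 10% of the trait's possible range)
    return model, idata
```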
We used regularizing priors (t-distributed) for all fixed effects, half-Cauchy priors for all variances, and LKJ priors for all correlations. Binary outcomes (dementia diagnosis, Lewy body disease, gross cerebral infarcts, gross cerebral microinfarcts, hippocampal sclerosis, and TDP-43) were modeled using logistic regression models with a logit link, while continuous outcomes (Braak stage, CERAD, cerebral atherosclerosis, cerebral amyloid angiopathy, and arteriosclerosis) were modeled using linear regression with a Gaussian link. The results of logistic regression models are presented as odds ratios (ORs; exponentiated log odds), while results of linear regression models are presented as non-standardized estimates (ie, change in neuropathology associated with a 10% difference in personality trait or SWB levels).

Next, to test (1) whether age, gender, education, and cognitive function moderate the association between levels of personality characteristics and dementia diagnoses and neuropathology and (2) whether personality and SWB moderate the relationship between dementia diagnoses and neuropathology, we added a main effect and interaction for each moderator separately at Level 1 and included them as random slopes. The basic form of the model is as follows:

$$Y_{ij} = \beta_{0j} + \beta_{1j} P_{ij} + \beta_{2j} M_{ij} + \beta_{3j} P_{ij} M_{ij} + \varepsilon_{ij},$$
$$\beta_{0j} = \gamma_{00} + u_{0j}, \quad \beta_{1j} = \gamma_{10} + u_{1j}, \quad \beta_{2j} = \gamma_{20} + u_{2j}, \quad \beta_{3j} = \gamma_{30} + u_{3j},$$
$$\varepsilon_{ij} \sim N(0, \sigma^2), \qquad \begin{pmatrix} u_{0j} \\ u_{1j} \\ u_{2j} \\ u_{3j} \end{pmatrix} \sim N\!\left(\mathbf{0}, \begin{pmatrix} \tau_{00}^2 & \tau_{01} & \tau_{02} & \tau_{03} \\ \tau_{10} & \tau_{11}^2 & \tau_{12} & \tau_{13} \\ \tau_{20} & \tau_{21} & \tau_{22}^2 & \tau_{23} \\ \tau_{30} & \tau_{31} & \tau_{32} & \tau_{33}^2 \end{pmatrix}\right), \tag{3}$$

where $M$ indicates the level of the moderator for person $i$ in sample $j$, and the new key terms are $\gamma_{20}$ and $\beta_{2j}$, which capture the main effect of the moderator across all samples (ie, the meta-analytic effect) and for each sample separately, respectively, and $\gamma_{30}$ and $\beta_{3j}$, which capture the interaction between personality trait/well-being levels and moderator levels across all samples (ie, the meta-analytic effect) and for each sample separately, respectively. Errors and random terms are assumed normal with variances $\sigma^2$ (residual variance) and $\tau^2$ (random effect variance).

Due to missing data, sample sizes vary across models. In each model, we used all participants with complete data for the indicators included in the model. Sample sizes used in each model are included in all forest plots of results across studies (eg, Figure ). Because sample sizes reported in Table indicate the number of participants with any valid data for any combination of psychosocial characteristics (ie, personality traits and SWB), the sample sizes reported in Table may not align with the number of observations in any given model.

2.4 Deviations from preregistration

Although this study was preregistered to avoid making any analytic choices after hypothesizing, a small number of unforeseen challenges and questions led to slight deviations from the preregistration. First, we originally identified eight covariates (age, gender, education, cognitive function, marital status, self-rated health, chronic conditions [diabetes, stroke, cancer, respiratory disease], smoking, and alcohol use) and planned to test unadjusted and fully adjusted models. Due to sampling differences in which and when measures were collected, only one sample (HRS) could be fully adjusted. Thus, we identified which covariates were both ever collected and collected at or before baseline personality measurement in all samples and focused on these for the main analyses.
Second, in our preregistered analyses, we did not investigate whether the timing of diagnoses or death relative to the assessment of personality and well-being impacted our findings. Thus, we elected to additionally adjust for this interval in all models. Third, we elected to add an additional research question (whether personality traits and SWB moderate dementia diagnosis–neuropathology associations) after preregistering our hypotheses and analytic plan. Our goal was to allow us to disentangle evidence for traits as predispositions for brain maintenance and/or traits contributing to cognitive resilience across the samples. Finally, as noted previously, we preregistered the use of data from the BLSA but were not granted access to the data.

2.5 Prior publications of the same samples

Data from the samples used in this study were previously published. Table provides detailed information on prior publications using similar data, summaries of findings, and summaries of key differences between each of those studies and the present study. Two samples (GSOEP, LISS) have no prior publications of personality traits and dementia diagnoses. Among other previously published studies using the same samples, the current study differs in key ways that make the re-analysis of data from these samples value added. In some cases, we add additional years (sometimes decades) of data (HRS, ROS, Rush-MAP). In others, we use different inclusion criteria or subsets of the sample (WUSM-MAP; EAS, SATSA). In all cases but Duchek et al., prior investigation used Cox Proportional Hazards Models (eg, Wilson et al., Terracciano et al.), tested whether personality traits moderated cognitive decline and dementia conversion, or tested whether personality trait change predicted dementia diagnoses. See Table in the online materials and web app for additional information on prior uses of the samples and comparisons with the present study.
RESULTS

3.1 Association of personality and well-being with clinical dementia

First, we examined prospective associations between the Big Five/SWB and dementia diagnoses. As seen in Figure (forest plots with sample-specific estimates) and Table (meta-analytic associations), neuroticism, conscientiousness (−), extraversion (−), positive affect (−), and negative affect were associated with risk of dementia across studies. We conducted a total of 63 hypothesis tests, including the meta-analytic term, across eight different models (one for each psychosocial characteristic). Of these, 37 (58.7%) were significant.
Both neuroticism and conscientiousness were associated with incident dementia diagnoses in every sample and overall, extraversion in three out of eight samples and overall, negative affect in three out of five samples and overall, positive affect (−) in three out of six samples and overall, and satisfaction with life (−) in three of six samples but not overall. Across all samples and psychological characteristics, estimates tended to be in the same direction, but of slightly different magnitudes.

3.2 Association of personality and well-being with neuropathological burden

Next, we examined associations between personality traits/SWB and 10 neuropathological indicators of dementia at autopsy (Table ), including a total number of 270 statistical tests (80 meta-analytic tests and 190 sample-specific tests). Across all studies, there was no consistent association between psychological characteristics and neuropathology measures. Sample-specific estimates are presented in the online materials and online web app. Although a small number of sample-specific estimates (7/190; 3.7%) were significant, this was never the case for more than one study for each psychological characteristic–outcome combination and fell within the expected 5% type-I error rate, suggesting the possibility of spurious associations.

3.3 Moderators of personality/well-being associations with dementia and neuropathology

Table presents the overall estimates of four tested moderators (age, gender, education, baseline cognition) predicting neuropathology and clinical diagnoses. This included a total of 1224 statistical tests, including 352 meta-analytic tests. Across all outcomes, 43 tests were significant (3.2%). But among these, 18/244 (7.38%) of moderator tests for clinical dementia and 20/2000 (0.92%) of moderator tests for neuropathology were significant. For clinical dementia diagnoses, age moderated the relationship between conscientiousness and dementia diagnosis (see Figure for overall and sample-specific estimates). Figure illustrates dementia risk across levels of conscientiousness as a function of age (in years, centered at 60) and education (in years, centered at 12; conscientiousness only). There was a stronger protective relationship between conscientiousness and dementia diagnosis for 70-year-old adults than for those at 50 or 60 years old. For neuropathology, meta-analytic estimates suggested no consistently moderated associations between personality/SWB and neuropathology. Forest plots and simple effects plots for all other personality traits, outcomes, and moderators are in the online materials and online web app.

3.4 Personality and well-being moderators of clinical dementia–neuropathology associations

Next, we tested whether personality traits and well-being moderated the association between neuropathology and clinical dementia diagnoses. If personality traits or well-being are associated with larger differences in neuropathological burden between those who were or were not diagnosed with dementia, this provides evidence for resilience models (ie, do people higher in conscientiousness have more neuropathology than we would expect based on their diagnosis status than people lower in conscientiousness?). We conducted a total of 275 tests, including 80 meta-analytic tests, of which six (2.2%) were significant. Conscientiousness moderated the association between Braak stages and clinical dementia (2/4, ROS and WUSM-MAP; Figure ).
That is, people who were higher (rather than lower) in conscientiousness had different levels of Braak stages than we would expect based on their clinical dementia diagnosis status alone. Examining the marginal means suggests that those who were diagnosed had higher Braak stages overall, as expected. However, across participants who were not diagnosed with dementia during the time in study, individuals who were higher in conscientiousness had lower Braak stages than participants lower in conscientiousness in WUSM-MAP and ROS. These results suggest that conscientiousness may be protective against development of neuropathology, which is consistent with the resistance to neuropathology hypothesis. Figure depicts associations across samples (left) and simple effects plots (right).
DISCUSSION

The current IPD-MA investigated whether psychological factors (the Big Five traits and three aspects of SWB) predicted neuropsychological and neurological markers of dementia using a multistudy framework. Replicating and extending prior publications in the same samples by including additional waves of follow-up (ROS, Rush-MAP, EAS, HILDA, HRS, WUSM-MAP) and extending these analyses to new samples (LISS, GSOEP), results indicate robust prospective associations between some psychological factors and incident dementia diagnosis, but not neuropathology. Specifically, neuroticism and negative affect were risk factors for, while conscientiousness, extraversion, and positive affect were protective against dementia diagnosis. Across all analyses, there was directional consistency in estimates across samples (see forest plots, Figure ), which is particularly noteworthy given between-study differences in sociodemographic and design characteristics (eg, sample size, age at baseline, frequency of occasions, years of follow-up). Consistent with our preregistered hypotheses, these results replicate and extend evidence that personality traits may assist in early identification and dementia-care planning strategies, as well as risk stratification for dementia diagnosis.
Moreover, our findings provide further support for recommendations to incorporate psychological trait measures into clinical screening or diagnosis criteria. Conversely, these psychological factors were not consistently associated with any neuropathology indicators. For example, neuroticism was not directly associated with neuropathology biomarkers, suggesting that individuals higher in neuroticism do not have more neuropathological burden at death, consistent with previous research. Our follow-up moderation analyses suggested that baseline cognitive function did not consistently moderate associations between personality traits and neuropathology. Further, across synthesized analyses, personality traits did not moderate the associations between dementia diagnoses and neuropathology. These findings are inconsistent with the postulation that particular traits may protect against the development of neuropathology. However, synthesized moderator analyses and some individual study results revealed some evidence supporting the cognitive resilience model; specifically, older individuals were more likely to have higher Braak stages, gross cerebral infarcts, cerebral atherosclerosis, cerebral amyloid angiopathy, arteriosclerosis, hippocampal sclerosis, and TDP-43, and lower CERAD. As synthesized results suggested that older individuals who were also higher in conscientiousness were less likely to be diagnosed with dementia, high conscientiousness may be protective against dementia diagnosis in the face of possible neuropathology (ie, cognitive resilience). Indeed, individuals higher in conscientiousness who did not receive a clinical diagnosis tended to have a lower Braak stage at autopsy in ROS and WUSM-MAP. Together, these findings hint at the possibility that conscientiousness is related to cognitive resilience. However, given that this neuropathology finding was only replicated in half of the datasets, results should be interpreted with caution, but they emphasize the need for future research efforts focusing on traits, dementia diagnosis, and Braak stage.

The reliable association between negative affect and dementia diagnosis is a particularly novel contribution to the literature. This finding aligns well with mounting evidence from multiple studies on seemingly remarkable linear associations between emotions rated as integers on Likert-like scales and a number of consequential outcomes. Negative affect is characterized by a variety of aversive mood states (eg, anger, anxiety, disgust, guilt, fear) and, when assessed on several occasions, average negative affect is highly related to neuroticism. As such, it is unsurprising that both negative affect and neuroticism were positively associated with dementia diagnosis. Similarly to the possible inflammatory pathways underlying the link between neuroticism and dementia, research suggests that negative affect is associated with neuroinflammation, particularly for individuals high in Aβ load. Abnormal immune response and inflammatory processes may cause neural system change, thereby predisposing individuals to depressive symptoms, which are positively associated with high and dysregulated negative affect. That is, the link between inflammation and psychological factors appears to be bidirectional (eg, depressive symptoms are related to inflammation, and inflammation may cause depressive symptoms).
The current study examined only a single measure of negative affect as a predictor of incident diagnosis and neuropathology; however, intraindividual variability in mood states is typical across the lifespan. Future research should make use of longitudinal measurement burst designs that assess day‐to‐day negative affect, to examine prospective associations between average levels of and variability in affect in relation to dementia diagnosis and neuropathology. Finally, our findings provide some evidence that openness to experience, positive affect, and satisfaction with life may be protective against incident dementia diagnosis, though effects were only significant in 42%, 50%, and 50% of studies, respectively. With regard to openness, our findings are consistent with previous research as well as our hypotheses, which reveal mixed associations between openness and aspects of cognition and dementia. , , , Importantly, openness to experience, which is characterized by cognitive flexibility and engagement, is the least consistent Big Five trait in cross‐cultural replications and across personality taxonomies. Given cross‐cultural differences in openness, heterogeneity across our findings may be partially attributed to disparate meanings of openness items across datasets or individuals. Furthermore, openness tends to be associated with cognitive processes, possibly capturing aspects of cognitive functioning ; as such, the timing of openness assessment may influence associations with dementia diagnosis (ie, openness assessments in prodromal stages of dementia may lead to lower self‐reported openness in tandem with awareness of cognitive decline). Despite this, we saw inconsistent evidence across studies of openness predicting diagnoses, even when adjusting for the timing of assessments. A notable strength of the current IPD‐MA was thorough preregistration of the research design, variable harmonization, analytic plan, and hypotheses ( https://osf.io/fmjv3 ). The primary deviations were follow‐up moderator analyses, which provided a better test of whether personality moderates the relationship between level of neuropathological burden and the clinical manifestation of dementia, and adjusting for assessment intervals, which provides more robust evidence that our findings represent truly prospective effects. Furthermore, a substantial strength is our IPD‐MA approach, which permitted estimation of overall robustness of personality and well‐being predictors of dementia and pathology while preserving real and important heterogeneity in prediction across studies. Importantly, estimates were directionally consistent despite between‐study differences in operational definitions of dementia diagnosis (eg, self‐report vs clinical diagnoses), providing support for this harmonization approach. Future research should aim to systematically disentangle and harmonize these measures and their associations with both personality and dementia diagnoses. Finally, given the extensive analyses included within this IPD‐MA, figures depicting results from all analyses are available in the online R Shiny web app ( https://emoriebeck.shinyapps.io/personality‐dementia‐neuropath/ ). An important limitation of the current study was the limited access to neuropathology markers; half of the samples did not complete autopsies, and all samples with neuropathology markers were US samples. 
Additionally, the LISS dataset only included 20 dementia cases, limiting our confidence in the power to detect associations between psychosocial factors (personality traits and SWB) and risk of dementia. If we had investigated these research questions in only one dataset, this would have been especially concerning. However, our one-stage approach is particularly effective for estimating effects when events such as dementia diagnoses are rare. Further, the included studies are not representative with respect to race. Given emerging evidence that dementia and cognitive decline unfold differently for Black and Mexican American populations in the United States, efforts to understand the role of race are critically important, requiring concerted data collection efforts focused on these historically marginalized groups.

With regard to the analytic approach, the primary goal was to map basic associations of baseline psychological factors with dementia diagnoses and neuropathology at autopsy. However, these are likely dynamic associations that vary over time and will require more nuanced understanding of how personality (and personality changes), cognitive function (and cognitive decline), and neuropathology unfold together, which requires longitudinal modeling and in vivo biomarkers of dementia-causing diseases, including AD. Future work using a joint modeling approach, in which the association between psychological factors and cognitive functioning trajectories is examined in relation to in vivo and/or autopsy neuropathology markers, may better delineate the mechanisms underlying the links between the Big Five, dementia diagnosis, and neuropathology.

Overall, the current IPD-MA replicated and extended prior work, providing strong evidence that neuroticism, conscientiousness, and negative affect are associated with dementia diagnoses across samples, measures, and time. The directional consistency in estimates despite between-study differences in operational definitions of dementia diagnoses emphasizes the practicality of using either self-report or clinical diagnoses of dementia, contributing to conceptual replication efforts. Further, our results suggest a protective effect of openness to experience, positive affect, and satisfaction with life for incident dementia diagnosis, though effects were less consistent across datasets. Although the Big Five and aspects of SWB were not associated with neuropathology at autopsy, moderator analyses reveal some evidence that these psychological factors may also act as predispositions that influence neuropathology. Future work is needed to build upon these key findings, focusing on more nuanced, time-varying questions to determine the temporal ordering of these associations and mechanisms underlying them.

The content is solely the responsibility of the authors and does not necessarily represent the official views of the funding agencies. There are no conflicts of interest to disclose among any of the contributing authors. Author disclosures are available in the .
The true complexities of “standard” family practice visits unmasked: an observational cross-sectional study in Regina
We have hypothesized that family physicians routinely address multiple different concerns for their patients during a single visit despite only being able to report on one to the "data pool" in Saskatchewan, Canada at the time of this research. While alternative payment models do exist in Canada, they depend on location and whether physicians are contract or fee for service. Understanding the complexity of primary care visits helps to provide information about the quality of care provided to patients in the time allotted and informs both how we pay for physician services and how care teams are designed. An extensive literature search of the Medline database revealed that minimal research has been done to determine the average number of concerns per regular family medicine visit. While limited, all the available research states that, on average, greater than 2 concerns are addressed per primary care visit. However, none of this available research addressed the complexity of family medicine in the context of the Canadian health care system. An American study by Beasley and colleagues found that physicians reported, on average, more problems than they charted (3.05 vs 2.82) and only billed for 1.97 problems per encounter on average. This not only points towards a trend in underreporting but also reflects the challenges physicians face when billing for more than one issue. In a 2001 analysis of American family practice visits, an average of 2.7 problems and 8 physician actions were observed per encounter. A trend towards underbilling for the number of issues addressed in each visit was also observed. Analysis of recorded general practitioner consultations in the United Kingdom revealed 2.1 concerns voiced per consultation. In adult primary care, the number of concerns addressed per visit has been found to have increased from 5.4 to 7.1 from 1997 to 2005, with visit duration also increasing during this period. In Norway, recorded general practice consultations found that 2.6 problems per visit were addressed on average, with this number increasing to 3.3 with the exclusion of acute conditions. The analysis of 982 Texas family physician and patient encounters revealed a mean of 5.4 concerns addressed per visit, with a range from 1 to 16. An increased number of problems managed per visit was found to be associated with increased consultation length, despite many medical systems only allowing physicians to bill for one concern per visit regardless of the number of concerns a patient may have. Each additional concern addressed in a patient encounter with a family physician was found to increase the visit length by a mean of 2.5 minutes (P < 0.001). Luft and Liang dubbed the practice of addressing multiple issues per visit "Max-Packing" and found that it was "associated with 3.4% lower overall resource use, improved clinical quality metrics, and comparable patient experience (except for worse wait time ratings)". We hypothesized that patient and physician demographics influence both how many concerns a patient presents with and how many concerns a physician can address in a single visit. An increased number of concerns per visit has been previously associated with older age and female sex. Older age and female sex are also associated with increased consultation rates and length. As increased consultation time has been associated with increased physician age, we hypothesized that patients seeing a family physician who has been practicing longer will present with more concerns per visit.
As physicians develop more experience in clinical time management, they may rearrange their days to accommodate longer consultation times for those who need it. General practice patients value the ability to address all the health concerns they have in a single visit. As the Canadian health care system focuses on providing patient-centred care, patient requests are increasing the demands on physicians. Recognition of the frequency with which family physicians address more than one concern per visit, and adapting visit length and billing practices accordingly, is likely to result in fair pay for family physicians, decreased use of health care resources, and, most importantly, improved patient care.

We analysed the clinical encounter notes from 2,500 general practice visits from 5 different family physicians at the same clinic, Victoria East Medical Clinic, in Regina, Saskatchewan, Canada. The study was approved by the Biomedical Research Ethics Board at the University of Saskatchewan (Bio-3978). All charts were accessed using Accuro EMR 2017.130, electronic medical record software. There are 11 physicians in practice at the Clinic. The Clinic provides chronic disease management, pharmacist, dietician, community health nursing, radiology, psychology and counselling, referrals, sports medicine, an after-hours walk-in clinic, and laboratory services. The Clinic provides care to approximately 20,000 patients and has been in service for over 30 years. This practice was identified by the senior author, and participating physicians were recruited by word of mouth to allow the use of their records for secondary purposes. The family physicians involved in the study ranged in family practice experience from 5 to 25 years. Each physician contributed their 500 most recent charts from in-person visits that were by patients over 18 years of age, billed as regular appointments, and did not have additional billing for procedures such as mole removal or ear syringing. Walk-in visits were excluded as they did not reflect longitudinal family medicine visits. Visits with no chart note logged were excluded. Three of the participating physicians were female, and 2 were male. The chart review extended from 1 June 2023, retrospectively as far as was required to obtain 500 eligible charts for each physician, ending 8 November 2022. Each chart was analysed for the number of discrete concerns addressed in the visit. A concern was defined as "an issue requiring physician action in the form of a decision, diagnosis, treatment, or monitoring". As well, "if separate problems merged into 1 at the end of a visit (e.g. fever and chest pain merging to pneumonia), then only 1 problem was to be listed".

All analysis was conducted with SPSS version 28 using a significance level of α = 0.05. We built several models to determine the effect of patient sex, age group (< 40 years, 40–64 years, and ≥ 65 years), and provider. Both generalized estimating equations and generalized linear mixed models can accommodate count data using Poisson log-linear regression. Unlike generalized estimating equations, generalized linear mixed models explicitly model the within-subject correlation by using random effects. As a first step, we built a null generalized linear mixed model that contained the intercept and the grouping variable (physician) as a random effect, which allowed us to test the significance of the variance component for the grouping variable.
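The modelling sequence described here and carried forward in the next paragraph (a null random-intercept model to check clustering by physician, followed by a fixed-effects Poisson regression) was run in SPSS 28. Purely as an illustration of the same kind of model, a Poisson log-linear regression of concern counts could be fit as follows in Python with statsmodels; the file and column names are hypothetical, and the preliminary random-intercept check is noted only in a comment because its specification depends on the software used.

```python
# Sketch only: Poisson log-linear regression of the number of concerns per visit
# on patient age group, patient sex, and physician, with exponentiated
# coefficients read as incidence rate ratios (IRRs).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

visits = pd.read_csv("visits.csv")  # one row per eligible visit (hypothetical file)

# (A preliminary null model with a random intercept for physician would be fit
#  first to test the variance component for clustering; it is omitted here.)
fit = smf.glm(
    "concerns ~ C(age_group) + C(sex) + C(physician)",
    data=visits,
    family=sm.families.Poisson(),
).fit()

# Incidence rate ratios with 95% confidence intervals
ci = np.exp(fit.conf_int())
ci.columns = ["2.5%", "97.5%"]
print(pd.concat([np.exp(fit.params).rename("IRR"), ci], axis=1))
```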
The one-tailed significance value was 0.106, indicating that multilevel modelling was not needed to account for clustering by physician. As such, we created a generalized linear model containing patient age, patient sex, and physician as fixed effects. As this model did not include random effects, the calculated incidence rate ratios are equivalent to those that would be generated using generalized estimating equations. Finally, a separate generalized linear model was created to test for interactions between gender and age group. This final model is not shown as the interaction terms were non-significant, and the model with interactions did not fit as well as the one without interactions (higher Akaike's Information Criterion).

The retrospective review of 2,500 visits resulted in 1,746 unique participants. Demographic information for patient visits is provided in . Fifty percent of the visits addressed more than one complaint. The number of concerns addressed per visit ranged from 1 to 8, with a mean of 1.8 concerns per visit. Patient's age and physician were significant predictors of the number of complaints raised during a visit. Sex was not a significant predictor of multiple concerns.

Our study is the first of its kind to address the complexity of community family medicine in the Canadian context with respect to the number of concerns addressed per visit. Our results are consistent with previous research as we found that the majority of family medicine visits address more than one concern. Female sex was not found to be associated with multiple concerns per visit, consistent with some previous research. However, Bjørland and Brekke observed a relationship between female sex and presenting multiple concerns per visit. The association between increasing age and multiple concerns per visit is supported by earlier studies. Beasley et al. found visits with patients over 65 years of age address 3.88 concerns, as compared to a whole sample mean of 3.05. When examining elderly primary care visits, Tai-Seale et al. found that physicians addressed a median of six topics. Our study found that Provider 3, who was a male physician with the least practice experience at 5 years, addressed the most issues per visit. This differed from our prediction that patients seeing family physicians who have been practicing longer would present with more concerns per visit, as the physician has more experience in clinical time management and might be able to rearrange their day to accommodate a longer consultation time for those who need it. Fewer years in practice could be associated with increased concerns addressed per visit due to the provider being less experienced with setting time boundaries with their patients. As well, as physicians gain practice experience, it is possible that their charting detail decreases, resulting in fewer concerns charted per visit.

The current Saskatchewan family medicine billing system is constructed on the incorrect notion that each visit addresses only one issue. For example, if a patient comes in in need of one prescription renewal, this is billed for the same amount as if a patient comes in needing a prescription renewal, diagnosis of a cough, and musculoskeletal pain. The provincial medical systems need to adapt their billing technologies. Increasing appointment length to accommodate multiple concerns per visit would also be helpful, as patients value the ability of physicians to address all their health concerns in a single visit.
The structure of physician compensation should be adapted to reflect the complexity of the visit. These changes are especially important as our population ages and more people have complex health concerns with multiple concerns to address in their family medicine visits. Alternative payment models have already been implemented in Canada. For example, the British Columbia Longitudinal Family Physician Payment Model compensates family physicians for "time, patient interactions, and the number and complexity of patients in their practice" as of 1 February 2023. This model includes time codes for direct patient care, indirect patient care, and clinical administration. Alternatively, Ontario offers capitation-based payment models, which pay physicians per patient to deliver primary care services. Nova Scotia piloted a blended capitation model starting in 2022, which compensated family physicians based on the number of patients, number of services, and timely access to care. It is important to note that alternative payment models are associated with increased recruitment and retention of family physicians, of which we have a national shortage. The ability to accommodate more than one concern per standard family medicine visit could be realized through the widespread implementation of team-based care structures such as the Patient's Medical Home vision. This vision prioritizes interdisciplinary collaboration among health care professionals and has been associated with higher quality and more timely family medicine visits. As our health care system evolves to further prioritize patient-centred care, the Patient's Medical Home vision could be a valuable tool, in conjunction with changes in billing practices and compensation strategies, to help physicians better address more than one complaint per visit. Strengths and limitations The main strength of our study is the statistical power associated with the retrospective analysis of 2,500 patient charts. This retrospective chart analysis allowed us to analyse the impact of both provider and patient characteristics on family medicine visit complexity. Conversely, our study is limited by the accuracy and thoroughness of physician charting. If the physician was unable to completely chart all issues addressed in the visit, this would result in underreporting within the Accuro patient data. A trend towards underreporting among primary care patient charts has been well researched and could account for a falsely low percentage of family medicine visits addressing multiple concerns. , Physician actions could encourage patients to bring up fewer concerns per visit. As well, this study only explored patients from 5 physicians at a single clinic in Regina, Saskatchewan, which limits the generalizability of results. Future iterations of this research should consider using multiple trained researchers to analyse all charts to ensure concern count accuracy and increase the sample size of physicians. As well, as Artificial Intelligence improves and becomes more integrated within the electronic medical records for charting functionality, future research should explore its use in mitigating charting bias and underreporting.
Most standard family medicine visits address more than one concern. However, the Saskatchewan medical billing system only allows physicians to report one concern per visit to the data pool and standard visit lengths are recommended on the premise of addressing a single concern. Patients value the ability to have all their health concerns addressed in a single visit, and this structure is associated with lower use of health care resources and improved clinical quality. , Our study is the first step toward recognizing the frequency with which Canadian family physicians address more than one concern per visit and supports the future adaptation of visit length, billing practices, and physician compensation structure to reflect this complexity. Future research should consider an increased sample size of physicians to further examine the impact of physician characteristics on the number of patient concerns per visit. As our health care system evolves to further prioritize patient-centred care, the Patient's Medical Home vision could be a valuable tool, in conjunction with changes in billing practices, to help physicians better address more than one complaint per visit.
Comparative yield of molecular diagnostic algorithms for autism spectrum disorder diagnosis in India: evidence supporting whole exome sequencing as first tier test
d31b252f-096f-4f9f-bbf4-e04e2f553fd6
10403833
Pathology[mh]
Autism spectrum disorder (ASD) is a heterogeneous group of neurodevelopmental disorders (NDD) with a prevalence of approximately 1 in 160 children worldwide and with variable clinical presentations and outcomes . According to the latest version of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), it is characterized by impaired social communication along with repetitive behavior or restricted interests, which can persist throughout the lifetime . In addition to these core features, many affected individuals can be afflicted with comorbidities like intellectual disability and epilepsy. A review and meta-analysis of ASD in India reported a low prevalence of only 0.0014 − 0.0012% in children aged 1–18 years compared to developed countries like the United States and United Kingdom with a prevalence of 1-1.5% . However, a review across the South Asian population reported its prevalence rate ranging from 0.09 to 1.07%, which is similar to that observed in developed countries . The etiology of ASD is not fully understood, although, similar to several neurodevelopmental disorders, genetic risk and environmental exposure appear to contribute to the pathogenesis of ASD . Data from twin studies suggest a strong genetic role, and a quantitative meta-analysis of all published twin studies in the context of ASD has estimated the heritability component to be between 64 and 91% . Therefore, genetic testing is recommended in ASD patients, and as of 2013, an etiology underlying ASD could be established in around 6–15% of cases . Guidelines put forth a decade ago by the American College of Medical Genetics (ACMG) suggest using chromosomal microarray (CMA) as a first-line test in ASD since its diagnostic yield was estimated to be between 7 and 9% . However, since then, studies using whole exome sequencing (WES) have evidenced the sequence-level contribution of de novo variants to the etiology of ASD, and recent advancements in computational analyses of WES data suggest improved detection of copy number variants (CNVs) too. Indeed, two recent studies have shown that WES was able to detect nearly all clinically relevant CNVs that were detected by CMA, thereby increasing its diagnostic yield by approximately 1.6% . In addition, a recent retrospective study using WES on 343 clinically diagnosed children with ASD from Spain suggested a diagnostic yield of ~ 14%, with 75% of the cases harbouring a de novo variant . It is predicted that nearly 85% of disease-causing variants reside in the protein-coding and splice-site regions of the genome, which are well covered by WES . Various studies have repeatedly shown a better yield and utility of WES over CMA in NDD, and thus WES has now been suggested as a first-tier test for patients with intellectual disability/ NDD . Selection and availability of a first-tier test with high diagnostic yield is desirable in low-middle income countries (LMICs) like India, since patients and families bear the cost of genetic testing. To our knowledge, no study to date has been performed in the Indian population to delineate the genetic architecture of ASD, which can aid in the selection of a first-tier genetic test. Here, we report the first systematic study to assess the genetic architecture and molecular diagnostic yields for karyotype, Fragile-X testing, CMA and WES in a population-based cohort of 101 patient-parent trios with ASD from India.
Patient recruitment and sample collection The study included 101 consecutively recruited children with a confirmed clinical diagnosis of idiopathic ASD based on the DSM-5 . Children with prominent syndromic features, isolated speech delay or isolated sensory processing disorders were excluded from this study. Blood samples of the patient-parent trios were collected. The parents or guardians of all probands provided written informed consent as per the Helsinki Declaration, and the study was approved by the research ethics committee at Foundation for Research in Genetics and Endocrinology, Ahmedabad (ID: FRIGE/IEC/19/2020). All the methods in the study were carried out as per the Helsinki Declaration. High molecular weight genomic DNA was extracted using the desalting method and was stored at -20 °C until molecular genetic testing was carried out. Karyotyping and Fragile-X testing Karyotyping was performed in all cases regardless of sex, whereas Fragile-X testing was performed only in male probands. Karyotyping was carried out using GTG banding at 500 band resolution to check for gross chromosomal aberrations. Fragile-X testing was carried out by triplet repeat primed polymerase chain reaction (TP-PCR), which involved analyzing CGG repeat expansion in the 5' UTR of the FMR1 gene using a previously described method . Children with a normal chromosomal constitution and showing no expansion of the CGG repeats in the 5' UTR of the FMR1 gene were subsequently assessed with CMA and WES. Chromosomal microarray CMA was carried out using the CytoScan™ Optima array, GeneChip™ System 3000 and Affymetrix platform (Thermo Fisher Scientific, USA) as per the manufacturer's instructions. Chromosome Analysis Suite Software (ChAS) (Thermo Fisher Scientific, USA) was used to carry out the analysis of the data as per the manufacturer's recommendations, which suggested a minimum resolution of 1 Mb for losses, 2 Mb for gains and 5 Mb for copy neutral loss of heterozygosity. For all candidate CNVs, variants were primarily screened for population frequency and known disease associations using publicly available databases like the gnomAD database , DGV , DECIPHER and OMIM . Pathogenicity of CNVs was classified in accordance with the ACMG and ClinGen classification system . All candidate CNVs were validated in the proband and parents using SYBR Green based quantitative PCR (Q-PCR) on ABI's StepOne Real Time PCR system (Thermo Fisher Scientific, USA) (Supplementary Table 1). Whole exome sequencing Genomic DNA of the proband was subjected to selective capture and sequencing of the protein-coding regions, including exons and exon-intron boundaries of genes, using the Agilent SureSelect v6 enrichment kit (Agilent, USA). The prepared library was subjected to paired-end sequencing with a mean coverage of > 80-100x on the Illumina HiSeq or NovaSeq platform (Illumina, USA). Sequences obtained as FASTQ files were aligned to the human reference genome (GRCh37/hg19) using BWA MEM v0.7.12 . SNVs and indels were called using the GATK v4.12 HaplotypeCaller . In addition to SNVs and small indels, copy number variants (CNVs) were detected from the data using ExomeDepth v1.1.10 . Variant annotation, filtration and prioritization was performed using Exomiser v12.1.0 . Exomiser uses the hiPHIVE prioritization method that incorporates protein-protein interaction networks and multi-species ontologies along with ranking candidate genes based on the predicted variant pathogenicity associated with the phenotype.
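The alignment and variant-calling steps named above can be illustrated with a minimal command sketch. The Python snippet below is a hedged, simplified approximation using the same tools (BWA-MEM, samtools, GATK HaplotypeCaller); file names, thread counts and sample labels are hypothetical, and real exome pipelines also include read-group tagging, duplicate marking, base-quality recalibration and the ExomeDepth and Exomiser steps, which are omitted here.

```python
# Hypothetical minimal alignment + variant-calling sketch mirroring the tools named in
# the text (BWA-MEM against GRCh37, GATK HaplotypeCaller for SNV/indel calling).
# Paths and sample names are illustrative; production pipelines add read-group tags,
# duplicate marking and base-quality recalibration, which are omitted here.
import subprocess

REF = "GRCh37.fa"     # human reference genome (hg19), assumed to be indexed
SAMPLE = "proband"    # hypothetical sample name

def run(cmd: str) -> None:
    """Run a shell command and fail loudly if it errors."""
    subprocess.run(cmd, shell=True, check=True)

# 1. Align paired-end exome reads and sort the output BAM.
run(f"bwa mem -t 8 {REF} {SAMPLE}_R1.fastq.gz {SAMPLE}_R2.fastq.gz "
    f"| samtools sort -o {SAMPLE}.bam -")
run(f"samtools index {SAMPLE}.bam")

# 2. Call SNVs and small indels with GATK HaplotypeCaller.
run(f"gatk HaplotypeCaller -R {REF} -I {SAMPLE}.bam -O {SAMPLE}.vcf.gz")
```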
The phenotype information was coded using uniform human phenotype ontology (HPO) terminologies . Common variants were filtered based on minor allele frequency in the 1000Genome Phase 3 and gnomAD v2.1 databases. The minor allele frequency cut-off was set at 0.02 (2%). The cut-off was set assuming ASD has a global prevalence of 1:100; the frequencies of the major and minor alleles would be 0.9 (p) and 0.1 (q), respectively, based on the Hardy-Weinberg equilibrium. As ASD is caused by dominant de novo variants in the majority of cases (pq = 0.09) and prior estimates suggest a genetic diagnostic yield of approximately 33%, pq would be 0.027. Only non-synonymous variants in the coding region and canonical splice site variants with a depth of > 20x were used for analysis and clinical correlation. Various in-silico prediction tools such as PolyPhen-2 , SIFT , MutationTaster2 , LRT , CADD and MetaDome were used to predict the pathogenicity of non-synonymous and indel variants. A CADD_phred score of ≥ 15, slightly intolerant, intolerant or highly intolerant predictions of MetaDome and at least two damaging predictions from the remaining in silico tools were used for selection of candidate variants. In-silico predictions along with available knowledge from various sources and databases as described below were used in prioritising the variant. Post-gross filtering, variants were prioritized based on the following: (a) known disease-causing variants previously reported in databases like ClinVar and HGMD ; (b) novel variants in known genes based on the Z-score for missense and pLoF or LOEUF score for loss-of-function variants available in the gnomAD database ; (c) variants in novel candidate genes wherein the respective gene was additionally evaluated for its function using UniProt and Human Protein Atlas (proteinatlas.org) . Tissue expression using the GTEx database (gtexportal.org), association/ interaction with known ASD genes using the STRING database and plausible phenotypic outcome in murine models based on the MGI database were assessed. All candidate variants were assessed using IGV to evaluate their quality. In the case of candidate CNVs, variants were primarily screened for population frequency and known disease associations using publicly available databases like the gnomAD database , DGV , DECIPHER and OMIM . Pathogenicity of CNVs was classified in accordance with the ACMG and ClinGen classification system . All candidate SNVs and indels were validated in the proband and parents using bi-directional Sanger sequencing on ABI's SeqStudio platform (Thermo Fisher Scientific, USA), whereas all candidate CNVs were validated using SYBR Green based quantitative PCR (Q-PCR) on ABI's StepOne Real Time PCR system (Thermo Fisher Scientific, USA) (Supplementary Table 1). This was conducted to delineate the mode of inheritance and reclassify variant pathogenicity. The classification of SNVs was carried out according to the American College of Medical Genetics – Association for Molecular Pathology (ACMG-AMP) guidelines and ClinGen framework .
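To make the filtering criteria above concrete, the following Python sketch applies the stated thresholds (minor allele frequency < 0.02, read depth > 20x, CADD phred ≥ 15, and at least two damaging in-silico predictions) to a hypothetical annotated variant table. The input file and column names are illustrative assumptions, not the pipeline actually used in the study, and the MetaDome tolerance check is folded into the same pattern.

```python
# Hypothetical post-annotation filter reflecting the thresholds described in the text.
# Column names (gnomad_af, depth, cadd_phred, sift, polyphen2, mutationtaster2, lrt)
# are illustrative; real Exomiser/annotation output would need to be mapped onto them.
import pandas as pd

DAMAGING = {"deleterious", "probably_damaging", "disease_causing"}

def damaging_votes(row) -> int:
    """Count how many in-silico tools flag the variant as damaging."""
    return sum(str(row[tool]).lower() in DAMAGING
               for tool in ("sift", "polyphen2", "mutationtaster2", "lrt"))

def filter_candidates(variants: pd.DataFrame) -> pd.DataFrame:
    """Keep rare, well-covered, predicted-damaging coding/splice variants."""
    mask = (
        (variants["gnomad_af"].fillna(0) < 0.02)   # MAF cut-off derived from prevalence
        & (variants["depth"] > 20)                 # minimum sequencing depth
        & (variants["cadd_phred"] >= 15)           # CADD threshold from the text
    )
    rare = variants[mask].copy()
    rare["n_damaging"] = rare.apply(damaging_votes, axis=1)
    return rare[rare["n_damaging"] >= 2]           # at least two damaging predictions

if __name__ == "__main__":
    candidates = filter_candidates(pd.read_csv("annotated_variants.tsv", sep="\t"))
    print(candidates[["gene", "hgvs_c", "gnomad_af", "cadd_phred", "n_damaging"]])
```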
Study cohort The study cohort consisted of 101 well-defined patient-parent trios diagnosed with moderate to severe ASD of unknown etiology as per the DSM-5 criteria. The average age at recruitment was 5 ± 3 years and ranged from 2 to 6 months to 16 years (Table ). The average maternal and paternal age at the time of conception was 28 ± 4 years and 30 ± 4 years, respectively. The cohort included 72 males (71%) and 29 females (29%), suggesting a male to female ratio of approximately 3:1. Five families had more than one child diagnosed with ASD (Supplementary Information 1). Consanguinity was noted in 8 families (7.9%), whereas non-consanguinity and endogamy were noted in 31 (30.7%) and 62 (61.4%) families, respectively.
All 101 probands with ASD also had developmental delay and intellectual disability, with some of them having subtle dysmorphism (large and/ or cupped ears, long eyelashes, telecanthus, thin upper lip) (n = 28/101; 27.7%) and epilepsy (n = 28/101; 27.7%) (Supplementary Table 2). Outcomes from karyotype and fragile X testing Sequential genetic testing was performed in all 101 patients, beginning with karyotyping and followed by fragile X testing (only in male probands), CMA and WES. None of the probands showed gross chromosomal aberrations or had expanded triplet repeat tracts (full-mutation alleles with > 200 CGG repeats) in the 5'-UTR region of the FMR1 gene. Therefore, all probands were subsequently tested using CMA and WES. Outcomes from chromosomal microarray From the 101 probands in whom CMA was performed, pathogenic CNVs were detected in 3 cases (2.9%), including two deletions and one duplication (Table ). Proband ASD-076 had an 8 Mb deletion at the 15q11.2 locus which encompassed 20 OMIM genes and is known to cause 15q11.2 deletion syndrome (OMIM#615,656) or Angelman syndrome (OMIM#105,830). Compared to individuals with class II deletions (BP2-BP3; ISCA-37,478), individuals with large class I deletions (BP1-BP3; ISCA-37,404) at the 15q11.2 region are observed to have a high likelihood of language impairment and autistic traits, similar to that seen in the proband in our study . Patient ASD-103 was detected with a deletion of 0.19 Mb size at the 9q34.3 locus which encompassed 6 OMIM genes and is associated with Kleefstra syndrome I (OMIM#610,253). Individuals with a > 1 Mb deletion of the 9q34 locus have a severe phenotype including congenital anomalies such as heart defects, limb anomalies, seizures and respiratory distress. In contrast, individuals having a < 1 Mb deletion are observed with a milder phenotype, which in part could explain the phenotype in the proband in the current study, such as bruxism, drooling, subtle facial dysmorphism and recurrent episodes of vomiting . Lastly, proband ASD-050 was detected with a 0.52 Mb duplication on the 1q22 locus which consists of 8 OMIM genes. This is a rare CNV which has previously only been reported in a boy with intellectual disability and psychiatric disturbances . Multiple individuals in this family were affected and the duplication variant segregated with the neurological features in all family members with this variant. All CNVs in our cohort were de novo in origin and were observed exclusively in male probands. Outcomes from whole exome sequencing WES was carried out in 99 of 101 cases, as the cohort contained two monozygotic twin pairs and only one proband from each twin pair was processed for WES. The 99 cases also included the three cases that yielded a result by CMA to assess the sensitivity of WES to detect CNVs. On average, approximately 3 candidate gene(s) or variant(s) were identified per proband (Supplementary Table 3).
Segregation analysis revealed that approximately 66.6% (n = 3 for CNVs and n = 17 for SNVs) of the cases were caused due to a de novo variant. De novo SNVs were found primarily in previously known ASD genes- MECP2, SCN2A, KCNQ2, TBL1XR1, CNTNAP2, TCF4, CAMK2A, NF1, AUTS2, FOXP2 and NLGN3. Of 17 de novo variants, 6 were predicted to be loss of function (pLOF) variants (35.2%) whereas the remaining were missense variants. Remarkably, 6 of the 17 patients had a de novo SNV in the MECP2 gene, which is associated with Rett syndrome (OMIM#312,750). Of these, 5 were female and 1 was a male proband. Interestingly, in a rare case of the male proband aged 2.5 years with Rett syndrome, we observed that the variant c.538 C > T (p.Arg180Ter) in the MECP2 gene originated through a post-zygotic de novo event which led to somatic mosaicism in the proband (Table ) . In our cohort of patients with pathogenic/ likely pathogenic variants, 5 probands (n = 5/30; 16.6%) were observed with biallelic or hemizygous variants in genes associated with NDD or metabolic disorders with a recessive mode of inheritance (Table ). Specifically, biallelic variants were detected in (i) ALDH4A1 gene which is associated with hyperprolinemia type II (OMIM#239,510), (ii) NEUROG1 gene which is associated with congenital cranial dysinnervation disorder and autism spectrum disorder , (iii) KDM6A gene which is associated with Kabuki syndrome 2 (OMIM#300,867), (iv) LMAN2L gene which is associated with mental retardation 52 (OMIM#616,887) and, (v) ALDH7A1 gene which is associated with pyridoxine dependent epilepsy (OMIM#266,100). In addition, 4 probands were identified with pathogenic/ likely pathogenic heterozygous variants, which were inherited from one of their parents. In 2 cases, the variants were inherited from unaffected mother and in 1 case the variant was inherited from an unaffected father. In the 4th case, pLOF variant c.202 C > T (p.Gln68Ter) in the RORB gene was inherited from father who also had a clinical history of seizures (Supplementary Table 2; Supplementary Information 1). Of note, in one case (ASD-003), paternal sample was un-available, hence the mode of inheritance couldn’t be deduced. Interestingly, ASD probands with epilepsy had a higher diagnostic yield (n = 15/28; 53.6%) compared to ASD probands without epilepsy (n = 15/73; 20.5%) ( χ 2 = 10.6, p = 0.001), however, no such association was observed for facial dysmorphism ( χ 2 = 0.67, p = 0.41) and social/ speech regression phenotypes ( χ 2 = 0.53, p = 0.47). Lastly, WES identified 22 VUS variants in 21 patients (n = 21/101; 20.8%; Supplementary Table 4). The variants were identified in genes that have previously been associated with or implicated in ASD etiology as per the Simons Foundation Autism Research Initiative (SFARI) Gene Database and Autism Database (AutDB). Of these, majority of the probands were detected with heterozygous variants (66.6%) which were inherited from either of the unaffected parents with equal distribution. Of note, 3 of the 21 patients following segregation analysis were detected with missense variants in the KMT2C gene (Kleefstra syndrome 2; OMIM#617,768) which were inherited from a healthy parent. Whilst the majority of the cases have been reported with a de novo variant in the KMT2C gene, 4 reports observed variants being inherited from a healthy parent suggesting a potential oligogenic mode of inheritance . 
Almost a decade ago, the ACMG published guidelines recommending CMA as a first-tier test for delineating the genetic cause of ASD and other NDDs . Since then, WES coupled with advancements in computational analyses has enabled the simultaneous detection of SNVs and CNVs. Studies carried out in multiple ethnic populations since 2015 have shown an increased diagnostic yield from WES compared to CMA in ASD . This outcome is supported by the observation of a high proportion of de novo SNVs in ASD patients, which are not detectable by CMA. To our knowledge, we here report the first description of the genetic architecture of ASD and simultaneously carry out diagnostic yield comparisons of karyotype, FMR1 triplet repeat expansion, CMA and WES in a cohort of 101 patient-parent trios of Indian origin. Our data are in congruence with prior reports and support the utility of WES as a primary genetic diagnostic method for ASD. In the present cohort, WES detected pathogenic/ likely pathogenic variants causative of the ASD phenotype in 29.7% of the cases, in contrast with 2.9%, 0% and 0% from CMA, FMR1 triplet repeat expansion and karyotype testing, respectively. Indeed, all three CNVs detected by CMA were also detected by WES, together with a fourth CNV which was detected by WES alone. Interestingly, the low yield of CMA in the present cohort can be attributed to two potential reasons. First, gross dysmorphism was an exclusion criterion during recruitment of cases for the study. A prior study by Tammimies et al. has shown a higher diagnostic yield of CMA in children with ASD and major congenital anomaly compared with children with minor physical anomaly . Second, the Affymetrix CytoScan Optima oligonucleotide array was used in the current study. The platform consists of 315,608 probes and requires at least 25 probes to call a loss or gain of approximately 100 kb in size. A prior study has shown a trend for differential diagnostic yield with CMA based on both platform resolution and phenotypic manifestation in ASD patients . A higher resolution microarray (1 million probes or more) had a higher diagnostic yield in ASD patients with minor physical anomalies compared to a low resolution microarray (44k platform); however, this difference was abated when the test was carried out in ASD patients with major congenital anomalies . It is therefore plausible that the current platform may have missed CNVs that are beyond its detection limit, which could have been picked up with a higher resolution microarray platform. The diagnostic yield in the present cohort is concordant with those reported previously from individual cohort studies . Indeed, a recent meta-analysis in patients with NDD, i.e. global developmental delay, intellectual disability and ASD, showed the diagnostic yield of WES to range from 31 to 53%, in contrast to CMA with a yield ranging from 15 to 20% . Based on these results, Srivastava et al. outlined a consensus statement and a stepwise algorithm for NDD diagnosis whereby WES is presented as the first-tier test followed by CMA and/or other orthogonal tests.
Interestingly, we observed that in 66.6% and 16.1% of the cases with a genetic diagnosis for ASD, the mode of inheritance of the variant was de novo and recessive, respectively. This is in congruence with prior patient-parent trio cohort studies whereby similar rates for the variant's mode of inheritance were observed . All genes identified as carrying potential causative variants were subjected to STRING analysis v11.5 (Fig. ). The network statistics consisted of 37 unique proteins resulting in 67 various protein-protein interactions (PPI) amongst themselves. In comparison, a random set of the same number of proteins would result in only 12 different interactions. With a p-value of < 1.0e-16, a statistically significant enrichment of PPI in the present cohort indicated a biological connection amongst these proteins. The majority of these proteins are involved in synaptic formation, transcription and its regulation, ubiquitination and chromatin remodeling, as has been observed in prior studies . This leads to a plausible hypothesis that the genetic architecture and etiopathogenesis of ASD are similar across ethnicities and that the introduction of a uniform stepwise genetic testing algorithm would yield similar diagnostic yields. In our cohort, three genes ( LRFN1 , UNC13A and UNC79 ) were identified as potential novel candidates for ASD. The variant in the LRFN1 gene was the result of a de novo event. LRFN1 interacts with DLG4 , a known ASD gene vital in the formation of the post-synaptic complex required for signal transduction . DLG4 is classed under a high confidence category with a gene score of 1 in the SFARI database and has an Evaluation of Autism Gene Link Evidence (EAGLE) score of 2.45, which suggests limited but not contradicting evidence of its role in ASD. Due to the direct interaction between the two genes, LRFN1 could be considered a potential candidate for ASD, although functional validation is required and was beyond the scope of the current study. The variants in the UNC13A and UNC79 genes were inherited from likely asymptomatic parents and classed as VUS. Both these genes have been listed in the AutDB and SFARI databases and have been considered novel due to the absence of an associated phenotype in the OMIM database. A patient with developmental delay, dyskinetic movement disorder and autism has previously been identified with a de novo variant in the UNC13A gene . Additionally, experimental evidence suggests its direct interaction with a known ASD-associated gene, STXBP1 . Only recently, the UNC79 gene has also been associated with neurodevelopmental features including autism . With increasing awareness of ASD amongst the general populace, there is a high likelihood of an increase in demand for genetic testing in children with ASD. In a survey of parents having a child with ASD in the USA, 80% of the parents indicated that they would pursue genetic testing to identify the risk of ASD in a younger sibling . However, financial concerns, not being offered genetic testing by a physician or a geneticist and lack of awareness are amongst the most common reasons for not opting for genetic diagnosis . In addition, with the advent of development and deployment of new treatments such as trofinetide for Rett syndrome, there is likely to be an increase in uptake of genetic testing . This suggests that adoption of a uniform genetic testing algorithm coupled with educating primary care physicians and non-genetic specialists could improve rates of genetic testing and diagnosis in children with ASD.
Limitations The limitations of our study include a relatively small sample size, possible ascertainment bias related to patients having a primarily non-syndromic form of ASD without gross congenital dysmorphism, carrying out WES and CMA in the proband only followed by segregation analysis by orthogonal approaches on prioritized variants, and the absence of a detailed cost-effectiveness assessment. Despite this, we observe diagnostic yields similar to those observed in other cohorts . Additionally, there are technical and interpretation limitations to the identification and prioritization of variants which were classified as VUS. Delineation of the pathogenicity of these variants is often challenging because of their incomplete penetrance, variable expressivity and/or sex-specific bias . This would require re-assessment of WES data every 2–3 years, as per the consensus statement by Srivastava et al., using updated datasets and new computational tools . Lastly, WES and CMA, due to their inherent technical limitations, are unable to resolve complex structural re-arrangements (e.g. inversions and translocations) which could play a role in the pathogenesis of NDD , although newer genomic technologies such as long-read whole genome sequencing could help to assess their role in the etiology of ASD. Data from large scale genomic and transcriptomic studies have helped to delineate the genetic architecture of ASD in European/ non-Hispanic white populations. To the best of our knowledge, this is the first study to delineate the genetic architecture of ASD in the Indian population, with de novo variants in genes involved in synaptic formation, transcription and its regulation, ubiquitination and chromatin remodeling as the primary cause. In congruence with data from other ethnic populations, the current study provides evidence supporting the implementation of WES as the first-tier test in the genetic diagnosis of ASD.
Biological Basis of Breast Cancer-Related Disparities in Precision Oncology Era
85ddbd4a-1be5-423f-841e-e5936c4faf38
11012526
Internal Medicine[mh]
In the frame of P6 medicine, which is personalized, predictive, preventive, participatory, psycho-cognitive and population-based , precision medicine investigates the biological basis of diseases using molecular information that emerges from different omics fields , allowing for a more accurate therapy of different groups of patients who differ in disease susceptibility as well as treatment response . Precision oncology (PO) was defined as "the molecular profiling of tumors to identify targetable alterations" and allows for personalized treatments to improve cancer patient outcomes . PO requires the discovery of predictive and prognostic biomarkers, which have been found to have racial and ethnic differences among other types of disparities, such as chronological or biological age- or sex/gender-related ones . For example, the application of biomarkers found in serum to distinguish between BC patients and healthy people can be an important and minimally invasive tool to improve screening programs . Thus, Srivastava et al. (2019) identified both race-specific serum biomarkers, such as the tyrosine kinase receptor c-Kit, the retinoblastoma protein (Rb) and vascular endothelial growth factor receptor 2 (VEGFR2), and non-race-specific serum protein biomarkers, such as pyruvate kinase 2 (Pyk2), for racial disparities in BC progression . Thus, c-Kit, a receptor tyrosine kinase that induces the migration of triple-negative BC (TNBC) cells , has been identified as overexpressed in African American women (AAW) with BC compared to Caucasian American women (CAW) with BC, while Rb, known to inhibit tumor progression, has a lower prevalence in the serum of AAW compared to CAW with BC. Moreover, c-Kit was associated with BRCA1-mutation-associated BC . In addition, VEGFR2 was significantly overexpressed in AAW cancer serum compared to AAW controls, while this was not the case in CAW patients' serum compared to CAW controls . Cancer-related disparities have been mainly associated with geographical disparities , socioeconomic position and social inequities, also known as "social epidemiology" , and different racial and ethnic groups . It is known that counties characterized by elevated rates of cancer mortality usually have a higher proportion of non-Hispanic-Black adults or an older population, greater poverty, and more rurality . Overall, several factors that contribute to cancer health disparities are comorbidities, social stress exposure, ancestral adaptations, such as immune response at the populational level, mitochondrial function, acquired somatic mutations in oncogenes or tumor suppressor genes and dysbiosis . For BC, most evidence emphasizes racial or ethnicity-related disparities. The concepts of biological race and racial disparities are human inventions, being sociopolitical constructs , so many authors have stated that biomedical researchers and clinicians should eliminate the use of race as a biological variable .
However, both the public health system and the scientific literature published to date in biomedical fields use the term "racial and ethnic disparities"; these disparities result from the integrative interaction between patient-related intrinsic factors, such as phenotypic-, genomic-/proteomic-, metabolomic-, epiomic-, developmental- or/and evolutionary-based characteristics, and external variables, such as exposure to a natural and/or anthropized environment, the psycho-socio-economic landscape or organizational and health care system factors that act on individuals over their life course . However, when discussing racial or ethnic disparities, many authors do not provide the necessary explanations for these differences , particularly at the level of molecular pathways and biological processes. Thus, Linnenbringer et al. (2017) integrated several multi-level hypotheses from stress biology, BC epidemiology and health disparity-related data to develop a structural perspective for emphasizing racial disparities in BC subtypes, concluding that socially patterned psycho-social stressors, physiological and behavioral responses and genomic pathways contribute together to the increased risk of more aggressive BC and higher mortality among Black women compared to White women . Many other works have shown that BC incidence and mortality rates vary across geographic regions and countries . However, while the geographic region could be used to explain some genetic, biological and environmental differences , countries are not ideal units for the analysis of cancer rates due to variations in population size, ethnic/genetic mosaicism due to genetic mixture, socioeconomic and cultural lifestyle and many other local variable factors, so some authors recommend the use of zone design procedures in disparity-based studies . Thus, evidence suggests that BC incidence is usually greater in Western regions, such as North America, Northern and Western Europe and even Australia and New Zealand, than in the majority of African and Asian nations . Other authors highlighted the differences in BC incidence and mortality rates in developed countries compared to low- and middle-income countries . The incidence and mortality rates of different cancers have also been associated with sex-specific disparities . Overall, the incidence of different types of cancer seems to be higher in men, who are more prone to die from cancer, than in women, for relatively unknown reasons . Male breast cancer (MBC) is a rare disease, so MBC and female breast cancer (FBC) are considered different entities , even if both sexes share some common BCR factors . However, sex differences in cancer incidence have been associated with regulatory mechanisms at the genetic/molecular level and sex steroid hormones, i.e., estrogen and progesterone, which modulate gene expression in different cancers . In this context, it is important to evaluate the dose–response relationships between sex steroid hormones and BCR, which were most evident for postmenopausal compared to premenopausal women . Dong et al. (2022) emphasized disparities in the stage at diagnosis for BC and described seven phenotypes of late-stage BC associated with a high uninsured rate, low mammography use, high area deprivation, rurality and high poverty levels .
Thus, in the United States, these authors showed that these phenotypes were most prevalent in southern and western states, whereas phenotypes associated with a lower percentage of late-stage diagnosis were most prevalent in the north-eastern states and select metropolitan areas . Thus, the aim of this review was to deepen the understanding of BC-related disparities, mainly from a biomedical perspective. Different ethnic populations are characterized by different susceptibilities to diseases, so group-based differences in BC incidence and mortality rates result from race and ethnicity . There are many works that validate race-related differences in various organ structures and development, such as brain exposure to childhood adversity or pubertal mammary gland development in conjunction with diet . Generally, African American (AA)/Black individuals are known to carry a significantly greater cancer burden, with the poorest likelihood of survival and the highest death rates of any race with regard to various types of cancer . Consequently, AA women (AAW)/Black women have a 41% higher mortality rate compared to White women/non-Hispanic White (NHW) women . For BC, racial disparities are accentuated in Black women, who have a lower incidence than White women, while their BC-related mortality and disease aggressiveness are the highest among all races . Thus, the cited incidence rates are as follows: 130.8 per 100,000 among White women, 126.7 per 100,000 among Black women, and 93.2 per 100,000 among Asian/Pacific Islander women . The cited BC mortality rates are 28 per 100,000 for Black women, 20.3 per 100,000 for White women and 11.5 per 100,000 for Asian/Pacific Islander women . The incidence of BC in Native American, also known as American Indian and Alaskan Native (AI/AN), women is significantly lower than the incidence in both NHW and Black women, but the prognosis after a diagnosis of BC is worse compared to White women . Several disparity-related data are summarized in . 2.1. Genetics/Genomics of Breast Cancer Disparities Overall, BC is a heterogeneous and polygenic disease , with 10–15% of BC cases being caused by hereditary/germline mutations in BC susceptibility genes , known as high-penetrance alleles/high-risk variants (i.e., BRCA1 , BRCA2 , TP53 , STK11 , CDH1 and PTEN ), moderate-penetrance alleles/moderate-risk variants (i.e., ATM , PALB2 , CHEK2 , BRIP1 , RAD51C ) and common low-penetrance alleles/low-risk variants . It is known that specific breast cancer 1/2 ( BRCA1/2 ) mutations in the worldwide population are highly ethnic-specific , with a high frequency of BRCA variation in specific countries or ethnic groups, especially within genetically isolated populations, where these mutations descend from a single founder . Wang (2023) summarized the main factors that contribute to the ethnic specificity of the BRCA variation, such as strong positive selection on human BRCA , adaptation to the living environment, genetic drift and founder variation in different ethnic populations . BRCA1 and BRCA2 mutations are estimated to be responsible for about 3% of all BCs, and other less common high-penetrance genes account for less than 1% of all BCs . BRCA1 and BRCA2 genes encode proteins involved in DNA repair and homologous recombination (HR) , playing key roles in the maintenance of genome stability , including cell cycle checkpoint activation as well as transcriptional regulation and apoptosis .
High-penetrance germline mutations in the tumor suppressor genes result in a loss of tumor suppressor activity and an increased risk of BC. Thus, the lifetime risk of BC in women with the BRCA1 pathogenic mutation is 84%. Of the current BRCA variant data, 80% were derived from populations of European descent, which constitute only 20% of the world population. Moreover, the mutational spectrum within BRCA1/2 was mainly associated with an increased risk of TNBC. Thus, BRCA1/2 pathogenic variants (PVs) have been reported in many different populations, like Ashkenazi Jewish people, who are at a higher BCR because of a high frequency of BRCA1/2 gene mutations. The results obtained by Bhaskaran et al. (2019) suggest that the present Caucasian population-level BRCA mutation signature is insufficient to accurately reflect BRCA status in groups other than Caucasians, for instance, people who are Chinese. Somatic mutation analysis reveals racial differences in specific high-prevalence genes, such as tumor protein 53 (TP53) (46% in AAW vs. 27% in Caucasian women (CAW)), phosphatidylinositol-4,5-bisphosphate 3-kinase (PIK3CA) (20% in AAW vs. 34% in CAW) and MLL methyltransferase family genes (12% in AAW vs. 6% in CAW). Yadav et al. (2021) performed a multigene hereditary cancer panel test for women with BC to evaluate the racial and ethnic differences in the prevalence of germline PVs and the effect of race and ethnicity on BCR among carriers. These authors showed that the prevalence of BRCA1 PVs was higher in Ashkenazi Jewish women and Hispanic women compared to NHW women, checkpoint kinase 2 (CHEK2) PVs were statistically significantly lower in Black and Asian women, BRCA1-associated RING domain 1 (BARD1) PVs were associated with high BCR in Black, Hispanic, and Asian women, and ataxia-telangiectasia mutated (ATM) PVs were associated with increased BCR among all races and ethnicities except Asian people, whereas CHEK2 and BRIP1 PVs were associated with increased BCR among NHW and Hispanic women. Moreover, Kwong et al. (2021) showed that the prevalence of the partner and localizer of BRCA2 (PALB2) mutation in BC also varies across different ethnic groups. Germline PVs of the PALB2 tumor suppressor gene, which binds to and co-localizes with BRCA2 in the DNA repair pathway, are associated with an increased BCR, more aggressive phenotypes, particularly the TNBC subtype, and higher mortality. AAW are more likely to have a basal subtype of BC and TP53 mutations and a lower frequency of PIK3CA mutations than White Americans. DNA polymerases are also essential for DNA replication, repair mechanisms and tolerance of DNA damage. Evidence suggests that DNA polymerases are associated with cancer, with many mutations in cancer cells being the result of error-prone DNA synthesis by non-replicative polymerases or the inability of replicative DNA polymerases to proofread mismatched nucleotides. Family et al. (2014) analyzed single-nucleotide polymorphisms (SNPs) in DNA bypass polymerase genes, such as DNA polymerase theta (POLQ), and their association with BC and BC subtypes in AAW and White women, concluding that the analyzed SNPs are in high linkage disequilibrium in both races, but these can be associated with the risk of luminal BC. Cells with BRCA1/2 mutations have a deficient homologous recombination repair (HRR) mechanism, so poly(ADP-ribose) polymerase (PARP) inhibitors can be considered precision-targeted anticancer drugs in BRCA1/2-mutated women.
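To put the somatic mutation frequencies quoted above side by side, the following minimal sketch (illustrative only, restating the percentages cited in this subsection) computes the fold-difference in prevalence between AAW and CAW for each gene.

```python
# Illustrative only: somatic mutation frequencies (%) cited above for AAW vs. CAW.
somatic_freq = {
    "TP53":       {"AAW": 46, "CAW": 27},
    "PIK3CA":     {"AAW": 20, "CAW": 34},
    "MLL family": {"AAW": 12, "CAW": 6},
}

for gene, freq in somatic_freq.items():
    fold = freq["AAW"] / freq["CAW"]  # prevalence in AAW relative to CAW
    print(f"{gene}: {freq['AAW']}% vs {freq['CAW']}% (AAW/CAW = {fold:.1f}x)")
```

Framed this way, TP53 and MLL-family mutations are roughly 1.7- and 2-fold more frequent in AAW, whereas PIK3CA mutations are about 0.6-fold as frequent, i.e., relatively enriched in CAW.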
Hsiao and Lu (2021) showed that the identification of accessible homologous recombination deficiency (HRD)-type genes, which are relevant based on race, has significant clinical relevance for various malignancies, including BC . Thus, these authors showed that in both White and Asian populations, more substantial mutation regions were discovered in ATM , BRCA2 , the catalytic subunit of DNA polymerase epsilon ( POLE ), and type II tropoisomerase 2B ( TOP2B ), whereas variants in the replication timing regulatory factor 1 ( RIF1 ), epidermal growth factor receptor ( EGFR ) and phosphatase and tensin homolog ( PTEN ) have been identified in both White and African American/Black communities. Moreover, in the African American/Black populations, there are associations of bloom syndrome helicase ( BLM ), an autosomal dominant BC susceptibility gene , with breast invasive carcinoma . 2.2. Breast Cancer Disparities Are Associated with Tumor Biology The racial and ethnic differences in BC outcomes are also influenced by tumor biology . Sarink et al. (2021) found that hormone receptor (HR) presence in BC prevalence varies by race/ethnicity . These authors demonstrated that ER+ BCR was greater in Native Hawaiians and lower in Latina women and African Americans, although ER– BCR has increased rates in African Americans, as observed through the use of multi-ethnic cohort research . Furthermore, even if the known risk variables do not entirely account for racial/ethnic variations in risk, the same authors demonstrated that relationships between obesity and oral contraceptive (OC) use with ER+ and ER− BCR differ by race/ethnicity . The disparities are also particularly pronounced in ER+ BC patients, with AAW with ER+ subtype of BC experiencing four–five times higher mortality rates than their white counterparts . Aberrations in insulin growth factor (IGF) signaling induced by obesity and other conditions may also contribute to racial/ethnic disparities in BC outcomes . The insulin-like growth factor 1 (IGF1) axis includes insulin growth factors (IGF1 and IGF2), IGF receptors (IGF1R and IGF2R), IGF-binding proteins (IGFBPs) and IGFBP proteases . IGF1 stimulates the developmental process of the mammary during fetal development, but at elevated levels, it also plays a role in the formation, progression and metastasis of BC . It is known that IGF1 plays a key role in obesity-related endocrine cancers such as BC . IGF2 is also a potent mitogen that induces cell proliferation and survival signals through activation of the IGF1 and insulin receptors (IRs), while IGF2 plasma levels are regulated by cellular uptake through IGF2R . Thus, IGF1, modulated by IGF-binding protein-3 (IGFBP-3), and IGF1R were associated with stimulation of the pro-growth MAPK signal transduction pathway and the PI3K/Akt anti-apoptotic pathway that sustains BC development , so up to 50% of BC cases express the activated form of IGF1R . Higgins et al. (2005) showed that numerous studies have reported higher systemic concentrations of IGF1 among AAW compared with European American women (EAW) before puberty . Kalla Singh et al. (2010) showed that IGF1R’s and IGF2R’s differential expressions may contribute to an increased risk of neoplastic transformation in young AAW and to a more aggressive BC phenotype compared to CAW . 
Moreover, Werner and Bruchim (2012) reviewed the interactions between IGF and BRCA1 signaling pathways, emphasizing the convergence of IGF1-mediated cell survival, proliferative pathways and BRCA1-mediated tumor suppressive pathways . Taking account of differences in tumor characteristics, triple-negative breast cancer (TNBC), which is the most aggressive type of BC, occurs at a higher frequency in AAW compared to CAW, even if the mutational landscape of established tumor regulatory pathway genes in TNBC seems similar . Thus, 30% of BC diagnosed in AAW are TN, compared to 11–13% of non-AAW . Among White women, 76% are diagnosed with the luminal A subtype of BC, while 61% of Black women have TNBC . Many explanations for these disparities are based on differential familial, socioeconomic, occupational-related and medical care factors rather than on biological/biomolecular differences between races and ethnic groups . Li et al. (2017) suggested that the development of personalized treatment strategies for BC patients can be improved by considering both germline and tumor-specific somatic mutations, as well as expression profiles related to drug and xenobiotic metabolizing enzymes (DXME) . These authors identified significant differences among CA, AA and Asian American populations in the expression of DXME, as well as in the activation of pathways involved in commonly used chemotherapeutic drugs . To exemplify, the human cytochrome P450 CYP2D6 isoform enzyme plays an important role in xenobiotic metabolism , and CYP2D6 gene polymorphism can modify the pharmacokinetics of commonly used medications . The frequency of CYP2D6 alleles, which are combined at the individual level, allowing for the prediction of the metabolizer phenotype, ranging from poor metabolizer to ultra-rapid metabolizer, differs from one population to another, which explains the inter-individual differences in medication response . The tumor environment (TME) has an important role in racial disparities and patient outcomes . Interestingly, Kim et al. (2023) showed that, compared to White women, Black women with residual ER+ BC after neoadjuvant chemotherapy have worse distant recurrence-free survival, which can be due to a pro-metastatic TME and an increased density of “Tumor Microenvironment of Metastasis” (TMEM) doorways as portals for systemic cancer dissemination that contribute to racial disparities in BC . These authors characterized the TMEM as microanatomical niches enriched for cancer stem cells (CECs) and composed of three-cell structures: a tumor cell that expresses the mammal-enabled (MENA) protein, an actin-regulatory protein involved in cell motility and adhesion , a tyrosine-protein kinase (TIE2)-expressing macrophage M2 and an endothelial cell, which can be together used by tumor cells as a portal to intravasate and disseminate into the bloodstream . Consequently, racial differences in TMEM doorway density can contribute to racial differences in clinical outcomes . 2.3. BC Immune Landscape and BC Disparities Evidence suggests that Black patients tend to have in their TME an increased density of pro-tumorigenic immune cells, such as M2 macrophages, which become a major population of tumor-associated macrophages (TAMs), and regulatory T cells as well as microvasculature compared to White BC patients, as a putative result of evolutionary selection for a more powerful immune response in patients with African ancestry . 
Increased angiogenesis as well as M2 macrophages, known as tumor promotors, which support BC progression, tumor cell growth and spread, blood vessel development, cancer stem cell development, regulation of metabolic processes, and immunity resistance , have been correlated with increases in metastasis through the formation of TMEM. Moreover, Black patients also have high serum levels of inflammatory cytokines that sustain a pro-metastatic TME . Tumor necrosis factor-α (TNF-α) is a multifunctional cytokine known as a critical regulator of inflammation and tumor progression . Black women have greater TNF-α production during mid-pregnancy and lower IL-1β production postpartum . It is also known that AAW tend to have higher systemic inflammation levels and endothelial dysfunction compared with CAW . This can be a consequence of TNF-α overexpression, as well as other pro-inflammatory cytokines secreted by tumor and stromal cells to recruit leukocytes with metastatic effects, to generate cancer stem cells, epithelial–mesenchymal transition (EMT), invasion, resistance to therapy and metabolic reprogramming . Evidence has revealed a pro-tumorigenic role of TNF-α during BC progression and metastasis . Kochumon et al. (2021) showed that TNF-α activates the c-Jun NH 2 -terminal kinase (JNK/c-Jun) signaling pathway , promoting stem cell phenotype and tumorigenesis in TNBC through upregulation of the Notch1 signaling pathway , involved in normal mammary gland development as well as in BC tumorigenesis and progression . Koru-Sengul et al. (2016) showed that BC in Black women exhibits a higher number of immunosuppressive cancer-associated macrophages (CAMs) with proliferative activity and a specific disposition associated with lower survival compared with non-Black Latina women and CAW . Hirko et al. (2022) showed that Asian patients had increased levels of tumor-infiltration lymphocytes, reflecting disparities in the immune profile of BC in this population compared to Western patients, with applications in immune therapy . Moreover, specific gene native elongation factor complex E (NELFE) with histone methyltransferase activity was associated with worse survival exclusively for AA individuals. The same authors found that methionine levels are lower in plasma samples from AAW with BC, so hypermethylation has been suggested as a possible biological/epigenetic mechanism to explain the worse outcomes in AAW with BC because many cancer suppressor genes are silenced by DNA methylation . Thus, hypermethylation was correlated with high poverty levels in AAW and affects many pathways, such as p53, glucocorticoid receptor, estrogen-dependent BC signaling and cell proliferation (BCL2, JUN, ESR1, ESR2, CYP19A1) . 2.4. Metabolism-Related Disparities in Breast Cancer Like other tumor types, BC is accompanied by metabolic reprogramming required for the proliferation, growth, invasion and migration of BC cells . Attri et al. (2017) highlighted the racial disparity in the metabolic regulation of cancer . Recently, Santaliz-Casiano et al. (2023) conducted a metabolomics- and bioinformatics-based study and observed that metabolic alterations are differentially associated with both AAW and NHW women, providing greater insight into the biological mechanisms underlying racial disparities in BC survival . Thus, the authors observed decreased plasma levels of amino acids in AAW compared to healthy controls, while fatty acids were overexpressed in NHW patients. 
This study identified significant associations with regulators of metabolism, such as methionine adenosyl transferase 1A (MAT1A), DNA methyltransferase and histone methyltransferases for AAW, and fatty acid synthase (FASN) and monoacylglycerol lipase (MGL) for NHW. Many studies have identified complex interactions between metabolic syndrome (MetS) and BRCA1 germline mutations. AAW have a high prevalence of the MetS and are 70% more likely to be obese compared to NHW/CAW. Also, Japanese BC patients tend to weigh more than the general population. Liu et al. (2019) emphasized that, for Japanese women, a higher body mass index (BMI) was associated with an increased BCR in both pre- and postmenopausal women, while a higher BMI in Western countries was associated with an increased BCR in postmenopausal women and a decreased risk in premenopausal women. Furthermore, increases in obesity, particularly abdominal obesity, and BMI are also risk factors for BC in men, in correlation to increasing estrogen levels with weight gain because of the conversion of testosterone to estrogen by aromatase in adipose tissue. It is well known that TNBC is typically detected in young AAW and Hispanic women who carry a mutation in the BRCA1 gene. AAW have higher rates of type 2 diabetes than CAW, but paradoxically lower plasma triglycerides (TG), visceral adipose tissue and hepatic fat, and higher high-density lipoprotein (HDL) cholesterol. Eketunde (2020) concluded that patients with diabetes have a higher incidence and mortality of BC due to hyperglycemia and the Warburg effect, activation of the insulin pathway, insulin-like growth factor pathways, inflammatory cytokines, and regulation of endogenous sex hormones. Premature menopause, also named premature ovarian failure (POF) or premature ovarian insufficiency (POI), was reported by 1% of CAW, 1.4% of AAW, 1.4% of Hispanic women, 0.5% of Chinese women and 0.1% of Japanese women, with POI patients having a marginally higher insulin level.
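As a closing note on this subsection, two of the figures quoted above can be read as simple ratios relative to White women; the sketch below is illustrative only and restates the percentages cited in the text.

```python
# Illustrative only: figures quoted in this subsection, expressed relative to CAW/NHW women.
poi_prevalence = {"CAW": 1.0, "AAW": 1.4, "Hispanic": 1.4, "Chinese": 0.5, "Japanese": 0.1}  # % reporting POI
obesity_ratio_aaw = 1.70  # AAW reported as 70% more likely to be obese than NHW/CAW

for group, pct in poi_prevalence.items():
    print(f"POI prevalence, {group}: {pct}% ({pct / poi_prevalence['CAW']:.1f}x CAW)")

print(f"Obesity likelihood, AAW vs. NHW/CAW: {obesity_ratio_aaw:.2f}x")
```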
The incidence and mortality of different cancers have been associated with sex-specific disparities. Both women and men are subject to the health effects of gender. Even if these disparities happen for relatively unknown reasons, sex differences in cancer incidence have been associated with regulatory mechanisms at the genetic/molecular level and sex hormones, i.e., estrogen, which modulate gene expression in different cancers. BC occurs in either gender, as female breast cancer (FBC) or male breast cancer (MBC), but FBC is the principal cancer among women worldwide, whereas MBC is about 100 times less common than FBC. A total of 12.9% of all women, or one in eight women, will develop BC at a certain point in their lives. Consequently, BC is the most common cancer and the second highest cause of cancer death among women, while MBC is a rare disease, with an incidence of around 1.2 per 100,000. Thus, MBC represents less than 1% of all BC cases, accounting for 0.11% of all male neoplasms. Usually, patients with MBC have it detected at an advanced stage at the time of diagnosis, are at an older age, and have a worse overall survival (OS) rate compared to FBC patients, so MBC mortality is higher (18.2%) than FBC (17.2%). Moreover, Zeinomar et al. (2021) showed that Black men have worse overall survival following a BC diagnosis compared to White men. In the United States, rates were higher in Black men than White men for all BC subtypes, while among women, rates in Black people were 21% lower for HR+/HER2−, comparable for HR+/HER2+, 29% higher for HR−/HER2+ and 93% higher for TNBC. According to mammary developmental biology, in males, breasts are rudimentary and non-functional organs, but they develop similarly in female and male fetuses.
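Before returning to mammary development, a brief consistency check: the female and male figures quoted above fit together arithmetically. The sketch below is illustrative only; it reuses the White-women incidence rate cited earlier in this review (about 130 per 100,000) together with the MBC figures given here.

```python
# Illustrative consistency check using figures cited in this review.
fbc_incidence_per_100k = 130.8  # White women, per 100,000 (cited earlier in this review)
mbc_incidence_per_100k = 1.2    # men, per 100,000 (cited above)

ratio = fbc_incidence_per_100k / mbc_incidence_per_100k
print(f"FBC/MBC incidence ratio: about {ratio:.0f}x")  # ~109x, consistent with 'about 100 times less common'

lifetime_risk_women_pct = 12.9  # % of women developing BC, cited above as 'one in eight'
print(f"One in eight women = {100 / 8:.1f}%, close to the cited {lifetime_risk_women_pct}%")
```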
In males, pubertal androgens mediate the removal of the ducts and prevention of mammary tissue development, while in females, estrogen acts as an essential regulator of branching and the development of the pubertal mammary gland . This means that men have breast tissue in minimal quantity and have the potential to develop BC as well as females. FBC and MBC are considered phenotypically quite similar but different in their molecular profile , due to several genetic, hormonal and lifestyle/environmental risk factors . In addition, a positive family history of BC is considered a major MBC predisposition factor . Thus, many studies have emphasized both mutational and epigenetic similarities and differences between FBC and MBC, suggesting that some characteristics are conserved between them whereas others are not . Moreover, there are hypotheses that suggest that MBC could indicate a separate form of BC that has a higher dependence on genetic variants than FBC . Evidence suggests that MBC may have several distinct biological features, tending to be ductal type, luminal type A, estrogen receptor (ER)- and progesterone receptor (PR)-positive and human epidermal growth factor receptor-2 (HER2)-negative . Moreover, increased BCR in relation to obesity has been reported both in situ and in invasive tumors, and it seems to be higher for HER2-positive than HER2-negative tumors . Concluding here, MBC has been associated with a higher lymph node metastasis rate, higher ER positivity and lower HER2 rates , being considered an ER-driven BC . Moreover, ERα is associated with PR in FBC, whereas Erα is associated with ERβ and the androgen receptor (AR) in MBC . No luminal B or HER2 phenotypes were found in males and the basal phenotype is very rare, so male triple-negative breast cancer (TNBC) is a very rarely encountered disease . As in FBC, high-, moderate- and low-penetrance susceptibility genes have been recognized in MBC, but these genes and their impact are not similar in FBC and MBC . A total of 10% of all MBCs are hereditary forms caused by germline mutations in BC susceptibility genes . Men with BRCA1/2 mutations have an increased risk for BC: 7–8% with BRCA2 mutations and 1% with BRCA1 mutation, compared to 0.1% lifetime risk in the general population . In addition, the CHEK2 mutation has also been associated with an increased risk of MBC . Rates of the CHEK2 mutation seem to be higher in some countries compared to others, such as in Northern European countries, but are rare in Australia, Spain, and Ashkenazi Jewish people . Szwiec et al. (2021) reviewed a lot of studies that showed that male patients with mutations in the PALB2 gene have a seven-fold increased risk of MBC . Studying the molecular differences between the FBC and MBC methylomes, Abeni et al. (2021) reported different DNA methylation levels of GTPase-related genes (RHO-GAP, RHO-GEF, and RAB GTPase) and keratin-related genes as an essential component of the cytoskeleton rearrangement biological process . Known as key regulators of the cytoskeleton architecture, RHO GTPases are involved in membrane trafficking, gene transcription, cell migration, invasion, adhesion, survival and growth, and cancer initiation, metastasis and therapeutic responses . Thus, Abeni et al. 
(2021) showed that almost all genes included in the GO term “keratin filament” were hypomethylated in FBC compared to MBC, suggesting their overexpression in FBC in association with the hypomethylation of the cytokeratin genes KRT6A and KRT14 , which are hallmark features of TNBC . These authors suggested that the overexpression of these genes has been found to be positively associated with a high tumor grade in BC and the expression of KRT6A and KRT14 to be significantly associated with a basal molecular subtype of BC . On the other hand, the results obtained by Callari et al. (2010) sustained a prominent role of the AR gene in neoplastic transformation in MBC . AR gene maps to the X-chromosome, and X-chromosome polysomy, as well as an AR gene copy number increase, were emphasized in most invasive MBCs and in situ carcinomas . In addition, Mule et al. (2020) demonstrated that melanoma-associated antigen A ( MAGEA ) family members, also mapped on the X-chromosome and co-regulators of AR , are hypomethylated in MBC, leading to their overexpression, which also suggests AR protein overexpression . Chatterji et al. (2023) emphasized stanniocalcin 2 (STC2), the DEAD-box helicase family member DDX3 and the Dachshund family transcription factor 1 (DACH1) as underexploited prognostic biomarkers for MBC . STC2 is a glycoprotein hormone expressed in many mammalian tissues and overexpressed in various types of cancer, including human BC, facilitating cell adaptation to stress conditions, preventing apoptosis and promoting cell proliferation, migration, immune response, tumor growth, invasion and metastasis . STC2 is frequently co-expressed with ER, and it was found to be preferentially expressed in BCs of a luminal phenotype . Thus, the STC2 gene was overexpressed in MBC compared to FBC, emphasizing the greatest fold change between genders and being suggested as an independent prognostic factor for disease-free survival (DFS) in MBC . Conversely, STC2 expression seems to be a favorable prognostic factor associated with extended disease-free survival and OS in FBC . DDX3 is an RNA helicase with tumor suppressor and oncogenic potential, involved in cell cycle and translation regulation, DNA repair, cell survival and apoptosis . The cytoplasmic DDX3 overexpression was associated with androgen expression receptor (AR), so cytoplasmic DDX3 expression could be a useful prognosticator in MBC . Cui et al. (2018) showed that DACH1, which is expressed widely in normal adult tissues and functions as a tumor suppressor in a variety of neoplasms , is differentially expressed between MBC and FBC, concluding that the DACH1 gene was downregulated in MBC and HER2 was overexpressed in FBC . The elderly population is growing around the world and older women are more likely to die from cancer than younger women, which in fact leads to a major health disparity . It has been noted that older women have a worse prognosis compared to younger women in both early-stage and more advanced or metastatic BC . Similarly, elderly MBC patients had larger tumors in more advanced stages at the time of diagnosis compared to younger patients . In AAW, younger age and obesity associated with a low socioeconomic status influence TNBC development . Other factors, such as age at menarche and childbearing patterns, could influence mammary gland development and BC disparity . Age is one of the strongest risk factors for malignancy, due to biological changes linked to the aging process that limit health during the lifespan . 
Consequently, outside of BRCA mutations, age is the main risk factor for BC development . The risk of acquiring cancer-driving mutations in tissues increases as a function of chronological time . Thus, genomic instability (GI), telomere shortening, epigenetic alterations, loss of proteostasis, mitochondrial dysfunction, disabled macroautophagy, deregulated nutrient sensing, cellular senescence, altered intercellular communication, chronic inflammation and dysbiosis are known as hallmarks of aging that are also hallmarks of cancer . Moreover, Lehallier et al. (2019) showed that the patterns of changes in the proteome in different decades of life have been associated with distinct biological pathways in correlation with the genome and proteome of age-related diseases and phenotypic traits . GI, known as the tendency of the genome to undergo mutations and copy number alterations (CNAs)/structural chromosome structural rearrangements/copy number variation (CNVs), is considered a hallmark of aging and is also a hallmark of BC . DNA damage repair, DNA replication, transcription, mitotic chromosome segregation and telomere maintenance are several dysregulated biological processes that lead to GI . Telomeres are essential in the maintenance of chromosome integrity and genomic stability, so telomere alteration is a feature of malignancy . Moreover, the length of telomeres, known as repetitive sequences of DNA at the ends of chromosomes that are involved in protection against DNA degradation during cell division, is considered a biomarker of human aging and longevity . Shorter relative telomere (SRT) length has been associated with both senescence and an increased BCR or with the degree of BC progression . Interestingly, in AAW, perceived racism, a major source of chronic stress, has been inversely associated with telomere length . However, Needham et al. (2020) showed that many studies found that Black people have longer leucocyte telomere lengths than White people during adulthood and suggested that race differences in telomere length may depend on socioeconomic status . Furthermore, there are several studies that have emphasized that observed race differences in telomeric length are statistical artifacts . However, Thorvaldsdottir et al. (2017) found that blood telomere length is predictive of BCR in BRCA2 mutation carriers . It is known that the age-associated genes in the human mammary gland drive human BC progression . A study conducted by Gu et al. (2020) concluded that transcriptome changes during aging can contribute to breast tumorigenesis . These authors identified 14 upregulated and 24 downregulated genes that were both age- and BC-associated. Among these deregulated genes, dynein light chain Tctex-type 3 ( DYNLT3 ), prolyl 4-hydroxylase subunit alpha 3 ( P4HA3 ) and Aristaless-like homeobox 4 ( ALX4 ) have been identified as age-related genes that play a significant role in BC progression . Thus, DYNLT3 was found to be highly overexpressed in both BC tissues and BC cell lines, in association with N-cadherin and vimentin (VIM) overexpression associated with E-cadherin downregulation, while DYNLT3 silencing suppressed cell growth, migration and invasion via the epithelial–mesenchymal transition (EMT) and induced cell apoptosis in MDA-MB-231 and MCF7 BC cells . Interestingly, Aktary et al. (2021) showed that the level of DYNLT3 is dependent on β-catenin activity, revealing a function of the canonical Wnt/β-catenin signaling pathway during melanocyte and skin pigmentation . 
Thus, the Wnt/β-catenin signaling pathway, which is involved in mammary development, BC cell proliferation, motility and metastasis , is also a central pathway in melanocyte biology , and it is closely associated with aging-related diseases . In addition, Getz et al. (2015) suggested that the Wnt/β-catenin signaling pathway may contribute to a more aggressive phenotype present in AAW diagnosed with TNBC and could be associated with known disparities that exist in AAW compared to CAW . In closing here, we may suggest an association between more aggressive BC development, black skin, and aging mediated by the Wnt/β-catenin signaling pathway, which could partially explain some biological disparities between AAW and CAW. Similarly, the P4HA3 gene acts as an oncogene, is significantly upregulated in breast cancer, and its silencing could suppress the aggressive phenotypes of BC cells . P4HA3 was also significantly upregulated in the subcutaneous adipose tissue of obese and type 2 diabetes mellitus (T2DM) patients, with a functional role in the differentiation of adipocytes and insulin resistance , which is known to vary across race/ethnicity , so AA have a high risk for T2DM and insulin resistance . Similarly to DYNLT3, P4HA3 silencing significantly decreased mesenchymal markers (VIM, N-cadherin and Snail) expression and increased E-cadherin as an epithelial marker, while its overexpression produced the opposite effects, promoting cancer growth and metastasis by affecting the transforming growth factor-beta 1 (TGF-β) signaling pathway , which has a significant role in BC initiation and promotion and is linked to health disparities in AA . Moreover, the TGF-β pathway enhances cell proliferation, migration, invasion and metastasis and suppresses immunosurveillance. Black and White people’s disparities in BC mortality are most pronounced at younger ages and seem to converge later in life . Thus, Hendrick et al. (2021), analyzing the age distributions of BC diagnosis and mortality by race and ethnicity in U.S. women, concluded that non-Hispanic Black, Asian American/Pacific Islander (AAPI), Native American, and Hispanic women have a higher percentage of invasive BC at younger ages and more advanced stages of BC deaths at younger ages compared to non-Hispanic White women . Thus, AAW experience an increased likelihood of cancer before the age of 40, a greater severity of illnesses throughout all phases, and an elevated risk of death in comparison to White women . Native American women have a younger median age of diagnosis (59 years) compared with White women (61 years) and Japanese women (65 years) , but it is necessary to consider that Japan is the globe’s fastest aging country, where 32% of the female population were 65 or older in 2021 . Moreover, many authors showed that the incidence rates in Japan have a bimodal age distribution with two peaks of 45–49 and 65–69 . However, BC is the most common cancer and the second leading cause of cancer-related death in women under 40 years of age worldwide . Moreover, Tzikas et al. (2020) showed that primary TNBC in younger patients is more often of a poor differentiation grade and highly proliferative compared with older patients . Also, the risk of carrying a BRCA mutation is higher among young TNBC patients . Nevertheless, in a study conducted by Metcalfe et al. 
(2018), the penetrance of BC in women aged 80 years, known as BRCA carriers, was 60.4% for those without a first-degree relative with BC and 63.3% for those with at least one first-degree relative with BC . The same authors showed that the estimated penetrance of BC in women aged 80 years was 60.8% for BRCA1 and 63.1% for BRCA2 , compared with 13% of women in the general population that develop BC sometime in their lifetime. Premenopausal women emphasized a weak association between estrogen, progesterone or sex hormone-binding globulin (SHBG) levels and a positive association between androgens and breast cancer risk (BCR), while in postmenopausal women, higher estrogen and androgen levels were associated with an increase in BCR, whereas higher SHBG levels were inversely correlated with BCR . BC is commonly described as an ecological and environmental sickness or disorder . To sustain this hypothesis, numerous studies have indicated that certain environmental exposures and lifestyle variables contribute 70% to 95% of the different risk factors that influence the incidence of breast cancer . It is known that the higher frequency of TNBC in AAW is therefore not associated with a different genomic profile , as long as only 20% of TNBC tumors in AAW demonstrate BRCA1 activity . Recently, Siegel et al. (2023) showed that even cumulative exposure to neighborhood-level threat elements that particularly impact Black communities can be related to increased TNBC rates . Furthermore, it was demonstrated that multiple chemicals have disproportionate exposure rates in Black women and have BC-associated biological activity as well as higher exposure-related biomarker levels than in White women . Several effects of ecological factors on BC disparities are summarized in . 5.1. Diet Contribution to BC-Related Disparities It was estimated that about one-third of cancers in Western high-income societies are attributable to factors related to food, nutrition and physical activity . Many food components may act as mutagens, influence the expression of oncogenes or tumor suppressor genes by the induction of epigenetic changes, such as DNA methylation or histone acetylation, and/or alter the cells’ microenvironment by modulating hormone or growth factor-based signaling, facilitating the growth and proliferation of specific cell populations . Evidence suggests that a healthy dietary pattern that includes fruits and vegetables, unrefined cereals, nuts and olive oil, and a moderate or low consumption of red meat and saturated fatty acids might improve the overall survival (OS) of BC . Krisanits et al. (2020) studied pubertal mammary gland development in NHW, AAW and Asian American women in combination with diets linked to changes in BC risk and disparity . These authors discovered an increased BCR and BC discrepancy related to regimens comprised of high fat, N-3 polyunsaturated fatty acids, N-6 polyunsaturated fatty acids, being overweight and a Western style of eating, which can lead to abnormalities in the growth of mammary glands during puberty in mouse models . Jacobs et al. (2021) indicated that both conventional meals and cereal–dairy breakfast eating patterns may minimize BCR for this population . Findings on Black women as well as in women of European descent showed an inverse association of dietary vitamin A (retinol and carotenoids) intake with BCR, mainly in premenopausal women . 5.2. 
Alcohol Intake and BC Disparity Higher alcohol consumption has been associated with an increased risk for BC development and appears to have a stronger effect on ER+ BC , because alcohol induces alterations in estrogen receptor (ER) physiology and function . Women, as well as rodents used in experiments, demonstrated an elevation in estrogen (17-β-estradiol (E2)) associated with increased alcohol drinking , followed by the activation of ER-alpha (ER-α) . Moreover, in BC cells, ERα is involved in the genomic pathway, when it is localized in the nucleus, or in a non-genomic pathway, when it is present in the cytoplasm, but in both cases, it binds to E2 . The nuclear pathways of ERα action involve interaction with AP-1/c-Jun, NF-κB, p53, SP-1 and STAT . Among these pathways, neuropeptide substance P (SP) and its related receptor neurokinin-1 receptor (NK1R) are known to promote the proliferation of BC cells via NF-κB-mediated inflammatory responses . In addition, ERα36, an isoform of ERα that can be considered an oncogenic biomarker, is distributed in the cytoplasm and can induce the proliferation and endocrine resistance of BC cells . Candelaria et al. (2015), based on a genomic-based approach, demonstrated that alcohol promotes cell proliferation and increased growth factor signaling through a large number of alcohol-responsive genes, principally those involved in apoptotic and cell proliferation pathways . These authors identified the proto-oncogene BRAF , an alcohol- and estrogen-induced gene, to be overexpressed in BC patients with poor outcomes . Moreover, Voordeckers et al. (2020) emphasize the mechanisms underlying ethanol-related genome stability by the recruitment of error-prone DNA polymerases, known as mutagenic effectors of DNA repair pathways , to the replication fork . Heavy drinking among younger Black women was lower than that of White and Hispanic women , which could be associated with a lower BC incidence among Black women compared to White women . Alcohol stimulates the migration and invasion of the BC cell line MCF7 , the EMT, vascular development, cellular oxidative stress (OS) and rendering of reactive oxygen species , with decreased levels of E-cadherin, α, β and γ catenin protein and the BRCA1 gene, which suppresses tumor expression . Furthermore, drinking alcohol affects numerous genes related to the reaction to hormonal treatment and reduces the activity of tamoxifen in BC cells . Moreover, alcohol abuse interferes with insulin-like growth factor-1 (IGF1), a known contributor to pubertal development, so the alcohol delays the time of puberty in both sexes . It is known that breast development and hormonal changes at puberty might affect BCR . 5.3. Endocrine Disruptor Chemicals (EDCs) and BC Disparities James-Todd et al. (2016) showed that non-White people have higher concentrations of many EDCs compared to White people due to the higher levels of exposure and magnification that occur across their lifespans and could lead to disparate health outcomes . To expand on this, non-Hispanic Black and Mexican American women have higher metabolite concentrations of low-molecular-weight phthalates (e.g., dibutyl phthalate (DBP)) than non-Hispanic White women . It is known that certain phthalates that resemble estradiol, a treatment for menopausal symptoms, may induce breast cancer; for example, DBP exposure was associated with an approximately two-fold increase in the rate of ER+ BC . Unal et al. 
(2012) showed that AAW had the highest maternal serum concentration of bisphenol A (BPA), 10-fold higher than CAW, while Hispanic women had intermediate concentrations with an increasing trend to higher concentrations compared to Caucasian women, demonstrating significant racial/ethnic differences in maternal/fetal BPA concentrations . Mandrup et al. (2016) indicated that low-dose exposure to BPA can affect mammary gland development in male and female rats, causing increased growth within ducts, which may be accompanied by an increased risk of developing hyperplasic lesions, similar to early signs of BC in women . After exposure to BPA, the normal-like human breast epithelial cell line, MCF-10F, displayed an increased expression of BRCA1/2, BARD1, CtlP, RAD51 and BRCC3, which are all associated with DNA repair, in addition to the suppression of PDCD5 and BCL2L11 (BIM), which are involved in cell death . Moreover, Black children experience much larger increases in BMI, weight and height compared to White children, while Mexican–American children are placed between Black and White children . Persistent organic pollutants (POPs) bioaccumulate in adipose tissue, resulting in greater body burdens of these environmental toxicants with obesity . For example, Black people experience higher exposure levels to polychlorinated biphenyls (PCBs) compared to White people . A cross-sectional study found that AA subjects consumed more fish than white subjects . PCBs, known for their high lipophilicity and persistence, tend to bioaccumulate in organisms through the food network, accruing in fish and marine mammals’ adipose tissue, where the PCB concentration is thousands of times higher than in water due to the biomagnification process . Leng et al. (2016) showed that several PCBs are abundant in both human serum and breast tissue and would increase the risk of BC . 5.4. Migration Patterns and Breast Cancer Disparities Migrants are influenced by different risk factors before, during and after migration . Lamminmäki et al. (2023) reported that non-Western immigrant women in Nordic countries, Denmark, Finland, Iceland and Norway, had statistically significantly lower BC incidence than native women, but the BCR among immigrant women increased with the duration of residence . Interestingly, these authors specified that higher education increased the BCR among immigrant women . Similarly, BC rates of occurrence are four–seven times greater in America than in China and Japan. Once women from Asian countries like Japan, China, or the Philippines migrated to the United States, their BCR rates climbed over multiple generations, becoming practically similar to the BCR of U.S. White people . Herbach et al. (2021) also identified disparities in BC progression related to nativity; immigrants from Asia, Eastern Europe, Latin America and the Caribbean and developing or transitional nations had higher disparities compared with immigrants from developed countries that experienced the least disparity . Thus, immigrants’ BCR increased compared to people remaining in the countries of origin, primarily due to exposure to a Western lifestyle , especially to a Western non-healthy diet, as a risk factor for the development and preservation of the long-term inflammation of tissues correlated to innate immune cell reprogramming . The inflammation linked to a lifestyle is a significant factor in the initiation, development and progression of BC . 
The “metaflammation” concept reflects the crosstalk between the immune landscape, metabolic pathways, obesity and metabolic syndrome (MetS), resistance to insulin and persistent inflammation . It is commonly established that MetS is a risk component for or indicator of BC and is more common in patients with BC . It was estimated that about one-third of cancers in Western high-income societies are attributable to factors related to food, nutrition and physical activity . Many food components may act as mutagens, influence the expression of oncogenes or tumor suppressor genes by the induction of epigenetic changes, such as DNA methylation or histone acetylation, and/or alter the cells’ microenvironment by modulating hormone or growth factor-based signaling, facilitating the growth and proliferation of specific cell populations . Evidence suggests that a healthy dietary pattern that includes fruits and vegetables, unrefined cereals, nuts and olive oil, and a moderate or low consumption of red meat and saturated fatty acids might improve the overall survival (OS) of BC . Krisanits et al. (2020) studied pubertal mammary gland development in NHW, AAW and Asian American women in combination with diets linked to changes in BC risk and disparity . These authors discovered an increased BCR and BC discrepancy related to regimens comprised of high fat, N-3 polyunsaturated fatty acids, N-6 polyunsaturated fatty acids, being overweight and a Western style of eating, which can lead to abnormalities in the growth of mammary glands during puberty in mouse models . Jacobs et al. (2021) indicated that both conventional meals and cereal–dairy breakfast eating patterns may minimize BCR for this population . Findings on Black women as well as in women of European descent showed an inverse association of dietary vitamin A (retinol and carotenoids) intake with BCR, mainly in premenopausal women . 
Cancer starts when the first cell undergoes a harmful mutation . During sexual reproduction, fertilization is the union of two gametes, the oocyte and sperm, to form a diploid zygote and to initiate the development of a new and unique embryo . The oocyte differentiates and starts to develop within a primordial follicle of the embryonic ovary of the future mother. Herein, the oocyte is a resting cell in which DNA damage accumulates over time until follicle recruitment and ovulation due to the absence of mechanisms to eliminate the failed cells during replication . Apart from germline mutations in BC susceptibility genes, which could be inherited by the oocyte, it is also subject to the detrimental effects of the mother’s aging . Inherited mutations are called germline mutations because they are present in gametes, both in the ovum and sperm, and become generally present in every cell of the resulting child’s body . Germline mutations increase susceptibility to tumors, while somatic mutations are the secondary reason for the occurrence of cancers . In addition, gametes represent targets for EDCs and thus a way for environmentally induced alterations/epimutations with transgenerational inheritance over several generations . 
Interestingly, de novo genetic mutations accumulate even with the first zygotic cell divisions , so in-womb development represents a “sensitive window” for the introduction of mutations because of a higher rate of cellular proliferation . A human cell must repair over 10,000 DNA lesions per day to counteract the intrinsic causes of DNA damage, but it is also necessary to consider the lesions induced by environmental sources of DNA damage . Consequently, the failure to detect and repair such lesions at the cell level can lead to a harmful mutation rate, genomic instability or cell death . Furthermore, the embryonic genetic mosaicism that arises early in development as a consequence of the mutational landscape is implicated in cancer . As shown above, a small proportion of cancers are due to inherited mutations, which result in a high risk of developing specific cancers . Mutations that occur in somatic cells are called somatic mutations, and they accumulate in healthy cells of the body throughout life, becoming an important cause of cancer; this accumulation may span anywhere from one to fifty years, so the final neoplastic landscape of a malignant subclone within a tumor reflects the sum of mutations acquired over time through somatic evolution . If we consider the mammary glands’ development, this process starts in the mother’s womb. Thus, there are three stages of mammary gland development in humans: embryonic, pubertal and adult . The development of mammary glands starts in the embryonic ectoderm during embryogenesis, with the formation of milk/mammary lines that resolve into mammary placodes, which expand and invaginate within the underlying mesenchyme to form mammary buds, followed by the formation of the initial ductal tree present at birth . Thus, during embryonic mammary development, normal breast cells proliferate, migrate and invade the stromal compartment, similar to BC cells that proliferate, undergo the EMT, invade and migrate from the primary tumor site to distant sites to form organotropic metastases . Evidence suggests that estrogen levels are higher in AAW compared with CAW , so an embryo’s exposure to excessive maternal endogenous and/or synthetic estrogens, i.e., endocrine disruptor chemicals, could be associated with an increased risk of malignant transformation of the breast tissue later in life . Soto et al. (2008) hypothesized that fetal exposure to xenoestrogens may play a role in the increased incidence of BC and suggested that BC may begin in the womb and impact the early stages of growth of the breast ducts . In addition, Cohn et al. (2015) showed that in utero exposure to dichlorodiphenyltrichloroethane (DDT) is associated with an increased risk of BC, mainly in Africa and Asia where DDT exposure persists and use continues . Phthalates, phenols and parabens are temporary EDCs linked to breast cancer . Biomarker concentrations of temporary EDCs vary more among women than men and among Black Americans than White Americans, owing to insufficient access to healthy food or the use of specific goods with greater amounts of phthalates, such as hair relaxers and skin-whitening topical products, which are specifically marketed to Black consumers . It has been demonstrated that AA are also predominantly exposed to excessive amounts of bisphenol A (BPA), as indicated by the fact that urine BPA levels across Black people of all ages were substantially higher than those in the non-Black population . 
Furthermore, Tchen et al.’s (2022) findings indicate that exposure to BPA and bisphenol F (BPF) in pregnant women is associated with disruption of aromatic amino acid, xenobiotic, steroid and other amino acid metabolisms, which are connected to responses to stress, regulation of weight, steroid metabolism, inflammation and reproduction . Wormsbaecher et al. (2020) linked exposure to EDCs to molecular alterations that develop over time and contribute to an increased susceptibility to BC in adulthood by identifying significantly dysregulated genes and transcriptional modifications in mature fibroblasts exposed in utero to BPA and diethylstilbestrol (DES), along with particular extracellular matrix (ECM) compositions and increased collagen deposition in adult mammary glands . Evidence suggests that elevated breast density is a strong BC risk factor because collagen fiber features may be associated with BC risk and progression . It is known that women with the highest breast density have an estimated four–five-fold greater risk of developing BC compared to women with the lowest breast density . Many authors have shown that Black women have a statistically significantly higher absolute breast area density (40.1 cm²) compared with White women, who have 33.1 cm² . Moreover, Black women also have a higher volumetric density (263.1 cm³) than White women (181.6 cm³) . Caswell et al. (2013) showed that women with Ashkenazi Jewish ancestry are more likely to have a higher age-adjusted and body mass index (BMI)-adjusted percent mammographic density (PMD) due to a unique set of genetic variants or environmental risk factors that increase mammographic density . Exposure to BPA through inhalation or body accumulation can also regulate estrogenic signals, which in turn can cause cancer cells to proliferate and become malignant by activating the Wnt/β-catenin pathway. This pathway is widely linked to the pathophysiology of advanced BC and to the development of embryos . After birth, pubertal development and reproductive life events, such as pregnancies, lactation and mammary involution accomplished by physiological apoptosis, have been described as subsequent stages of mammary gland development. Puberty initiates branching morphogenesis, which requires estrogen, growth hormone (GH), and insulin-like growth factor 1 (IGF1), to create a ductal tree in the fat pad, while during pregnancy, progesterone and prolactin generate alveoli development, leading to milk secretion during lactation. Also, dramatic changes that occur in the mammary gland during each pregnancy are orchestrated by signaling pathways that regulate a specialized subpopulation of mammary stem and progenitor cells . Black women have more children, especially at younger ages, and a lower prevalence of breastfeeding than White women , factors which have been associated with a higher incidence of ER-/PR- BC in AA women relative to White women . In the United States, Black teens overall have higher pregnancy and birth rates than White teens . AA girls experience earlier menarche , which is also established as a risk factor for BC . Moreover, Nguyen et al. (2019) found that parity and a young age at first pregnancy have been associated with a significant reduction in the risk of developing the luminal subtype of BC, but not TNBC . The human microbiome is “the second genome of the body” , accounting for over 3.3 million genes , as well as our “last organ” . 
Thus, the human microbiome includes all specific types of microbiota and extremely complex interactions between them, such as the bacteriome, archaeome, mycobiome and virome, studied by microbiomics . Metagenomics studies microorganisms from specific environments by functional gene screening or sequencing analysis , the metagenome being the collection of genomes and genes from these microorganisms, comprising bacterial, archaeal, fungal and viral metagenomes . Due to its importance to human health, the human microbiome has become a focal point for precision medicine . Human breast tissue and milk harbor unique and diverse microbiota, partially translocated from the gastrointestinal tract as well as from the skin as another putative source of pathogenicity to breast tissue . It is known that the microbiota inhabiting the breast tissue TME is involved in breast carcinogenesis . Additionally, women with BC at the post-menopausal stage and healthy controls have different gut microbiota compositions and functions, which may have an impact on BC development . Wang et al. (2017) showed that breast tissue revealed significantly different microbiomes in BC and non-BC patients, with a decreased abundance of Methylobacterium in cancer patients . Moreover, BC patients harbor urinary microbiomes abundant in Gram-positive bacteria associated with skin flora . Recently, Niccolai et al. (2023) documented the presence of a sexually dimorphic breast-associated microbiota, defined as a “microgenderome” . These authors observed that, in women, the dysbiosis extends to the whole breast tissue, whereas in men, it appears to be present just in the tumor site . Certain breast or gastrointestinal microorganisms, which are found in altered equilibrium/dysbiosis, may create toxins that harm DNA, break down the proteins released from tumor suppressor genes, cause oxidative stress (OS), activate pro-inflammatory mechanisms, and alter cell proliferation, survival pathways and the immune system . Moreover, since estrogens are the most important risk factor in BC, especially in postmenopausal women, an important role of the human microbiome is the regulation of steroid-hormone metabolism . Thus, the enzymes of intestinal microorganisms deconjugate conjugated estrogen metabolites, leading to a biologically active form of estrogen that arrives back in the bloodstream, or synthesize estrogen-like compounds that mimic estrogen function . Other evidence suggests that bacteria can invade and transform BC cells, inducing cytoskeleton rearrangements and promoting metastatic colonization . Thus, the EMT and inflammation are the molecular mechanisms that are most frequently affected by pathogenic organisms to induce malignant progression . Conversely, a state of eubiosis acts as a protective factor against cancer . Smith et al. (2019) emphasized that the microbial communities in the breast tissue of non-Hispanic Black (NHB) and non-Hispanic White (NHW) women can differ by race, stage or BC subtype . A study conducted by Price et al. (2022) detected that the gut microbiome profiles differ between Black and White women in association with insulin sensitivity . Thus, 50% of Black women have been classified as insulin-resistant compared to 30% of White women ; Black women also have a greater relative abundance of Actinobacteria compared with White women . It is known that secondary metabolites derived from Actinobacteria have an influential role in tumor development as well as inhibition . 
Patients with breast cancer typically show alterations in the composition of microbes in their breasts in addition to decreased microbial diversity of gut microbiota . Moreover, studies have demonstrated significant differences in the relative abundance of specific taxa between NHB and NHW women . Both tumor and normal tissue adjacent to tumor (NAT) samples emphasized a specific microbiota in both NHB and NHW women, whereas, when compared to a matching NAT area, the microbial diversity in NHB TNBC cancer tissue was much reduced . Smith et al. (2022) reported that TNBC has a specific microbiota that differs from the less aggressive BC subtypes, emphasizing a correlation between host metabolic process changes and breast microbial dysbiosis in NHB and NHW women’s breast cancers . Thus, potential race-specific microbial biomarkers of BC correlate to genes involved in tumor aggressiveness, angiogenesis, migration and metastasis, as well as oncogenic signaling pathways GLI and Notch in a specific manner . Precision oncology is based on deep knowledge of the molecular profile of tumors and allows for more accurate and personalized therapy for specific groups of patients. Evidence suggests that different biomarkers have been found to have racial and ethnic differences, among other types of disparities, such as chronological or biological age-, sex/gender- or environmental exposure-related ones. Usually, BC disparities are due to ethnicity, socioeconomic position, psycho-social stressors, comorbidities, and a Western lifestyle. The aim of this review was to deepen the understanding of BC-related disparities, mainly from a biomedical perspective that includes genomic-based differences, disparities in breast tumor biology and developmental biology, differences in breast tumors’ immune and metabolic landscapes, ecological factors involved in these disparities, as well as microbiomics- and metagenomics-based disparities in BC. Black women disproportionately bear the burdens of BC. Triple-negative breast cancer (TNBC) is twice as prevalent among Black women compared with White women. BC occurs in either gender, but female breast cancer (FBC) is the main cancer among women worldwide, while male breast cancer (MBC) is a rare disease. However, male patients have worse survival and higher mortality compared to FBC patients. Older women have a worse prognosis compared to younger patients. In Black women, a younger age and obesity, associated with a low socioeconomic status, influence TNBC development. Breast tissue revealed significantly different microbiomes in BC and non-BC patients, while a sexually dimorphic breast-associated microbiota, defined as a “microgenderome”, is involved in male–female BC disparities. Moreover, multiple studies have demonstrated that Black women are disproportionately exposed to many chemicals with BC-associated biological activities, leading to higher exposure-related biomarker levels compared to White women. Among non-Western immigrant women in the Nordic countries (Denmark, Finland, Iceland and Norway), BCR increased with the duration of residence, and, following their migration to the U.S., women from Asian nations, such as China, Japan and the Philippines, saw an increase in their BCR over several generations, eventually matching that of U.S. White women. 
Thus, BRCA1/2 germline pathogenic variants are highly ethnic-specific in BC patients, with a high frequency of BRCA variation in specific countries or ethnic groups, especially within genetically isolated populations, as well as other somatic and germline mutations in high- or moderate-prevalence genes, such as TP53 , PIK3CA , CHEK2 , BARD1 , ATM and PALB2 , which also emphasize racial differences. Moreover, the frequency of hormone receptor expression in BC varies by race and ethnicity. Also, TME, insulin-like growth factor (IGF) signaling, tumor necrosis factor (TNF)-mediated pathways, xenobiotic metabolism, metabolic reprogramming, epigenetic mechanisms/hypermethylation, obesity and metaflammation all contribute to high race/ethnic disparities. Androgen receptor ( AR ) gene copy number increase, steroid hormone-mediated pathways, especially estrogen and AR-related pathways, DNA methylation levels of cytokeratin-related genes and other genes involved in cytoskeleton architecture, membrane trafficking, gene transcription, cell migration, invasion, adhesion, survival and growth, and a sexually dimorphic breast-associated microbiota (known as the “microgenderome”) contribute to biological sex/gender-related disparities in BC. Genomic instability, steroid hormone-mediated pathways, transcriptome changes, telomere shortening, epigenetic alterations, deregulated nutrient sensing, mitochondrial dysfunction, loss of proteostasis, altered intercellular communication, chronic inflammation, Wnt/β-catenin signaling and the composition and functions of gut/breast microbiota contribute to age-related disparities in BC. We can conclude that onco-breastomics, in principle, based on genomics, proteomics, epigenomics, hormonomics, metabolomics and exposomics data, is able to characterize the multiple biological processes and molecular pathways involved in BC disparities, clarifying the differences in incidence, mortality and treatment response for different groups of BC patients.
Enhancing Care Coordination in Oncology and Nononcology Thoracic Surgery Care Pathways Through a Digital Health Solution: Mixed Methods Study
2e3c4381-0eda-4e5a-84ed-1ed3abdc03b5
11632290
Internal Medicine[mh]
Background In Quebec, as in other provinces in Canada, care coordination is an important issue due to the fragmentation of the health system, which is also observed around the world . A key issue is the lack of coordination between health care providers (HCPs), which can lead to interruptions in the patient’s care pathway, adversely affecting their health and well-being . The thoracic surgery care pathways are complex, requiring collaboration between various HCPs from different disciplines and settings . The coordination of interfacility service corridors is even more complex for certain types of specialized health care, such as thoracic oncology surgery , where patients may require surgery, radiotherapy, or chemotherapy . Effective care coordination ensures safe and efficient care transitions, promoting patient safety and care quality . Care transitions involve different HCPs, requiring multidisciplinary communication, coordination, planning, and shared accountability . Effective care coordination involves the timely exchange of concise, complete, and relevant information between different HCPs within the same health care facility or from one facility to another, to ensure patient care management at transition points . However, transitions between facilities are critical points where continuity may be compromised if there is a lack of coordination . Studies have shown that a lack of care coordination reduces system performance and negatively impacts patients’ health and quality of life . Vulnerable patients or those with complex needs are particularly affected . Poor coordination leads to duplicated tests or treatments, medical errors, increased costs, and mismanaged transitions, all of which compromise patient satisfaction and care quality . By contrast, effective coordination reduces emergency room visits, hospital readmissions , delays, and adverse events . Digital Technologies and Care Coordination It is well known that the computerization of the Quebec health care network, and clinical computerization in particular, lags behind that of other Canadian provinces, contributing to fragmented patient records . However, this issue is also observed worldwide. For example, patients’ medical information is scattered across different systems and not easily accessible or shared between HCPs . Overall, the flow of information, both within and between health care facilities, is deficient, and patients constantly have to repeat their information or undergo unnecessary tests or examinations simply because the information is inaccessible . Information and communication technologies (ICTs) are increasingly perceived as tools that can improve the quality of care, patient safety, and the efficiency of the health care system . ICTs provide HCPs with real-time access to patient information, eliminating redundant or unnecessary tests and procedures , and facilitating multidisciplinary collaboration . In addition, ICTs enhance care traceability and promote evidence-based medicine . Automating administrative tasks through ICTs can optimize resource use and improve patient satisfaction . To ensure efficient coordination of interfacility thoracic surgery care and services, it is important that the health care facilities work together in a transparent and coordinated way, sharing patient information and ensuring seamless continuity of care. Digital health solutions are required to make workflows more efficient and ensure that patients receive the right care at the right time . 
Therefore, this study aims to improve oncology and nononcology thoracic surgery care pathways by enhancing care coordination: first, by analyzing the interfacility process; then, by designing, adapting, and testing a customized digital platform; and finally, by implementing the solution while assessing the end-user experience. 
Ethical Considerations Ethics approval was obtained from the research ethics committee of the Centre intégré de santé et de services sociaux de l’Outaouais (CISSSO) before the beginning of the study (2019-258_141_MP), in Quebec. All participants provided written informed consent before participation. The privacy rights of the study participants were observed. The study participants did not receive monetary compensation. Pilot Project Context The pilot project focused on the provision of interregional services between 2 facilities, namely the McGill University Health Centre (MUHC) in Montreal and the CISSSO in Gatineau. This service corridor enables the efficient use of the Community Health and Social Services Network’s resources so that patients’ needs can be met as quickly as possible. In 2014, Quebec’s Ministère de la Santé et des Services sociaux (MSSS) approached the MUHC, which has a supraregional team of experts, to establish a close collaboration with affiliated centers specializing in lung and esophageal cancer cases. The CISSSO has an exclusive thoracic surgery service corridor with the MUHC. As part of this collaboration, MUHC surgeons spend 3 to 4 days per month at the CISSSO for clinic visits with >50 patients per clinic, and the surgeons perform >200 surgical procedures on these patients at the MUHC each year. It should be noted that there are >1000 consultations at the CISSSO every year. In September 2018, the Direction générale de cancérologie reconfirmed the added value of such networking and mandated the MUHC to create a pulmonary oncology network to optimize care pathways and service corridors with its affiliated centers. In addition, the 2015 to 2025 National Public Health Program produced by Quebec’s MSSS emphasizes the importance of organizing health care in a way that will ensure the continuity of health care services, better harmonize transitions at different levels of the health care system, and avoid duplication of services. A key element of this structure is facilitating accessibility and coordination between health care units within the same region to ensure the complementary nature of their service offering and between regions when specialized services are required. The oncology and nononcology thoracic surgery pathways, where the referral center (MUHC) and the affiliated center (CISSSO) must collaborate on a series of clinical and administrative activities, have a critical mass of patients that is very well suited to a pilot project that can be replicated in other specialties. Specifically, the care pathways involve a complex organization of resources and patient flows. There are many reasons to optimize the care pathways. First, coordination between health care facilities has become risky. 
In addition, the process of transmitting patients’ clinical information, the administrative documents relating to this information, and the monitoring of the continuum of services are unstable and insecure. A technological solution can play an important role in bridging these gaps. Following initial analyses of clinical and administrative flows, the need to optimize the care pathways became clear, with a focus on improving safety, and facilitating care coordination through the implementation of an integrated digital health solution. The three main and interdependent objectives of this implementation are (1) to understand the interfacility thoracic surgery pathways; (2) to design, adapt, and test the platform with the target pathways; and (3) to implement the platform and evaluate the end-user experience. Study Design and Settings Overview A pathway refers to a care plan that details the specific steps for managing the care of a patient with a specific pathology to ensure high-quality, consistent, and continuous care . For this purpose, this study used an integrated knowledge mobilization approach, a partner-centered approach that seeks to improve outcomes by involving all relevant partners (political decision makers, managers, HCPs, community members, patients, digital health professionals, etc) throughout the research process . This approach theorizes that the coconstruction of knowledge is likely to result in relevant, applicable, and transferable knowledge for end users . We also used an exploratory mixed methods approach that combines different sources of data to determine and respond to the needs of HCPs and to improve care coordination that will enable continuous and consistent management of patients throughout the care pathways. To meet the 3 main, interdependent objectives, we conducted a multicenter implementation study at 2 health care facilities (MUHC and CISSSO) located in 2 different health regions (Montreal and Gatineau) in the province of Quebec. For the first 2 objectives, we used qualitative research methods, while the third objective involved a quantitative evaluation. The COREQ (Consolidated Criteria for Reporting Qualitative Research) checklist was used to ensure that the study met the recommended standards of qualitative data reporting . Data Collection and Analysis of Objectives Objective 1: Understand the Interfacility Thoracic Surgery Care Pathways To facilitate the achievement of objective 1 and ensure the collection of the greatest possible amount of quality information, an interview guide was created and used at both health care facilities. Each respondent was given a copy of the guide in advance, ensuring they were informed of the covered topics. The aim of the guide was to help gain a better understanding of the interfacility thoracic surgery care pathways related to 3 main components. The first component was the context of clinical and administrative flows. The second component encompassed the clinical and administrative forms and official documents used throughout the care pathways. The third component included the major problems encountered by HCPs. Sampling was purposive, given the 2 chosen health care facilities and the different types of target participants (thoracic surgeons, nurses, oncology nurse navigators, managers, and medical secretaries). All participants (N=27) were contacted by email, which included the interview guide and consent form. 
The form explained the context, project objective, procedure and duration, anticipated benefits, as well as anonymity and confidentiality. Everyone agreed to take part in the project. Free and informed consent was obtained from all participants at the scheduled data collection meetings. A total of 27 semistructured individual face-to-face interviews, each lasting 60 minutes, were conducted over a 6-month period between June and November 2019 at the MUHC and the CISSSO by the researcher. All interviews were audio recorded with participants’ permission. The participants did not have any personal or professional relationship with anyone from the research team. The interviews were transcribed by the researcher in Microsoft Word. The interviews were analyzed in isolation to highlight the experiences and concerns associated with the pathways. A summary of the analysis of each interview was validated with the respondents. Objective 2: Design, Adapt, and Test the Platform With the Target Pathway To facilitate the achievement of objective 2, which consists of designing, adapting, and testing the digital health platform with the target care pathways, we used the participatory design approach. This approach, also known as cocreation or end user–centered design, is advocated to foster the development of health care technology solutions . Numerous studies illustrate the benefits of incorporating the perspectives and knowledge of future users at the outset of the technology design and development process . As part of this pilot project, a close collaboration was established with doctors, nurses, medical secretaries, researchers, developers, designers, and other key stakeholders. Various qualitative methods were used to support stakeholder involvement. In the first stage, based on the interviews conducted face-to-face under objective 1, we were able to (1) identify the clinical and administrative needs of future users and (2) map the current thoracic surgery care pathways to identify optimization opportunities and implement targeted improvements. This information was gathered from the following 13 participants: 5 (38%) MUHC surgeons, 3 (23%) CISSSO nurses, 3 (23%) MUHC nurses, and 2 (15%) MUHC medical secretaries. During the second stage, based on the previous results, we mapped the target interfacility thoracic surgery pathways. Our objectives were to identify the key steps of the care pathways and to illustrate how the platform can help improve care coordination and management at each step. We carried out an analysis of existing workflows using the Business Process Model Notation method with Visio software 2021 (Microsoft Corporation). All (13/13, 100%) participants mentioned earlier were involved in validating the map of the current care pathways. On the basis of the data generated by the first 2 stages, we organized web participatory design workshops over an 18-month period, with the aim of actively involving future users in the technology design process. Each workshop, conducted via Microsoft Teams teleconference, lasted 90 minutes and was visually recorded with participants’ permission. In addition, the researcher took notes throughout the sessions. A total of 11 participants took part in the workshops, including 1 (9%) researcher, 3 (27%) developers and designers, 2 (18%) surgeons, 2 (18%) nurses, 1 (9%) manager, 1 (9%) Réseau universitaire intégré de santé et de services sociaux McGill Telehealth Coordination Centre consultant, and 1 (9%) MUHC clinical coordinator. 
These workshops generated knowledge coproduced with the participants, which was incorporated into the prototype under development. On the basis of the data generated during the participatory design workshops, a prototype was designed and discussed with end users and other key stakeholders during feedback meetings. The aim was to validate certain functionalities and workflows that were planned for the prototype. Subsequently, the Akinox team carried out an iterative review of the prototype following its usual development process. Akinox is a company that develops digital solutions for health care organizations and was our technology partner for this pilot project. By the end of this process, the requirements for the adaptation and finalization of the Akinox digital health platform were established. This enabled us to prepare the platform for implementation in the target setting, considering the feedback from future users and ensuring that the design meets their needs and preferences. This iterative, user-centered approach resulted in a final product that is better adapted and more user-friendly for end users. Objective 3: Implement the Platform and Evaluate the End-User Experience The Akinox digital health platform was rolled out in January 2021. 
Each region was provided with cloud-based access to the platform as agreed in collaboration with IT units in each health region. We used a phased approach to implementation. Training sessions were organized in each setting, and a user guide was sent to the 13 end users of the platform. In December 2021, we conducted a web-based survey of all MUHC and CISSSO HCPs (N=13) involved in the thoracic surgery care pathways to assess their experience of using the platform. All participants received an email with a link to the web-based survey, which was conducted using SurveyMonkey software (SurveyMonkey Inc), sent by the researcher. The survey comprised four parts: (1) demographic information (region of origin, health care facility, and profession), (2) perceived benefits of the platform, (3) assessment of the platform in terms of specific workflows for each user profile, and (4) assessment of the overall user experience of the platform. For part 2, respondents rated the perceived benefits of the platform using a 5-point Likert scale ranging from “strongly disagree” to “strongly agree.” Questions focused on the positive aspects of the platform that improve work efficiency, care coordination, etc. In part 3, respondents rated the suitability of the platform for their specific tasks using a 5-point Likert scale. For part 4, respondents were asked to answer open-ended questions focusing on the factors of acceptance and use of the platform. The open-ended questions focused on ease of use, user-friendliness, expected effort, expected performance, perceived usefulness, etc.
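As a minimal illustration of how the closed-ended responses from parts 2 and 3 might be tabulated, the sketch below tallies a single item on the 5-point scale. The response data, the function name, and the use of percentage agreement as a summary measure are assumptions made for this example; they do not reflect the study's actual survey export or analysis procedure.

```python
# Illustrative sketch only: hypothetical data and summary choices, not the study's actual analysis.
from collections import Counter
from statistics import median

# 5-point Likert scale used in parts 2 and 3 of the survey.
SCALE = {
    "strongly disagree": 1,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
    "strongly agree": 5,
}

def summarize_item(responses):
    """Tally one survey item: label counts, median score, and percentage of agreement."""
    labels = [r.lower() for r in responses]
    scores = [SCALE[label] for label in labels]
    counts = Counter(labels)
    # "Agreement" here means a rating of "agree" (4) or "strongly agree" (5).
    percent_agreement = 100 * sum(score >= 4 for score in scores) / len(scores)
    return {
        "counts": dict(counts),
        "median_score": median(scores),
        "percent_agreement": round(percent_agreement, 1),
    }

# Hypothetical responses from 13 participants to a single perceived-benefit item.
example_responses = [
    "agree", "strongly agree", "agree", "agree", "strongly agree",
    "agree", "agree", "strongly agree", "agree", "agree",
    "strongly agree", "agree", "agree",
]
print(summarize_item(example_responses))
# {'counts': {'agree': 9, 'strongly agree': 4}, 'median_score': 4, 'percent_agreement': 100.0}
```

In practice, the same tally would be repeated for each item and broken down by user profile (eg, surgeon, nurse, medical secretary), mirroring how the results are reported below.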
Results

Objective 1: Understand the Interfacility Thoracic Surgery Care Pathways

Overview

The existing oncology and nononcology thoracic surgery care pathways include the reference center (MUHC) and the affiliated center (CISSSO). Under a formal agreement, the 2 health care facilities must collaborate on a series of clinical and administrative activities in support of patients with lung and esophageal cancer, with the aim of providing patients with a seamless care experience. The MUHC is recognized for its leading-edge expertise and has a supraregional team dedicated to the treatment of lung and esophageal cancer. The CISSSO has a designated interdisciplinary team to ensure complementarity and continuity in the care and services provided to oncology (lung and esophageal cancer) and nononcology patients.

Context of Clinical and Administrative Workflows

A figure provides a macro view of the thoracic surgery process. It begins with the receipt of a request for a patient consultation with a surgeon and ends with the patient being discharged from the hospital after surgery. Patient care management involves a large number of human resources. Coordination between HCPs in the same facility and between health care facilities is essential to ensure that patient care management is carried out according to priority and pathology. The oncology and nononcology care pathways are complex in terms of scheduling patients for timely treatment in a variety of care settings. At the CISSSO, there are 2 care settings: the outpatient clinic and the oncology clinic. The outpatient clinic provides consultation and follow-up services to patients. These may include follow-ups to examinations requiring surgery, pre and postsurgery care, as well as pre and posthospitalization care for general surgery and biopsies. The oncology clinic cares for patients with cancer. Each clinic has assembled a care team to educate patients about their disease; provide the care and services required by their condition; deliver education related to their condition; and provide support to patients, their families, and loved ones throughout the care pathway. At the MUHC, there are 4 care settings: the thoracic surgery clinic, the preoperative clinic, the care unit, and the oncology clinic for certain patients. The thoracic surgery clinic provides consultation, postexamination, postoperative, and posthospitalization follow-up services. It specializes in the investigation and treatment of potential or diagnosed thoracic pathologies (lung cancer, esophageal cancer, etc). The preoperative clinic conducts the patient’s preoperative health check. During this visit, patients undergo various medical examinations. The visit is also used to schedule the hospital stay and discharge following the operation. The care unit is an inpatient unit that receives patients arriving from the operating room.
The MUHC oncology clinic provides the same services as the CISSSO. This service organization model involves an agreement between the MUHC and the CISSSO to deliver the service offering and meet the needs of the Gatineau region’s population. CISSSO patients must travel to another health care facility (MUHC) in another region (Montreal) to receive the care and services they need. Patients must travel more than 200 km from Gatineau to Montreal. The oncology and nononcology thoracic surgery patient care pathways are complex, involving several transitions with HCPs at scheduled times and in specific care settings. Clinical Forms and Other Documents In today’s thoracic surgery pathways, it is common to have several forms to complete at different times (preoperative forms, forms related to the surgery, and postoperative forms) and health care facilities (MUHC and CISSSO) and by various HCPs. These forms and documents are used to collect essential information to ensure appropriate patient management and effective communication between members of the care team at each health care facility. During the interviews, we identified 25 forms, such as the request for admission to surgery, the blood transfusion consent form, and the informed consent form. In addition to the forms, other documents formed part of the patient’s medical record, such as clinical notes filed by consultation date, pathology and imaging reports, laboratory results, and the discharge summary after a hospital stay. We also listed 13 pre- and postoperative guides or instructions provided to patients. These documents were designed to inform patients about the various stages of their journey, prepare them for surgery, and help them recover effectively after the operation. All documentation is kept in the patient’s file in paper form at both the CISSSO and the MUHC. In addition, surgeons pick up these documents and physically transport them to the health care facility, where the next stage of treatment or care management will take place. Major Problems in the Current Care Pathways Overview Analysis of the pathways illustrated major dysfunctions attributable to the fact that the patient’s care pathway is shared between 2 health care facilities operating in silos. Organizational silos have a negative impact on integration, as each entity focuses on its own area of responsibility at the expense of improving efficiency. Through the interviews, we identified 4 areas that explain the coordination issues in the interfacility thoracic surgery service corridors. Communication and Management and Transmission of Information The management and transmission of information between the MUHC and the CISSSO are deficient and present an increased risk of error. Lack of communication between the silos further complicates the situation, to the point where information is sometimes missing, incomplete, or processed twice. The CISSSO frequently sends the results of preoperative visits twice, by email and by post, to ensure that the MUHC receives them. However, these results arrive at different care units within the MUHC, making it difficult to verify that a patient’s file is complete. When surgeons cannot find the necessary tests, they are forced to repeat them. This causes delays, exposes the patient to potential risk, and impacts the quality of care provided. In addition, having surgeons physically transport documents between health care facilities presents certain disadvantages and risks. 
First, there is the risk of documents being lost, damaged, or misplaced during transport. Second, surgeons at the MUHC sometimes forget the documents when they travel to the CISSSO clinic. The absence of this information impairs the process considerably, given that decision-making throughout the pathways is highly dependent on the availability of certain key pieces of information. Not having these key elements triggers a “scramble for information” that consumes a staggering amount of energy and time. This leads to conflicts between the HCPs who are required to generate the information and those who need it to carry out their clinical and administrative duties properly. In this particular case, the movement of patient care management information between the CISSSO and the MUHC is not secure and reliable, and this information is not systematically filed in the patient’s clinical record. The process for exchanging patient information between the health care facilities is not harmonized and is sometimes dysfunctional and error-prone. Coordination of Care Ineffective communication between the different HCPs and between the health care facilities as well as the absence of an integrated information system to facilitate rapid and secure sharing of medical information and patient follow-up appear to impact coordination between the different steps of the care pathways and between HCPs. For example, information such as test results and treatment plans is missing when patients are transferred from one department to another or from one hospital to another. This is also the case for coordination and synchronization between surgeons and nurses, with patients finding it difficult to move seamlessly from one step of the care pathway to another. Lack of communication makes it difficult to ensure that a patient’s care and services progress smoothly, particularly when it comes to appointment reminders and follow-up after hospitalization or an examination. In fact, the lack of follow-up mechanisms between the different HCPs makes it difficult to follow a patient’s progress along their care pathways, complicating the scheduling of care, including the scheduling of surgery. In the absence of adequate follow-up, some patients may need additional tests before surgery or require chemotherapy before surgery. If these needs are not identified and addressed in time, it can lead to delays in preparatory care and compromise the quality and effectiveness of care. Sometimes patients are examined too late when their cancer is already at an advanced stage due to gaps in follow-up. Duplication of preoperative patient preparation steps between the CISSSO and the MUHC leads to inefficiencies and additional investigations. Even when the patient has already been cleared for discharge at the CISSSO after a preoperative visit, certain relevant information about their medical condition is not properly documented and transmitted to the MUHC. Frequent changes in MUHC operating schedules pose major challenges when scheduling the resources needed for each surgical procedure. This includes operating rooms, medical and nursing staff, equipment, and supplies. When schedules change, it is difficult to ensure that all resources are available at the right time, leading to delays and cancellations. Unfortunately, these unexpected changes in dates cause anxiety and uncertainty, especially if they are announced at the last minute. 
Patients and their families must make rapid adjustments to the organization of their hospital stay and to the support they require. Moreover, the fact that CISSSO patients are required to travel a great distance—more than 200 km—to get to the MUHC adds another challenge. A common problem is the lack of coordination with primary care providers and hospitals after surgery. For example, there is inadequate coordination of postoperative patient follow-up, including referral and collaboration with community resources, such as local community service centers (LCSCs) or other home care services. There is a paucity of clear guidelines defining responsibilities and referral steps between health care facilities and community resources. Moreover, the lack of formalization leads to confusion about who is responsible for which tasks and results in insufficient coordination of care. This can compromise patient recovery. Continuity of Care and Services Many issues were raised regarding the continuity of care and services between the 2 health care facilities, namely the absence of certain essential information, such as the patient’s discharge date and the MUHC’s discharge summary. This hinders proper postoperative follow-up and compromises overall patient care management at the CISSSO. The CISSSO nurse must make phone calls to the MUHC to obtain the documents by fax. Furthermore, the CISSSO’s attending physicians do not always receive discharge summaries from the MUHC hospital; these summaries contain essential information on their patients’ health status. The high volume of calls and patients represents a challenge for their care management and follow-up. To meet this challenge, CISSSO nurses must decide which patients to prioritize every day. There is also a lack of continuity in requests for pathology services, and it takes several days to obtain results. Patients are not systematically told about their postoperative appointments when they are discharged from the MUHC hospital, and they leave the health care facility without knowing whom to contact for a follow-up appointment, leading to confusion and difficulty in obtaining follow-up care at the CISSSO. The CISSSO is not aware of the specific dates of surgeries and discharges, so it is difficult to schedule and inform patients of postoperative appointments before they leave the MUHC. In fact, it is often the patient who calls the CISSSO to say that they have not been given a postoperative appointment. Furthermore, the CISSSO frequently receives phone calls from patients who have not received a call from the LCSC to change their dressing or receive other necessary care after surgery. CISSSO patients are not always contacted by the LCSC for their postsurgery care management, and the interfacility service request (IFSR) is sent to the wrong LCSC in and around Gatineau. In fact, the MUHC does not always confirm the patient’s home LCSC before sending the IFSR. Therefore, the MUHC must prepare another IFSR for follow-ups at another LCSC. In addition, the type of required follow-up (home or outpatient care) is often missing from the IFSR. This information is important to ensure that the patient receives the appropriate services according to their postoperative needs and to ensure adequate continuity of care. Defining and Understanding Roles Some tasks are not uniformly and systematically performed from one health care facility to another. Approaches and practices between HCPs and care settings are not harmonized. 
These disparities lead to fragmentation of care and loss of efficiency across the thoracic surgery care pathways. In addition, the oncology and nononcology pathways involve numerous players whose roles are sometimes inadequately defined. There are overlaps, duplications, and gaps in patient care management. The absence of an integrated information system can make coordination more difficult and lead to inefficient processes.

Objective 2: Design, Adapt, and Test the Platform With the Target Pathways

Step 1.1: Clinical and Administrative Needs of Future Users

Future users identified needs related to the workflows and future use of a digital health platform aiming to cover the entire interfacility thoracic surgery care pathways from initial triage when the patient is assessed to patient discharge after surgery. MUHC and CISSSO HCPs submitted a list of administrative and clinical needs that were classified under the following three categories: (1) communication of administrative, medical, and paramedical information; (2) clinical and organizational practices; and (3) human resources.

List of administrative and clinical needs.

Elements included by the participants related to the need for effective communication and informational continuity:
- Enable the rapid and secure transmission of information and documents required for the continuity of care and services from one health care provider (HCP) to another within the same facility and between health care facilities
- Ensure confidentiality and data protection when transmitting information
- Ensure easy and direct access, without intermediaries, to patient medical data, such as medical records, test results, and x-ray results
- Ensure automatic updating of information and documents without duplication
- Promote the use of standardized and harmonized documents and forms between the McGill University Health Centre and the Centre intégré de santé et de services sociaux de l’Outaouais
- Implement a centralized dashboard to share information in real time
- Promote the harmonization and standardization of communication between HCPs within the same facility and between health care facilities
- Promote the interoperability of interfacility IT systems

Other specific needs raised by the participants in terms of clinical and organizational practices:
- Enable real-time tracking of the steps in the patient’s care pathways
- Enable efficient management of tasks and flows, with reminders and notifications
- Implement processes to achieve and maintain ministry targets for thoracic surgery
- Implement measures to reduce wait times for diagnostic examinations and preoperative tests
- Establish standardized protocols for preoperative investigations
- Implement mechanisms to limit changes in surgery dates
- Implement standardized procedures for patient follow-up after thoracic surgery, including postoperative visits

Human resources-related needs identified as part of the implementation of the platform:
- Clarify and define the roles and responsibilities of each HCP within the care pathways
- Propose guidelines for the organization of care and services
- Facilitate patient access to care teams by establishing clear communication channels and optimizing appointment scheduling processes
- Implement a change management plan to support HCPs in adopting the platform

Step 1.2: Targeted Improvements

Following the mapping of the current thoracic surgery care pathways, several targeted improvements were implemented.
The first improvement was a review of the roles and responsibilities of each HCP involved in the pathways. This helped clarify the responsibilities of each member of the care team to ensure effective coordination and optimal patient care management. The second improvement was the recruitment of a nurse navigator at both the MUHC and the CISSSO. This person plays the role of monitor, ensuring continuity of care and services throughout the patient’s pathway. She acts as a liaison between the patient and all the HCPs, facilitating communication and enabling more fluid, personalized care management. The third improvement was the revision of organizational processes surrounding the coordination of activities relating to medical records and the storage of documents in the patient’s file. This revision has improved efficiency and prevented delays or errors in the management and tracking of patient medical data. The fourth was the rationalization of clinical and administrative flows between health care facilities. This approach has optimized the processes, ensuring smooth, efficient care management throughout the care pathways. Finally, the last improvement was demonstrating the relevance of a platform for interfacility care management of patients who underwent thoracic surgery. Key aspects of this demonstration included a secure environment for sharing patient medical information, automated notifications to HCPs at different steps of the patient’s care pathway, and the assignment of different user profiles (eg, surgeons and nurses) according to their roles and responsibilities in the care pathways. Step 2: Key Phases of the Target Pathways The key steps of the target care pathways for interfacility thoracic surgery are triage of the consultation request (CISSSO), preparation of the surgical consultation (CISSSO), consultation (CISSSO), patient referral to surgery, preparation of the preoperative visit (MUHC or CISSSO), the outcome of the preoperative visit (MUHC or CISSSO), surgery (MUHC), hospitalization (MUHC), patient discharge and referral (MUHC), and sharing postdischarge documents (MUHC). These different phases are characterized by round-the-clock user access to clinical information and documents. The key steps, from triage of the consultation request to facilitating access to clinical documents, aim to improve coordination, quality of care, and the accessibility of information throughout the patient’s care pathway. The target pathways illustrated in integrate the platform as well as the CISSSO and MUHC interfaces. By integrating the Akinox digital health platform and optimizing care coordination processes, the target care pathways aim to provide seamless, coordinated patient care . shows the different modules of the platform. Modules and functionalities of the Akinox digital health platform. 
Triage
- Patient registration (creating a request and selecting a patient)
- Initial triage (surgery consultation not required and surgery consultation to be scheduled)
- Result of the triage

Schedule the thoracic surgery consultation
- Book appointment

Thoracic surgery consultation
- Consultation (consultation date and results)
- Admission to surgery (3 electronic forms: admission and surgery form, consent form, and transfusion consent form)

Preoperative scheduling and execution
- Schedule the preoperative visit
- Preoperative visit

Patient care management and discharge
- Schedule the surgery
- Education by telephone
- Documents and examinations
- Patient discharge
- Postdischarge reports

Screenshots of the Akinox digital health platform are presented in the accompanying figures. The platform makes it possible to track each key step in the care pathways. Specifically, the “metro line” (ie, the platform’s term for “flowchart,” shown on the left side of the platform interface) provides a visual overview of the patient’s progress along the care pathways, indicating their current location in the overall process. This facilitates the coordination of care between HCPs and health care facilities, ensuring efficient patient follow-up at every stage of the patient’s care pathway. The platform automatically sends notifications to the various HCPs. These notifications are used, namely, to inform doctors, nurses, and other members of the care team of updates, changes in treatment, or any patient-related event. All platform users have access to real-time information. The platform allows users to delete a document while the step is in draft mode. Once a step has been submitted, documents can no longer be deleted, as they are sent to the MUHC patient record. The platform also allows users to add documents in the Documents and Examinations section at any point along the pathways. Thus, the platform facilitates communication and sharing of clinical information between health care facilities and between the HCPs involved in the care pathways. Given the sensitive nature of patient data, the platform guarantees data security and confidentiality in compliance with Quebec MSSS regulations on the protection of medical data.

Addition of Functionalities Following the Rollout of the Platform

It is important to note that functionalities that were added to the platform were not initially included as part of this pilot project. The aim of these additions was to provide the best, most value-added functionalities to improve, optimize, and automate care pathways workflows to meet the current and future needs of HCPs. All additions were analyzed, prioritized, developed, tested, and validated with stakeholders. To complete the work, we held several meetings with the platform’s end users to obtain their feedback in an iterative fashion and better identify areas for improvement. A total of 9 functionalities were added following the results of the survey (objective 3) and consultations with HCPs after the platform’s rollout. These functionalities have been grouped into 5 categories (opening a new request, consultation step, closing the request, administrative notes, and dashboard indicator).

Functionalities added following the survey results grouped in 5 categories.

Opening a new request

1. Enable surgeons to initiate a request on the platform themselves. Nurses are usually the ones who initiate requests when a new patient has a scheduled appointment with a surgeon. However, some patients who come in for a follow-up appointment may require surgery.
Thanks to this new functionality, the surgeon is no longer dependent on a nurse to create the request for a patient they are seeing for a follow-up visit.

Consultation step

2. Enable the nurse and surgeon to change the site of the preoperative visit at the consultation stage. In fact, there were frequent errors when completing this field, which could no longer be modified once the consultation step had been submitted. Correction by the technology partner (Akinox) was then required, resulting in delays and costs. This new functionality gives clinicians full autonomy in the event of an error.

3. Make the selection of a statement mandatory in the transfusion consent form. This ensures that the form is completed and stored in the patient’s file.

4. Include the name of the surgeon who will perform the surgery in the consent form. When the platform was rolled out, the name of every surgeon appeared in the document. For legal reasons, only the name of the surgeon who meets with the patient should appear. The new functionality corrects this problem.

5. Enable patients to sign forms electronically. When the platform was rolled out, surgeons had to print out 3 forms (surgery admission request, informed consent form, and transfusion consent form) for patients to sign before the nurse scanned them and uploaded them to the platform. This new functionality enables patients to sign documents directly on the platform, using a tablet and stylus. The documents are then automatically stored in the patient’s file at the McGill University Health Centre.

6. Enable surgeons to enter their consultation notes before, during, or after the electronic forms are completed. When the platform was rolled out, having to complete a note before completing the forms did not fit well into the workflow of the surgeons, who would normally complete the note when the patient left the room. This functionality gives surgeons greater latitude.

Closing a request

7. Close a request on the platform without having the 3 patient discharge documents (discharge summary, operation report, and pathology report). These documents are not actually completed for all patients. The addition of this functionality enables the user to close a request even when one or more documents are missing so the platform dashboard reflects the actual status of these requests.

Administrative notes

8. Add administrative notes. The notes are integrated throughout the patient’s care pathways, making them available and visible to all users. They are used to enter information to facilitate patient care management (eg, waiting for an examination before completing the preoperative visit). These notes reduce the need for email or telephone exchanges, making patient care management more efficient.

Dashboard indicator

9. Add the clinical priority to indicators. This addition makes it possible to compare surgery-related delays with the clinical priority initially indicated by the surgeon during the consultation.

Several of these functionalities required discussion and validation with various departments, including information security teams, legal affairs, and medical records.

Objective 3: Implement the Platform and Evaluate the End-User Experience

Overview

Postimplementation data were available for extraction from the platform from January 21, 2021, to September 7, 2021. During this period, 106 patients were candidates for thoracic surgery. Of these, 70.8% (75/106) had oncological conditions, and 29.2% (31/106) had nononcological conditions.
Of the 106 patient candidates, 58 (54.7%) completed the surgery journey. Among the patient candidates, 75 (70.8%) had an oncological condition and either an ongoing or completed care pathway.

Demographic Information

With regard to objective 3 of the study, all 13 participants completed the survey: 5 (38%) MUHC surgeons; 1 (8%) CISSSO nurse navigator; 2 (15%) CISSSO nurses; 1 (8%) MUHC nurse navigator; 2 (15%) MUHC nurses; 1 (8%) MUHC central operating room booking medical secretary; and 1 (8%) MUHC thoracic surgery clinic medical secretary.

Perceived Benefits of the Platform

All (13/13, 100%) participants either agreed or strongly agreed with the perceived benefits of the platform.

Evaluation of the Platform in the Context of Specific Workflows for Each User Profile

The evaluation of the platform’s effectiveness in the context of workflows is reported by health care facility and by the HCPs involved in the care pathways. The platform was customized to meet the specific needs of different user profiles. Therefore, each user’s access to the platform’s functionalities was tailored to their role and responsibilities in the patient’s care pathway. The evaluation helped validate whether the clinical and administrative flows specific to each stage of the thoracic surgery care pathways were well supported by the platform. Feedback from the 2 nurses at the MUHC highlighted unanimous agreement on 2 key aspects of the platform. Both nurses (2/2, 100%) agreed that the platform facilitates patient discharge and referral. Similarly, both nurses (2/2, 100%) confirmed that the platform effectively facilitates the transmission of information related to the discharge process. The results indicated that for the initial triage, consultation, and preoperative visit stages, CISSSO nurses and the nurse navigator “agreed” or “strongly agreed” that the platform supports each workflow. The results of the MUHC nurses’ and nurse navigator’s surveys were similarly positive with regard to the platform at specific stages of the care pathways. The MUHC preoperative and surgery secretary and the MUHC thoracic surgery medical secretary “strongly agreed” that the platform is fit for purpose for all flows. That said, the MUHC central operating room booking medical secretary “agreed” that the platform is task-ready, except for the MUHC preoperative visit, where she “strongly disagreed.” A functionality (functionality 2, consultation step) has been added to better meet her needs at this specific stage of the care pathways. The results from surgeons ranged from “neutral” to “strongly agree.” Their feedback led to further adjustments to optimize the platform and ensure that it meets surgeons’ requirements for consultations throughout the care pathways, from triage to patient discharge. In total, 6 functionalities were added after the platform was rolled out.

Evaluation of the Overall End-User Experience

Overview

Part 4 of the survey included open-ended questions enabling end users to evaluate their overall experience of the platform during the pilot period. The open-ended questions focused on ease of use, user-friendliness, expected effort, expected performance, perceived usefulness, etc.

Perceived Ease of Use

All (13/13, 100%) HCPs found the platform easy and pleasant to use. Menus and functionalities were organized logically, enabling them to quickly find what they need.
Moreover, data were presented in a structured way, enabling HCPs to interpret them easily and make informed decisions based on the patient’s needs. One participant stated the following:

“Navigating with the metro line is intuitive and reduces the time spent searching for patient information.” (Nurse navigator)

Another participant stated the following:

“The quality of the platform makes it easy to use as part of our operational reality.” (Nurse)

All (13/13, 100%) participants stated that the platform is not complicated and requires a minimum of learning time. One participant stated the following:

“The platform is easy to use for a dinosaur like me...The software is not slow and is pleasant to use. It’s simple. The visuals are well done.” (Surgeon)

Perceived Usefulness

Improving Efficiency

All (13/13, 100%) participants appreciated the easy access to patient information. They can search and download data easily, which saves time and simplifies their tasks. One participant stated the following:

“Accessing data is easy, so I can search and download information easily.” (Nurse)

Another participant mentioned the following:

“Ease of access to documents and the speed of sending them.” (Medical secretary)

Participants reported that medical information is stored securely on the platform, preventing the loss of physical documents. This helps to ensure data confidentiality and integrity. In addition, MUHC surgeons can easily access documents signed at the CISSSO, which eliminates delays in the document treatment process, enabling faster patient care management. One participant stated the following:

“There is less risk of losing information and documents between Gatineau and Montreal. Easier access to documents signed in Gatineau avoids delays in processing information. There is less of a need to consult patients’ physical files in Montreal, easier access to all documents, including examination reports such as CT and PET scans, as well as analyses that are performed in Gatineau for Montreal surgeons and anesthetists. This means I can get the information I need quickly, without any delays in transferring documents.” (Surgeon)

Improving Effectiveness

With easy access to data, HCPs can avoid searching for information in different physical files and care settings, enabling them to focus more on patient care. One participant stated the following:

“Information is easy to find. Compared to the emails we used to send...The different users can access it in real time.” (Surgeon)

Participants reported that they can track a patient’s progress over time, even as they are transferred from one care setting or health care facility to another. Thus, the platform facilitates the exchange of information and continuity of care. This ensures more efficient and secure patient care management throughout the care pathways, facilitating better information and coordination for all care teams, whether it is between the different HCPs or between the CISSSO and the MUHC. One participant stated the following:

“The platform facilitates communication and information sharing between care teams and between the MUHC and the CISSSO. We can exchange important information to ensure that care management is adapted to each of our patients.” (Nurse navigator)

Participants stressed the importance of having complete, up-to-date information on the patient’s health status to make informed decisions regarding diagnosis, treatment, and overall care management. Real-time access to data enables better coordination between the various HCPs involved in the care pathways.
One participant stated the following:

“Continuous access to patient data reduces risks, improves the quality of care and optimizes health outcomes.” (Nurse)

Another participant stated as follows:

“The solution has facilitated access to patient file numbers, it enables the retrieval of medical notes and avoids duplication of work.” (Medical secretary)

All (13/13, 100%) participants described how the coordination of transfers between health care facilities becomes more efficient when they have access to the patient’s medical information at every stage of the care pathways. This reduces the risk of losing patients in the health care system, as the information is available to all HCPs involved in the patient’s follow-up. One participant stated the following:

“We no longer lose patients in the system.” (Surgeon)

Another participant stated the following:

“By avoiding the loss of patients in the health care system, we also avoid redundant tests and unnecessary medical procedures.” (Surgeon)

The platform gathers all of a patient’s medical data in one place and enables ongoing monitoring of the patient. One participant stated the following:

“The care process is more reliable with the platform, and the platform’s metro line makes it possible to monitor the various steps of the patient’s care pathway.” (Surgeon)

Most (10/13, 77%) participants mentioned that the platform fits into existing work processes, as it has been designed to offer functionalities that are specific to each user’s role and responsibilities. The platform makes their work easier and more efficient. One participant stated the following:

“The platform has a positive impact on our performance because it is compatible and integrates well with aspects of our work and the way we work.” (Surgeon)

Another participant stated the following:

“The platform accurately delivers information and does so in an easy-to-interpret format in which we can perform our tasks when we need to. This allows me to spend more time with my patients.” (Nurse navigator)

Most (10/13, 77%) participants mentioned that the automation has streamlined workflows, eliminating redundant steps and unnecessary delays. This has led to better time management and more efficient use of HCPs.

Facilitating Conditions

Facilitating conditions play an important role in the successful adoption and optimal use of the platform. Users had ongoing technical support to help them with any platform-related technical issues. This technical support reduced the frustration associated with technical issues and encouraged them to continue using it with confidence. Two participants stated the following:

“I feel supported and confident in using the platform.” (Nurse)

“Someone is always available to help me with any platform-related issues.” (Nurse navigator)

Participants emphasized that they received individual training to learn the platform’s functionalities and that they had a clear, detailed user guide at their disposal.
The CISSSO has a designated interdisciplinary team to ensure complementarity and continuity in the care and services provided to oncology (lung and esophageal cancer) and nononcology patients. Context of Clinical and Administrative Workflows provides a macro view of the thoracic surgery process. It begins with the receipt of a request for a patient consultation with a surgeon and ends with the patient being discharged from the hospital after surgery. Patient care management involves a large number of human resources. Coordination between HCPs in the same facility and between health care facilities is essential to ensure that patient care management is carried out according to priority and pathology. The oncology and nononcology care pathways are complex in terms of scheduling patients for timely treatment in a variety of care settings. At the CISSSO, there are 2 care settings: the outpatient clinic and the oncology clinic. The outpatient clinic provides consultation and follow-up services to patients. These may include follow-ups to examinations requiring surgery, pre and postsurgery care, as well as pre and posthospitalization care for general surgery and biopsies. The oncology clinic cares for patients with cancer. Each clinic has assembled a care team to educate patients about their disease; provide the care and services required by their condition; deliver education related to their condition; and provide support to patients, their families, and loved ones throughout the care pathway. At the MUHC, there are 4 care settings: the thoracic surgery clinic, the preoperative clinic, the care unit, and the oncology clinic for certain patients. The thoracic surgery clinic provides consultation, postexamination, postoperative, and posthospitalization follow-up services. It specializes in the investigation and treatment of potential or diagnosed thoracic pathologies (lung cancer, esophageal cancer, etc). The preoperative clinic conducts the patient’s preoperative health check. During this visit, patients undergo various medical examinations. The visit is also used to schedule the hospital stay and discharge following the operation. The care unit is an inpatient unit that receives patients arriving from the operating room. The MUHC oncology clinic provides the same services as the CISSSO. This service organization model involves an agreement between the MUHC and the CISSSO to deliver the service offering and meet the needs of the Gatineau region’s population. CISSSO patients must travel to another health care facility (MUHC) in another region (Montreal) to receive the care and services they need. Patients must travel more than 200 km from Gatineau to Montreal. The oncology and nononcology thoracic surgery patient care pathways are complex, involving several transitions with HCPs at scheduled times and in specific care settings. Clinical Forms and Other Documents In today’s thoracic surgery pathways, it is common to have several forms to complete at different times (preoperative forms, forms related to the surgery, and postoperative forms) and health care facilities (MUHC and CISSSO) and by various HCPs. These forms and documents are used to collect essential information to ensure appropriate patient management and effective communication between members of the care team at each health care facility. During the interviews, we identified 25 forms, such as the request for admission to surgery, the blood transfusion consent form, and the informed consent form. 
In addition to the forms, other documents formed part of the patient's medical record, such as clinical notes filed by consultation date, pathology and imaging reports, laboratory results, and the discharge summary after a hospital stay. We also listed 13 pre- and postoperative guides or instructions provided to patients. These documents were designed to inform patients about the various stages of their journey, prepare them for surgery, and help them recover effectively after the operation. All documentation is kept in the patient's file in paper form at both the CISSSO and the MUHC. In addition, surgeons pick up these documents and physically transport them to the health care facility, where the next stage of treatment or care management will take place.

Major Problems in the Current Care Pathways

Overview

Analysis of the pathways illustrated major dysfunctions attributable to the fact that the patient's care pathway is shared between 2 health care facilities operating in silos. Organizational silos have a negative impact on integration, as each entity focuses on its own area of responsibility at the expense of improving efficiency. Through the interviews, we identified 4 areas that explain the coordination issues in the interfacility thoracic surgery service corridors.

Communication and Management and Transmission of Information

The management and transmission of information between the MUHC and the CISSSO are deficient and present an increased risk of error. Lack of communication between the silos further complicates the situation, to the point where information is sometimes missing, incomplete, or processed twice. The CISSSO frequently sends the results of preoperative visits twice, by email and by post, to ensure that the MUHC receives them. However, these results arrive at different care units within the MUHC, making it difficult to verify that a patient's file is complete. When surgeons cannot find the necessary tests, they are forced to repeat them. This causes delays, exposes the patient to potential risk, and impacts the quality of care provided. In addition, having surgeons physically transport documents between health care facilities presents certain disadvantages and risks. First, there is the risk of documents being lost, damaged, or misplaced during transport. Second, surgeons at the MUHC sometimes forget the documents when they travel to the CISSSO clinic. The absence of this information impairs the process considerably, given that decision-making throughout the pathways is highly dependent on the availability of certain key pieces of information. Not having these key elements triggers a "scramble for information" that consumes a staggering amount of energy and time. This leads to conflicts between the HCPs who are required to generate the information and those who need it to carry out their clinical and administrative duties properly. In this particular case, the movement of patient care management information between the CISSSO and the MUHC is not secure and reliable, and this information is not systematically filed in the patient's clinical record. The process for exchanging patient information between the health care facilities is not harmonized and is sometimes dysfunctional and error-prone.
Coordination of Care Ineffective communication between the different HCPs and between the health care facilities as well as the absence of an integrated information system to facilitate rapid and secure sharing of medical information and patient follow-up appear to impact coordination between the different steps of the care pathways and between HCPs. For example, information such as test results and treatment plans is missing when patients are transferred from one department to another or from one hospital to another. This is also the case for coordination and synchronization between surgeons and nurses, with patients finding it difficult to move seamlessly from one step of the care pathway to another. Lack of communication makes it difficult to ensure that a patient’s care and services progress smoothly, particularly when it comes to appointment reminders and follow-up after hospitalization or an examination. In fact, the lack of follow-up mechanisms between the different HCPs makes it difficult to follow a patient’s progress along their care pathways, complicating the scheduling of care, including the scheduling of surgery. In the absence of adequate follow-up, some patients may need additional tests before surgery or require chemotherapy before surgery. If these needs are not identified and addressed in time, it can lead to delays in preparatory care and compromise the quality and effectiveness of care. Sometimes patients are examined too late when their cancer is already at an advanced stage due to gaps in follow-up. Duplication of preoperative patient preparation steps between the CISSSO and the MUHC leads to inefficiencies and additional investigations. Even when the patient has already been cleared for discharge at the CISSSO after a preoperative visit, certain relevant information about their medical condition is not properly documented and transmitted to the MUHC. Frequent changes in MUHC operating schedules pose major challenges when scheduling the resources needed for each surgical procedure. This includes operating rooms, medical and nursing staff, equipment, and supplies. When schedules change, it is difficult to ensure that all resources are available at the right time, leading to delays and cancellations. Unfortunately, these unexpected changes in dates cause anxiety and uncertainty, especially if they are announced at the last minute. Patients and their families must make rapid adjustments to the organization of their hospital stay and to the support they require. Moreover, the fact that CISSSO patients are required to travel a great distance—more than 200 km—to get to the MUHC adds another challenge. A common problem is the lack of coordination with primary care providers and hospitals after surgery. For example, there is inadequate coordination of postoperative patient follow-up, including referral and collaboration with community resources, such as local community service centers (LCSCs) or other home care services. There is a paucity of clear guidelines defining responsibilities and referral steps between health care facilities and community resources. Moreover, the lack of formalization leads to confusion about who is responsible for which tasks and results in insufficient coordination of care. This can compromise patient recovery. 
Continuity of Care and Services

Many issues were raised regarding the continuity of care and services between the 2 health care facilities, namely the absence of certain essential information, such as the patient's discharge date and the MUHC's discharge summary. This hinders proper postoperative follow-up and compromises overall patient care management at the CISSSO. The CISSSO nurse must make phone calls to the MUHC to obtain the documents by fax. Furthermore, the CISSSO's attending physicians do not always receive discharge summaries from the MUHC hospital; these summaries contain essential information on their patients' health status. The high volume of calls and patients represents a challenge for their care management and follow-up. To meet this challenge, CISSSO nurses must decide which patients to prioritize every day. There is also a lack of continuity in requests for pathology services, and it takes several days to obtain results. Patients are not systematically told about their postoperative appointments when they are discharged from the MUHC hospital, and they leave the health care facility without knowing whom to contact for a follow-up appointment, leading to confusion and difficulty in obtaining follow-up care at the CISSSO. The CISSSO is not aware of the specific dates of surgeries and discharges, so it is difficult to schedule and inform patients of postoperative appointments before they leave the MUHC. In fact, it is often the patient who calls the CISSSO to say that they have not been given a postoperative appointment. Furthermore, the CISSSO frequently receives phone calls from patients who have not received a call from the LCSC to change their dressing or receive other necessary care after surgery. CISSSO patients are not always contacted by the LCSC for their postsurgery care management, and the interfacility service request (IFSR) is sent to the wrong LCSC in and around Gatineau. In fact, the MUHC does not always confirm the patient's home LCSC before sending the IFSR. Therefore, the MUHC must prepare another IFSR for follow-ups at another LCSC. In addition, the type of required follow-up (home or outpatient care) is often missing from the IFSR. This information is important to ensure that the patient receives the appropriate services according to their postoperative needs and to ensure adequate continuity of care.

Defining and Understanding Roles

Some tasks are not uniformly and systematically performed from one health care facility to another. Approaches and practices between HCPs and care settings are not harmonized. These disparities lead to fragmentation of care and loss of efficiency across the thoracic surgery care pathways. In addition, the oncology and nononcology pathways involve numerous players whose roles are sometimes inadequately defined. There are overlaps, duplications, and gaps in patient care management. The absence of an integrated information system can make coordination more difficult and lead to inefficient processes.
Step 1.1: Clinical and Administrative Needs of Future Users

Future users identified needs related to the workflows and future use of a digital health platform aiming to cover the entire interfacility thoracic surgery care pathways from initial triage when the patient is assessed to patient discharge after surgery. MUHC and CISSSO HCPs submitted a list of administrative and clinical needs that were classified under the following three categories: (1) communication of administrative, medical, and paramedical information; (2) clinical and organizational practices; and (3) human resources. The list of administrative and clinical needs follows.
Elements included by the participants related to the need for effective communication and informational continuity:
- Enable the rapid and secure transmission of information and documents required for the continuity of care and services from one health care provider (HCP) to another within the same facility and between health care facilities
- Ensure confidentiality and data protection when transmitting information
- Ensure easy and direct access, without intermediaries, to patient medical data, such as medical records, test results, and x-ray results
- Ensure automatic updating of information and documents without duplication
- Promote the use of standardized and harmonized documents and forms between the McGill University Health Centre and the Centre intégré de santé et de services sociaux de l'Outaouais
- Implement a centralized dashboard to share information in real time
- Promote the harmonization and standardization of communication between HCPs within the same facility and between health care facilities
- Promote the interoperability of interfacility IT systems

Other specific needs raised by the participants in terms of clinical and organizational practices:
- Enable real-time tracking of the steps in the patient's care pathways
- Enable efficient management of tasks and flows, with reminders and notifications
- Implement processes to achieve and maintain ministry targets for thoracic surgery
- Implement measures to reduce wait times for diagnostic examinations and preoperative tests
- Establish standardized protocols for preoperative investigations
- Implement mechanisms to limit changes in surgery dates
- Implement standardized procedures for patient follow-up after thoracic surgery, including postoperative visits

Human resources-related needs identified as part of the implementation of the platform:
- Clarify and define the roles and responsibilities of each HCP within the care pathways
- Propose guidelines for the organization of care and services
- Facilitate patient access to care teams by establishing clear communication channels and optimizing appointment scheduling processes
- Implement a change management plan to support HCPs in adopting the platform

Step 1.2: Targeted Improvements

Following the mapping of the current thoracic surgery care pathways, several targeted improvements were implemented. The first improvement was a review of the roles and responsibilities of each HCP involved in the pathways. This helped clarify the responsibilities of each member of the care team to ensure effective coordination and optimal patient care management. The second improvement was the recruitment of a nurse navigator at both the MUHC and the CISSSO. This person plays the role of monitor, ensuring continuity of care and services throughout the patient's pathway. She acts as a liaison between the patient and all the HCPs, facilitating communication and enabling more fluid, personalized care management. The third improvement was the revision of organizational processes surrounding the coordination of activities relating to medical records and the storage of documents in the patient's file. This revision has improved efficiency and prevented delays or errors in the management and tracking of patient medical data. The fourth was the rationalization of clinical and administrative flows between health care facilities. This approach has optimized the processes, ensuring smooth, efficient care management throughout the care pathways.
Finally, the last improvement was demonstrating the relevance of a platform for interfacility care management of patients who underwent thoracic surgery. Key aspects of this demonstration included a secure environment for sharing patient medical information, automated notifications to HCPs at different steps of the patient's care pathway, and the assignment of different user profiles (eg, surgeons and nurses) according to their roles and responsibilities in the care pathways.

Step 2: Key Phases of the Target Pathways

The key steps of the target care pathways for interfacility thoracic surgery are triage of the consultation request (CISSSO), preparation of the surgical consultation (CISSSO), consultation (CISSSO), patient referral to surgery, preparation of the preoperative visit (MUHC or CISSSO), the outcome of the preoperative visit (MUHC or CISSSO), surgery (MUHC), hospitalization (MUHC), patient discharge and referral (MUHC), and sharing postdischarge documents (MUHC). These different phases are characterized by round-the-clock user access to clinical information and documents. The key steps, from triage of the consultation request to facilitating access to clinical documents, aim to improve coordination, quality of care, and the accessibility of information throughout the patient's care pathway. The target pathways integrate the platform as well as the CISSSO and MUHC interfaces. By integrating the Akinox digital health platform and optimizing care coordination processes, the target care pathways aim to provide seamless, coordinated patient care.

Modules and functionalities of the Akinox digital health platform:
- Triage: patient registration (creating a request and selecting a patient), initial triage (surgery consultation not required and surgery consultation to be scheduled), and result of the triage
- Schedule the thoracic surgery consultation: book appointment
- Thoracic surgery consultation: consultation (consultation date and results) and admission to surgery (3 electronic forms: admission and surgery form, consent form, and transfusion consent form)
- Preoperative scheduling and execution: schedule the preoperative visit and preoperative visit
- Patient care management and discharge: schedule the surgery, education by telephone, documents and examinations, patient discharge, and postdischarge reports

Screenshots of the Akinox digital health platform illustrate these modules. The platform makes it possible to track each key step in the care pathways. Specifically, the "metro line" (ie, the platform's term for "flowchart") provides a visual overview of the patient's progress along the care pathways, indicating their current location in the overall process. This facilitates the coordination of care between HCPs and health care facilities, ensuring efficient patient follow-up at every stage of the patient's care pathway. The platform automatically sends notifications to the various HCPs. These notifications are used to inform doctors, nurses, and other members of the care team of updates, changes in treatment, or any patient-related event. All platform users have access to real-time information. The platform allows users to delete a document while the step is in draft mode. Once a step has been submitted, documents can no longer be deleted, as they are sent to the MUHC patient record. The platform also allows users to add documents in the Documents and Examinations section at any point along the pathways.
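To make the step-based design more concrete, the following is a minimal, hypothetical sketch in Python of a "metro line" of pathway steps, including the rule that attached documents can be deleted only while a step is still in draft. It is an illustration under stated assumptions, not the platform's actual implementation or API; all class, field, and step names are assumptions introduced for the example.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class StepStatus(Enum):
    DRAFT = "draft"
    SUBMITTED = "submitted"


@dataclass
class PathwayStep:
    """One stop on a hypothetical 'metro line' (eg, triage, consultation, surgery)."""
    name: str
    facility: str                      # eg, "CISSSO" or "MUHC"
    status: StepStatus = StepStatus.DRAFT
    documents: List[str] = field(default_factory=list)

    def add_document(self, doc_name: str) -> None:
        # Documents can be added at any point along the pathway.
        self.documents.append(doc_name)

    def delete_document(self, doc_name: str) -> None:
        # Deletion is allowed only while the step is still in draft mode;
        # once submitted, documents are considered part of the patient record.
        if self.status is not StepStatus.DRAFT:
            raise PermissionError("Documents cannot be deleted after submission.")
        self.documents.remove(doc_name)

    def submit(self) -> None:
        self.status = StepStatus.SUBMITTED


# A simplified "metro line" for the interfacility pathway (names are illustrative).
metro_line = [
    PathwayStep("Triage of the consultation request", "CISSSO"),
    PathwayStep("Thoracic surgery consultation", "CISSSO"),
    PathwayStep("Preoperative visit", "MUHC or CISSSO"),
    PathwayStep("Surgery and hospitalization", "MUHC"),
    PathwayStep("Patient discharge and referral", "MUHC"),
]

step = metro_line[1]
step.add_document("Informed consent form")
step.submit()
# step.delete_document("Informed consent form")  # would raise PermissionError
```

The point of the sketch is simply that an ordered list of steps with an explicit draft/submitted state is enough to express both the visual progress overview and the document-retention rule described above.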
Thus, the platform facilitates communication and sharing of clinical information between health care facilities and between the HCPs involved in the care pathways. Given the sensitive nature of patient data, the platform guarantees data security and confidentiality in compliance with Quebec MSSS regulations on the protection of medical data.

Addition of Functionalities Following the Rollout of the Platform

It is important to note that the functionalities added to the platform were not initially included as part of this pilot project. The aim of these additions was to provide the best, most value-added functionalities to improve, optimize, and automate care pathway workflows to meet the current and future needs of HCPs. All additions were analyzed, prioritized, developed, tested, and validated with stakeholders. To complete the work, we held several meetings with the platform's end users to obtain their feedback in an iterative fashion and better identify areas for improvement. A total of 9 functionalities were added following the results of the survey (objective 3) and consultations with HCPs after the platform's rollout. These functionalities have been grouped into 5 categories (opening a new request, consultation step, closing the request, administrative notes, and dashboard indicator), as listed below.

Functionalities added following the survey results, grouped in 5 categories:

Opening a new request
1. Enable surgeons to initiate a request on the platform themselves. Nurses are usually the ones who initiate requests when a new patient has a scheduled appointment with a surgeon. However, some patients who come in for a follow-up appointment may require surgery. Thanks to this new functionality, the surgeon is no longer dependent on a nurse to create the request for a patient they are seeing for a follow-up visit.

Consultation step
2. Enable the nurse and surgeon to change the site of the preoperative visit at the consultation stage. There were frequent errors when completing this field, which could no longer be modified once the consultation step had been submitted. Correction by the technology partner (Akinox) was then required, resulting in delays and costs. This new functionality gives clinicians full autonomy in the event of an error.
3. Make the selection of a statement mandatory in the transfusion consent form. This ensures that the form is completed and stored in the patient's file.
4. Include the name of the surgeon who will perform the surgery in the consent form. When the platform was rolled out, the name of every surgeon appeared in the document. For legal reasons, only the name of the surgeon who meets with the patient should appear. The new functionality corrects this problem.
5. Enable patients to sign forms electronically. When the platform was rolled out, surgeons had to print out 3 forms (surgery admission request, informed consent form, and transfusion consent form) for patients to sign before the nurse scanned them and uploaded them to the platform. This new functionality enables patients to sign documents directly on the platform, using a tablet and stylus. The documents are then automatically stored in the patient's file at the McGill University Health Centre.
6. Enable surgeons to enter their consultation notes before, during, or after the electronic forms are completed.
When the platform was rolled out, having to complete a note before completing the forms did not fit well into the workflow of the surgeons, who would normally complete the note when the patient left the room. This functionality gives surgeons greater latitude.

Closing a request
7. Close a request on the platform without having the 3 patient discharge documents (discharge summary, operation report, and pathology report). These documents are not actually completed for all patients. The addition of this functionality enables the user to close a request even when one or more documents are missing so that the platform dashboard reflects the actual status of these requests.

Administrative notes
8. Add administrative notes. The notes are integrated throughout the patient's care pathways, making them available and visible to all users. They are used to enter information to facilitate patient care management (eg, waiting for an examination before completing the preoperative visit). These notes reduce the need for email or telephone exchanges, making patient care management more efficient.

Dashboard indicator
9. Add the clinical priority to indicators. This addition makes it possible to compare surgery-related delays with the clinical priority initially indicated by the surgeon during the consultation.

Several of these functionalities required discussion and validation with various departments, including information security teams, legal affairs, and medical records.
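To illustrate the kind of form-level checks that added functionalities 3 and 4 describe, the following is a hedged sketch of how such validation could be expressed. The function and field names are hypothetical and are not drawn from the Akinox platform; the sketch only shows the general idea of blocking submission until the consent form is complete and names only the attending surgeon.

```python
from typing import List, Optional


def validate_transfusion_consent(selected_statement: Optional[str],
                                 consenting_surgeon: str,
                                 surgeon_on_form: str) -> List[str]:
    """Return validation errors for a hypothetical transfusion consent form.

    Mirrors two of the added functionalities: a statement must be selected
    before the form can be submitted, and only the surgeon who meets the
    patient should appear on the form.
    """
    errors: List[str] = []
    if not selected_statement:
        errors.append("A consent statement must be selected before submission.")
    if surgeon_on_form != consenting_surgeon:
        errors.append("Only the name of the surgeon who meets the patient may appear.")
    return errors


# Example: an incomplete form is blocked until a statement is selected.
print(validate_transfusion_consent(None, "Dr. A", "Dr. A"))
```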
Overview

Postimplementation data were available for extraction from the platform from January 21, 2021, to September 7, 2021. During this period, 106 patients were candidates for thoracic surgery. Of these, 70.8% (75/106) had oncological conditions, and 29.2% (31/106) had nononcological conditions. Of the 106 patient candidates, 58 (54.7%) completed the surgery journey. Among the patient candidates, 75 (70.8%) had an oncological condition and either an ongoing or completed care pathway.

Demographic Information

With regard to objective 3 of the study, all 13 participants completed the survey: 5 (38%) MUHC surgeons, 1 (8%) nurse navigator and 2 (15%) nurses from the CISSSO, 1 (8%) MUHC nurse navigator, 2 (15%) MUHC nurses, 1 (8%) MUHC central operating room booking medical secretary, and 1 (8%) MUHC thoracic surgery clinic medical secretary.

Perceived Benefits of the Platform

All (13/13, 100%) participants either agreed or strongly agreed with the perceived benefits of the platform.

Evaluation of the Platform in the Context of Specific Workflows for Each User Profile

The evaluation examined the platform's effectiveness in the context of workflows by health care facilities and by the HCPs involved in the care pathways. The platform was customized to meet the specific needs of different user profiles. Therefore, each user's access to the platform's functionalities was tailored to their role and responsibilities in the patient's care pathway. The evaluation helped validate whether the clinical and administrative flows specific to each stage of the thoracic surgery care pathways were well supported by the platform. Feedback from two nurses at the MUHC highlighted unanimous agreement on two key aspects of the platform. Both nurses (2/2, 100%) agreed that the platform facilitates patient discharge and referral. Similarly, both nurses (2/2, 100%) confirmed that the platform effectively facilitates the transmission of information related to the discharge process. The results indicated that for the initial triage, consultation, and preoperative visit stages, CISSSO nurses and the nurse navigator "agreed" or "strongly agreed" that the platform supports each workflow.
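As a simple check, the proportions reported in the overview above can be recomputed directly from the extracted counts. The short snippet below reproduces them; it is only an illustrative calculation, not part of the study's statistical analysis.

```python
# Counts extracted from the platform (January 21 to September 7, 2021).
total_candidates = 106
counts = {
    "oncological conditions": 75,
    "nononcological conditions": 31,
    "completed surgery journey": 58,
}

for label, count in counts.items():
    print(f"{label}: {count}/{total_candidates} = {100 * count / total_candidates:.1f}%")
# oncological conditions: 75/106 = 70.8%
# nononcological conditions: 31/106 = 29.2%
# completed surgery journey: 58/106 = 54.7%
```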
The results of the MUHC nurses' and nurse navigator's surveys were similarly positive with regard to the platform at specific stages of the care pathways. The MUHC preoperative and surgery secretary and the MUHC thoracic surgery medical secretary "strongly agreed" that the platform is fit for purpose for all flows. That said, the MUHC central operating room booking medical secretary "agreed" that the platform is task-ready, except for the MUHC preoperative visit, where she "strongly disagreed." A functionality (2: consultation step) has been added to better meet her needs at this specific stage of the care pathways. The results from surgeons ranged from "neutral" to "strongly agree." Their feedback led to further adjustments to optimize the platform and ensure that it meets surgeons' requirements for consultations throughout the care pathways, from triage to patient discharge. In total, 6 functionalities were added after the platform was rolled out.

Evaluation of the Overall End-User Experience

Overview
Part 4 of the survey included open-ended questions enabling end users to evaluate their overall experience of the platform during the pilot period. The open-ended questions focused on ease of use, user-friendliness, expected effort, expected performance, perceived usefulness, and related aspects.

Perceived Ease of Use
All (13/13, 100%) HCPs found the platform easy and pleasant to use. Menus and functionalities were organized logically, enabling them to quickly find what they need. Moreover, data were presented in a structured way, enabling HCPs to interpret them easily and make informed decisions based on the patient's needs. One participant stated the following:

Navigating with the metro line is intuitive and reduces the time spent searching for patient information. [Nurse navigator]

Another participant stated the following:

The quality of the platform makes it easy to use as part of our operational reality. [Nurse]

All (13/13, 100%) participants stated that the platform is not complicated and requires a minimum of learning time. One participant stated the following:

The platform is easy to use for a dinosaur like me...The software is not slow and is pleasant to use. It's simple. The visuals are well done. [Surgeon]

Perceived Usefulness

Improving Efficiency
All (13/13, 100%) participants appreciated the easy access to patient information. They can search and download data easily, which saves time and simplifies their tasks. One participant stated the following:

Accessing data is easy, so I can search and download information easily. [Nurse]

Another participant mentioned the following:

Ease of access to documents and the speed of sending them. [Medical secretary]

Participants reported that medical information is stored securely on the platform, preventing the loss of physical documents. This helps to ensure data confidentiality and integrity. In addition, MUHC surgeons can easily access documents signed at the CISSSO, which eliminates delays in the document treatment process, enabling faster patient care management. One participant stated the following:

There is less risk of losing information and documents between Gatineau and Montreal. Easier access to documents signed in Gatineau avoids delays in processing information. There is less of a need to consult patients' physical files in Montreal, easier access to all documents, including examination reports such as CT and PET scans, as well as analyses that are performed in Gatineau for Montreal surgeons and anesthetists.
This means I can get the information I need quickly, without any delays in transferring documents. [Surgeon]

Improving Effectiveness
With easy access to data, HCPs can avoid searching for information in different physical files and care settings, enabling them to focus more on patient care. One participant stated the following:

Information is easy to find. Compared to the emails we used to send...The different users can access it in real time. [Surgeon]

Participants reported that they can track a patient's progress over time, even as they are transferred from one care setting or health care facility to another. Thus, the platform facilitates the exchange of information and continuity of care. This ensures more efficient and secure patient care management throughout the care pathways, facilitating better information and coordination for all care teams, whether it is between the different HCPs or between the CISSSO and the MUHC. The participant stated the following:

The platform facilitates communication and information sharing between care teams and between the MUHC and the CISSSO. We can exchange important information to ensure that care management is adapted to each of our patients. [Nurse navigator]

Participants stressed the importance of having complete, up-to-date information on the patient's health status to make informed decisions regarding diagnosis, treatment, and overall care management. Real-time access to data enables better coordination between the various HCPs involved in the care pathways. The participant stated the following:

Continuous access to patient data reduces risks, improves the quality of care and optimizes health outcomes. [Nurse]

Another participant stated as follows:

The solution has facilitated access to patient file numbers, it enables the retrieval of medical notes and avoids duplication of work. [Medical secretary]

All (13/13, 100%) participants described how the coordination of transfers between health care facilities becomes more efficient when they have access to the patient's medical information at every stage of the care pathways. This reduces the risk of losing patients in the health care system, as the information is available to all HCPs involved in the patient's follow-up. One participant stated the following:

We no longer lose patients in the system. [Surgeon]

Another participant stated the following:

By avoiding the loss of patients in the health care system, we also avoid redundant tests and unnecessary medical procedures. [Surgeon]

The platform gathers all of a patient's medical data in one place and enables ongoing monitoring of the patient. One participant stated the following:

The care process is more reliable with the platform, and the platform's metro line makes it possible to monitor the various steps of the patient's care pathway. [Surgeon]

Most (10/13, 77%) participants mentioned that the platform fits into existing work processes, as it has been designed to offer functionalities that are specific to each user's role and responsibilities. The platform makes their work easier and more efficient. One participant stated the following:

The platform has a positive impact on our performance because it is compatible and integrates well with aspects of our work and the way we work. [Surgeon]

Another participant stated the following:

The platform accurately delivers information and does so in an easy-to-interpret format in which we can perform our tasks when we need to. This allows me to spend more time with my patients.
[Nurse navigator]

Most (10/13, 77%) participants mentioned that the automation has streamlined workflows, eliminating redundant steps and unnecessary delays. This has led to better time management and more efficient use of HCPs.

Facilitating Conditions
Facilitating conditions play an important role in the successful adoption and optimal use of the platform. Users had ongoing technical support to help them with any platform-related technical issues. This technical support reduced the frustration associated with technical issues and encouraged them to continue using the platform with confidence. Two participants stated the following:

I feel supported and confident in using the platform. [Nurse]

Someone is always available to help me with any platform-related issues. [Nurse navigator]

Participants emphasized that they received individual training to learn the platform's functionalities and that they had a clear, detailed user guide at their disposal.
Principal Findings
Our research has shown that the thoracic surgery care pathways are complex and require effective coordination of interfacility service corridors (objective 1). The oncology and nononcology care pathways are not limited to the surgical procedure itself. Pathways also include triage, preoperative preparation, postoperative follow-up, rehabilitation, and overall patient care management throughout the care pathway. This continuity of care requires seamless communication between the different HCPs and health care facilities. The main challenge is that the 2 health care facilities operate in silos. When the MUHC and the CISSSO operate in isolation, without effective communication and information sharing, it can lead to problems at transition points in the care pathways. It can also impede continuity of care, leading to duplication, errors, or delays in treatment and follow-up, with potential consequences for patients' health and quality of life. However, the implementation of a digital health solution can play a role in the coordination and efficiency of health care, leading to integrated, coordinated, and equitable patient care at every step of the care pathways. The information gathered in objective 1 provided an understanding of the entire patient journey and the activities in which HCPs are involved. Mapping the care pathways using the Business Process Model Notation method provided precise indications of the user interface, system integrations, functionalities, workflows, and dataflows required to optimize the interfacility thoracic surgery care pathways.
Analysis of this mapping helped pinpoint the steps that do not add value for HCPs or are missing as well as any opportunities for improvement, aiming to better meet their clinical and administrative needs. Our research also demonstrated that using the participatory design approach from the outset of the platform design process in conjunction with future users and other key stakeholders was beneficial (objective 2). The workshops enabled participants to share their needs, expectations, and priorities regarding platform functionalities. These exchanges contributed to a better understanding of specific use cases and issues facing future users. Involving them as stakeholders in the design process strengthened their sense of ownership and commitment to the developed solution. The knowledge generated through the workshops was used to enhance the prototype under development. Thus, by integrating coproduced knowledge, we were able to ensure that the platform corresponded to the actual needs and expectations of future users, increasing the chances of adoption and acceptance of the solution. This user-centered approach is conducive to creating a design that is better suited to the real needs of end users and contributes to a better overall user experience. The survey results (objective 3) showed that all HCPs “agreed” or “strongly agreed” on the benefits of the platform. For the vast majority, the clinical and administrative flows for each user profile are well supported by the platform. To ensure continuous improvement of the platform, 9 functionalities were added in response to end-user feedback, representing significant added value to the care pathways. Evaluation of the end-user experience (objective 3) demonstrated several benefits resulting from the platform. First, the thoracic surgery care pathways have been optimized, automated, and made more secure. This ensures better connectivity between the different players, facilitating the flow of exchanges and information traceability, while ensuring that each player has a better understanding of the patient care process at the MUHC and the CISSSO. Second, the platform has helped overcome the challenges associated with operating in silos. It facilitates communication between HCPs and between the MUHC and the CISSSO, which can lead to better decision-making for patients. In turn, this reduces the risk of medical errors and improves efficiency by optimizing case processing time and improving information transfer. These improvements have made it possible to reallocate staff at both health care facilities from searching for and monitoring information to activities that add value to the care pathways. Therefore, workflows are more efficient and effective, and resources are better used. This was enabled at both health care facilities by eliminating unnecessary or duplicated steps, reducing delays in the process, improving the fluidity of the care pathways, giving all HCPs access to information related to completed and upcoming care pathway steps, and providing access to performance indicators across the entire care pathways with a view to continuous improvement for the benefit of patients. In addition, the platform promotes better care coordination by streamlining transitions between care settings across the entire health care continuum. This enhanced coordination ensures that patients benefit from comprehensive, coherent care throughout their care pathways. 
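To make the dashboard-indicator idea concrete (functionality 9, which compares surgery-related delays with the clinical priority assigned at consultation), the sketch below shows one way such an indicator could be computed. The priority labels, target windows, and field names are illustrative assumptions only; they are not taken from the Akinox platform or from any clinical standard.

```python
from datetime import date

# Hypothetical target delays (in days) per clinical priority level; illustrative only.
TARGET_DAYS_BY_PRIORITY = {"P1": 14, "P2": 28, "P3": 56, "P4": 84}

def surgery_delay_indicator(request_opened: date, surgery_date: date, priority: str) -> dict:
    """Compare the delay between request opening and surgery with the priority's target."""
    elapsed = (surgery_date - request_opened).days
    target = TARGET_DAYS_BY_PRIORITY[priority]
    return {
        "priority": priority,
        "elapsed_days": elapsed,
        "target_days": target,
        "within_target": elapsed <= target,
    }

# Example: a request opened on March 1, 2021, with priority P2 and surgery on April 5, 2021.
print(surgery_delay_indicator(date(2021, 3, 1), date(2021, 4, 5), "P2"))
# {'priority': 'P2', 'elapsed_days': 35, 'target_days': 28, 'within_target': False}
```

The point is only that, once the clinical priority is recorded on the dashboard, the delay indicator reduces to comparing elapsed time against a priority-specific target; the platform's actual priorities and targets would come from clinical policy.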
Finally, the success of the project convinced the clinical teams and senior management of the health care facilities (MUHC and CISSSO) to pursue the long-term use of the Akinox digital health platform for the oncology and nononcology thoracic surgery care pathways.

Limitations and Future Research
While the study represents a significant advancement in the field of thoracic surgery care pathways and interfacility health care service coordination, there are some limitations that need to be addressed to ensure the generalizability, sustainability, and overall effectiveness of the integrated digital health solution. One limitation is the lack of validation of the platform's effectiveness across different health care contexts and specialties. Further research is needed to assess its generalizability across varied health care environments and specialties beyond thoracic surgery and develop evidence-based guidelines for implementation in diverse clinical contexts. In addition, the study lacks a comprehensive assessment of patients' experiences and satisfaction with the digital health solution. Understanding patients' perspectives is important for evaluating the overall effectiveness and impact of the integrated digital health platform on patient-centered care. Future research should include patient feedback to enrich our understanding of how the platform influences patient outcomes and experiences. Furthermore, the quantitative evaluation of the user experience is based on a limited sample of 13 users. Although these initial findings provide valuable insights, the small sample size reduces the ability to generalize the results to a broader population. Expanding the user base in future studies is essential to capturing a more diverse range of experiences and ensuring the platform's adaptability and effectiveness across different user groups. Finally, for future research, it would be valuable to statistically measure the process over time to assess the real improvements following the platform's implementation. This analysis could offer deeper insights into the platform's impact on operational efficiency and patient outcomes, providing data that can inform continuous improvement efforts.

Conclusions
This pilot project helped develop a usable and valid target care pathway model for the interfacility thoracic surgery care pathways by integrating a platform. The platform's infrastructure is designed to be easily configurable and adaptable to different types of cancer and surgery. This means that the model can be extended to other medical specialties, enabling a smoother care pathway for a greater number of patients. The platform also provides a technological solution and model that can be exported to the pulmonary oncology network and other care networks and clinical units. Furthermore, this pilot project highlights the best practices and conditions for success in the consolidation of a cancer network that can be transferred to other networks, namely the key role played by nurse navigators, who are the guarantors of the patient's care pathway. Our pilot project is in line with one of the objectives of the MSSS's Information Technology Division, which aims to use information resources in the health care network to make the shift to digital technology by improving business processes. This project is part of Quebec's Digital Strategy, launched by the Ministère de l'Économie et de l'Innovation: one of its orientations is to have connected health care for the citizens.
The Ministère de l'Économie et de l'Innovation believes that digital technology makes it possible to respond to patients' needs according to their realities, optimizing and improving health care services. Collaboration and sharing, in this case, between HCPs from different health care facilities and even with patients, represent the future of the integrated, patient-centered health care system.

In terms of managerial insights, the study highlights the importance of strategic leadership in the implementation of digital health solutions. By fostering collaboration between different stakeholders, organizations can improve care coordination and operational effectiveness. This is in line with health care approaches that emphasize the need for patient-centered care. In addition, this entire pilot project is part of the MSSS's approach, aiming to improve the accessibility, equity, integration, and quality of services and care. This model, which is increasingly patient-centered, should help provide care and services within medically acceptable timeframes; it should be transferable to other care pathways and health care facilities and contribute, in this case, to better care and services for patients with cancer.

Finally, this study pushes the boundaries of theoretical advancements in medical informatics by bridging the gap between digital solutions and practical applications in clinical settings. It emphasizes the role of technology not just as a tool but as an integral part of a patient's care pathway, thereby enhancing the theoretical frameworks on health informatics.
Misoprostol use in obstetrics
79cc8efd-4398-42a4-b077-35347bc9e401
10621739
Gynaecology[mh]
Misoprostol is a prostaglandin E1 (PGE1) analogue that has been on the World Health Organization (WHO) List of Essential Medicines since 2005. Brazil has one of the most restrictive regulations in the world on the use of misoprostol, establishing that it is exclusively for hospital use under special control, with sale, purchase, and advertising prohibited by law. Misoprostol is currently the reference drug for pharmacological treatment in cases of induced abortion, both in the first trimester of pregnancy and at more advanced gestational ages. Misoprostol is an effective medication for cervical ripening and labor induction. Misoprostol is an essential drug for the management of postpartum hemorrhage.

The use of misoprostol is recommended in the following situations: legal abortion, uterine evacuation due to embryonic or fetal death, cervical ripening before labor induction (uterine cervix maturation), labor induction, and management of postpartum hemorrhage. Misoprostol 800 mcg vaginally (four 200 mcg pills) is recommended for uterine evacuation in pregnancy loss up to 13 weeks. For cervical preparation before surgical abortion at less than 13 weeks of pregnancy, misoprostol 400 mcg vaginally 3-4 hours before the procedure is recommended. The use of misoprostol alone, dosed according to gestational age, is recommended for uterine evacuation in termination of pregnancy in legal abortion. The use of vaginal misoprostol according to gestational age is recommended for uterine evacuation in case of fetal death: at 13-26 weeks, 200 mcg every 4-6 hours; at 27-28 weeks, 100 mcg every 4-6 hours; and over 28 weeks, 25 mcg every 6 hours (these regimens are restated schematically in the sketch below). Misoprostol at an initial dose of 25 mcg vaginally every 4-6 hours is recommended for cervical ripening and induction of labor with a live fetus in pregnancies over 26 weeks. The use of misoprostol for cervical ripening and induction of labor with a live fetus is not recommended in women with a previous cesarean section because of the greater risk of uterine rupture. Misoprostol is a safe and effective option for women with premature rupture of membranes and an unfavorable uterine cervix, as long as they do not have contraindications to the medication, for example, a previous cesarean section. Rectal misoprostol 800 mcg is recommended as part of the drug treatment of postpartum hemorrhage. In Brazil, misoprostol should be made available to all health services at all levels of care, and it is desirable that outpatient use be allowed, when indicated.

Misoprostol is a synthetic analogue of prostaglandin E1 (PGE1) with gastric secretion inhibitory and mucosal protection properties through the production of bicarbonate and mucus. It was first approved to protect the stomach mucosa in patients using non-steroidal anti-inflammatory drugs. The drug has been widely used in obstetric practice to induce abortion and as an agent to promote cervical ripening in induction of labor at term. The combination of misoprostol and mifepristone is used in medical abortions with a good safety profile in several countries. In Brazil, the commercialization of misoprostol is controlled for use in the hospital environment, in labor induction and legal abortion, or in cases of emptying of the uterus in abortion or retained dead fetus. There is widespread debate about the standardization of dosage in the use of misoprostol.
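Purely as an illustrative restatement of the regimens quoted above for uterine evacuation — not a clinical decision tool — the sketch below encodes the gestational-age-dependent vaginal misoprostol doses; the function name and return structure are hypothetical.

```python
def misoprostol_uterine_evacuation_regimen(gestational_weeks: float) -> dict:
    """Schematic restatement of the regimens quoted in the text above.
    Illustrative only; not clinical guidance or part of any cited protocol."""
    if gestational_weeks < 13:
        # Pregnancy loss up to 13 weeks: 800 mcg vaginally (four 200 mcg pills).
        return {"dose_mcg": 800, "route": "vaginal", "schedule": "see text"}
    if gestational_weeks <= 26:
        return {"dose_mcg": 200, "route": "vaginal", "schedule": "every 4-6 hours"}
    if gestational_weeks <= 28:
        return {"dose_mcg": 100, "route": "vaginal", "schedule": "every 4-6 hours"}
    return {"dose_mcg": 25, "route": "vaginal", "schedule": "every 6 hours"}

# Example: fetal death at 30 weeks maps to the lowest dose band quoted in the text.
print(misoprostol_uterine_evacuation_regimen(30))
# {'dose_mcg': 25, 'route': 'vaginal', 'schedule': 'every 6 hours'}
```

The branching simply mirrors the dose bands in the text; any real implementation would defer to current institutional protocols.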
Higher doses of misoprostol are used for induced and retained abortions, and much lower doses are used for cervical ripening and labor induction in term pregnancies. It is also indicated for the treatment of postpartum hemorrhage (PPH).

Misoprostol is a synthetic analogue of PGE1. It is metabolized in the liver, where it is de-esterified into the active metabolite, misoprostol acid. It binds to uterine smooth muscle cells, increasing the strength and frequency of uterine contractions. In the uterine cervix, it also promotes the breakdown of collagen in the connective tissue and a reduction in cervical tonus. Misoprostol can be used orally, vaginally, sublingually, and rectally. With oral administration, the drug reaches its peak concentration 20-30 minutes after ingestion and remains detectable for up to four hours. Misoprostol administered sublingually is absorbed more quickly and has higher peak concentrations than when administered orally, which tends to cause higher rates of gastrointestinal side effects at any dose. The overall bioavailability of the drug used vaginally is greater, since absorption is slower than by other routes; the maximum plasma peak is reached in 40-60 minutes and remains stable for up to two hours after application. The vaginal route also allows for greater effects on the cervix and uterus. The pharmacokinetics of rectal misoprostol is similar to that of vaginal misoprostol, although with a lower overall bioavailability and a significantly lower peak plasma level. Levels of misoprostol in breast milk peak and decline rapidly, with an average half-life of around one hour. Although it normally appears in colostrum and milk, the low levels detected suggest that only a minimal amount of misoprostol could potentially be ingested by the newborn.

Although other prostaglandins can cause myocardial infarction and bronchospasm, misoprostol is not associated with these effects. Toxic doses have not been well established, and cumulative doses of up to 2,200 mcg in 12 hours are well tolerated without significant adverse effects. A case of non-lethal misoprostol overdose was reported after ingestion of 6,000 mcg, presenting with hyperthermia, rhabdomyolysis, hypoxemia, and metabolic acidosis. One fatal case was reported after ingestion of 12,000 mcg (60 tablets), causing gastrointestinal bleeding with gastric and esophageal necrosis and organ failure.

The most common adverse effects of misoprostol are nausea, vomiting, diarrhea, abdominal pain, chills, shivering, and fever. All these effects are dose-dependent. Gastrointestinal effects may occur in approximately 35% of women and are more common after oral or sublingual administration. Diarrhea is the most common adverse effect and is usually mild and self-limited to one day. Shivering and fever are also transitory effects and may occur in 28% and 7.5%, respectively, of women who used 600 mcg of misoprostol orally. The occurrence of fever and shivering with misoprostol in the active management of the third stage of labor favors the routine use of oxytocin as the drug of choice for the prevention of hemorrhage. Although dose-dependent, uterine hyperstimulation is one of the most frequent adverse effects in labor induction. The risk of uterine hyperstimulation was high with the high doses of misoprostol used in the past. With low doses (≤50 mcg initial dose), the risk is similar to that of dinoprostone, 4-12%, depending on the route and dosage.
In a Cochrane meta-analysis, the risk of hyperstimulation with alteration of fetal heart rate was significantly lower with low-dose oral misoprostol (3.4%) compared to vaginal dinoprostone (7.0%; RR: 0.49; 0.40-0.59). In that same meta-analysis, a lower risk of hyperstimulation with fetal cardiac alteration was also found with oral misoprostol (3.9%), compared to the vaginal route (5.7%; RR: 0.69; 0.53-0.92). Fetal distress, the presence of meconium in the amniotic fluid and uterine rupture may occur as a result of hyperstimulation (hypersystole or tachysystole with or without hypertonia). Uterine rupture is the most feared adverse effect of labor induction, especially in women with a previous uterine scar. Although extremely rare, there are case reports of uterine rupture during first-trimester abortion induction. Most cases of uterine rupture have been described in third-trimester inductions and associated with previous uterine scar or other risk factors. The risk of uterine rupture in women with induction of labor for vaginal delivery after cesarean section with misoprostol is 6-12%. Therefore, this is usually a contraindication for using the drug. It is important to emphasize that misoprostol can be used in the second trimester in women with a previous cesarean section, since most studies point to a low risk of uterine rupture. A meta-analysis identified that this risk is not significantly different when the woman has had a previous cesarean section (0.47%) compared to no uterine scar (0.08%; RR: 2.36; 0.39-14.32), although it is significantly higher with two or more previous cesarean sections (2.5%; RR: 17.55; 3-102.8). The Food and Drug Administration (FDA) classifies misoprostol as a category X drug (evidence of teratogenesis in animals and humans) in the first and second trimesters of pregnancy. Animal studies have shown a significant reduction in fertility with the use of high doses (6.25 to 625 times the maximum human therapeutic dose). In pregnant rabbits, doses of 300 to 1,500 mcg/kg of misoprostol on days 7-19 of embryogenesis have been associated with teratogenic effects. Misoprostol-related malformations were initially described in case reports in humans. These findings were subsequently confirmed in case-control and prospective studies and meta-analyses. Most of these data come from Brazil and involve cases of malformations related to failed abortion with the use of misoprostol. In countries where abortion is legally permitted, patients rarely continue with the pregnancy after a failed abortion with misoprostol. In humans, several malformations are associated with the use of misoprostol in the first trimester of pregnancy, such as: Moebius sequence (involvement of the VI and VII cranial nerves with paralysis of the eyes and facial muscles), arthrogryposis, transverse reduction of extremities and limbs, congenital clubfoot, hydrocephalus, encephalocele, meningocele, hemifacial microsomia, and severe trismus. The risk of any malformation associated with the use of misoprostol is 2.64 (95% confidence interval [CI]: 1.03-6.75) compared to the unexposed group, while the risks for the Moebius sequence and transverse limb reduction were 25.31 (95% CI: 11.11-57.66) and 11.86 (95% CI: 4.86-28.90), respectively. The teratogenic mechanism attributed to fetal malformations and alterations is vascular disruption caused by intense uterine contractions and vaginal bleeding, leading to embryonic hypoperfusion with tissue hypoxia, endothelial cell damage and tissue loss.
Fetal malformations and impairments depend on the developmental stage of the embryo, and the greatest risks are related to the use in the first trimester of pregnancy. It is still controversial if the risk of teratogenicity is dose-dependent, since studies indicate, for example, the association of severe malformations such as hydrocephalus with both low (200 mcg) and high doses (800 mcg) of misoprostol. Hence, it is not possible to provide certainty regarding the absence or severity of alterations after using any dose of misoprostol in the first trimester of pregnancy. Misoprostol is used for uterine evacuation in first trimester pregnancy loss. On ultrasound examination, pregnancy loss can be characterized by the following aspects: presence of gestational sac without yolk sac or embryo and with mean diameter ≥ 25 mm; embryo with crown-rump length greater than or equal to 7 mm without cardiac activity; no embryo with a heartbeat two weeks after an examination demonstrating an empty gestational sac or no embryo with a heartbeat at 11 or more days after an examination demonstrating a gestational sac with yolk sac. In these situations, three approaches are possible: expectant management, mechanical uterine evacuation, or pharmacological evacuation. The most effective and safe way to promote pharmacological uterine evacuation is the combination of mifepristone 200 mg followed by misoprostol (1-2 days later), with an efficacy rate of around 90% versus 70% when using misoprostol alone. Given the unavailability of mifepristone, since its use is not regulated in Brazil, the isolated use of misoprostol is a reasonable alternative. There are several protocols, and the International Federation of Gynecology and Obstetrics (FIGO) and the World Health Organization (WHO) recommend the administration of 800 mcg vaginally, sublingually or buccally (four 200 mcg tablets). FIGO recommends a second dose three hours later. There are no clear definitions regarding the interval and number of complementary doses, if necessary. Longer dosing intervals have the benefit of exposing the patient to a reduced risk of adverse effects. On the other hand, shorter dosing intervals (closer to three hours) may be necessary to generate sufficient uterine activity, particularly if misoprostol is given buccally or sublingually. Although uterine hyperstimulation is rare, particularly in the first trimester, the risk may increase with shorter dosing intervals. In pregnancies of less than 12 weeks, 1-3 doses of misoprostol are usually sufficient to expel the uterine contents. The main advantages of using misoprostol include avoiding uterine perforation and formation of synechiae, reduced risks of sequelae inherent to the mechanical dilation of the cervix, and no need for anesthetic procedure. Disadvantages include a longer resolution time (sometimes days), higher prevalence of some symptoms such as cramps, bleeding, nausea, fever and chills, occasional need for surgical complementation and blood transfusion, and the woman's anxiety because of the waiting. When opting for mechanical evacuation of the uterus, misoprostol can be used to prepare the cervix, avoiding or facilitating instrumental dilation before aspiration or curettage. The recommended dose is 400 mcg vaginally 3-4 hours before the procedure. If available, the sublingual route can be used in a shorter time interval (one hour). Misoprostol 800 mcg vaginally (four 200 mcg tablets) is recommended for uterine evacuation in pregnancy loss up to 13 weeks. 
Brazil has one of the most restrictive regulations related to induced abortion - induced abortion is only legally permitted in cases of pregnancy resulting from rape, risk to the woman's life and fetal anencephaly – and to the use of misoprostol in the world. In a study of countries in Africa, Asia and Latin America, Brazil was close only to Vietnam among those with greater restrictions on access to medical abortion in the world. Brazil is the only South American country where misoprostol is not available directly to women, whether in health services or for sale in pharmacies. Contrary to what one might imagine, these barriers fail to reduce the use of misoprostol by women, since half of illegal abortions in the country are performed with this drug. The regimen of use of misoprostol alone recommended for the induction of abortion in cases provided for by law, is shown in . The drug is used until expulsion of products of conception. In the first trimester, three doses of misoprostol are usually sufficient to complete the treatment. In cervical preparation prior to surgical abortion in pregnancies over 12-14 weeks, the use of misoprostol 400 mcg (vaginally or orally) 2-3 hours before surgical treatment is routinely recommended. If sublingual route is used, the time until the surgical procedure can be reduced to 1-2 hours. Although cervical preparation should not be used routinely in pregnancies before 12 weeks, it can be beneficial in specific cases such as women at increased risk of complications during cervical dilation, for example, those with cervical anomalies or a history of cervical surgery. Safety and efficacy data of the misoprostol treatment regimen alone for induced abortion were published in a randomized clinical trial of 2,066 women who received three doses of misoprostol 800 mcg. In that study, only 0.04% of women had vaginal bleeding requiring return to the hospital. There were no serious adverse events among study participants. The WHO cites the possibility of the combined use of letrozole and misoprostol as safe and effective in terminating pregnancies of less than 13 weeks in scenarios where mifepristone is not available (letrozole 10 mg orally each day for three days followed by misoprostol 800 mcg sublingually on the fourth day). The use of misoprostol on an outpatient basis is considered effective and safe for the treatment of induced abortion, especially in the first 12 weeks of pregnancy. The use of misoprostol during this period has minimal adverse effects, such as diarrhea, vomiting, nausea and fever, which can be easily treated by professionals outside the hospital setting. Outpatient use can reduce costs for both the health system and the hospital due to the waiver of hospitalization, as well as for women, since they do not need to remain in hospitals and in most cases, can receive adequate care at health units close to their homes. In cases of induction of labor, the use of misoprostol in a hospital environment is recommended. When the diagnosis of fetal death is established, the health professional assisting this pregnant woman and her family must always be able to answer the posed questions with empathy and embracement, even if there are no answers to all. A systematic review including 14 controlled and randomized studies that evaluated the use of misoprostol in fetal death in the second and third trimesters found 100% effectiveness in uterine evacuation within 48 hours. 
Randomized studies support the use of misoprostol as a first-line agent in the induction of labor in fetal death at 20-24 weeks, including in patients with a history of previous cesarean section. Several intervals between doses, dosages and routes of administration are described, but none showed clear evidence of superiority. The regimen of misoprostol recommended for uterine evacuation in the case of fetal death at 14-24 weeks of gestational age is 400 mcg vaginally every 4-6 hours. In cases of fetal death after more than 24 weeks, labor induction depends on the conditions of cervical maturation. In patients with a favorable cervix (Bishop index ≥ 6), labor induction can be started with oxytocin without the use of misoprostol for previous cervical ripening. In patients with an unfavorable cervix and without previous uterine scar, misoprostol is the agent of choice for preparing the cervix and inducing labor. The following regimens are recommended: 25-26 weeks: misoprostol 400 mcg vaginally or sublingually every 4-6 hours; 27-28 weeks: misoprostol 100 mcg vaginally or sublingually every 4-6 hours; Over 28 weeks: misoprostol 25 mcg vaginally every six hours. In patients with previous segmental scarring and unfavorable cervix at 24-28 weeks, cervical preparation can be performed with a mechanical method (transcervical balloon) followed by the use of oxytocin. The use of misoprostol seems to be an acceptable alternative at this gestational age, since the risk of uterine rupture is low. In a review study in which misoprostol was used at this gestational age, the risk of uterine rupture was 0.28% (95% CI: 0.08-1.00) in patients with a previous cesarean section versus 0.04% (95% CI: 0.01-0.20) in patients without a previous cesarean section. However, at 24-26 weeks, low doses of misoprostol (100 mcg to 200 mcg per dose) may be suggested. In pregnancies over 28 weeks, cervical preparation for labor induction should be performed in accordance with recommendations for parturient women with a live fetus. In the labor induction process, when the situation of the uterine cervix is unfavorable, a maturation process is recommended to shorten the duration of induction and increase the chance of vaginal delivery. When the Bishop score is less than 6, the cervix is generally considered unfavorable, and mechanical and/or pharmacological methods can be used in this process. Prostaglandins, including misoprostol, are contraindicated for cervical ripening or induction of labor in full-term pregnancies with previous cesarean section or other major uterine surgery due to the association with a higher risk of uterine rupture. Pre-existing regular uterine activity is a relative contraindication to the use of misoprostol, as it can lead to excessive uterine activity. Delaying or avoiding administration should be considered if the patient has two or more painful contractions within 10 minutes, especially in patients who have already received at least one dose of prostaglandin. In Brazil, misoprostol for vaginal use in labor induction is available in tablets containing 25 mcg of the drug. The 50 mcg dose is more effective than the 25 mcg dose, but leads to higher rates of tachysystole, cesarean delivery due to fetal compromise, admission to neonatal intensive care units, and meconium elimination. The interval between doses can vary between 3-6 hours. The number of doses required for cervical maturation and/or effective labor varies. If necessary, oxytocin can be started four hours after the final dose of misoprostol. 
There are no definitions regarding the total limit of doses or the time of maturation and/or labor induction. In some countries, a pessary with controlled release of misoprostol (200 mcg in 24 hours) is available. Comparative studies with the dinoprostone pessary have shown a significantly shorter mean time to vaginal delivery and a greater chance of tachysystole. A 2021 meta-analysis supported the use of low doses of oral misoprostol for labor induction and suggested that an initial dose of 25 mcg can offer a good balance between efficacy and safety. Other routes for the use of misoprostol in labor induction, including buccal and sublingual administration, have been less studied. Small trials suggest similar or inferior results to those of vaginal or oral administration. In pregnancies over 26 weeks, the use of misoprostol at an initial dose of 25 mcg vaginally every 4-6 hours is recommended for cervical maturation prior to labor induction. Women planning a vaginal birth after a previous caesarean section (trial of labor after cesarean – TOLAC) may need labor induction. There are two concerns: reduced chances of vaginal birth after caesarean section (VBAC) and increased risk of uterine rupture. Having a previous vaginal delivery and a favorable cervix are the main predictors of induction resulting in VBAC. Induction itself does not reduce the chances of VBAC when compared with expectant management. The major risk is uterine rupture related to induction. Regardless of the method used for induction, women with a previous cesarean section and induced labor are at greater risk of uterine rupture than those in labor with spontaneous delivery or expectant management. The frequency of uterine rupture in women at full-term who had labor induced was almost twice as high as the frequency in women in whom labor began spontaneously (1.5% versus 0.8%). The factors associated with an increased risk of rupture during induced TOLAC include: No previous vaginal delivery – for example, in one study, the risks of rupture during induced TOLAC in women without versus with a previous vaginal delivery were 1.5% and 0.6%, respectively; Use of prostaglandins – induction with prostaglandins appears to be associated with a greater risk of uterine rupture than induction with oxytocin or cervical ripening with mechanical methods followed by administration of oxytocin. Risk of rupture with prostaglandin use – Data from large randomized trials and from good quality observational studies on the effects of prostaglandins alone or in combination with other agents for cervical ripening in TOLAC are not available. Much of the data on prostaglandin use in women with a previous caesarean section has been derived from observational studies in which misoprostol (PGE1) was used. Reports on the use of other prostaglandins, such as prostaglandin E2, are limited by their small size, the co-administration of other agents and the lack of stratification by previous vaginal delivery. Unspecified prostaglandin – Concern over the use of prostaglandins arose following the publication of a large population-based retrospective cohort study that analyzed data from 20,095 primiparous women who delivered after a single previous cesarean section. In that study, the rate of uterine rupture was similar for women in spontaneous labor and those induced without the use of prostaglandin, but significantly higher among women induced with prostaglandin (type not available).
The specific uterine rupture rate by category was: Repeat cesarean sections without labor: 1.6 ruptures per 1,000 planned repeat cesareans; Spontaneous labor: 5.2 ruptures per 1,000 spontaneous deliveries; Induced labor (without prostaglandins): 7.7 ruptures per 1,000 labors induced without the use of prostaglandins; Induced labor (with prostaglandins): 24.5 ruptures per 1,000 labors induced using prostaglandins. Compared to repeat cesarean delivery, the relative risk of rupture with the use of prostaglandins was 15.6 (95% CI: 8.1-30.0). However, despite the very large number of cases, the information in this study is from a database and individual reviews of medical records were not performed to check other medications administered. The risk of uterine rupture reported in this retrospective study was lower in another large prospective study. In that study, the rate of uterine rupture among patients induced with prostaglandin with or without oxytocin was lower – 14 per 1,000 induced deliveries –, although still considerably high. Specifically on misoprostol (PGE1), a randomized trial on the use of misoprostol for cervical ripening in labor induction in women with previous cesarean sections was stopped early because of safety concerns due to uterine rupture. This study and several case reports have led some researchers to conclude that misoprostol is associated with a greater risk of uterine rupture than other prostaglandins and therefore should not be used in women planning a TOLAC. The positions of Gynecology and Obstetrics Societies worldwide are: American College of Obstetricians and Gynecologists (ACOG –United States) – advises that misoprostol should not be used for cervical ripening or labor induction in women at term with any previous uterine incision and does not address the use of prostaglandin E2; Society of Obstetricians and Gynecologists of Canada (SOGC – Canada) – has the same position regarding the use of misoprostol, but allows the use of prostaglandin E2 (dinoprostone) in some circumstances and after appropriate advice; National Institute for Health and Care Excellence (United Kingdom) – concluded that if childbirth is indicated, women who have had a previous cesarean section can receive labor induction with vaginal prostaglandin E2, but do not mention misoprostol. In conclusion, the use of misoprostol in women with previous cesarean is not recommended given the higher risk of uterine rupture. Note that mechanical methods are available, effective and safe. Premature rupture of membranes (PROM) is one of the most common complications of term and preterm pregnancies, but there is a gap in knowledge about how management affects the cesarean rate. As gestational age at delivery is the critical factor influencing perinatal outcome, expectant management is generally adopted when far from term. In PROM at term, the risk of maternal and fetal infectious morbidity increases with longer duration of membrane rupture. Therefore, expectant management should be brief, with instructions for induction of labor. Meta-analyses conclude that misoprostol is an effective and safe agent for inducing labor in women with PROM at term. Compared to oxytocin, the risk of contraction abnormalities and the rate of maternal and neonatal complications were similar between the two groups. Misoprostol 25 mcg should be considered as the starting dose for cervical ripening and labor induction in women with PROM. The frequency of administration should not exceed 3-6 hours. 
Furthermore, oxytocin should not be administered less than 4 hours after the last dose of misoprostol. Misoprostol at higher doses (50 mcg every six hours) may be appropriate in some situations, although higher doses may be associated with an increased risk of complications, including uterine tachysystole with fetal heart rate decelerations. A Cochrane Review suggests the immediate induction of labor in patients with PROM at term. Compared with expectant management, induction of labor is associated with a reduction in maternal and possibly neonatal infection and lower treatment costs, without an increase in cesarean sections. In conclusion, the use of misoprostol is recommended as a safe and effective option for women with PROM and unfavorable cervix, provided they do not have contraindications for the use of this medication, such as, for example, previous cesarean section. Postpartum hemorrhage affects around 2% of all patients, and in only 25% of cases the risk factors are pronounced. The obstetrician must perform prophylaxis in 100% of cases and be aware of the occurrence of PPH, even if drug prophylaxis is performed. There is strong evidence that the association of uterotonics prescribed in the immediate postoperative period of childbirth reduces blood loss greater than 500 mL: ergometrine plus oxytocin (RR: 0.70; 95% CI: 0.59-0.84) and misoprostol plus oxytocin (RR: 0.70; 95% CI: 0.58-0.86) and reduces the need for blood products (RR: 0.51; 95% CI: 0.37-0.70). This is not only a result of the combination of the strength of the two drugs, but also because oxytocin is thermolabile and it is difficult to guarantee a cold chain throughout the medication production, transportation and dispensing route. However, the association of two uterotonics increases the occurrence of side events, mainly vomiting (RR: 2.11; 95% CI: 1.39-3.18). Therefore, the use of two uterotonics is recommended for patients at high risk of PPH, always bearing in mind the contraindication of ergometrine for hypertensive/pre-eclampsia patients. The following uterotonics are recommended for the prophylaxis of PPH: Oxytocin: In post-vaginal delivery: single dose of 10 IU intramuscularly right after birth; In cesarean section: 5 IU in slow intravenous infusion in three minutes and maintenance solution (20 IU of oxytocin in 500 ml of 0.9% saline solution intravenously at 125 ml/h for 4-12 hours); Misoprostol: single dose of 600 mcg rectally; Ergometrine: single dose of 0.2 mg intramuscularly. For the drug treatment of PPH, the use of misoprostol 800 mcg rectally is recommended. It is important to remember that since the onset of action of rectal misoprostol is slower than that of other uterotonics, it should be used as an adjuvant to treatment with oxytocin. Misoprostol should not be used in isolation, maintaining uterine massage until the onset of its effect, which may take 15-20 minutes. Always consider the use of tranexamic acid 1 g intravenously over 10 minutes, with the possibility of repeating the 1 g dose in 30 minutes if bleeding persists. Circular letter number 182/2021 of the Office of the President of the Brazilian Federal Council of Medicine, expressed the impossibility of using misoprostol outside the hospital setting. 
The letter highlights the Ordinance of the Brazilian National Health Surveillance Agency (Anvisa) number 344/98, of the Secretariat for Health Surveillance of the Ministry of Health, according to which misoprostol is on list C1 that includes substances subject to special control (prescription in two copies), with the addendum that the purchase and use of medication containing the substance misoprostol will only be allowed in hospitals duly registered with the Sanitary Authority. In its guide, the WHO (World Health Organization, 2018) recognizes that the home use of misoprostol is a safe and effective option for women. In addition, the drug was added to the WHO list of essential drugs in 2019, at the same time that the need for in-person medical supervision to administer pharmacological abortion was withdrawn. In Brazil, Anvisa ordinances and resolutions and manifestations of the Federal Council of Medicine currently establish that misoprostol has exclusive hospital use with special control. Compared to other countries in the world and to WHO recommendations, there is excessive difficulty in accessing and releasing the use of misoprostol in Brazil. Given the existence of a robust body of evidence, there are no scientific justifications for imposing other restrictions on misoprostol, in addition to those related to special control drugs, i.e. prescription in two copies with retention of one copy in the pharmacy, and the possibility of identifying who prescribed the induced abortion treatment. In obstetric practice, misoprostol has been widely used in legal abortion, uterine emptying due to embryonic or fetal death, cervical ripening and labor induction, and management of PPH. Contrary to the accumulated scientific evidence, Brazil has one of the most restrictive regulations in the world related to the use of misoprostol. The great difficulty in acquiring, storing and dispensing the medication imposed by Ordinance No. 344/1998 of Anvisa, still in force, contributes to denying the right to safer outpatient treatments for women who need it. These restrictions also hinder the availability of this medication, essential and mandatory, in obstetric care services. 
National Commission Specialized in Childbirth, Puerperium and Abortion Care of the Brazilian Federation of Gynecology and Obstetrics Associations (Febrasgo)
President: Alberto Trapani Júnior
Vice-President: Alessandra Cristina Marcolin
Secretary: Sheila Koettker Silveira
Members: Elias Ferreira de Melo Junior Liduina de Albuquerque Rocha e Sousa Marcia Maria Auxiliadora de Aquino Mirela Foresti Jiménez Ricardo Porto Tedesco Tenilson Amaral Oliveira
National Commission Specialized in Antenatal Care of the Brazilian Federation of Gynecology and Obstetrics Associations (Febrasgo)
President: Fernanda Garanhani de Castro Surita
Vice-President: Lílian de Paiva Rodrigues Hsu
Secretary: Adriana Gomes Luz
Members: Jorge Oliveira Vaz Eliana Martorano Amaral Eugenia Glaucy Moura Ferreira Francisco Herlanio Costa Carvalho Joeline Maria Cleto Cerqueira Jose Meirelles Filho Luciana Silva dos Anjos França Marianna Facchinetti Brock Mary Uchiyama Nakamura Patricia Goncalves Teixeira Renato Ajeje Sergio Hecker Luz
National Commission Specialized in Gestational Trophoblastic Disease of the Brazilian Federation of Gynecology and Obstetrics Associations (Febrasgo)
President: Antonio Rodrigues Braga Neto
Vice-President: José Mauro Madi
Secretary: Mauricio Guilherme Campos Viggiano
Members: Bruno Maurizio Grillo Christiani Bisinoto de Sousa Claudio Sergio Medeiros Paiva Elaine Azevedo Soares Leal Elza Maria Hartmann Uberti Fabiana Rebelo Pereira Costa Izildinha Maesta Jose Arimatea dos Santos Junior Maria do Carmo Lopes de Melo Rita de Cassia Alves Ferreira Silva Sue Yazaki Sun Tiago Pedromonico Arrym
National Commission Specialized in High Risk Pregnancy of the Brazilian Federation of Gynecology and Obstetrics Associations (Febrasgo)
President: Rosiane Mattar
Vice-President: Alberto Carlos Moreno Zaconeta
Secretary: Mylene Martins Lavado
Members: Arlley Cleverson Belo da Silva Carlos Alberto Maganha Elton Carlos Ferreira Felipe Favorette Campanharo Inessa Beraldo de Andrade Bonomi Janete Vettorazzi Maria Rita de Figueiredo Lemos Bortolotto Fernanda Santos Grossi Renato Teixeira Souza Sara Toassa Gomes Solha Vera Therezinha Medeiros Borges
National Commission Specialized in Fetal Medicine of the Brazilian Federation of Gynecology and Obstetrics Associations (Febrasgo)
President: Mario Henrique Burlacchini de Carvalho
Vice-President: José Antonio de Azevedo Magalhães
Secretary: Roseli Mieko Yamamoto Nomura
Members: Alberto Borges Peixoto Carlos Henrique Mascarenhas Silva Carolina Leite Drummond Edward Araujo Júnior Fernando Artur Carvalho Bastos Guilherme Loureiro Fernandes Jair Roberto da Silva Braga Jorge Fonte de Rezende Filho Marcello Braga Viggiano Maria de Lourdes Brizot Nadia Stella Viegas dos Reis Reginaldo Antônio de Oliveira Freitas Júnior Rodrigo Ruano
National Commission Specialized in Maternal Mortality of the Brazilian Federation of Gynecology and Obstetrics Associations (Febrasgo)
President: Marcos Nakamura Pereira
Vice-President: Rodolfo de Carvalho Pacagnella
Secretary: Melania Maria Ramos de Amorim
Members: Acacia Maria Lourenço Francisco Nasr Douglas Bernal Tiago Elvira Maria Mafaldo Soares Fatima Cristina Cunha Penso Ida Perea Monteiro João Paulo Dias de Souza Lucila Nagata Maria do Carmo Leal Monica Almeida Neri Monica Iassanã dos Reis Jacinta Pereira Matias Penha Maria Mendes da Rocha
National Commission Specialized in Obstetric Emergencies of the Brazilian Federation of Gynecology and Obstetrics Associations (Febrasgo)
President: Alvaro Luiz Lage Alves
Vice-President: Gabriel Costa Osanan
Secretary: Samira El Maerrawi Tebecherane Haddad
Members: Adriana Amorim Francisco Alexandre Massao Nozaki Brena Carvalho Pinto de Melo Breno José Acauan Filho Carla Betina Andreucci Polido Eduardo Cordioli Frederico Jose Amedee Peret Gilberto Nagahama Laises Braga Vieira Lucas Barbosa da Silva Marcelo Guimarães Rodrigues Rodrigo Dias Nunes Roxana Knobel
National Commission Specialized in Sexual Violence and Pregnancy Interruption Provided for by Law of the Brazilian Federation of Gynecology and Obstetrics Associations (Febrasgo)
President: Robinson Dias de Medeiros
Vice-President: Cristião Fernando Rosas
Secretary: Helena Borges Martins da Silva Paro
Members: Aline Veras Morais Brilhante Anibal Eusébio Faúndes Latham Débora Fernandes Britto Edison Luiz Almeida Tizzot Isabelle Cantidio Fernandes Diogenes Kenia Zimmerer Vieira Michele Lopes Pedrosa Osmar Ribeiro Colas Rivaldo Mendes de Albuquerque Rosires Pereira de Andrade Suely de Souza Resende Zelia Maria Campos
National Commission Specialized in Professional Defense and Appreciation of the Brazilian Federation of Gynecology and Obstetrics Associations (Febrasgo)
President: Maria Celeste Osorio Wender
Members: Carlos Henrique Mascarenhas Silva Etelvino de Souza Trindade Henrique Zacharias Borges Filho Juvenal Barreto Borriello de Andrade Lia Cruz Vaz da Costa Damásio Maria Rita de Souza Mesquita Mirela Foresti Jiménez Sergio Hofmeister de Almeida Martins Costa Celia Regina da Silva Aljerry Dias do Rego Rosires Pereira de Andrade Maria Auxiliadora Budib Carlos Alberto Sa Marques Hilka Flavia Barra do Espirito Santo Alves Pereira
The Correlation of Bile Duct Dilatation in Postmortem Computed Tomography of Lethal Intoxication Cases for Different Drug Types—A Retrospective Study
bb10ac02-f7d2-450c-8407-d49964279b02
11587109
Forensic Medicine[mh]
Morphine is known to constrict the sphincter of Oddi (SO), which increases biliary pressure within the common bile duct. This effect is utilized in cholangiopancreatography in Magnetic Resonance Imaging (MRI), where intravenous morphine is administered prior to the examination to improve image quality by distention of the biliary and pancreatic ducts. In clinical radiology, in general, this effect of morphine on the biliary duct system can be helpful for the evaluation of patients with primary sclerosing cholangitis or malignant biliary and pancreatic neoplasms. This effect of morphine as an i.v. medication usually lasts up to a couple of minutes. The common bile duct (CBD) arises from the union of the common hepatic duct (consisting of the right and left hepatic ducts) with the cystic duct. The CBD is the main duct of the liver and gallbladder and opens into the lumen of the duodenum at the major duodenal papilla. In about 80% of cases, this occurs together with the pancreatic duct, which carries the secretion of the exocrine pancreas. The emptying of the bile into the duodenum is controlled by the SO, which is located at the duodenal junction. It consists of smooth muscle surrounding the bile duct. A diameter of the common bile duct of up to 5–9 mm in ultrasound examinations and of up to 6–8 mm in computed tomography (CT) is considered normal. Thus far, there are no defined normal ranges of the CBD in postmortem CT. Concerning the influence of other drugs on the SO, it has been shown in an animal model that acetylcholine and prostigmin as well as alcohol increase the electrical activity in the sphincter, whereas atropine showed a depressant effect and amyl nitrite reduced the electrical activity of the sphincter. Also, in the human SO, contraction and pressure declined after the administration of anisodamine, atropine, or scopolamine butylbromide. So far, we do not know if this effect of the opioids or other drugs on the SO persists after death. If that were the case, a dilatation of the CBD (caused by a constricted SO) might, for example, be used as an indicator for possible intoxication (in the absence of other obvious causes and depending on the forensic context). The common unspecific signs of an intoxication recognized during autopsy, such as edema of the brain and the lung as well as a distension of the urinary bladder, can be reproduced in postmortem CT. Therefore, we want to investigate (I) whether, in autopsy-proven lethal intoxications with opiates/opioids, a dilatation of the CBD is still visible in postmortem computed tomography, and (II) if a dilatation of the CBD might also be measurable for other substance groups (e.g., stimulants, hypnotics, antipsychotics, etc.). 2.1. Study Group We searched the archive of the Institute of Forensic Medicine Zurich for all autopsy cases for a three-year period from January 2016 until December 2018. Toxicological information was taken from the autopsy and the associated toxicology report, resulting in 1410 cases. In all cases, an autopsy was ordered by the prosecutor's office. Before the autopsy, a postmortem CT scan was conducted in each case, which was interpreted by a trained forensic pathologist or a radiologist. The toxicological examinations were carried out upon separate request from the authorities. The results of the examinations were stored in the institute's database. We excluded all cases without toxicological analysis, with non-lethal intoxication, or with an age < 18 years.
Also, cases were excluded where a disease was diagnosed that could influence the diameter of the CBD (e.g., pancreatic cancer and all visible alterations, as well as cases known from medical history, with a possible influence on the biliary system). Furthermore, all corpses were excluded that showed putrefaction gas in the biliary system or other signs of decomposition. In forensic routine casework, the exact time of death is typically unknown. Usually, we only have an estimated time of death to work with. Therefore, we decided to exclude all corpses that either exhibited signs of decomposition in the external examination or in postmortem computed tomography. Overall, we included 125 cases. Of the 125 individuals, 81 were male and 44 were female. The mean age was 45 years, the median age was 43 years, and the age range was 18–99 years. As a control group, we included all cases that had a negative toxicology report (no detection of any toxicological substance such as alcohol, medicines, or drugs) ( n = 88; males, n = 60; females, n = 28; mean age = 48 years; median age = 48 years; and age range = 18–88 years). The control group consisted of 88 cases with a negative toxicology report. Among these cases, 60 were male and 28 were female. The males were aged between 18 and 83 years (median age: 47 years) and the females between 19 and 88 years (median age: 53 years). 2.2. Toxicological Analysis Analysis was routinely performed directly following the autopsy. First, urine was screened by a cloned enzyme donor immunoassay (CEDIA ® ) for opiates, cocaine, cannabis, amphetamines, methadone, barbiturates, benzodiazepines, and lysergic acid diethylamide (LSD), followed by untargeted liquid chromatography tandem mass spectrometry (LC–MS/MS) ion trap screening after dilution and filtration (Bruker amazon ® , Maurer/Wissenbach/Weber database ). Peripheral blood was analyzed for ethanol and other volatile compounds by headspace gas chromatography flame ionization detection (HS-GC-FID). Targeted quantitative analysis of the toxicologically relevant substances was performed using validated LC–MS/MS methods. 2.3. Substances Intoxicating substances were classified into the following substance categories : opiates (heroin/morphine, codeine), opioids (methadone, fentanyl, tramadol), stimulants (cocaine, amphetamine and derivatives), hypnotics (benzodiazepines, Z-drugs, barbiturates, antihistamines), antipsychotics (antidepressants, neuroleptics), gasses (CO, helium), and others (chloroquine, acetaminophen, cyanide). 2.4. Imaging and Readout All our cases had undergone postmortem computed tomography (PMCT) before autopsy. PMCT was performed with a 128-slice scanner (SOMATOM Definition Flash, Siemens Healthineers, Erlangen, Germany) with bodies in the supine position using automatic dose modulation (CARE Dose 4D™, Siemens Healthineers, Erlangen, Germany). The imaging parameters included tube voltage, 120 kVp, and slice collimation, 128 × 0.6 mm. PMCT data were reviewed on a Syngo system imaging software VB40 for multimodality reading (Syngo. via, Siemens Healthineers, Erlangen, Germany). A medical student under the supervision of both a board-certified radiologist and a forensic pathologist performed a blinded read-out for these PMCT data (soft kernel reconstruction B30s, soft tissue window), namely, a measurement of the CBD diameter and an evaluation of the gallbladder (present versus post-cholecystectomy) and summarizing the data in an excel sheet . 
Additionally, the dataset was assessed for the presence of decomposition gas. Cases in which decomposition gas was found in the vascular system or organs were excluded. 2.5. Statistics Continuous variables were examined for normality by visual analysis. The distributions of age and sex were listed as minimal, maximal, and median values. For testing the significance between the study and control groups, the Mann–Whitney U test was performed using SPSS (Version 27). A p-value < 0.001 was considered statistically significant. The r-value was calculated manually and interpreted according to Cohen (see below). The correlations were examined by using crosstables (Pearson and eta correlation) and interpreted according to Cohen. Absolute values of the correlation coefficient r = 0.1–0.3 were regarded as a weak, r = 0.3–0.5 as a moderate, and r = 0.5–1.0 as a strong correlation. 2.6. Ethics This research project does not fall within the scope of the Human Research Act (HRA). Therefore, authorization from the ethics committee is not required (KEK ZH-Nr. 15-0686).
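The group comparison and effect size described in the Statistics subsection can be reproduced outside SPSS. The following is a minimal sketch in Python; the example diameter values, the normal-approximation z-statistic, and the r = z/√N formula are illustrative assumptions and not the authors' original SPSS workflow.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def mann_whitney_with_effect_size(study, control):
    """Compare two groups of CBD diameters (mm): Mann-Whitney U, p-value, effect size r."""
    study, control = np.asarray(study, float), np.asarray(control, float)
    n1, n2 = len(study), len(control)

    # Two-sided Mann-Whitney U test.
    u_stat, p_value = mannwhitneyu(study, control, alternative="two-sided")

    # Effect size r = |z| / sqrt(N), using the normal approximation of U
    # (assumption: a plausible way to obtain the "manually calculated" r).
    mu_u = n1 * n2 / 2.0
    sigma_u = np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (u_stat - mu_u) / sigma_u
    r = abs(z) / np.sqrt(n1 + n2)

    # Interpretation after Cohen, as used in the paper: 0.1-0.3 weak, 0.3-0.5 moderate, >0.5 strong.
    strength = "weak" if r < 0.3 else "moderate" if r < 0.5 else "strong"
    return u_stat, p_value, r, strength

# Hypothetical example values, not the study data:
study_group = [5, 6, 4, 7, 9, 5, 8, 6, 10, 4]
control_group = [4, 3, 4, 5, 3, 4, 5, 2, 4, 3]
print(mann_whitney_with_effect_size(study_group, control_group))
```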
In 62% of the cases with lethal intoxications, one substance alone or several substances from the same substance group were detected. The remaining 38% of the cases showed a mixture of substances from different substance groups. To further simplify the analysis, the substance most likely to be fatal was considered the leading intoxicant in those mixed intoxication cases and defined the assignment of the case to the corresponding substance group. The other substances of the case were considered as having less relevance for death.
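To illustrate how a mixed-intoxication case is assigned to one substance group by its leading intoxicant, here is a minimal Python sketch. The category mapping follows the Substances subsection; the function name and the idea of passing the leading substance as an input are assumptions, since the paper determines the leading intoxicant by forensic judgment rather than by an algorithm.

```python
# Substance categories as defined in the Methods (Substances) section.
CATEGORIES = {
    "opiates":        {"heroin", "morphine", "codeine"},
    "opioids":        {"methadone", "fentanyl", "tramadol"},
    "stimulants":     {"cocaine", "amphetamine", "amphetamine derivatives"},
    "hypnotics":      {"benzodiazepines", "z-drugs", "barbiturates", "antihistamines"},
    "antipsychotics": {"antidepressants", "neuroleptics"},
    "gasses":         {"co", "helium"},
    "others":         {"chloroquine", "acetaminophen", "cyanide"},
}

def assign_case_group(leading_substance: str) -> str:
    """Return the substance group of the leading (most likely fatal) intoxicant."""
    key = leading_substance.strip().lower()
    for group, members in CATEGORIES.items():
        if key in members:
            return group
    return "others"

# Hypothetical mixed case in which methadone was judged the fatal agent:
print(assign_case_group("methadone"))  # -> "opioids"
```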
Since the intoxication subgroups “gasses” and “others” contained only five and six cases, respectively, they were excluded from the statistical analysis. The distribution of CBD diameters in both groups is shown in . The diameter ranged between 2 and 11 mm (median diameter: 5 mm) in the study group ( a) and between 2 and 8 mm (median diameter: 4 mm) in the control group ( b). In the control group, the diameter of the CBD ranged in males between 2 and 8 mm (median diameter: 4 mm) and in females between 2 and 7 mm (median diameter: 3 mm). A diameter of >8 mm was considered pathological, similar to the known clinical reference values . The Mann–Whitney U test showed a statistically significant difference between the CBD diameters in the intoxication group overall, when compared to the CBD diameters in the control group ( p < 0.001; r = 0.23). There was a weak correlation between the CBD diameter and sex (study group, r = 0.066, p = 0.462; control group, r = 0.244, p = 0.022), with slightly larger CBD diameters observed in males . We also found a weak correlation between age and the CBD diameter (study group, r = 0.28, p = 0.754; control group, r = 0.11, p = 0.916). For both subgroups of “opiates” and “opioids”, there was a strong statistically significant difference between the CBD diameter (being wider) in those groups compared to that in the control group (both p = 0.001) . Diameters > 8 mm were found in both subgroups, whereas only one CBD with a diameter of >8 mm was found in the psychotropic drugs subgroup, and none in the control group. For the other three subgroups, there was no statistically significant difference between the CBD diameter in the intoxication subgroups compared with that in the control group (stimulants, p = 0.462, r = 0.039; hypnotics, p = 0.244, r = 0.161; psychotropic drugs, p = 0.142, r = 0.299) . The constricting effect of opioids or opiates on the SO is well known in a clinical setting. We hypothesized that, in autopsy-proven lethal intoxications with opiates/opioids, a dilatation of the CBD is still visible in postmortem computed tomography and (II) a dilatation of the CBD might also be measurable for other substance groups (e.g., stimulants, hypnotics, antipsychotics, etc.). The first aim of our study was to investigate whether, in autopsy-proven lethal intoxications with opiates or opioids, a dilatation of the CBD is still visible in postmortem computed tomography. We could indeed show that a dilatation of the CBD (>8 mm) was significantly correlated with lethal intoxication in opioid or opiate cases, which is already known in clinical diagnostics . This means that the effect of opioids and opiates persists after death. Therefore, a dilated CBD might be used as an indicator for intoxication in postmortem investigations. Certainly, this finding should be regarded carefully and only in the context of the case circumstances. Nevertheless, it might act as an additional indication for a possible lethal intoxication, alongside other already established signs of intoxication, such as a full urinary bladder, brain edema, or lung edema . The correlation between age/sex and the dilatation of the CBD was weak, so it should not be taken into account. A limitation of our study is that we did not regard the possible influence of the time of death interval on the outcome. It might be interesting to repeat our study on defined time of death intervals to evaluate how long exactly the dilatation of the CBD persists postmortem. 
Our finding of a dilated CBD in lethal opioid or opiate intoxications might be even more pronounced in only recently deceased bodies. The second aim of our study was to evaluate if a dilatation of the CBD might also be seen in lethal intoxication for other subgroups of drugs. In subgroups with large enough case numbers (such as "stimulants", "hypnotics", and "antipsychotics"), we found no correlation between lethal intoxication and the dilatation of the CBD. As a limitation for this subgroup analysis, the case number for some of the subgroups (for example, "gasses") was unfortunately too low for statistical analysis. Further studies with larger case numbers are needed. Normal ranges for the CBD are not known for postmortem CT. In clinical patients, a CBD diameter range of up to 6–8 mm in computed tomography (CT) is specified as normal. We found a statistically significant association between a CBD diameter of >8 mm and opioid/opiate intoxication. In our study, we observed diameters of 2–8 mm in the control group (median diameter of 3 mm in females and 4 mm in males). Although the control group consisted of only 88 individuals, it seems that the CBD diameter is smaller in corpses. A dilated common bile duct in postmortem computed tomography might therefore be used as an indication of a lethal opioid or opiate intoxication, given the statistically significant association between a CBD of >8 mm and opioid/opiate intoxication, but only in specific case circumstances or together with other indicative findings in a postmortem investigation. As a singular finding, it should be interpreted with great caution because a normal diameter of the CBD does not exclude intoxication.
Experimental Study on Noise-Reduced Propagation Characteristics of the Parametric Acoustic Array Field in a Neck Phantom
10a3924b-69a5-4e7b-a06f-f31e4255323f
11820233
Surgical Procedures, Operative[mh]
The electrolarynx (EL) is a critical device for voice reconstruction in patients undergoing total laryngectomy, a surgical procedure that removes the larynx . This device compensates for the loss of vocal cords by using an external sound source, offering benefits like ease of use and the ability to produce continuous speech . Research shows that more than half of laryngectomy patients rely on EL as their primary means of communication within five years post-surgery . The EL functions by placing a vibrating membrane against the neck, allowing the sound produced to travel through surrounding tissues to the pharynx, thereby creating a substitute voice. However, a significant drawback is the mechanical noise (often referred to as “radiation noise”) generated by the device, which can reach 20–25 dB and severely interfere with speech intelligibility . Researchers have employed various strategies to eliminate radiation noise, including physical modifications to the EL and signal processing techniques. For instance, Madden et al. replaced the traditional motor with an eccentric motor, achieving a 20% improvement in speech intelligibility by reducing noise. However, maintaining clear and natural speech quality remains a challenge. Norton and Bernstein wrapped the EL in thick foam, which reduced radiation noise by approximately 5 dB. However, this modification made the device cumbersome and less user-friendly. Other studies have focused on signal processing methods, such as spectral subtraction, voice conversion, and adaptive filtering, which have shown promise in enhancing speech quality by minimizing background noise . Although these techniques enhance the clarity of the output, their lack of real-time performance often makes them unsuitable for spontaneous conversation. The primary source of radiation noise in electrolarynx (EL) devices originates from external sound sources. One potential solution is to relocate the sound source to the oral cavity or pharynx, a method that has been explored by several researchers. For instance, Takahashi et al. mounted a vibrating source on a denture; however, the resulting sound failed to produce natural voice quality. Huang et al. developed a sound-generating device that is placed on the upper jaw, using a 3D-printed speaker holder to secure the speaker to a tooth sleeve. This design allows users to modulate sound frequency and amplitude by adjusting lung pressure and mouth shape. Despite its advantages, the tooth sleeve can interfere with speech production and pose challenges related to stability and hygiene. Moreover, foreign objects in the mouth can further complicate articulation. Painter et al. explored an electromagnetic EL device for implantation in neck tissue, but this approach carries the risks associated with surgical procedures. Parametric acoustic array (PAA) technology offers a promising alternative to address these challenges. This technique, which has been successfully applied in fields such as underwater measurement , underwater communication , and parametric speakers , generates difference-frequency waves from high-frequency sound waves. This capability enables precise and focused sound generation, even in complex environments . Mills et al. demonstrated the feasibility of generating difference frequency signals in soft tissue, highlighting the potential of using PAA for internal voice source reconstruction. While simulations have provided valuable insights, they cannot fully replicate the complexity of real tissue environments. 
Experimental validation is therefore essential to confirm the feasibility of using PAA in practical applications. To bridge this gap, our study conducts experimental investigations using a tissue-mimicking phantom to explore the feasibility of generating voice sources within the human body through PAA technology. This study aims to experimentally investigate the use of modulated PAA technology to generate voice sources within a tissue-mimicking phantom that replicates the acoustic characteristics of human neck tissue. By comparing generated voice sources with natural voice and traditional EL outputs, this study seeks to establish the effectiveness of PAA technology in voice reconstruction. The findings aim to provide critical insights into the practical application of PAA technology for voice reconstruction in laryngectomy patients, with the potential to significantly improve their quality of life. 2.1. Experimental Platform The experimental platform was designed to investigate the propagation characteristics of the PAA difference-frequency sound field in a neck phantom. The setup comprises two main subsystems: the signal excitation system and the signal acquisition system, with a neck phantom serving as the medium for acoustic wave propagation. The complete experimental setup is illustrated in , which includes a schematic diagram ( a) and a photograph of the physical setup ( b). 2.1.1. Human Neck Tissue-Mimicking Phantom In this study, a polyvinyl alcohol (PVA) material (product No. 563900, Sigma-Aldrich) was used to fabricate an acoustic phantom that simulates human neck tissue. The PVA (molecular weight: 130,000, Sigma-Aldrich, Zwijndrecht, The Netherlands) was prepared by dissolving 20% wt PVA in a mixture of 80% wt dimethyl sulfoxide (DMSO, Sigma-Aldrich, Zwijndrecht, The Netherlands) and 20% wt Milli-Q water. The solution underwent a series of freezing and thawing cycles to achieve the desired acoustic properties, which were designed to approximate those of human tissue (speed of sound: 1616 m/s, attenuation coefficient: 1.69 dB/cm, B/A value: 11.7) . The phantom was shaped as a hollow cylinder, with a flat surface on the front for precise placement of the ultrasonic transducer. The height of the phantom was approximately 10 cm, and the outer diameter of the cylinder was designed to match the diameter of a human neck. The diameter of the central hole was chosen to resemble the size of the lower human vocal tract. Additionally, the thickness from the inner hole to the flat surface was made to approximate the thickness of human neck tissue . A circular hole, approximately 1 cm in diameter, was added to the rear of the phantom to allow the insertion of a microphone for measurement. 2.1.2. Signal Excitation System The signal excitation system was designed to generate stable ultrasonic signals. The excitation signal was produced by an arbitrary waveform generator (Analog Discovery 3, Digilent, WA, USA) and then amplified by a power amplifier (ATA3040, Aigtek, Xi’an, China) to drive the ultrasonic transducer (H2KA050 KA1CD00, Unictron, Taiwan). The transducer, with a center frequency of 50 kHz and a diameter of 5 cm, was positioned at the middle of the flat surface of the phantom, ensuring optimal transmission of acoustic waves into the medium. b illustrates the physical setup of the transducer and phantom. 2.2. Experimental Procedure The experiments were conducted in a quiet room to minimize external noise interference. 
The signal excitation system transmitted an excitation signal with a peak-to-peak voltage of 90 V, driving the ultrasonic transducer to emit ultrasound waves, which propagated through the neck phantom. The signal acquisition system captured 10 s of continuous audio data at a sampling rate of 204.8 kHz with 24-bit resolution. To assess the impact of the phantom’s acoustic properties, measurements were also taken in air after the phantom was removed. All measurements were repeated three times for reliability. Subsequently, the ultrasonic transducer was replaced with a commercial EL, and the same acquisition conditions were used to capture the sound emitted by the EL as it propagated through the phantom. 2.3. Excitation Signal The parametric array excitation signal used in this study was generated using the Amplitude Modulation (AM) method , as described by the following equation: (1) $s(t) = (A + m(t))\cos(2\pi f_c t)$, where $m(t)$ represents the envelope signal, $s(t)$ is the resulting modulated signal, $A$ is the amplitude of the carrier signal, and $\cos(2\pi f_c t)$ is the carrier signal with frequency $f_c$. The envelope signal was generated using the Liljencrants–Fant (LF) glottal waveform model, which accurately simulates human glottal airflow during phonation . The glottal waveform frequency was set to 200 Hz, with a carrier frequency of 50 kHz corresponding to the central frequency of the ultrasonic transducer. The modulation depth was set to 100%, resulting in full modulation of the carrier signal by the glottal waveform. illustrates the predefined glottal waveform signal and the corresponding AM-modulated excitation signal. 2.4. Signal Processing and Parameter Evaluation The collected sound signals were subjected to a finite impulse response (FIR) band-pass filter with a passband of 20 Hz to 1000 Hz to extract the difference frequency signals. The extracted signal waveform was compared to the preset glottal waveform in the time domain. To quantify the similarity between the two signals, the Pearson correlation coefficient (r) was calculated. This was performed using MATLAB, with r computed as the covariance of the variables normalized by their standard deviations: (2) $r = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_i (x_i - \bar{x})^2 \sum_i (y_i - \bar{y})^2}}$, where $x_i$ and $y_i$ are the measured values of the acquired signal (AS or TMS, defined in Section 3.1) and the LF-model glottal signal (LFS), respectively, and $\bar{x}$ and $\bar{y}$ are their mean values. Additionally, the autoregressive (AR) power spectral density of the acquired signals was calculated using the AR Burg method, with an order of 190 chosen for its stability and accuracy in spectral estimation. The sound pressure levels (SPLs) recorded by the external and internal microphones were used to evaluate the intensity of the difference frequency signal and quantify radiation noise. The SPL difference (ΔL) between the external microphone, which measures radiation noise, and the internal microphone, which measures the sound inside the phantom, serves as an indicator of noise leakage: the further the external level lies below the internal level (i.e., the more negative ΔL is), the less radiation noise leaks out of the phantom. The SPL difference was calculated using the following formula: (3) $\Delta L = \mathrm{SPL}_{\mathrm{ext}} - \mathrm{SPL}_{\mathrm{int}}$. To assess the statistical significance of the SPL differences between the two excitation sources (EL and PAA), a paired t-test was performed using a significance level of p < 0.001 . This stricter threshold was chosen to ensure robust conclusions in the context of this study. 
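For readers who want to reproduce the modulation step numerically, the following Python sketch constructs an excitation signal of the form of Eq. (1) with the parameters stated above (200 Hz envelope, 50 kHz carrier, 100% modulation depth, 204.8 kHz sampling rate). The LF glottal model itself is not reproduced here; a simplified raised-cosine glottal pulse train stands in for m(t), and all function names are illustrative rather than part of the original implementation.

```python
import numpy as np

FS = 204_800   # acquisition/synthesis sampling rate (Hz)
F0 = 200       # glottal (envelope) fundamental frequency (Hz)
FC = 50_000    # carrier frequency = transducer centre frequency (Hz)


def glottal_pulse_train(fs, f0, duration_s, open_quotient=0.6):
    """Simplified raised-cosine glottal pulse train, used here only as a
    stand-in for the Liljencrants-Fant (LF) waveform of the paper."""
    t = np.arange(int(fs * duration_s)) / fs
    phase = (t * f0) % 1.0                 # normalised position within each period
    pulse = np.zeros_like(phase)
    open_phase = phase < open_quotient
    pulse[open_phase] = 0.5 * (1.0 - np.cos(np.pi * phase[open_phase] / open_quotient))
    return pulse


def am_modulate(envelope, fs, fc, carrier_amplitude=1.0, depth=1.0):
    """Eq. (1): s(t) = (A + m(t)) * cos(2*pi*fc*t), with m(t) scaled to the
    requested modulation depth (1.0 corresponds to 100%)."""
    t = np.arange(envelope.size) / fs
    m = depth * carrier_amplitude * envelope / np.max(np.abs(envelope))
    return (carrier_amplitude + m) * np.cos(2.0 * np.pi * fc * t)


if __name__ == "__main__":
    env = glottal_pulse_train(FS, F0, duration_s=1.0)
    excitation = am_modulate(env, FS, FC, depth=1.0)
    print(excitation.shape, float(excitation.max()))
```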
3.1. Time-Domain Analysis The waveforms of the difference-frequency glottal wave obtained after excitation by the parametric array are shown in . a illustrates the waveform for five periods, with the first row representing the glottal wave obtained through the LF model calculation (envelope signal, LFS). The second and third rows show the difference-frequency signal waveforms captured at the transducer’s axial position in air (air signal, AS) and after propagation through the tissue-mimicking phantom (tissue-mimicking signal, TMS), respectively. From the time-domain signals, it can be observed that the modulated glottal wave signal, after propagating through both the air and tissue-mimicking media, retains a periodicity corresponding to the fundamental frequency (F0) of the original glottal wave signal. The restored waveforms closely resemble the original glottal wave signal. Pearson correlation analysis revealed a correlation coefficient of 0.9767 between AS and LFS, and 0.9438 between TMS and LFS. Compared to the LFS waveform, the difference-frequency waveforms (AS and TMS) show some distortion in their waveform shapes. To further analyze the differences, one period from the LFS, AS, and TMS waveforms, as well as the corresponding EL signal, were normalized and aligned by their signal peaks, as shown in b. The analysis indicates that the peak of all three waveforms occurs at approximately the 50% point of the signal period. However, compared to LFS, the rise time of AS and TMS is steeper. The trough of the LFS signal is stable, without high-frequency noise or unwanted frequency components. On the right side of the peak, at approximately 70% of the signal period, a second sharp peak appears in both AS and TMS signals, with the TMS signal showing a more pronounced peak. 3.2. Frequency-Domain Analysis The Burg AR power spectral density (PSD) curves of LFS, AS, and TMS signals are shown in . All three signals exhibit a dominant peak at the F0, with harmonic components appearing at integer multiples of the F0 (200 Hz, 400 Hz, 600 Hz, etc.). The energy of the F0 is the highest, while the harmonic components decrease in amplitude as the frequency increases. The primary energy is concentrated around 200 Hz, with harmonic components becoming progressively weaker at higher frequencies. The fourth harmonic in the TMS signal is notably smaller compared to the AS signal. The frequency spectra of AS and TMS signals show slight attenuation of higher harmonics compared to the LFS signal, with the TMS signal exhibiting more pronounced attenuation in the higher frequency range. 
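The correlation coefficients reported above come from the processing chain of Section 2.4 (FIR band-pass filtering followed by Eq. (2)). A minimal Python sketch of that chain is given below; the recording and reference file names are placeholders, and the AR Burg spectral estimate is omitted because it requires a dedicated AR-modelling routine.

```python
import numpy as np
from scipy.signal import firwin, filtfilt

FS = 204_800  # acquisition sampling rate (Hz)


def extract_difference_frequency(recorded, fs=FS, lo=20.0, hi=1000.0, numtaps=8191):
    """Linear-phase FIR band-pass (20-1000 Hz) applied forward and backward
    to isolate the demodulated difference-frequency (glottal-band) component.
    For a sharper 20 Hz edge, increase numtaps or decimate the signal first."""
    taps = firwin(numtaps, [lo, hi], pass_zero=False, fs=fs)
    return filtfilt(taps, [1.0], recorded)


def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length signals (Eq. (2))."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    y = np.asarray(y, dtype=float) - np.mean(y)
    return float(np.sum(x * y) / np.sqrt(np.sum(x**2) * np.sum(y**2)))


if __name__ == "__main__":
    # 'mic_signal.npy' and 'lf_reference.npy' are placeholder file names.
    mic = np.load("mic_signal.npy")
    ref = np.load("lf_reference.npy")
    glottal_band = extract_difference_frequency(mic)
    n = min(glottal_band.size, ref.size)
    print("r =", pearson_r(glottal_band[:n], ref[:n]))
```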
3.3. Radiation Noise Analysis As shown in a, the sound pressure level (SPL) of radiation noise measured with the EL as the excitation source was 81.20 dB (SD: 0.36 dB). In contrast, when using the PAA as the excitation source, the SPL of radiation noise was significantly lower at 24 dB (SD: 0.16 dB), a difference that was statistically significant ( p < 0.0001). b illustrates the ΔL between the external and internal microphones. For the EL, the SPL difference was −16 dB (SD: 0.34 dB), whereas for the PAA, the SPL difference was −23 dB (SD: 0.15 dB). The SPL difference for the PAA was significantly more negative than that for the EL ( p < 0.0001), indicating that less radiation noise was leaking outside the phantom when using the PAA. 
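For completeness, the sketch below shows how the SPLs, the ΔL of Eq. (3), and the paired t-test used for this comparison could be computed from calibrated pressure recordings. The 20 µPa reference pressure and the example ΔL arrays are assumptions and placeholders, not measured data from this study.

```python
import numpy as np
from scipy.stats import ttest_rel

P_REF = 20e-6  # reference pressure for SPL in air (Pa); assumed calibration


def spl_db(pressure_pa):
    """Sound pressure level (dB re 20 uPa) of a calibrated pressure waveform."""
    p_rms = np.sqrt(np.mean(np.square(pressure_pa)))
    return 20.0 * np.log10(p_rms / P_REF)


def delta_L(external_pa, internal_pa):
    """Eq. (3): SPL measured outside the phantom minus SPL measured inside."""
    return spl_db(external_pa) - spl_db(internal_pa)


# Placeholder repeated-measurement values (replace with the recorded
# repetitions per excitation source); a paired t-test then compares sources.
delta_el = np.array([-16.3, -15.8, -16.0])
delta_paa = np.array([-23.1, -22.8, -23.0])
t_stat, p_value = ttest_rel(delta_el, delta_paa)
print(f"t = {t_stat:.2f}, p = {p_value:.4g}")
```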
This study experimentally investigates the propagation of the PAA sound field through tissue-mimicking media to evaluate its potential for reconstructing glottal waveforms. The results confirm that the PAA effectively generates low-frequency difference waves within the tissue-mimicking media, retaining the envelope characteristics of the modulated signal after propagation. Notably, these difference-frequency waves, when transmitted through the medium, exhibit a higher degree of similarity to the human glottal waveform than the excitation signal generated by the EL. This finding demonstrates the feasibility of using the PAA to reconstruct glottal waveforms, offering a potential advantage over traditional EL devices, which produce signals that are significantly different from natural human phonation. The time–domain analysis revealed that the PAA emitted modulated glottal waveforms (LF model) maintaining periodicity with the same fundamental frequency (200 Hz) as the pre-defined signal, regardless of whether they propagated through air or tissue-mimicking media. The waveforms produced by PAA showed a high degree of similarity to the original glottal waveform, as evidenced by the Pearson correlation coefficients exceeding 0.9. In contrast, the waveform produced by the EL resembled a sharp, impulsive signal, rather than a smoothly modulated glottal waveform. This finding aligns with the simulations reported by Mills et al. but differs in that Mills’ approach used a frequency difference method with two transducers, which could only generate sine waves corresponding to the frequency difference between two excitation signals. This study, however, utilized a single transducer to modulate the excitation signal, allowing for more versatile waveform generation and yielding a signal closer to natural human phonation. Furthermore, the use of one transducer simplifies the design of future devices, making the system more portable and cost-effective. When comparing the LFS, AS, and TMS waveforms, the peak-to-peak value of the TMS waveform was significantly smaller than that of the AS waveform, possibly due to stronger attenuation of the sound by the tissue-mimicking medium. Additionally, the rise and fall of the TMS waveform were less smooth compared to the LFS waveform, with added peaks and valleys, which could be attributed to the harmonic components present in the original glottal signal. These harmonic components experienced some loss during propagation through the tissue-mimicking medium, a phenomenon that became more pronounced as the medium’s nonlinearity increased. This was further corroborated by the power spectral analysis. In the frequency domain, the AR power spectral density analysis indicated that, in addition to the F0, the LF model waveform contained several harmonic components. 
After propagation through both air and tissue-mimicking media, the parametric array sound field retained these harmonic components, with their energy gradually decreasing as the frequency increased, similar to the energy distribution observed in the original LF model. When compared to the EL-generated sound field, the PAA signal exhibited a frequency spectrum that more closely matched that of the LF model, emphasizing its potential to more accurately simulate human phonation. A key aspect of this study was the evaluation of radiation noise produced by the two excitation sources, using an external microphone placed behind the tissue-mimicking medium. The results presented in indicate a significant difference in radiation noise between the EL and the PAA. The SPL of the radiation noise generated by the PAA was substantially lower than that of the EL, owing to the directional nature of the PAA. Additionally, the simplified neck phantom used in this study may have led to some sound leakage at the openings of the trachea. In a more realistic human model, where the trachea is a closed tube, we would expect even lower radiation noise. Moreover, due to the low energy of the difference-frequency sound generated by the PAA, any leaked radiation noise would likely be imperceptible to the listener. While this study demonstrates the potential of the PAA for reconstructing glottal waveforms and reducing radiation noise, the glottal waveform generated here was limited to a single fundamental frequency (200 Hz) and does not yet cover the fundamental frequency range of normal human speech (approximately 60–500 Hz). This limitation is likely due to the output power of the transducer and the modulation method used. Future research should focus on enhancing the output power and improving the conversion efficiency of the difference-frequency signal to achieve a broader frequency range that better matches the natural human voice. Furthermore, the tissue-mimicking phantom in this study does not fully replicate the complex structure of human tissues, particularly regarding their nonlinear acoustic properties. Future studies should explore more anatomically accurate phantoms or in vivo experiments to further validate the feasibility of the parametric acoustic array for practical clinical applications, such as in improving the quality and naturalness of speech in patients with laryngectomy. The selection of a 5 cm diameter and a 50 kHz center frequency for the transducer in this study was driven by the specific requirements of PAA technology in generating glottal waves within the human body. The diameter was chosen to accommodate the limited area of the human neck while maximizing the energy output of the difference-frequency wave, as the emitting area directly influences the efficiency of PAA energy conversion. Similarly, the 50 kHz center frequency was selected to balance energy conversion efficiency and directivity, with lower frequencies providing higher efficiency but suffering from poor directivity. While the experimental results confirm the feasibility of generating glottal waves within the phantom, further optimization of transducer parameters remains an essential avenue for improving conversion efficiency. Future work will focus on refining transducer size and center frequency, exploring advanced modulation methods, and employing focused transducers to enhance power density at specific target locations, ensuring a stronger and more localized difference-frequency sound field for applications in the human body. 
This study provides compelling evidence that PAA technology is capable of reconstructing glottal waveforms with higher fidelity than traditional EL devices, offering significant advantages in both waveform accuracy and radiation noise reduction. Comparative analysis demonstrated a high degree of similarity between the PAA-generated signals and the model glottal waveforms. Additionally, the autoregressive spectral analysis confirmed that the PAA accurately reproduces essential spectral features of the glottal waveform, further supporting its potential for voice rehabilitation. The use of a single transducer to generate modulated signals makes this method more efficient and practical for future speech rehabilitation technologies. However, improvements in signal strength, frequency range, and system design are necessary to fully meet the demands of natural human speech. Future work will focus on optimizing transducer parameters, exploring advanced modulation techniques, and conducting clinical evaluations to ensure effective translation of this technology into practical applications. This study lays a strong foundation for advancing voice restoration solutions, with the potential to significantly improve the quality of life for individuals with total laryngectomy.
Effects of Exercises of Different Intensities on Bone Microstructure and Cardiovascular Risk Factors in Ovariectomized Mice
2ffdabd6-7337-44d0-b04b-16e18dca7c8c
11817207
Surgical Procedures, Operative[mh]
Menopause, defined as the cessation of menstruation due to the permanent loss of ovarian follicular function, means that women may spend up to 40% of their lives in a postmenopausal state . The incidence of osteoporosis and cardiovascular disease (CVD) significantly increases in postmenopausal women . Studies have shown a close link between bone health and CVD . Low bone mineral density is associated with endothelial dysfunction, coronary artery disease, peripheral vascular disease, and cardiovascular mortality . Complications such as fractures caused by osteoporosis and CVD severely diminish the quality of life of patients and are major causes of death among elderly women . Appropriate exercise can increase bone density and prevent cardiovascular disease . Due to the significant role of exercise in disease prevention and treatment, a review published in 2016 proposed that “exercise is a kind of medicine” . However, it remains unclear whether there are differences in the preventive and therapeutic effects of different exercise intensities on cardiovascular disease and osteoporosis in menopausal women, as well as the underlying mechanisms of these effects. Osteocalcin (OCN) is a small protein secreted by osteoblasts that not only affects bone formation but also enters the bloodstream to influence glucose metabolism and the cardiovascular system, serving as an important endocrine factor . Moreover, serum OCN is one of the few factors with functions that cover major menopause-related diseases in women, including osteoporosis, cardiovascular disease, and anxiety . Studies on menopausal women have shown that coronary artery calcium score and atherosclerosis are positively correlated with OCN concentrations , indicating that serum OCN may play a role in cardiovascular diseases among menopausal women. It is therefore necessary to investigate exercise-induced changes in OCN in order to clarify the role of exercise in the cardiovascular system. The ovariectomized (OVX) mouse model can effectively mimic the changes in estrogen levels in postmenopausal women and has therefore been widely accepted as a model for menopause to study postmenopausal symptoms . Therefore, in this study, we used OVX mice as a model and subjected them to moderate-intensity continuous training (MICT) and high-intensity interval training (HIIT) to examine their effects on serum OCN levels, as well as cardiovascular risk factors and osteoporosis in OVX mice. Our study indicates that there is no difference between the two exercise modalities in improving cardiovascular disease risk factors in OVX mice, with MICT showing superior effects on bone microstructure compared to HIIT. Meanwhile, ucOCN does not appear to be the direct cause of the improvement in cardiovascular risk factors due to exercise; rather, it may be related to changes in estrogen levels. Conversely, ucOCN could serve as a potential biomarker for assessing the effectiveness of exercise in the prevention and treatment of osteoporosis. 2.1. Body Weight and Uterus Weight The wet uterine weights and body weights of each group are shown in B,C. Compared with the Sham group, the uterine weights of the OVX group, as well as the OVX + MICT and OVX + HIIT groups, were significantly reduced. There were no significant differences between the OVX group and the OVX + HIIT or OVX + MICT groups. 
Additionally, the body weights of the mice in the OVX group were significantly higher than those in the Sham group at the third week post-OVX, and both intensities of exercise significantly inhibited the weight gain in the OVX mice. 2.2. Serum E 2 , cOCN, and ucOCN Level Serum E 2 , cOCN, and ucOCN levels were measured to assess the effects of varied exercise intensities on mice ( D–F). Compared with the Sham group, serum E 2 levels significantly decreased in the OVX, OVX + MICT, and OVX + HIIT groups, with no significant differences among them ( D). As shown in E,F, compared with the Sham group, the serum levels of cOCN in OVX mice were significantly reduced, while the concentrations of ucOCN were significantly increased. However, compared with the OVX group, serum ucOCN levels were significantly decreased in both the OVX + MICT group and the OVX + HIIT group. Therefore, exercise did not improve the effects of ovariectomy on serum E 2 levels but significantly decreased the levels of serum ucOCN. 2.3. Lipid Parameters, Blood Pressure, and Blood Vessel Morphology As shown in A,B, compared to Sham mice, OVX mice exhibited significantly elevated serum TG and significantly decreased HDL-C. Both types of exercise significantly reduced serum TG and increased HDL-C in OVX mice. There were no significant differences in LDL-C among the groups ( C). The changes in T-CHO were similar to HDL-C across the groups ( D). E exhibits the aortic intima smoothness and elastic fiber arrangement of Sham, OVX, OVX + MICT, and OVX + HIIT mice. Sham mice showed smooth aortic intima and dense elastic fibers. OVX mice had aortic protrusions, disordered fibers, and ruptures. The vascular elastic fibers of OVX + MICT mice were arranged neatly and without ruptures, while OVX + HIIT mice had looser fibers with fewer ruptures. Von Kossa staining ( F) revealed no calcification in any group. G shows that OVX mice exhibited thicker aortic walls than Sham, while both exercise groups exhibited thinner walls. H indicates higher SBP in OVX mice, and this elevation was inhibited by both exercise types, while DBP showed no significant changes across all groups ( I). 2.4. Microstructure of the Distal Femur As shown in A, the 2D and 3D images of the distal femur revealed a significant reduction in the number of trabeculae in the cancellous bone of the distal femur region in OVX mice, accompanied by a substantial increase in trabecular spacing and deterioration of bone microarchitecture. Quantitative analysis results in B–E show that BMD, BV/TV, Tb.Th, and Tb.N were significantly decreased in the OVX group compared to Sham mice. Compared with OVX mice, BMD, BV/TV, Tb.Th and Tb.N were significantly increased in the OVX + MICT group. Furthermore, BMD, BV/TV and Tb.N were significantly increased in the OVX + HIIT group, with no significant change in Tb.Th. Additionally, BMD and BV/TV were significantly higher in the OVX + MICT group than in the OVX + HIIT group. As shown in F,G, compared with Sham mice, Tb.Sp and DA were significantly increased in the cancellous bone of the distal femur of OVX mice. Compared with OVX mice, Tb.Sp was significantly decreased in the OVX + MICT group, while there was no significant improvement in Tb.Sp in the OVX + HIIT group. Neither type of exercise exhibited a significant effect on DA. These results indicated that MICT was superior to HIIT in improving the microarchitecture of cancellous bone in mice, as evidenced by significantly higher BMD and BV/TV and significantly lower Tb.Sp. 
2.5. The Number of Osteoblasts and Osteoclasts in the Tibia As shown in A,B, the OVX group exhibited significantly fewer osteoblasts per unit area. Both MICT and HIIT exercises increased osteoblasts. As shown in C,D, the number of osteoclasts per unit area significantly increased in OVX mice, while both exercises markedly reduced the number of osteoclasts. 
Menopause is an inevitable life stage for women, during which the aging of ovaries leads to a decrease in estrogen levels, thereby increasing the risk of various diseases . Osteoporosis and cardiovascular disease are common among middle-aged and elderly women, with cardiovascular disease being the primary cause of death among older women . Multiple studies have shown that the risk of cardiovascular disease increases after the onset of menopause . Abnormal blood pressure and lipid parameters are risk factors for cardiovascular disease (CVD) . Studies have shown that reducing systolic and diastolic blood pressure can decrease the risk of CVD . Both epidemiological and experimental studies indicate that decreased HDL-C and increased LDL-C increase the risk of CVD . Clinical studies demonstrated that actively improving abnormal blood lipid levels can promote the regression of atherosclerotic plaques and reduce the incidence of CVD . The decline in estrogen levels in postmenopausal women and ovariectomized animals leads to elevated blood pressure and abnormal lipid parameters . In our study, OVX mice exhibited significantly increased systolic blood pressure, decreased HDL-C levels, and increased TG levels. Additionally, these mice showed increased elastic fiber rupture and elevated aortic wall thickness. Both exercise intensities effectively lowered blood pressure, improved the morphological structure of the aortic wall, and reduced wall thickness. These changes might be related to the regulation of lipid parameters by exercise. Moreover, both exercise intensities significantly reduced serum TG levels and increased HDL-C levels. Notably, T-CHO levels decreased in OVX mice and increased in the exercise groups. This might be related to the lack of significant changes in LDL-C levels among the groups and the increase in HDL-C levels due to exercise. The specific mechanisms require further investigation. In summary, exercise significantly reduced the risk of cardiovascular disease through four key factors: decreased TG levels, increased HDL-C content, reduced blood vessel wall thickness, and effective control of SBP. These positive physiological changes not only effectively decrease the deposition of lipids on the inner wall of blood vessels but also enhance the cholesterol reverse transport mechanism, thereby significantly improving blood vessel elasticity and compliance, optimizing the circulatory system, and greatly reducing the burden and potential damage to the heart and blood vessels . 
This series of comprehensive effects constructs a solid defense against the occurrence and development of atherosclerosis, thereby substantially lowering the incidence of cardiovascular diseases . It is well-documented that weight gain is a common occurrence in OVX mice, primarily due to metabolic alterations resulting from decreased estrogen levels . In alignment with previous studies, our results showed that the body weights of OVX mice were significantly higher than those of the Sham group . This weight gain can influence cardiovascular risk factors through various mechanisms, such as increased release of inflammatory cytokines from adipose tissue and negative impacts on insulin sensitivity . Interestingly, both MICT and HIIT significantly curbed the weight gain in OVX mice. This observation suggests that exercise may counteract weight gain by improving energy balance and metabolic regulation . Specifically, exercise could achieve this by increasing energy expenditure, boosting basal metabolic rate, and enhancing insulin sensitivity . In our previous research, exercise significantly elevated the serum osteocalcin levels in VCD-induced ovarian senescent mice and ameliorated their anxiety-like behaviors . Also, studies reported that OCN is involved in the regulation of glucose and lipid metabolism and is associated with vascular atherosclerosis and vascular calcification . Therefore, in this study, we measured the circulating OCN levels in mice at the 9th week after OVX and found that the levels of ucOCN were significantly elevated, while the levels of cOCN were significantly decreased. Both types of exercise significantly reduced the ucOCN levels in OVX mice but had no significant effect on cOCN levels. The significant negative relationship between OCN and estrogen suggests that the changes in OCN may have been driven by changes in estrogen levels. In fact, we observed a trend of increased estrogen levels in mice from the exercise groups. According to research reports, Wistar female rats that underwent ovariectomy exhibited a significant increase in serum estrogen levels after engaging in exercise for an extended period of time (1 h/d, 6 d/w) for a duration of three months . Meanwhile, we did not detect a correlation between OCN and TG, HDL-C, SBP, or vascular wall thickness, indicating that the reduction in cardiovascular risk factors by both exercises in OVX mice is not directly related to changes in serum ucOCN levels. Similar to our results, Wieczorek-Baranowska and colleagues reported that 8 weeks of aerobic training in postmenopausal women significantly improved central obesity, decreased OCN levels, and reduced insulin resistance, but they did not observe a direct relationship between OCN concentration changes with training and metabolic markers . Another study has demonstrated that there are significant gender differences in the impact of exercise on the regulation of OCN . In that study, exercise increased the level of circulating OCN in female mice but decreased it in male mice. Notably, this change was associated with improvements in cognitive outcomes yet had no correlation with metabolic outcomes. Meanwhile, although exogenous osteocalcin did not improve metabolism, it had a significant effect on improving cognitive deficits induced by a high-fat diet. Furthermore, some studies have reported that OCN-knockout mice did not exhibit significant insulin resistance or glucose and lipid metabolism disorders . 
In contrast, another study involving 39 young obese male participants randomly divided them into a control group and an exercise group, with the exercise group undergoing an 8-week aerobic exercise training program . The results showed that exercise-induced reduction in body fat and improvement in insulin sensitivity were accompanied by a significant increase in serum osteocalcin levels. Moreover, the increase in osteocalcin was negatively correlated with changes in body weight, BMI, body fat percentage, and insulin resistance index . Therefore, it can be concluded that the impact of exercise on osteocalcin and its role in metabolic regulation may vary due to age and gender differences. Additionally, different animal models and exercise interventions of varying intensities can produce different results. Thus, more systematic and comprehensive studies, including gain-of-function and knockout experiments, are needed in the future to further verify the role of osteocalcin in the regulation of energy metabolism and cardiovascular risk factors by exercise. The regulatory effect of exercise on estrogen levels may be an important mechanism by which it improves cardiovascular risk factors in OVX mice. Meanwhile, ucOCN does not seem to be the direct cause of the improvement in cardiovascular risk factors due to exercise; rather, it may serve as a potential biomarker for assessing the effectiveness of exercise in the prevention and treatment of osteoporosis. The microstructure of bone tissue can effectively reflect the health status of bones, and BV/TV, Tb.Th, Tb.Sp, and Tb.N are the primary indicators reflecting bone microstructure . In this study, significant decreases were observed for BMD, BV/TV, Tb.N, and Tb.Th in the distal cancellous bone region of the femur in OVX mice. Additionally, Tb.Sp and DA increased significantly. MICT was found to improve the bone microstructure more effectively than HIIT, as MICT markedly improved BMD, BV/TV, and Tb.Sp in the distal cancellous bone of the femur in OVX mice. Our results are consistent with previous research . Furthermore, low-to-moderate-intensity treadmill exercise at a speed of 10 m/min can partially reverse the trabecular bone loss induced by ovariectomy. Although high-intensity treadmill exercise at 18 m/min also exhibits certain positive effects, running at the lower intensity is more effective in reducing bone loss . This further supports our finding that moderate-intensity exercise is more beneficial for improving bone health. This may be related to the fact that MICT is more effective in promoting osteoblast number (+10.22%) compared to HIIT. Additionally, MICT, with its moderate-intensity continuous loading, may apply a more sustained and appropriate mechanical stimulus to the bones. In contrast, the duration of effective loading that HIIT applies to the bones may be shorter. This is also one of the reasons why MICT is superior to HIIT in improving bone microstructure. However, the specific mechanism of action still needs further in-depth research. Morphological analysis of tibial trabecular structure reveals that both MICT and HIIT can significantly increase the number of osteoblasts (+42.07% and +29.41%) and decrease the number of osteoclasts (−50.83% and −57.91%). Osteoblasts are key cells responsible for bone formation and bone matrix synthesis. In this study, MICT and HIIT effectively promoted osteoblast proliferation, accelerating bone formation and repair processes. 
Notably, MICT exhibited particularly prominent effects in this regard, which partly explains why MICT outperforms HIIT in improving bone microstructure. Osteoclasts are responsible for bone resorption and remodeling. Both MICT and HIIT significantly reduced the number of osteoclasts, indicating that these two exercise modes can inhibit the bone resorption process and reduce bone loss. Research indicates that cOCN, due to its structural characteristic of having two carboxyl groups at the termini, exhibits a strong binding ability to Ca 2+ on the surface of hydroxyapatite, enabling it to effectively deposit in bone matrix . Approximately 60% to 90% of cOCN is deposited in bone matrix, while the remaining portion is released into the circulatory system . In contrast, ucOCN has a weaker affinity for bone and is more abundant in the bloodstream . Notably, osteoclast activity creates acidic resorption lacunae where OCN deposited in the bone matrix may undergo decarboxylation, thereby increasing ucOCN levels in the blood . Therefore, the reduction in osteoclast numbers due to exercise may contribute to lowering circulating ucOCN levels by minimizing this decarboxylation process. Despite the promotion of osteoblast proliferation by both MICT and HIIT, circulating cOCN levels did not increase, implying enhanced deposition of cOCN within the bones. Further research is needed to explore the role of OCN carboxylation in bone health and its potential implications for the prevention and treatment of bone diseases. 4.1. Animals Twenty-four female C57BL/6J mice aged 7–8 weeks (purchased from the Animal Center of the Medical School of Xi’an Jiaotong University, SCXK2012-003) were housed in a sterile animal room at the School of Life Sciences and Technology, Xi’an Jiaotong University. One week after acclimatization, the experiments commenced. The mice were randomly divided into four groups: Sham group (Sham, n = 8), ovariectomized control group (OVX, n = 8), ovariectomized + moderate-intensity continuous training group (OVX + MICT, n = 8), and the ovariectomized + high-intensity interval training group (OVX + HIIT, n = 8). During the experimental period, the mice had free access to standard rodent feed and sterile water. The temperature and relative humidity in the animal room were maintained at 22 °C ± 2 °C and 60% ± 5%, respectively. The diurnal cycle was controlled at 12 h of light and 12 h of darkness. The entire procedure was reviewed and approved by the Biomedical Ethics Committee of the Medical School of Xi’an Jiaotong University in accordance with ethical principles, with an approval number of 2020-625. It was carried out in compliance with the “Guide for the Care and Use of Laboratory Animals” published by the National Institutes of Health (NIH Publication No. 8023, revised in 1978). 4.2. Ovariectomy For the OVX mice, a bilateral dorsal incision surgery was performed. A vertical incision was made on each side of the midline of the back, approximately one finger’s width away from the midline, at the point between the iliac bone and the ribs. The skin and subcutaneous fascia were cut open with scissors, and the abdominal muscles were cut along the edge of the erector spinae muscles. The abdominal cavity was opened with forceps supporting the fat, which was carefully lifted out. Upon locating the ovaries, the blood vessels and fat below them, as well as the uterus, were suture-ligated, and the ovaries were removed. 
The muscles and skin were sutured layer by layer, and the wound was disinfected with iodophor. For the Sham group mice, their ovaries were retained, and only an equivalent volume of fat next to the ovaries was removed. The wounds of the mice recovered well after the surgery, and no infections occurred. 4.3. Exercise Protocols The two exercise groups of mice underwent an adaptive training period of 6 min per day at a speed of 6 m/min for three consecutive days. Following the adaptive training, the maximum running capacity (MRC) of the mice was measured. The measurement method involved starting at an initial speed of 6 m/min, with an increase of 3 m/min every 3 min, until the mice could no longer keep up with the treadmill speed , with a maximum detectable speed of 24 m/min. The exercise protocol for the OVX + MICT group was as follows: continuous exercise at 70% of the MRC (17 m/min) for 40 min/d , five days per week, for a duration of 8 weeks. The exercise protocol for the OVX + HIIT group was as follows: intermittent exercise consisting of 90% and 50% of the MRC . Specifically, each day began with 10 min of exercise at 17 m/min, followed by five cycles of 3 min at 21 m/min interspersed with 3 min at 12 m/min, five days per week, for a duration of 8 weeks. All exercise sessions were completed between 6:00 p.m. and 9:00 p.m. 4.4. Blood Pressure Measurement Blood pressure in mouse tails was measured using a small animal blood pressure monitor (BP-2010A, Reward, Beijing, China). The mouse was restrained using a fixing device to expose its tail, which was then placed in a heated chamber at 37 °C to allow the mouse to acclimate to the environment. The pressure cuff of the blood pressure measuring device was placed near the base of the mouse’s tail. Blood pressure measurement began once the mouse had calmed down. Each animal was measured 20 times, and the average of the middle ten readings was taken as the mouse’s blood pressure value. 4.5. Serum Analysis Strictly adhering to the instructions provided by the commercial ELISA kits, we assessed the levels of estradiol (E 2 , Cloud-Clone Corp, Wuhan, China), Gla-osteocalcin (cOCN, Takara, Tokyo, Japan), Glu-osteocalcin (ucOCN, Takara, Tokyo, Japan), triglyceride (TG, Nanjing Jiancheng, Nanjing, China), high-density lipoprotein cholesterol (HDL-C, Nanjing Jiancheng, Nanjing, China), low-density lipoprotein cholesterol (LDL-C, Nanjing Jiancheng, Nanjing, China), and total cholesterol (T-CHO, Nanjing Jiancheng, Nanjing, China). A Bio-Rad Model 680 microplate reader (Bio-Rad, Hercules, CA, USA) was utilized to measure the absorbance values. The detection thresholds were established at 12.35 pg/mL for E 2 , 10.5 ng/mL for cOCN, and 0.25 ng/mL for ucOCN. 4.6. Morphometric Analysis The aorta and tibia were dissected and fixed with 4% formaldehyde. The tibia was then decalcified, sectioned, and prepared for embedding in wax after washing with PBS. The aorta underwent dehydration and wax embedding. Both were sectioned at 8–10 μm, mounted on slides, and stained with hematoxylin and eosin. For Von Kossa staining, sections were washed, immersed in silver solution, exposed to light, treated with sodium thiosulfate, counterstained with Van Gieson’s stain, and then processed for dehydration, clearing, and mounting. Histophysiological evaluations were conducted under a microscope. Aortic structure and thickness were measured at 200× magnification, while osteoblast and osteoclast counts per trabecular unit area were calculated at 400× magnification. 
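As a concrete illustration of how the training speeds in Section 4.3 follow from the measured maximum running capacity, and of the middle-ten averaging rule in Section 4.4, the short Python sketch below derives the MICT and HIIT speed schedules from an MRC value and averages the tail-cuff readings. It is only a sketch of the arithmetic described above; the function names are illustrative, and the interpretation of "middle ten" readings is an assumption noted in the code.

```python
import numpy as np


def training_speeds(mrc_m_per_min):
    """Treadmill speeds derived from the maximum running capacity (MRC):
    MICT uses 70% of MRC; HIIT alternates 90% and 50% of MRC.
    (With MRC = 24 m/min the study used 17, 21 and 12 m/min; the exact
    rounding of the percentages is not specified in the text.)"""
    return {
        "mict": 0.70 * mrc_m_per_min,
        "hiit_high": 0.90 * mrc_m_per_min,
        "hiit_low": 0.50 * mrc_m_per_min,
    }


def hiit_session_profile(warmup_speed=17, high=21, low=12,
                         warmup_min=10, cycles=5, high_min=3, low_min=3):
    """Per-minute speed profile of one HIIT session: a 10 min warm-up
    followed by five 3 min high / 3 min low cycles (40 min in total)."""
    profile = [warmup_speed] * warmup_min
    for _ in range(cycles):
        profile += [high] * high_min + [low] * low_min
    return profile


def tail_cuff_pressure(readings):
    """Average of the 'middle ten' of twenty tail-cuff readings (Section 4.4).
    Interpreted here as discarding the five lowest and five highest values;
    the original text does not state whether 'middle' is by value or by order."""
    r = np.sort(np.asarray(readings, dtype=float))
    return r[5:15].mean()


if __name__ == "__main__":
    print(training_speeds(24))
    print(len(hiit_session_profile()), "min per HIIT session")  # 10 + 5*(3+3) = 40
```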
4.7. Micro-CT Analysis The microarchitecture of the trabecular bone region in the distal femur of mice was scanned using the German Y. Cheetah micrometer X-ray three-dimensional imaging system (Y.Cheetah; YXLON International GmbH, Hamburg, Germany). The parameters were set as follows: voltage at 80 kV, current at 35 μA, resolution at 6 μm, and a total of 720 scanning layers. After the scanning was completed, the grayscale images obtained from X-ray imaging were reconstructed. Using VG Studio MAX 3.0 analysis software, the region of interest (ROI) was selected. Based on the anatomical features of the femur, the first layer where both the medial and lateral condyles of the distal femur simultaneously disappeared was identified, and a cylinder with a radius of 1.5 cm and a height of 1 cm was selected as the ROI, extending from bottom to top. Trabecular bone within this region was extracted for analysis. This process yielded visual 2D and 3D images and quantitative indices of the trabecular microarchitecture of the distal femur, including bone mineral density (BMD), bone volume fraction (BV/TV), trabecular thickness (Tb.Th), trabecular number (Tb.N), trabecular separation (Tb.Sp), and degree of anisotropy (DA). 4.8. Statistical Analysis The results are presented as mean ± standard deviation (mean ± SD). Statistical analyses were conducted using SPSS version 20.0 software (SPSS Institute, Chicago, IL, USA). All data were tested using the one-sample Kolmogorov–Smirnov test and were found to be normally distributed. One-way analysis of variance (ANOVA) was employed to assess whether there were differences among the groups. Once a significant difference was detected, the least significant difference multiple comparison test was used to determine whether the difference between every two groups was statistically significant. p -value < 0.05 was considered statistically significant. 
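A minimal sketch of the statistical workflow in Section 4.8, written in Python/SciPy rather than SPSS: a Kolmogorov–Smirnov normality check per group, a one-way ANOVA across the four groups, and unadjusted pairwise t-tests as an approximation of Fisher's least significant difference follow-up. The group names and random data are placeholders; this is not the analysis code used in the study.

```python
import numpy as np
from itertools import combinations
from scipy import stats

rng = np.random.default_rng(0)

# Placeholder data: one value per animal (n = 8 per group), e.g. BMD.
groups = {
    "Sham":     rng.normal(100, 5, 8),
    "OVX":      rng.normal(85, 5, 8),
    "OVX+MICT": rng.normal(95, 5, 8),
    "OVX+HIIT": rng.normal(92, 5, 8),
}

# 1) Normality check for each group (one-sample Kolmogorov-Smirnov test
#    against a normal distribution with the sample mean and SD).
for name, values in groups.items():
    z = (values - values.mean()) / values.std(ddof=1)
    ks_stat, ks_p = stats.kstest(z, "norm")
    print(f"{name}: KS p = {ks_p:.3f}")

# 2) One-way ANOVA across all groups.
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# 3) If the ANOVA is significant, unadjusted pairwise t-tests approximate
#    the least significant difference (LSD) multiple-comparison procedure.
if p_anova < 0.05:
    for a, b in combinations(groups, 2):
        t, p = stats.ttest_ind(groups[a], groups[b])
        print(f"{a} vs {b}: p = {p:.4f}")
```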
Twenty-four female C57BL/6J mice aged 7–8 weeks (purchased from the Animal Center of the Medical School of Xi’an Jiaotong University, SCXK2012-003) were housed in a sterile animal room at the School of Life Sciences and Technology, Xi’an Jiaotong University. One week after acclimatization, the experiments commenced. The mice were randomly divided into four groups: Sham group (Sham, n = 8), ovariectomized control group (OVX, n = 8), ovariectomized + moderate-intensity continuous training group (OVX + MICT, n = 8), and the ovariectomized + high-intensity interval training group (OVX + HIIT, n = 8). During the experimental period, the mice had free access to standard rodent feed and sterile water. The temperature and relative humidity in the animal room were maintained at 22 °C ± 2 °C and 60% ± 5%, respectively. The diurnal cycle was controlled at 12 h of light and 12 h of darkness. The entire procedure was reviewed and approved by the Biomedical Ethics Committee of the Medical School of Xi’an Jiaotong University in accordance with ethical principles, with an approval number of 2020-625. It was carried out in compliance with the “Guide for the Care and Use of Laboratory Animals” published by the National Institutes of Health (NIH Publication No. 8023, revised in 1978). For the OVX mice, a bilateral dorsal incision surgery was performed. A vertical incision was made on each side of the midline of the back, approximately one finger’s width away from the midline, at the point between the iliac bone and the ribs. The skin and subcutaneous fascia were cut open with scissors, and the abdominal muscles were cut along the edge of the erector spinae muscles. The abdominal cavity was opened with forceps supporting the fat, which was carefully lifted out. Upon locating the ovaries, the blood vessels and fat below them, as well as the uterus, were suture-ligated, and the ovaries were removed. The muscles and skin were sutured layer by layer, and the wound was disinfected with iodophor. For the Sham group mice, their ovaries were retained, and only an equivalent volume of fat next to the ovaries was removed. The wounds of the mice recovered well after the surgery, and no infections occurred.
In summary, both MICT and HIIT can effectively improve cardiovascular disease-related risk factors in OVX mice, but moderate-intensity continuous treadmill exercise is more effective at enhancing bone mineral density and improving bone microstructure in OVX mice. This suggests that, for postmenopausal women, opting for MICT may be more beneficial to both the cardiovascular and skeletal systems. ucOCN could serve as a metabolic biomarker of exercise-induced improvements in bone health, but it was not found to participate in the regulation of cardiovascular disease-related risk factors, at least in the current study.
Links between Neuroanatomy and Neurophysiology with Turning Performance in People with Multiple Sclerosis
72a31c28-6775-40ff-b655-86dc9898d630
10490793
Physiology[mh]
Multiple sclerosis is a chronic, immune-mediated, demyelinating neurological disease of the central nervous system (CNS). Among young adults, multiple sclerosis is the major cause of neurological impairment, leading to irreversible long-term disability throughout the disease course . People with multiple sclerosis (PwMS) present with deleterious neural adaptations that contribute to symptom severity and disease progression . These adaptations can lead to consequences ranging from transient dysfunction to irreversible impairments . Progressive brain atrophy is a well-known feature of multiple sclerosis and is considered irreversible brain damage affecting both gray and white matter . Gray matter adaptations arise early and have been reported prior to clinical diagnosis . Additionally, studies have shown that gray matter alterations demonstrate stronger associations with motor and cognitive dysfunction when compared to lesion accumulation . While neuroanatomical adaptations are relevant hallmarks in multiple sclerosis, neurophysiological modifications also impact symptomatology. A common non-invasive brain stimulation technique applied to evaluate motor cortex-associated neurophysiological activity is transcranial magnetic stimulation (TMS) . The use of TMS over the motor cortex can provide relative measures of excitatory (i.e., glutamatergic) and inhibitory (i.e., GABAergic) corticospinal activity . In neurotypical adult brains, the homeostatic balance of excitatory and inhibitory neurotransmission ensures proper neuronal functioning, although in multiple sclerosis this balance appears disrupted . Though not fully understood, the excitatory imbalance is thought to result from large quantities of glutamate released by activated immune cells during periods of inflammation . This imbalance leads to excess extracellular glutamate and subsequently an excitotoxic environment for neural tissues . While excitatory levels naturally fluctuate, relapses, lesion formation, and disease progression are associated with periods of increased extracellular glutamate . As such, much of the TMS literature has focused on assessing excitation in PwMS . Although less common, researchers have used TMS to assess inhibitory activity, which has been associated with cognitive function and motor performance in several populations, including multiple sclerosis . Most motor performance research has focused on assessing the upper limbs, whereas the neural control of the lower limbs has received less attention . Though the neural control of walking remains a topic of continued investigation, there is agreement that walking involves spinal and supraspinal neural control mechanisms . A growing body of literature detailing supraspinal contributors to complex walking tasks has revealed neuronal firing and cortical thickness patterns associated with complex locomotor movements . For instance, complex locomotor movements such as turning while walking have been associated with corticospinal neurophysiological function, indicating that levels of corticospinal inhibitory activity relate to turning kinematics in healthy young and older adults , implying that supraspinal contributions are inherent and necessary for effective locomotor performance. While neuroimaging studies have examined associations between neuroanatomical structure, neurophysiological function, and walking, they have generally focused on straight-ahead walking.
However, the neural underpinnings associated with turning while walking remain elusive, especially in populations where turning performance is known to be compromised, such as those with multiple sclerosis. The present study aimed to assess the neuroanatomical and neurophysiological correlates of turning performance in people diagnosed with relapsing remitting multiple sclerosis (RRMS). We developed three primary hypotheses: first, that PwMS would demonstrate regional thinning within bilateral primary motor cortices compared to neurotypical control participants. Second, we hypothesized that PwMS would demonstrate reduced excitatory and inhibitory corticospinal activity compared to neurotypical controls. Lastly, we hypothesized that PwMS would demonstrate a positive association between motor cortex thickness and turning performance, a positive association between corticospinal excitation and turning performance, and a negative association between corticospinal inhibition and turning performance. 2.1. Population In total, 26 individuals with RRMS and 23 healthy controls completed two separate laboratory visits. Participants were excluded if they were unable to walk or stand for 10 min without the use of an assistive device, had any MRI- or TMS-related contraindications such as non-MRI-compatible implanted medical devices, implanted ferrous metal or metal fragments, facial tattoos or permanent makeup, or cochlear implants, had a personal or family history of epilepsy, were currently taking medications known to lower seizure threshold or known to be a contraindication for TMS, and/or had any additional musculoskeletal or vestibular condition. In addition, healthy control participants were free from any clinically diagnosed neurological condition or disease known to impact mobility. Further study enrollment details are displayed in . This study was approved by the Colorado State University Institutional Review Board (IRB# 18-7738 H), and all participants provided written informed consent prior to participation. 2.2. Turning Acquisition Participants performed three separate 360° turn trials at their self-selected fast pace, and 1 min of continuous but alternating 360° turns at their self-selected natural pace. To encourage natural turning, participants were instructed not to spin or conduct a military-style turn. All 360° turns were conducted in an open space while barefoot with research staff spotting. Participants also completed a series of 180° turns during a self-selected pace 2-min walk test. Participants were instructed to turn as if forgetting something in a room they had just left. 2.3. Turning Processing Both the 360° and 180° turn metrics were collected using Opal wireless inertial sensors and quantified through previously validated Mobility Lab software (Version 2) (Opal Sensors, APDM Inc., Portland, OR, USA) . The primary turning metrics for 360° in-place fast turns included turn duration (s), turn angle (°), and peak turn velocity (°/s). The primary turning metrics for 1-min 360° in-place turns included turn duration (s), turn angle (°), peak turn velocity (°/s), and number of turns completed . The primary metrics for 180° turns included turn duration (s), turn angle (°), peak turn velocity (°/s), and number of steps in the turn . 2.4. MRI Acquisition All participants underwent an MRI protocol on a Siemens 3T MAGNETOM Prismafit equipped with a 32-channel head coil.
The MRI protocol included: T1-weighted magnetization-prepared rapid gradient-echo (MP-RAGE) (repetition time (TR)/echo time (TE): 2400/2.07 ms, inversion time: 1000 ms, flip angle: 8°, echo train length: 0.49 ms, field-of-view: 256 mm (180 mm (RL), 256 mm (AP), 256 mm (FH)), slices: 224 (sagittal), resolution: 0.8 × 0.8 × 0.8 mm 3 ); and a T2-weighted fluid-attenuated inversion recovery (FLAIR) (TR/TE: 6000/428 ms, inversion time: 2000 ms, echo train length: 933 ms, field-of-view: 256 mm (176 mm (RL), 256 mm (AP), 256 mm (FH)), slices: 176 (sagittal), resolution: 1.0 × 1.0 × 1.0 mm 3 ). 2.5. MRI Processing Global and regional cortical thickness measures were reconstructed using the FreeSurfer (Version 6.0.0) recon-all processing pipeline . The T1 and T2-FLAIR images underwent the multimodal recon-all processing pipeline known to improve cortical parcellations and segmentations . Following quality assurance of each scan, a priori precentral and paracentral gyri regional thickness measures were exported for further analysis. The precentral and paracentral regions were defined using the Desikan-Killiany atlas through FreeSurfer . 2.6. Muscle Strength Acquisition Participants produced a series of maximal voluntary contractions (MVCs) to determine the maximal force output of each tibialis anterior (TA) muscle.
Filtered data were then imported into MATLAB (MathWorks, Natick, MA, USA), then rectified and processed using a custom MATLAB script to identify and quantify TMS measures. To analyze TMS measures, 100 ms prior to stimulation and 350 ms post-stimulation were extracted from the EMG trace for each stimulation. All EMG trace segments were visually inspected for quality; stimulations not resulting in a clear or expected stimulation response were removed from further analysis. The remaining traces were averaged together and underwent processing to quantify excitatory and inhibitory responses. Excitatory measures via MEPs are suggested to signify corticospinal excitability, largely used as a proxy for assessing relative levels of glutamatergic activity . To assess GABAergic inhibitory activity, three measures embedded within the silent period were assessed. Conventionally, the cortical silent period (cSP) is quantified as the duration in which muscle activity is diminished following the MEP, where shorter durations indicate reduced inhibitory activity. In the current manuscript, we assessed cSP duration as well as the average percent depth of the silent period (%dSP AVE ), and the maximum percent depth of silent period (%dSP MAX ) . 2.9. Statistical Analysis Statistical analysis was conducted in JMP Pro 15 with alpha levels set to 0.05 unless indicated otherwise. Between-group sex differences were examined using a chi-square test; all other between-group demographic variables were assessed using a two-sample t-test. All data are presented as mean ± SD unless noted otherwise. To assess differences between the three 360° fast-pace turn trials, a repeated measures analysis of variance was performed. No significant differences were observed and, therefore, all variables were averaged together. To assess between-group differences for turning metrics, we used linear mixed models. The linear mixed models included group, age, and sex as fixed effects, and subjects included as a random effect using unbounded variance components and the restricted maximum likelihood (REML) method. To assess differences for the region of interest (ROI) cortical thickness and TMS measures, again we used linear mixed models. The linear mixed model included group, hemisphere, age, and sex as fixed effects, group × hemisphere as an interaction, and subjects included as a random effect using unbounded variance components and the REML method. In all cases, inspection of residual plots showed equal variance. Post-hoc analyses were not performed as no interactions demonstrated significance. Pearson’s correlation coefficients were used to assess correlations between hemisphere-specific cortical thickness measures and turning variables, and hemisphere-specific TMS metrics and turning variables. Correlations were corrected for multiple comparisons using the Bonferroni correction method. Bidirectional correlation strengths (positive or negative) were classified as very strong (±0.9–1.0), strong (±0.7–0.9), moderate (±0.5–0.69), weak (±0.3–0.49), and negligible (±<0.30) .
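The silent-period measures named in Section 2.8 can be illustrated with a short Python sketch. The exact criteria implemented in the custom MATLAB script are not specified above, so the window bounds, suppression threshold, and resting-motor-threshold helper below are assumed operational definitions rather than the published pipeline; variable names are placeholders.

import numpy as np

FS = 2000                     # EMG sampling rate reported above (Hz)
PRE_MS, POST_MS = 100, 350    # analysis window around each stimulation (ms)

def silent_period_metrics(trace, stim_idx, sp_threshold=0.5):
    # Estimate MEP and silent-period metrics from an averaged, rectified EMG trace.
    # trace: 1-D numpy array of EMG samples; stim_idx: sample index of TMS delivery.
    pre = trace[stim_idx - PRE_MS * FS // 1000: stim_idx]
    post = trace[stim_idx: stim_idx + POST_MS * FS // 1000]
    baseline = pre.mean()                       # pre-stimulation mean muscle activity
    mep_norm = post.max() / baseline            # MEP amplitude normalized to baseline
    mep_peak = int(np.argmax(post))             # assume the silent period begins after the MEP peak
    below = post[mep_peak:] < sp_threshold * baseline
    if not below.any():                         # no suppression detected
        return {"mep_norm": mep_norm, "csp_ms": 0.0, "dsp_ave_pct": 0.0, "dsp_max_pct": 0.0}
    start = mep_peak + int(np.argmax(below))    # first suppressed sample
    recovered = post[start:] >= baseline        # EMG returns to pre-stimulation level
    end = start + (int(np.argmax(recovered)) if recovered.any() else len(post) - start)
    sp = post[start:end]
    return {
        "mep_norm": mep_norm,
        "csp_ms": (end - start) / FS * 1000,              # cortical silent period duration
        "dsp_ave_pct": 100 * (1 - sp.mean() / baseline),  # average percent depth of the silent period
        "dsp_max_pct": 100 * (1 - sp.min() / baseline),   # maximum percent depth of the silent period
    }

def meets_rmt_criterion(mep_peak_to_peak_uv):
    # Resting-motor-threshold rule from Section 2.7: peak-to-peak MEPs of at least
    # 50 microvolts in five or more of ten trials at a given stimulator intensity.
    return sum(a >= 50 for a in mep_peak_to_peak_uv) >= 5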
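For the correlation step in Section 2.9, the sketch below shows how Bonferroni-adjusted Pearson correlations and the quoted strength bands could be computed in Python. It is an illustration only, not the JMP Pro workflow used in the study, and the input dictionaries stand in for the hemisphere-specific brain measures and turning variables.

import numpy as np
from scipy import stats

def strength_label(r):
    # Descriptive label for |r| using the bands quoted above.
    a = abs(r)
    if a >= 0.9:
        return "very strong"
    if a >= 0.7:
        return "strong"
    if a >= 0.5:
        return "moderate"
    if a >= 0.3:
        return "weak"
    return "negligible"

def correlate_measures(brain_measures, turn_measures, alpha=0.05):
    # Pearson correlation between every brain measure and every turning variable,
    # with a Bonferroni-corrected significance threshold across all tests.
    pairs = [(b, t) for b in brain_measures for t in turn_measures]
    alpha_adj = alpha / len(pairs)
    results = []
    for b, t in pairs:
        r, p = stats.pearsonr(brain_measures[b], turn_measures[t])
        results.append((b, t, round(r, 3), strength_label(r), p, p < alpha_adj))
    return results

rng = np.random.default_rng(1)
brain = {"left_precentral_thickness_mm": rng.normal(2.4, 0.1, 26)}   # placeholder data
turns = {"turn_duration_s": rng.normal(3.0, 0.5, 26)}                # placeholder data
print(correlate_measures(brain, turns))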
3.1. Participants We were unable to detect a reliable ‘hot spot’ in certain hemispheres in one control and four multiple sclerosis participants, although the usable data from those participants were maintained for all further analysis. The demographic and clinical characteristics are summarized in . No significant differences were observed for age ( F (1, 48) = 0.13, p = 0.72) or sex (χ 2 (2) = 0.36, p = 0.55). Additionally, weight, height, and BMI were not significantly different between groups. 3.2. Turning Performance In-place 360° fast pace turns demonstrated significant differences between groups for turn duration ( F (1, 45) = 20.50, p < 0.001), peak turn velocity ( F (1, 45) = 23.82, p < 0.001), and turn angle ( F (1, 45) = 4.20, p = 0.046). Specifically, PwMS demonstrated significantly longer turn durations, slower peak turn velocities, and turn angles closer to 360°. Continuous, but alternating, self-selected pace 360° in-place turns demonstrated significant differences for all turn variables. PwMS demonstrated significantly longer turn durations ( F (1, 45) = 6.70, p = 0.01), slower peak turn velocities ( F (1, 45) = 6.90, p = 0.01), reduced (i.e., nearer to 360°) turn angles ( F (1, 45) = 7.03, p = 0.01), and fewer total turns completed ( F (1, 45) = 5.39, p = 0.03) compared to control participants. Turns completed during the self-selected pace 2-min walk test did not demonstrate significant differences between groups for turn duration ( F (1, 45) = 2.56, p = 0.12), peak turn velocity ( F (1, 45) = 0.63, p = 0.43), turn angle ( F (1, 45) = 0.10, p = 0.75), or number of steps to complete the turn ( F (1, 45) = 3.71, p = 0.06). The data for each type of turn and variable collected are reported in . 3.3.
Motor Cortex Thickness Cortical thickness of the precentral gyrus demonstrated a significant main effect of group ( F (1, 45) = 9.41, p = 0.004) with reduced thickness in PwMS, and a significant main effect of hemisphere ( F (1, 47) = 9.10, p = 0.004) with the right hemisphere demonstrating less cortical thickness for both groups. No significant group × hemisphere interaction ( F (1, 47) = 0.001, p = 0.98) was observed. The paracentral gyrus demonstrated a significant main effect of group ( F (1, 45) = 10.86, p = 0.002) such that PwMS demonstrated reduced cortical thickness. Further, there was no effect of hemisphere ( F (1, 47) = 1.23, p = 0.27) or group × hemisphere interaction ( F (1, 47) = 0.52, p = 0.47). Thickness values for each group and hemisphere can be observed in . 3.4. TMS Maximal strength for the TA muscles demonstrated a significant main effect of group ( F (1, 45) = 15.53, p < 0.001), with the multiple sclerosis cohort demonstrating overall weaker dorsiflexor output. No significant effects were found for RMT between groups ( F (1, 45) = 0.09, p = 0.33), hemispheres ( F (1, 47) = 0.25, p = 0.61), or the group × hemisphere interaction ( F (1, 47) = 0.12, p = 0.73). Motor cortex excitability was measured via MEP amplitude relative to the pre-stimulation mean muscle activity. For MEP amplitude normalized to the pre-stimulation average, there were no main effects of group ( F (1, 42.4) = 0.02, p = 0.90), hemisphere ( F (1, 43.1) = 1.57, p = 0.21), or a group × hemisphere interaction ( F (1, 43.1) = 2.45, p = 0.12). For the inhibitory TMS measures, no significant effect of group ( F (1, 43.9) = 0.05, p = 0.83), hemisphere ( F (1, 44.4) = 1.51, p = 0.23), or interaction ( F (1, 44.4) = 0.66, p = 0.42) was found for the cortical silent period (cSP) duration. For %dSP AVE there was a significant main effect of group ( F (1, 45.6) = 6.54, p = 0.01), although no main effect of hemisphere ( F (1, 45.9) = 0.18, p = 0.67) nor a group × hemisphere interaction ( F (1, 45.9) = 1.02, p = 0.32). The results show significantly reduced %dSP AVE for PwMS compared to their neurotypical counterparts. For %dSP MAX a significant main effect of group ( F (1, 44.1) = 5.87, p = 0.02) was observed, such that PwMS demonstrated reduced %dSP MAX . However, no main effect of hemisphere ( F (1, 44.3) = 0.23, p = 0.23) or group × hemisphere interaction ( F (1, 44.2) = 0.15, p = 0.70) was revealed. TMS-related measures for each group and hemisphere are reported in . 3.5. Associations 3.5.1. Associations between 360° In-Place Fast Turns and Neurophysiology and Neuroanatomical Structure Controls did not demonstrate any significant associations between turn variables and the TMS or MRI measures. In contrast, PwMS demonstrated significant negative associations between turn duration and left hemisphere inhibitory measures. Additionally, a significant negative association was revealed between turn duration and paracentral gyri thickness, together indicating that those with greater levels of left hemisphere corticospinal inhibition and paracentral thickness demonstrated shorter turn durations. For the right hemisphere, a significant positive association was observed between turn angle and MEP amplitude, indicating that those with greater right hemisphere excitability demonstrated greater 360° turn angles. details the associations between 360° turn measures and hemisphere-specific TMS and MRI variables. 3.5.2.
Associations between 360° Self-Selected Pace In-Place 1-min Continuous Turns and Neurophysiology and Neuroanatomical Structure For the 1 min of continuous 360° normal pace turns, controls did not demonstrate any significant associations between turning variables and the TMS or MRI measures. Alternatively, PwMS demonstrated significant associations between turn duration and number of turns completed and left hemisphere %dSP AVE and %dSP MAX . These associations demonstrated that those individuals with greater levels of inhibitory capacity performed turns in less time and completed more total turns over the course of 1 min. Additionally, significant associations were observed between turn velocity and left hemisphere paracentral and precentral cortical thickness, such that those with greater thickness demonstrated faster turn velocities. Specific to the right hemisphere, only one significant association showed a positive association between turn angle and MEP amplitude, indicating that those with greater right hemisphere MEP amplitude demonstrated greater 360° turn angles (i.e., further from 360°). details associations between 360° turn measures and hemisphere-specific TMS and MRI variables. 3.5.3. Associations between 180° Self-Selected Pace Turns While Walking and Neurophysiology and Neuroanatomical Structure For the 180° self-selected pace turns, neurotypical controls demonstrated a significant negative association between turn duration and right hemisphere precentral gyri thickness, indicating that neurotypical controls with greater precentral thickness perform turns in less time. No other significant associations were observed between turn variables and hemispheric TMS or MRI measures for neurotypical controls. PwMS demonstrated significant positive associations between turn velocity and left hemisphere silent period duration and %dSP AVE . These associations indicate that those with greater levels of inhibition demonstrate faster turn velocities. Moreover, a significant association was observed between turn duration and right hemisphere MEP amplitude, indicating that those with greater excitability perform turns in less time. details associations between 180° turn measures and hemisphere-specific TMS and MRI variables. Refer to , which provides correlation scatter plots for all left hemisphere significant associations between neuroanatomical structure and turning performance and neurophysiological function and turning performance.
4.1. Turning Significant differences were observed for each 360° turn measure, indicating that PwMS, independent of speed, maintain reduced turning performance compared to controls. These current results closely align with prior studies reporting increased turn duration and velocity in PwMS . Turn angles for the 360° turns were closer to the instructed 360° mark in PwMS compared to controls. This finding coincides with Shah et al., who reported reduced turn angles in PwMS during 7 days of continuous mobility monitoring when compared to controls .
While the study designs were different, we postulate that PwMS perform reduced turn angles due to an abundance of caution and the ability to produce an appropriate compensatory response if needed . However, the true functional significance of reduced turn angles in PwMS remains unknown. No significant differences were observed between groups for any of the 180° turn variables. These results partially diverge from previously reported 180° turn differences between PwMS and controls. For instance, Spain et al. reported similar peak turn velocities between groups during an instrumented Timed Up and Go (iTUG) test but significantly longer 180° turn durations in PwMS . We suspect the differences are likely task related given that the iTUG and the 2-min walk have unique task objectives. Together, these results may indicate that PwMS demonstrate altered turning characteristics and subsequently reduced turning performance for 360° in-place turns, although no differences emerged for 180° turns while walking. This possibly indicates that task complexity and turn style may reveal important turn-related kinematic differences between PwMS and controls. 4.2. MRI The precentral and paracentral gyri were chosen a priori based on the known influence and integration of motor commands converging within those cortical regions along with prior evidence demonstrating disease-related susceptibility to cortical atrophy . The present results are consistent with prior investigations demonstrating motor cortex gray matter thinning in PwMS with the RRMS phenotype . 4.3. TMS A necessary measure for any TMS study is the determination of the RMT, thereby normalizing responses across study participants. The majority of studies assessing RMT have reported no differences between healthy controls and PwMS, though greater thresholds for PwMS have been reported (see review by Snow et al. ). In alignment, our results demonstrated no significant differences between groups for RMT . While the mechanisms associated with RMT and the generation of transmembrane excitation remain inconclusive, these results may indicate that both groups demonstrated similar pyramidal fiber orientation within the stimulated region. Indeed, modeling studies indicate that pyramidal cell structure and orientation influence the electrical fields produced by TMS, which are necessary for MEP production . Glutamatergic activity is a commonly measured neurophysiological outcome in the multiple sclerosis literature and often assessed via the MEP amplitude . With no observed differences between groups, the current results are consistent with numerous studies that also accounted for important covariates . These results may indicate that our two cohorts demonstrated similar density of cortico–motor neuronal projections stemming from the motor cortex, thus providing similar glutamatergic responses to TMS . However, it should be noted that MEP measures and their physiological underpinnings are complex and not fully understood . Inhibitory neurotransmission has demonstrated associations with motor learning, neural plasticity, and motor control . However, the role of inhibitory neurotransmission within the corticospinal system of PwMS remains inconclusive, with studies reporting inconsistent findings . Moreover, there remains a substantial lack of TMS-related studies focused on assessing the lower limbs. Indeed, only one other study, performed by Tataroglu et al., has explored inhibitory activity via single pulse TMS targeting the lower limbs in PwMS .
Their results demonstrated significantly longer cSP durations for PwMS compared to neurotypical controls and, furthermore, greater durations for those with progressive phenotypes of the disease. In contrast, the current results demonstrated no differences in silent period duration between PwMS and controls. Upon further exploration, Tataroglu et al. reported an average silent period duration of 118.1 ± 74 ms for their RRMS group, while the current multiple sclerosis cohort demonstrated similar durations of 108.3 ± 50 ms and 123.9 ± 68 ms for the left and right hemispheres, respectively. Interestingly, and dissimilar to our results, considerably shorter silent period durations were reported for their control cohort . A variety of factors can influence silent period duration including stimulation intensity, contraction force, and number of stimulations, among others . These protocol differences could contribute to the differences observed, limiting the comparative interpretation between results. Intrahemispheric and interhemispheric inhibition are most often quantified through the silent period duration; however, percent average and max depth of the silent period have been used to quantify levels of interhemispheric inhibitory activity . Studies assessing interhemispheric inhibition have shown %dSP AVE/MAX demonstrate greater sensitivity for delineating between young and older adults compared to the conventionally reported silent period duration , although these measures have not been widely reported for intrahemispheric (i.e., cSP) inhibitory analyses. One potential reason is the greater inhibitory influence (i.e., EMG amplitude during the silent period nears 100% muscle deactivation) for cSPs compared to iSPs, meaning researchers may feel these measures would be less likely to demonstrate group differences . While the physiological mechanisms underlying these metrics remain under investigation, they are suspected to provide unique perspectives of GABAergic inhibitory influence and have been characterized as both sensitive and reliable, where a greater percent indicates larger corticospinal inhibitory influence at the muscle . Our results demonstrated similar silent period durations, albeit with significantly reduced inhibitory influence in PwMS compared to controls. While these results are a new expression of inhibitory data, they align with prior studies reporting no differences or reduced inhibitory activity in PwMS compared to neurotypical adults . The current inhibitory results add to the existing literature by revealing similar silent period durations for the lower limbs between groups while demonstrating reduced inhibitory influence on the TA in PwMS. 4.4. Associations Given that the data did not demonstrate a hemispheric difference for inhibition, we could have averaged the hemispheric results; however, we decided to maintain independent hemispheric values. Principally, this decision was guided by evidence of brain lateralization, or the notion of each hemisphere integrating specific responses and actions . For instance, Fling et al. demonstrated associations between proprioceptive balance control and microstructural integrity of Brodmann Area 3a to be restricted entirely to the right hemisphere in PwMS .
Moreover, voluntary inhibition of manual movements (via go–no-go tasks) has been postulated to be right-lateralized to the frontal-basal ganglio–thalamic pathway , although evidence to contradict that assumption exists, suggesting that both hemispheres work in cooperation . Furthermore, it has been postulated that the left hemisphere may be responsible for planning motor actions, and subsequently has become specialized in regulating well-established patterns of behavior characterized as routine, familiar, and internally directed . While the contributions of each hemisphere remain under investigation, particularly for lower limb directed movements, evidence indicates each hemisphere has some degree of functional independence. 4.4.1. Neuroanatomical Associations Gray matter atrophy has been associated with clinical indicators of physical disability, cognitive decline, and disease duration . Importantly, gray matter atrophy demonstrates independence from white matter degradation and provides stronger associations with clinical indicators . While gray matter atrophy demonstrates clinically relevant correlations in multiple sclerosis, recent reports suggest atrophic patterns are predominantly regional rather than globally diffuse . Furthermore, regional patterns of atrophy appear to provide distinct relationships with functional disability . Specifically, it has been observed that left-lateralized atrophic clusters of the sensorimotor cortex demonstrate significant negative associations with disease severity using the Expanded Disability Status Scale (EDSS) . Given that the EDSS has a bias towards locomotor disability, this negative association suggests that atrophy of the sensorimotor region influences functional disability. Consistent with prior results demonstrating an association between disease severity and cortical thickness of the motor cortex, our results further demonstrate motor cortex atrophy associated with specific characteristics of turning performance, revealing that reduced cortical thickness of the precentral and paracentral gyri correlates with 360° turn duration, peak velocity, and angle in PwMS and 180° turn duration in controls. This provides further evidence that motor cortex atrophy influences functional disability, specifically turning performance. To our knowledge, one other study, performed by Lorefice and colleagues, assessed the associations between global cortical atrophy and components of the iTUG . The authors showed a significant negative association between global cortical gray matter volume and completion time along with significant positive associations between turn velocity and global cortical gray matter volume , thus revealing a relationship between cortical gray matter structure and lower limb dynamic motor performance, where greater cortical gray matter volume relates to faster turns and greater iTUG performance. It must be noted that turns performed during the iTUG are 180° turns, for which, in our study, only turn duration in the control group demonstrated significance. We believe these differences are a product of unique task objectives and of associating specific turning variables with global, rather than regional, measures of atrophy. 4.4.2. Neurophysiologic Associations Glutamate and GABA are critical for the development and regulation of descending motor commands.
While intrahemispheric glutamatergic activity describes the balance between excitation and inhibition, TMS-related glutamatergic measures are often less sensitive in motor control assessments compared to inhibitory neurotransmission. The current results show a similar trend, revealing many more significant inhibitory associations than excitatory associations with turning performance. While our results revealed no group differences for normalized MEP amplitude, two associations emerged as significant, with smaller amplitudes correlating with reduced turning performance in PwMS. Corresponding with our results, prior research has shown weak-to-moderate associations between disease severity and MEP amplitude, such that reduced glutamatergic activity relates to greater disease severity. While older adults and PwMS are distinct in many ways, they do demonstrate similar mobility deficits; for instance, both groups demonstrate very similar turning characteristics. Additionally, the associations from the present study demonstrate similarities to our previously published healthy aging data, such that greater inhibition in the impaired groups (e.g., PwMS and older adults) relates to better turning performance. While cSP duration did not demonstrate a groupwise difference, it did reveal a negative association with 360° in-place fast turn duration and a positive association with 180° peak turn velocity. These results may indicate that a temporal inhibitory influence (i.e., cSP duration) on lower limb muscles is important for temporally mediated turning movements. Interestingly, the two measures of inhibitory influence demonstrated associations with additional turning variables, potentially implying that inhibitory influence could be a sensitive measure for lower limb dynamic movements requiring higher level neural command. While there were many associations pertaining to neurophysiological activity and turning performance for PwMS, no significant association was observed in the control group. However, directionally, the associations in the control group broadly opposed those in the multiple sclerosis group, such that neurotypical adults with greater inhibitory activity demonstrated reduced turning performance. Interestingly, this result is similar to previously published observations for the inhibitory control of turning in healthy young adults. While these results demonstrated a similar pattern to young adults, it must be noted that the inhibitory adaptations associated with middle-aged adults largely remain unexplored. However, we postulate that healthy middle-aged adults may rely on alternate neural resources, such as subcortical and/or spinal level modulation, for successful bilateral lower limb control. Interestingly, all the correlations between turning and inhibition were lateralized to the left hemisphere. These results are particularly interesting and represent novel findings regarding inhibitory lateralization and turning performance in PwMS. While the significance of left hemisphere lateralization for turning performance in PwMS remains unknown, left hemispheric specialization is thought to play a significant role in enhancing movement planning and execution. Further, studies have provided evidence suggesting a particular role for the left hemisphere in motor learning tasks that require movement planning and execution for future actions.
These ideas are consistent with pathological populations that have demonstrated greater left, rather than right, hemispheric damage resulting in ideomotor apraxia, a disorder where the observed spatiotemporal motor deficits are thought to arise from impaired planning due to damage of the left frontal and parietal brain regions. Additionally, the left parietal cortex has been implicated for its role in the preparation of selected overt movements and the decision of which limb to use for particular tasks. Collectively, it appears that the left hemisphere is particularly well suited for movement planning and execution, which has been shown to be disrupted in PwMS. These results may indicate a compensatory inhibitory lateralization meant to assist in properly executing the planned turn. Interestingly, the control group did not demonstrate similar associations, indicating the utilization of different neural mechanisms to perform the same task, though the broad age range of the control cohort may have diminished any associations. To date, no studies have assessed both cortical thickness and neurophysiological activity in PwMS specific to the lower limbs. Moreover, no studies have incorporated objective measures of dynamic lower limb movements to identify neural mechanisms associated with performance. This study provides evidence to suggest a relationship between neuroanatomical structure and neurophysiological function of the motor cortex and turning performance in PwMS. 4.5. Limitations The inclusion of only individuals with RRMS with an average EDSS of 3.5 could be a limitation, as it does not account for other disease phenotypes or greater levels of disease severity. Therefore, these results should be interpreted cautiously when relating them to other phenotypes or severity levels. Given that the leg region of the motor cortex (i.e., paracentral gyrus) lies within the longitudinal fissure, there was a slight possibility of TMS stimulation spreading to the homologous paracentral gyrus. However, during ‘hot spot’ detection, MEPs were collected simultaneously from both legs with special care taken to elicit MEPs from the targeted hemisphere and corresponding TA. Additionally, we did not include specific MRI sequences suitable for the detection of gray matter lesions, such as a double inversion recovery sequence. However, the analysis performed for this particular study integrated T1 + T2-FLAIR data, which have been shown to significantly improve cortical segmentation and parcellation and the identification of cortical atrophy. Despite the vertex-wise analysis, we cannot rule out that white matter atrophy could have influenced the results. While FreeSurfer segmentations were rigorously assessed for errors, differences could exist between subjects with or without sulci deformation (i.e., widening) due to white matter atrophy. However, it remains unclear whether this would preferentially affect the results of certain cortical regions.
The results from this study demonstrate that in PwMS, neuroanatomical structure and neurophysiological function are related to turning performance. Upon closer inspection, the associations between inhibitory activity and turning performance are characteristically stronger than those between motor cortex thickness and turning performance. While the statistical assumptions for a mediation analysis were not met, such an analysis could provide greater clarity as to the degree of influence inhibition has on the association between cortical thickness and turning performance.
From these results, PwMS perform turns more similarly to their control counterparts when greater inhibitory activity and motor cortex thickness are present. Finally, these results indicate that PwMS may utilize higher-order, cortically controlled neural mechanisms to perform dynamic movements typically associated with fall risk.
Advancements in Brain Aneurysm Management: Integrating Neuroanatomy, Physiopathology, and Neurosurgical Techniques
1b28c804-4752-4902-a184-1ca28d96ca45
11596862
Anatomy[mh]
Intracranial aneurysm (IA) is a cerebrovascular disorder characterized by an abnormal focal dilation caused by a weakened region within the wall of a cerebral artery. Both microsurgical and endovascular treatments aim to exclude aneurysms from cerebral circulation, thereby preventing their rupture. Despite significant advancements in endovascular techniques, achieving complete and durable aneurysm occlusion remains a complex challenge. The underlying biological mechanisms driving aneurysm growth and recanalization are not yet fully elucidated. An aneurysm represents a weakened area within a blood vessel wall that protrudes, disrupting normal blood flow and potentially leading to a hemorrhagic stroke. Vascular microsurgery seeks to repair aneurysms by isolating them from the arterial circulation while avoiding rupture or damage to adjacent tissues . This surgical procedure involves the careful dissection of surrounding structures and the retraction of brain tissue to expose the aneurysm neck, where a titanium clip is applied to prevent rupture. Errors in this procedure can result in ischemic injury or hemorrhage . In cases of large aneurysms with complex anatomical features, multiple clips may be necessary. Given the high risks associated with these procedures, there is limited opportunity for residents to practice, underscoring the importance of neurosimulation tools. These tools enhance training by providing opportunities to gain experience and manage errors within a controlled, safe environment . Training methods include biological tissue models , mannequins , and virtual reality (VR) systems . Prominent VR systems include NeuroTouch, designed for tumor removal, and ImmersiveTouch, which focuses on vascular neurosurgery. However, many VR systems designed for vascular applications fail to adequately model brain tissue . Research has demonstrated significant sex differences in the prevalence of IA. A recent study reported that incidental unruptured intracranial aneurysms (UIAs) are notably more prevalent in females than in males, supported by an analysis of over 14,000 adults, which revealed a higher odds ratio (OR) for females (OR, 1.92 [95% CI, 1.33–2.84]) . Long-term studies and meta-analyses further indicate that females are at a higher risk of developing de novo aneurysms, with ORs ranging from 1.81 to 3.83 . Additionally, female gender has been identified as an independent risk factor for the growth of unruptured cerebral aneurysms. A long-term follow-up study involving 87 patients found that females were more likely to experience aneurysm growth of at least 1 mm after adjusting for age (OR, 3.36 [95% CI, 1.11–10.22]) . Another study analyzing 1325 unruptured aneurysms concluded that female sex was the only significant risk factor for aneurysm growth ( p = 0.0281) . The location of aneurysms also varies by sex. An analysis of 682 aneurysms revealed that females are more likely to have aneurysms along the internal carotid artery (54% in females vs. 38% in males), whereas males more commonly present with aneurysms along the anterior cerebral artery (29% in males vs. 15% in females; p = 0.001) . These findings were corroborated by another study of 444 aneurysms, which showed a higher incidence of anterior cerebral artery aneurysms in males (81% vs. 49% in females, p < 0.0001) and a greater prevalence of internal carotid artery aneurysms in females (64% vs. 24% in males, p < 0.0001) . 
A recent study involving 1277 patients with ruptured IA identified female sex as a significant risk factor for the presence of multiple aneurysms (relative risk ratio, 1.80 [95% CI, 1.31–2.48]). The underlying causes of these sex differences in aneurysm location and multiplicity may involve hormonal influences on vascular remodeling and a higher propensity for vessel wall weakness in females. Additionally, hemodynamic factors, such as larger vessel diameters in males and higher wall shear stress in certain arteries in females, may contribute to the observed sex differences in aneurysm prevalence and location. In the human vascular system, blood flow can be classified as either laminar or turbulent, depending on its velocity and the geometric characteristics of the vessels. Laminar flow is typically observed in large, straight vessels, while turbulent flow is more common in smaller, curved vessels. Under turbulent flow conditions, the blood velocity is relatively high and the geometric complexity of the vessels increases, which in turn exerts mechanical forces on the vessel walls. Persistent abnormal blood flow can disrupt endothelial cell function, making it a key factor in the development of IA. The mechanical forces exerted by blood flow on vessel walls include endothelial cell stretch, the impulse force on the vessel wall, and the tangential force, commonly referred to as wall shear stress (WSS). WSS is measured in pascals (Pa) or newtons per square meter (N/m²) and represents the friction between layers of blood moving at different velocities within the vessel. In healthy large blood vessels, the average WSS is approximately 15 dynes/cm², although it varies considerably across different regions of the vascular system due to the complex vessel architecture. For instance, WSS ranges from 9.5 to 15 dynes/cm² in the common carotid artery and from 3.9 to 4.9 dynes/cm² in the femoral artery. In vessels with large diameters and regular shapes, WSS tends to remain stable; however, in curved or bifurcated vessels, abnormal WSS can continuously stimulate the vessel walls, leading to compromised structural integrity and triggering inflammatory responses. The brain, which receives 20% of the body’s circulating blood volume, is characterized by a particularly intricate vascular anatomy, especially within the circle of Willis, which contains numerous curves and bifurcations. Computational fluid dynamics (CFD) is a widely utilized tool for investigating and simulating hemodynamic conditions in cerebral arteries. CFD allows for the visualization and measurement of vessel dynamics, including WSS and the oscillatory shear index (OSI), the latter of which reflects the resilience of vessel walls. This technology enables non-invasive assessment of blood dynamics, facilitating the calculation of blood flow quantity and distribution across different regions of the brain. Hypertension, smoking, alcohol consumption, environmental factors, and genetic predispositions have all been identified as significant risk factors for the development of IAs. Research has demonstrated that abnormal hemodynamics is closely associated with the occurrence and progression of IAs. The circle of Willis is the most common site for IAs due to its unique anatomical structure, which is particularly susceptible to irregular hemodynamic stimulation. According to a study, approximately 90% of saccular IAs are located within the circle of Willis, with the remainder predominantly found at vessel bifurcations.
This circle serves as a crucial anastomotic arterial ring, connecting the anterior and posterior circulations as well as the left and right hemispheres. A relevant study reported that in autopsy examinations, the integrity of the circle of Willis is observed in only 40% of the population, with anatomical variations present in more than half of individuals, resulting in diverse and complex intracranial blood flow patterns . In addition to congenital variations in the circle of Willis, many individuals experience vascular tissue abnormalities due to genetic factors, which increase their susceptibility to aneurysm formation under hemodynamic stress. The elastin layer in the vascular wall of IA is often absent, potentially due to abnormal gene expression of structural proteins. For instance, between 2% and 10% of patients with polycystic kidney disease develop aneurysms because of a genetic inability to encode a specific vascular structural protein. Similarly, in Marfan syndrome, IA can arise due to the lack of collagen-encoding genes . Ehlers–Danlos syndrome is another genetic disorder that can contribute to the development of IA by affecting connective tissues . Both Marfan and Ehlers–Danlos syndromes can also predispose individuals to a range of other conditions, including dermatological issues, cardiovascular diseases, gastrointestinal disorders, osteoarthritis, and even organ ruptures . To better understand the factors influencing aneurysm location within cerebral arteries, researchers have explored the relationship between aneurysm location and wall shear stress (WSS) levels. High WSS and high WSS gradient environments have been shown to induce destructive remodeling of the arterial wall, similar to the processes involved in aneurysm initiation. Persistent primitive olfactory arteries (PPOA) and azygos pericallosal arteries, although rare, have been identified as potential sites for aneurysm formation due to their distinctive structural characteristics, highlighting the significance of anatomical variations in aneurysm development. Furthermore, increased regional blood flow under high WSS conditions has been implicated in the formation of nascent aneurysms, which exhibit histological features akin to those observed in human aneurysms, including the absence of the internal elastic lamina and a thinned media layer . In the progression of aneurysms, longitudinal blood flow impinging on the vessel wall is recognized as a critical risk factor for aneurysm growth, as continuous blood flow exacerbates damage to the vascular structure. Low wall shear stress (WSS) combined with a high oscillatory shear index (OSI) has been shown to promote aneurysm growth, with regions of growth often exhibiting low shear conditions and increased oscillatory flow patterns. This low WSS environment is analogous to the conditions that favor the formation of atherosclerotic plaques, which may explain why atherosclerotic lesions are frequently observed within aneurysms. Low WSS also fosters vascular inflammation and endothelial dysfunction, which gradually compromise the structural integrity of the vascular wall, thereby facilitating aneurysm development. While high WSS is typically associated with the initiation of aneurysms, low WSS is more closely linked to the rupture of aneurysms. 
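For reference, the two hemodynamic quantities central to this discussion are conventionally defined as follows; these are the standard textbook forms rather than expressions taken from the cited studies:

\[
\tau_w = \mu \left. \frac{\partial u}{\partial y} \right|_{\text{wall}},
\qquad
\mathrm{OSI} = \frac{1}{2}\left(1 - \frac{\left| \int_0^T \vec{\tau}_w \, dt \right|}{\int_0^T \left| \vec{\tau}_w \right| dt}\right),
\]

where \(\mu\) is the dynamic viscosity of blood, \(u\) is the flow velocity parallel to the wall, and \(T\) is the duration of the cardiac cycle. For idealized Poiseuille flow in a straight vessel of radius \(r\) carrying volumetric flow \(Q\), the wall shear stress reduces to \(\tau_w = 4\mu Q/(\pi r^3)\). Since 1 Pa = 10 dyn/cm², the typical value of roughly 15 dynes/cm² quoted earlier corresponds to about 1.5 Pa, while OSI is dimensionless, ranging from 0 for purely unidirectional shear to 0.5 for fully oscillatory shear.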
Although the precise relationship between WSS levels and aneurysm growth or rupture is still a subject of debate, there is a general consensus that both low and high WSS play a significant role in influencing the natural history of aneurysms . IA can be classified through various schemes that address different aspects of the condition. IA represent a heterogeneous group of lesions, with the most prevalent type being saccular or berry aneurysms, which account for approximately 90% of all aneurysms. Other types include fusiform aneurysms, which involve an extended segment of the vessel; traumatic aneurysms; mycotic aneurysms, which are associated with underlying infectious processes; dissecting aneurysms; and microaneurysms, typically found on small perforator vessels as a result of chronic hypertension. Around 85% of aneurysms are located in the anterior circulation of the circle of Willis . Clinically, they are categorized based on their rupture status: either ruptured or unruptured. Morphologically, aneurysms are divided into saccular and non-saccular types, with non-saccular IAs further subdivided into fusiform, dolichoectatic, and dissecting aneurysms. In terms of their location within the intracranial circulation, aneurysms are classified as either anterior circulation or posterior circulation. The angioarchitecture of aneurysms is particularly important for planning management strategies and is categorized based on neck size and its relationship to the dome. These classification systems not only provide a detailed description of aneurysms but also play a crucial role in predicting prognosis, planning management, and determining appropriate treatment strategies. For example, despite advances in the understanding of aneurysm pathophysiology and technological developments, aneurysms that are larger than 10 mm, have a wide neck, an unfavorable dome-to-neck ratio (<2), are located in the posterior circulation, and have a fusiform configuration continue to present significant therapeutic challenges, with over 20% of such cases not responding well to even the most advanced endovascular or surgical treatments available . The current trend in IA size classification now favors the definitions proposed by the Japanese UCAS study: small (<5 mm), medium (5–10 mm), large (10–25 mm), and giant (>25 mm). This size classification system is recommended based on natural history data for IAs, as it more accurately encompasses higher-risk demographics. Additionally, it is suggested that a wide-neck aneurysm be defined as one with a neck diameter greater than 4 mm or a dome-to-neck ratio (DNR) less than 2, until further research provides evidence to support lowering the DNR threshold. The appropriateness of these definitions for IA size classification remains under evaluation and is a subject of ongoing discussion and research . Aneurysms are classified based on their location within the brain’s vascular system. In the anterior circulation, aneurysms may be located at the anterior communicating artery (AComm), specifically at the junction of the anterior cerebral arteries, as well as at various segments of the internal carotid artery (ICA). Within the ICA, aneurysms may occur at the ophthalmic segment near the origin of the ophthalmic artery, at the posterior communicating artery (PComm), where it branches from the ICA, and at the bifurcation, where the ICA divides into the middle cerebral artery (MCA) and anterior cerebral artery (ACA). 
Middle cerebral artery (MCA) aneurysms typically arise at the bifurcation or trifurcation points of the MCA. In the posterior circulation, aneurysms may be located on the basilar artery, either at its bifurcation or along its length, as well as on the vertebral arteries where they converge to form the basilar artery. Posterior inferior cerebellar artery (PICA) aneurysms are typically found at the junction of the vertebral artery and PICA. Additionally, peripherally located aneurysms involve branches of the cerebellar arteries, including the superior cerebellar artery (SCA), anterior inferior cerebellar artery (AICA), and posterior inferior cerebellar artery (PICA). AComm aneurysms are the most prevalent type of IA, comprising 23–40% of all IA and 12–15% of unruptured cases. These aneurysms are particularly common among patients under 30 years of age. Due to their distinct anatomical and hemodynamic characteristics, AComm aneurysms have a higher likelihood of rupture compared to other types of IA . The anatomical features of an AComm aneurysm significantly influence the choice between microsurgery and interventional embolization. Factors such as the aneurysm’s size and orientation play a crucial role in determining the complexity of the surgery and selecting the most appropriate surgical approach. Furthermore, the presence of calcification and intra-aneurysm thrombosis necessitates specific precautions during surgical clipping and may increase the risk of complications . The anatomical structure of the bilateral A1 and A2 segments is critical in the formation of AComm aneurysms. The development of multiple aneurysms at the AComm is influenced by several factors, including hemodynamic stress. The diameters of these segments and their proportional relationships are closely associated with the formation and rupture risk of aneurysms. For instance, the diameter ratio between the A1 and A2 segments is significantly correlated with the likelihood of aneurysm rupture. During clipping surgeries or interventional embolization, the diameters and anatomical relationships of the bilateral A1 and A2 segments must be carefully considered to determine the optimal side for surgical clipping, assess the operation’s difficulty, and evaluate the potential need for stenting during embolization . Previous studies have identified the presence of a dominant A1 segment and the dysplasia or deficiency of the contralateral A1 segment as key factors in the formation of AComm aneurysms. Additionally, research has shown that smaller A2 diameters and larger A1/A2 diameter ratios are associated with an increased likelihood of AComm aneurysm formation and rupture . The angle between the A1 and A2 segments also plays a role in aneurysm formation and rupture risk. It was found that a smaller angle between these segments increased the probability of aneurysm development, particularly in patients with a dominant A1 segment . These findings indicate that the absence or dysplasia of the A1 segment is closely associated with aneurysm formation. Research found that 49.8% of patients with AComm aneurysms exhibited hypoplasia of the A1 segment, a condition strongly correlated with anterior cerebral artery infarcts and adverse prognosis following aneurysm clipping. Multivariate analysis further identified A1 segment hypoplasia as an independent risk factor for poor prognosis after aneurysm clipping . 
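Returning briefly to the size and neck-geometry definitions given earlier (the UCAS-based size bands and the wide-neck criterion of a neck greater than 4 mm or a dome-to-neck ratio below 2), the short sketch below encodes them directly. The boundary handling and the use of the maximum dome dimension as the numerator of the dome-to-neck ratio are simplifying assumptions for illustration only, not a validated clinical rule.

```python
def classify_aneurysm(dome_mm: float, neck_mm: float) -> dict:
    """Encode the UCAS-style size bands and the wide-neck criterion quoted in the text.

    dome_mm : maximum dome dimension in mm (used here as a stand-in for dome height)
    neck_mm : neck diameter in mm
    """
    if dome_mm < 5:
        size = "small"
    elif dome_mm <= 10:
        size = "medium"
    elif dome_mm <= 25:
        size = "large"
    else:
        size = "giant"

    dnr = dome_mm / neck_mm                  # dome-to-neck ratio (DNR)
    wide_neck = neck_mm > 4 or dnr < 2       # wide neck: neck > 4 mm or DNR < 2
    return {"size": size, "DNR": round(dnr, 2), "wide_neck": wide_neck}

# Example: a 12 mm aneurysm with a 7 mm neck -> 'large', DNR ~ 1.71, wide-neck
print(classify_aneurysm(12, 7))
```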
Anatomical variations, such as the infraoptic course of the A1 segment, are particularly noteworthy due to their significant impact on the occurrence and management of aneurysms. The first classification of the relationship between the bilateral A2 segments in cases of superiorly projecting AComm aneurysms appeared in 2008, dividing them into two categories: the open A2 plane side (where the A2 segment near the aneurysm body is more posterior) and the closed A2 plane side (where the A2 segment is more anterior). Surgical procedures performed on the closed A2 plane side present challenges in exposing the aneurysm neck, often necessitating the removal of the gyrus rectus and manipulation of the A2 segment. These actions increase the risk of residual aneurysm necks and postoperative complications. Conversely, it was suggested that surgery performed on the open A2 plane side is both easier and safer for aneurysm neck exposure, thereby reducing the likelihood of postoperative complications. Magnetic resonance imaging (MRI) is highly effective for visualizing soft tissue structures; however, this imaging modality is contraindicated in certain patients. These contraindications include the presence of medical devices such as pacemakers or metal implants, as well as conditions like severe claustrophobia. In cases of claustrophobia, anesthesia may be administered to facilitate the procedure. Recent advancements, including 4D MRI and 3D contrast-enhanced MRI, have demonstrated promising potential in enhancing diagnostic accuracy and improving the follow-up of cerebral aneurysms and vascular abnormalities. While computed tomography angiography (CTA) is generally preferred as the initial diagnostic test for ruptured aneurysms, magnetic resonance angiography (MRA) can be used to confirm the diagnosis in later stages. Although CTA is effective in the early stages, its utility diminishes in patients with diffuse subarachnoid hemorrhage, severe anemia, or minimal bleeding that is absorbed into the cerebrospinal fluid. Furthermore, digital subtraction angiography (DSA) is regarded as the gold standard for detecting IA due to its superior accuracy, though it is not considered cost-effective as an initial diagnostic test. As time progresses, identifying a ruptured aneurysm becomes increasingly challenging due to the diffusion of blood throughout the intracranial cavity. The location of blood collections within the intracranial cavity may help indicate the site of the original injury. For instance, ruptures of middle cerebral artery aneurysms often result in blood accumulating in proximal fissures such as the Sylvian fissure. Despite its invasive nature, DSA remains a valuable tool for radiologists, aiding in both diagnosis and the guidance of appropriate treatment. Combining CTA with DSA provides the most comprehensive imaging for IA, allowing detailed visualization of flow patterns and aneurysm characteristics. Imaging is critical for the detection and characterization of aneurysms, as it can reveal essential details such as the aneurysm’s location, size, morphology, and geometry. These factors are crucial in determining the appropriate therapeutic strategy, whether that involves surgical intervention or conservative management. DSA, a fluoroscopic technique utilizing iodine contrast, produces high-resolution images of intracranial blood vessels by digitally subtracting surrounding tissues.
It is considered the gold standard for imaging IA due to its superior spatial resolution, specificity, and sensitivity, enabling precise determination of morphological characteristics like size and neck diameter. Newly developed 3D navigation systems have shown substantial benefits in aneurysm surgery. Neuronavigation enables the use of minimally invasive techniques, allowing surgeons to reach aneurysms more quickly while minimizing cortical damage. In a study using neuronavigation in 12 cases of distal anterior cerebral artery aneurysms, results showed safer surgeries with real-time imaging, smaller craniotomies, and no complications. Furthermore, neuronavigation has proven particularly advantageous in identifying distal middle cerebral artery aneurysms associated with intracerebral hematoma following rupture, enhancing surgical precision and outcomes. The advent of 3D rotational angiography (3DRA) has further enhanced DSA’s spatial resolution by eliminating imaging errors caused by the superposition of vascular structures, thereby enabling the visualization of small IAs (less than 3 mm). However, DSA is an invasive procedure and carries risks associated with the use of intra-arterial devices and iodine-containing contrast agents, including neurological complications (0.1–1%) and severe allergic reactions (0.05–0.1%). To address these concerns, several non-invasive imaging techniques, such as CTA, have been developed. CTA offers specificity and sensitivity levels that nearly match those of DSA for detecting IAs larger than 3 mm [sensitivity: 93.3–97.2%; specificity: 87.8–100%]. However, CTA is less effective at detecting small IAs located near the skull bone due to the similar absorption of ionizing rays by calcium and iodinated contrast agents [sensitivity: 61%]. Techniques like match mask bone elimination (MMBE) have been developed to enhance specificity by eliminating bone-induced signals, but these require longer exposure to ionizing radiation and are sensitive to patient movement. Dual-energy CTA (DE-CTA) has improved material differentiation capabilities, reducing artifacts from bony structures without the drawbacks associated with MMBE. Magnetic resonance angiography (MRA) offers a less invasive alternative to traditional imaging techniques, as it does not utilize X-rays. MRA sequences, such as time-of-flight MRA (TOF-MRA) and non-enhanced magnetization-prepared rapid acquisition gradient echo (MPRAGE), have garnered significant interest due to their ability to visualize IA without the use of contrast agents, thus avoiding the health risks associated with iodinated agents. TOF-MRA at 1.5 and 3 Tesla (T) is commonly employed for visualizing IAs, with greater sensitivity and accuracy achieved at 3T [sensitivity: 1.5T = 53.6%, 3T = 76.6%; accuracy: 1.5T = 84%, 3T = 91.9%]. However, artifacts may occur in cases of turbulent or low blood flow, which are common in large or coiled aneurysms. Gadolinium-enhanced MRA (GE-MRA) can be utilized as a flow-independent method that likewise avoids the need for iodinated contrast agents. Both TOF-MRA and GE-MRA demonstrate a sensitivity of 95% when compared to DSA. Recently, the use of 7T MRA has been evaluated for studying IAs. Although 7T MRI is not yet widely available, it shows significant potential for detecting IAs and providing detailed anatomical descriptions, making it a valuable tool for IA follow-up. The combination of 7T 3D-TOF and MPRAGE has been shown to delineate unruptured IAs as effectively as DSA.
Additionally, intracranial black blood vessel imaging (MR-IBBVI), a novel MRA sequence based on blood signal suppression, offers higher sensitivity and specificity than TOF-MRA, regardless of aneurysm size [sensitivity: MR-IBBVI = 94.5%, TOF-MRA = 62.7%; specificity: MR-IBBVI = 94.5%, TOF-MRA = 92%, both compared with DSA]. However, classical imaging techniques, including those mentioned above, do not allow for the observation of vessel wall remodeling, which is a critical feature of IAs that are progressing toward rupture. Currently, no imaging technique is capable of visualizing the disruption of the elastic lamina or the thinning of the media. Optical coherence tomography (OCT), a widely utilized technology in the field of ophthalmology, is currently being refined for intracranial use. OCT functions by exploiting the differing reflectivity of tissues to near-infrared light. A catheter is inserted into the specific blood vessel, and high-resolution 2D cross-sectional images are obtained (with a resolution ranging from 1 to 15 μm). Research has demonstrated that OCT is capable of detecting disruptions in the layers of IA, where the borders between the intima and media layers become blurred, as opposed to a normal vessel wall. Furthermore, OCT can be utilized in real time to observe and verify the accurate placement of intrasaccular devices during endovascular procedures. The development of these imaging techniques has the potential to greatly improve the current methods for assessing the risk of IA rupture. Currently, the risk assessment is mostly based on the shape of the IA, as determined by the available imaging technologies. The significance of hemodynamic stresses in the growth, enlargement, and rupture of IA has been well established. Important hemodynamic parameters consist of wall shear stress (WSS), which reflects the tangential frictional force exerted by blood flow on the vessel wall; the oscillatory shear index (OSI), which quantifies the direction and magnitude of flow variations throughout a cardiac cycle; the relative residence time (RRT), which reflects how long blood resides near the aneurysm wall; and flow patterns. Presently, there is widespread recognition that high WSS is a contributing factor in the formation of IA. Nevertheless, its impact on aneurysm rupture is not as well understood, as both high and low WSS can result in detrimental remodeling of the aneurysm wall. The presence of high WSS is thought to initiate a pathogenic response mediated by mural cells, whereas low WSS is linked to an inflammatory response mediated by immune cells. Nevertheless, the occurrence of IA rupture is more commonly linked to elevated OSI, extended RRT, and intricate flow patterns, which cannot be easily observed using the clinical imaging techniques discussed in the morphological imaging part of this study. Computational fluid dynamics (CFD) is a commonly employed method for investigating hemodynamic factors, and it heavily depends on detailed 3D datasets with high resolution. CFD employs the inherent properties of IAs, such as their dimensions, position, aspect ratio, and size ratio, to determine WSS, OSI, flow velocity, and RRT. CFD results are significantly affected by the selection of imaging modality.
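To make these CFD-derived indices concrete, the sketch below shows one common way of estimating time-averaged WSS, OSI, and relative residence time from a sampled WSS vector at a single wall point over one cardiac cycle; the discretization and the synthetic waveform are purely illustrative and are not taken from any of the cited studies.

```python
import numpy as np

def hemodynamic_indices(wss_vectors, dt):
    """Illustrative discrete estimates of TAWSS, OSI and RRT at one wall point.

    wss_vectors : (N, 3) array of instantaneous WSS vectors (Pa) over one cardiac cycle
    dt          : time step between samples (s)
    """
    mag = np.linalg.norm(wss_vectors, axis=1)                 # |tau_w(t)| at each sample
    integ_vec = np.linalg.norm(wss_vectors.sum(axis=0) * dt)  # |integral of tau_w dt|
    integ_mag = (mag * dt).sum()                               # integral of |tau_w| dt
    period = len(wss_vectors) * dt

    tawss = integ_mag / period                                 # time-averaged WSS (Pa)
    osi = 0.5 * (1.0 - integ_vec / integ_mag)                  # oscillatory shear index, 0..0.5
    rrt = 1.0 / ((1.0 - 2.0 * osi) * tawss)                    # relative residence time (1/Pa)
    return tawss, osi, rrt

# Synthetic example: mostly unidirectional WSS of ~1.5 Pa with a small reversing component
t = np.linspace(0.0, 1.0, 200, endpoint=False)
wss = np.stack([1.5 + 0.3 * np.sin(2 * np.pi * t),
                0.2 * np.sin(4 * np.pi * t),
                np.zeros_like(t)], axis=1)
print(hemodynamic_indices(wss, dt=1.0 / 200))
```

Low time-averaged WSS combined with an OSI approaching 0.5 drives RRT upward, which is the pattern the text associates with regions prone to growth and rupture.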
However, there is no commonly accepted imaging modality that is considered the most accurate for CFD calculations. Although CFD is effective in calculating hemodynamic parameters and improving our understanding of IAs, it has several limitations, including the assumptions that blood behaves as a Newtonian fluid and that arteries are rigid structures, as well as the lack of a standardized protocol. These limitations have been reviewed in previous studies . In clinical practice, 3D rotational angiography (3DRA) is considered the gold standard for detecting and defining the static characteristics of aneurysms. However, there is no consensus on the best method for assessing dynamic features. Typically, a combination of 2D and 3DRA is employed to evaluate cerebrovascular blood flow. 2D digital subtraction angiography (2D-DSA) provides information on flow dynamics during the passage of a contrast agent, while 3DRA offers static anatomical details. This combination has led to the development of 4D-DSA, or time-resolved 3DRA, which integrates 2D-DSA and 3DRA. This method utilizes 3D images obtained from conventional 3DRA while retaining temporal information, thereby allowing for the visualization of contrast agent influx and efflux from any angle . An analysis reviewed several studies that successfully detected and quantified IA wall deformation across different frames with high temporal and spatial resolutions (35–165 ms and 0.2 mm, respectively). Although most applications of 4D-DSA have been explored in the context of arteriovenous malformations, only one study has qualitatively assessed its effectiveness in detecting IA flow patterns, reporting excellent visualization in 27.7% of IAs and fair visualization in 72.3%. Additionally, it was demonstrated that 4D-DSA is as reliable as 3DRA for CFD analysis, showing no significant differences in flow velocity or WSS calculations . With its high spatial resolution (voxel volume = 0.008 mm³, i.e., 0.2 mm isotropic voxels), 4D-DSA provides anatomical characterization of IAs comparable to that of the gold standard 3DRA. Therefore, while 4D-DSA still requires refinement for the direct quantification of blood hemodynamics, its spatial resolution supports robust CFD analysis alongside morphological characterization of IAs . Traditionally, blood flow visualization with magnetic resonance imaging (MRI) has utilized the phase-contrast method to assess unidirectional flow in a 2D space. This method has since evolved into 3D time-resolved phase-contrast MRI, also known as 4D-MRI. This advanced imaging technique quantifies blood flow velocity directly in 3D, enabling the modeling of flow patterns and the quantification of hemodynamic parameters such as WSS, OSI, and vorticity. In 2020, two comprehensive reviews were published on the capabilities of 4D-MRI in studying IA hemodynamics. These reviews indicate that 4D-MRI, often compared to CFD, reliably depicts intra-aneurysmal flow patterns across various IA morphologies. Nevertheless, 4D-MRI is subject to limitations, particularly its spatial and temporal resolution, which can affect the precision of hemodynamic parameter calculations . For example, voxel sizes in 4D-MRI range from 0.43 × 0.43 × 0.43 mm³ to 1 × 1 × 1.6 mm³, whereas CFD simulations typically use voxel sizes around 0.1 mm. 
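To make the voxel-size argument concrete, the sketch below assumes an idealized, fully developed Poiseuille profile in a straight vessel and estimates the wall shear stress with a one-sided finite difference using the velocity sampled one voxel inside the lumen. The radius, centreline velocity, and viscosity are assumed, illustrative values rather than measurements from the studies cited above; the only point is that coarser voxels systematically underestimate the near-wall velocity gradient and hence WSS.

```python
MU = 3.5e-3     # dynamic viscosity of blood [Pa s], assumed Newtonian
R = 2.0e-3      # vessel radius [m], roughly a proximal cerebral artery
U_MAX = 0.5     # centreline velocity [m/s], illustrative

def poiseuille(r):
    """Parabolic velocity profile u(r) of fully developed laminar flow."""
    return U_MAX * (1.0 - (r / R) ** 2)

def wss_estimate(h):
    """Magnitude of the wall shear stress estimated from the velocity
    sampled one voxel (size h) inside the lumen, with u = 0 at the wall."""
    du_dr = (poiseuille(R - h) - poiseuille(R)) / h
    return MU * du_dr

wss_exact = MU * 2.0 * U_MAX / R          # analytic wall shear stress (1.75 Pa here)
for h in (0.1e-3, 0.43e-3, 1.0e-3):       # CFD-like vs. 4D-MRI-like voxel sizes
    wss = wss_estimate(h)
    print(f"voxel = {h * 1e3:.2f} mm: WSS = {wss:.2f} Pa "
          f"({100 * wss / wss_exact:.1f}% of the analytic value)")
```

Under these assumptions a 0.1 mm voxel recovers roughly 97–98% of the analytic WSS, whereas a 1 mm voxel recovers only about 75%, which is consistent with the tendency described next.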
As a result, WSS values derived from 4D-MRI tend to be of lower magnitude, although the localization of WSS remains comparable. Another limitation of 4D-MRI in clinical settings is its lengthy acquisition time, which can range from 5 to 30 min depending on the magnetic field strength and acquisition protocol. To mitigate this, an accelerated high spatiotemporal resolution 4D-7T-MRI technique has been proposed, which provides accurate quantitative flow measurements within a reduced acquisition time of 10 min . Compared to classical CTA, which may require longer acquisition times or multiple acquisitions over a period, 4D-CTA captures the influx and efflux of contrast agents and morphological changes in IA within a cardiac cycle when acquisition is ECG-gated. 4D-CTA is primarily utilized in evaluating hemorrhagic or ischemic stroke and vascular malformations, and it has been suggested as a potential replacement for the gold standard 3D digital subtraction angiography (DSA) in follow-up imaging. This is because 4D-CTA produces accurate IA geometries and reliable CFD results comparable to those obtained with 3DRA . Beyond these conventional hemodynamic parameters, aneurysmal pulsatility has emerged as an important dynamic parameter for assessing IAs. Increased wall motion is believed to be associated with reduced aneurysm wall stability and an increased risk of rupture. This pulsation, which includes both the global pulsation of the aneurysm and the movement of focal parts such as blebs, must be distinguished from the physiological cerebrovascular movement that occurs during the cardiac cycle. Because these pulsations are rapid and of low magnitude, developing an accurate imaging modality poses significant challenges. A study using 7T MRI to quantify volume pulsation demonstrated insufficient accuracy due to multiple imaging artifacts. The most common technique for studying aneurysmal pulsation is 4D-CTA, which achieves spatial resolution comparable to the movement of the aneurysm under study (with high-resolution CT scans providing 0.25 mm resolution and standard scans offering 0.6–0.8 mm resolution). This technique has been reported to effectively measure aneurysm pulsation in IAs larger than 5 mm in vivo . Another risk factor that warrants consideration is the Coanda Effect, which may offer insight into the behavior of blood flow in specific scenarios. The understanding of brain aneurysms has been advanced through the study of convex structures within blood vessels. Convex structures, such as cholesterol plaques, can develop inside arteries, influencing blood flow dynamics. The Coanda Effect describes the tendency of a fluid flow to adhere to a nearby surface. In the context of blood flow within arteries, this effect causes the blood to adhere to the wall opposite the convex structure, such as a plaque. This interaction reduces the resistance of the affected arterial wall, eventually leading to the formation of a concavity. As the Coanda Effect continues to direct blood flow towards the concavity, a cycle ensues that can ultimately result in the formation of a saccular aneurysm . Selecting an appropriate treatment strategy for IA is a complex and nuanced decision that requires careful consideration of various factors. 
Therapeutic decision-making must balance the potential risks and benefits of conservative management versus aggressive intervention, with decisions tailored to the individual patient’s circumstances and the resources available at the treating hospital. The primary options for aneurysm management include open microneurosurgery and endovascular techniques, each offering distinct advantages and drawbacks. Several key factors influence the decision-making process for aneurysm treatment. These factors include the rupture status of the aneurysm, whether it is symptomatic or asymptomatic, and the presence of a family history or genetic predisposition to aneurysms. Additional risk factors such as hypertension, smoking, alcohol use, and drug use are also critical considerations. The size and location of the aneurysm—whether it is situated in the anterior or posterior circulation, within the anterior or posterior communicating segment, or in extradural versus intradural locations—are significant determinants in the treatment approach. High-risk aneurysm characteristics, such as changes in size or morphology over time, the emergence of new neurologic or cranial nerve symptoms, irregularities, the presence of daughter lobes, and a history of prior subarachnoid hemorrhage, also play a pivotal role in the decision-making process. Moreover, the patient’s age, functional status, comorbidities, life expectancy, vascular anatomy, and the anticipated risk of surgical or endovascular treatment are essential considerations when determining the most appropriate course of action. The complexity of these factors underscores the importance of a personalized approach in the management of IA, with treatment strategies carefully tailored to optimize patient outcomes. 5.1. Neurosurgical Techniques The primary microsurgical techniques for managing IA focus on placing a clip across the neck of the aneurysm to exclude it from circulation while preserving the patency of the parent vessel. Alternative surgical options include proximal arterial occlusion, bypass techniques, aneurysm wrapping, or trapping, which involves combined proximal and distal vessel occlusion. Attaining adequate neck exposure is crucial in aneurysm clipping procedures. Over time, the advancement of different surgical techniques for aneurysms located at the base of the skull has greatly decreased the distance between the surgeon and the aneurysm. This has resulted in less exposure and pulling of the neurovascular tissue, while also improving the surgeon’s ability to perform the procedure. After the dura is opened, the main goal is to achieve control and release of the aneurysm neck with minimal dissection. Performing dissection in close proximity to the aneurysm poses the danger of rupture, so the most secure method is to maneuver around the parent artery. It is essential to identify the parent artery and conduct a meticulous dissection towards the aneurysm neck along this blood supply. The approach to obtaining proximal control is contingent upon the specific location of the aneurysm. Dissecting the complete parent artery from its origin is typically unnecessary. In the case of an AComm aneurysm, the A1 segment near the optic nerve is usually identified to establish proximal control. When dealing with a middle cerebral artery aneurysm, especially with the smaller Sylvian approach, achieving proximal control provides only a certain level of safety. 
However, to ensure complete control, it is necessary to manage all the arteries that supply blood to and drain blood from the aneurysm. Once the parent artery is accessed, the approach to the aneurysm neck should be made along this artery. Both sides of the aneurysm neck must be separated from the arterial branches sufficiently to allow the clip blades to pass smoothly, typically requiring a clearance of at least 3 mm. At this stage, it is important to avoid contact with the aneurysm dome. Complete separation of the aneurysm neck from the arteries is necessary to allow the dissector to pass through. Temporary clipping is often a useful technique during the clipping of the aneurysm. After adequate dissection, the next step is to select the clip that best fits the aneurysm neck, using the angiogram as a guide. The clip length should ideally match the neck diameter multiplied by π/2 (for example, a 4 mm neck corresponds to a clip of roughly 6.3 mm). If this measurement appears too small during the procedure, it may indicate that the perceived neck includes a nearby brain artery. Any bleeding from the aneurysm should be controlled immediately. For smaller aneurysms, this can often be managed by applying a cotton pad with gentle compression. In cases of larger leaks or aneurysms, temporary clipping of all arteries supplying the aneurysm may be necessary . It is crucial to ensure that the clip fully spans the aneurysm neck without entrapping any perforating arteries. Small dog-ears at the base of the aneurysm should be addressed with additional small clips to ensure complete occlusion. Preoperative imaging plays a vital role in selecting the appropriate surgical approach. Factors such as the aneurysm's size, projection, and its relationship to nearby structures must be carefully evaluated to optimize visualization and minimize surgical risks. A thorough understanding of the aneurysm's position relative to surrounding anatomical features is essential for successful surgery . 5.2. Endovascular Techniques Endovascular therapy, particularly the coiling of cerebral aneurysms, has become the primary treatment modality for IA in many neurovascular centers. This shift is largely due to the progressive refinement of coiling devices and techniques, as well as evidence demonstrating significant benefits for patients with a good preoperative clinical grade. The primary goal of coiling is to achieve dense packing within the aneurysm sac to induce rapid coagulation, effectively isolating the aneurysm from active circulation. The geometry of the aneurysm is a critical factor in determining the appropriate treatment approach and its likely outcome. Unassisted coiling is typically well-suited for IA that exhibit favorable anatomical characteristics, such as an optimal neck width, a favorable dome-to-neck ratio, an appropriate aspect ratio, and a favorable relationship with branch vessels. The conventional techniques include balloon-assisted coiling (BAC) and stent-assisted coiling (SAC). However, more complex aneurysms with unfavorable anatomy may require additional techniques to achieve successful treatment. These adjunct techniques provide the necessary support to manage challenging geometries and ensure effective isolation of the aneurysm . Flow-diverting (FD) stents are another advanced option, designed to divert blood flow away from the aneurysm, promoting stasis and natural thrombosis within the aneurysm sac while also providing a scaffold for endothelial growth. 
Endovascular coiling continues to be a viable and effective treatment option for both ruptured and unruptured aneurysms, often serving as an alternative to surgical clipping, which is associated with higher morbidity and mortality rates. Advances in coil embolization techniques now include balloon-assisted methods and the use of intracranial stents to enhance coil packing, maintain parent artery patency, and reduce the risk of coil herniation . Additionally, significant improvements in coil properties have been made to increase aneurysm occlusion rates, further enhancing the efficacy of this treatment approach . Wide-neck cerebral aneurysms pose significant technical challenges for endovascular coiling. These aneurysms are generally defined by a dome-to-neck ratio of less than 2 or a neck diameter of 4 mm or greater. The primary challenges in coiling wide-neck aneurysms include the propensity for coils to herniate into the parent vessel, difficulty in clearly defining the interface between the aneurysm neck and the parent vessel, and the need to protect vessel branches located at the aneurysm neck . To address these difficulties, adjunct devices such as intracranial stents and balloons have been developed. Balloon-assisted coil embolization decreases the likelihood of coil displacement into the parent artery and provides immediate proximal control in the event of intraprocedural aneurysm rupture. Stent-assisted coiling enhances the density of coil packing, which is crucial for treating aneurysms that are wide-necked, large, or giant. This method greatly enhances the rates of obliteration and decreases the probability of aneurysm regrowth . Balloon remodeling during endovascular coiling involves the temporary inflation of a balloon catheter across the aneurysm neck during coil placement. This technique enhances coil packing density, and the balloon is deflated and removed at the conclusion of the procedure. In the event of an aneurysm rupture during coiling, the balloon can temporarily occlude the parent artery, providing crucial control. Originally, balloon remodeling was performed using a single low-compliance balloon, which was mainly suited for sidewall aneurysms. However, more compliant balloons are now available, including Hyperform, HyperGlide (single lumen), TransForm (single lumen), and Scepter (dual lumen) . Despite initial concerns regarding blood stasis and thrombus formation during balloon inflation, studies have shown that the safety profile of balloon-assisted coiling is comparable to that of conventional coiling . The TransForm occlusion balloon catheter (Stryker Neurovascular, Fremont, CA, USA) represents a significant advancement in balloon microcatheter design. This catheter features a single lumen that is compatible with 0.014-inch microguidewires and is available in both compliant and super-compliant versions. Its micromachined hypotube design facilitates rapid inflation and deflation, enhances visibility, and reduces procedural times. An early clinical study involving 23 aneurysms treated with the TransForm catheter reported a complete aneurysm occlusion rate (Raymond Roy Classification I) of 78%, with no serious complications observed . A systematic review examined its use in 63 bifurcation aneurysms across Europe and the USA, concluding that it is “safe and probably effective” for treating intracranial bifurcation aneurysms, although larger sample sizes are necessary to draw definitive conclusions . 
More recently, early post-market results were reported for the treatment of 54 aneurysms across 13 US centers, further contributing to the growing body of evidence supporting the efficacy and safety of the TransForm catheter . 5.3. Relevant Case Studies The primary objective of IA treatment is to disconnect blood flow from the parent artery to the aneurysm lumen, ultimately achieving complete aneurysm occlusion. Over the past century, and particularly in the last 30 years, treatment approaches have evolved significantly, providing neurosurgeons with a range of strategies to choose from. Europe and South America were the first regions to conduct clinical trials of flow diverters (FDs) and adopt them for treating aneurysms. Flow diverter devices, initially released in Europe in 2007 and in the United States four years later, provide an effective endovascular therapy for aneurysms that are large (>10 mm) and wide-necked (>4 mm), particularly those found in the siphon region of the internal carotid artery (ICA) . These devices are specifically recommended for aneurysms situated in the ICA, extending from the petrous to the superior hypophyseal segments. Flow diverters are metallic implants with low porosity that are similar to stents. They are implanted in the parent artery of the aneurysm. All patients who receive flow diverters are required to undergo dual antiplatelet therapy. Flow diverters are being used more frequently, especially for patients at high surgical risk. Endovascular techniques have emerged as the favored method of treating many aneurysms in the posterior circulation, which pose a greater risk of rupture and compressive symptoms compared to aneurysms in the anterior circulation. Furthermore, flow diversion is used as an alternate treatment option when conventional surgical methods are not recommended. It also allows for the sequential treatment of several aneurysms, resulting in positive outcomes and low rates of complications and death . Flow diverters function by reducing blood flow within the aneurysm and redirecting it to the parent vessel. The Silk device (Balt) and the Pipeline Embolization Device (PED; ev3/Covidien) received CE (Conformité Européenne) approval in 2008. The early clinical use of the Silk device revealed potential hemorrhagic and thromboembolic complications, particularly in electively treated asymptomatic patients. Initial unfamiliarity with the device and the inability to predict these complications led to some early disappointment with FD treatment . However, the perception of flow diverters improved significantly following the publication of the results from the Pipeline Embolization Device for the Intracranial Treatment of Aneurysms (PITA) trial. This multicenter, single-arm prospective study documented the experience of neurointerventionalists using PEDs to treat 31 predominantly large, wide-necked ICA aneurysms across three European centers and Argentina, achieving a complete occlusion rate of 93% at six months with a low complication rate of 6.5% . The first-generation PED gained FDA approval in 2011 following the results of the Pipeline for Uncoilable or Failed Aneurysms (PUFS) trial. This prospective, multicenter, single-arm trial, which commenced in 2008 and concluded in 2014, involved 109 large and giant (≥10 mm) wide-necked ICA aneurysms. 
The trial reported impressive complete occlusion rates of 86.8% at one year, 93.4% at three years, and 95.2% at five years, with low rates of thromboembolic complications (5.6%) and retreatment (5.7%) . The second generation of the PED, known as Pipeline Flex, received FDA approval in 2015. This updated device retained the original PED design but introduced an enhanced delivery mechanism featuring a resheathing capability, which improved maneuverability and reduced both procedure and fluoroscopy times . A larger single-center study involving 491 anterior circulation aneurysms treated with the PED demonstrated a 78% complete occlusion rate at 12 months. The study identified several predictors of nonocclusion, including larger aneurysm size, incorporation of a branch vessel into the aneurysm dome or neck, and male sex . Similarly, a multicenter study, which examined 523 PED-treated aneurysms, reported a 76.6% complete occlusion rate at 12 months. In this study, older age (>70 years), aneurysm size ≥ 15 mm, and fusiform morphology were identified as independent predictors of nonocclusion . Fusiform morphology was also recognized as a predictor of failure to occlude in another study . Flow diverter (FD) treatment, while generally effective, is not without risks, particularly within the first 30 days post-procedure. Complications may include thromboembolic or ischemic events, intracranial hemorrhage (due to parent artery injury or aneurysm rupture), and FD stent malposition or migration. Hyperacute or acute in-stent thrombosis, distal vessel occlusions, and spontaneous embolization (SBO) are more common in patients with complex aneurysms or tortuous parent vessel anatomies. Treatment options for these complications include glycoprotein IIB/IIIa inhibitors, intra-arterial fibrinolytics, or mechanical thrombectomy, though these interventions carry a risk of intracranial hemorrhage. Additional procedural complications, such as parent vessel perforation, may also occur and require interventions such as heparin reversal, blood pressure control, or, in extreme cases, parent vessel sacrifice . 5.4. Hybrid Microsurgical and Endovascular Approach in the Management of Multiple Cerebral Aneurysms While the majority of IA can be effectively treated using either microsurgical or endovascular techniques alone, a subset of patients with complex or multiple aneurysms may benefit from a combined approach. The primary objective of any IA treatment is to achieve complete occlusion of the aneurysm while preserving the patency of the parent arterial flow . To meet this objective, a combined approach that incorporates both microsurgical and endovascular techniques may be advantageous, as it can minimize risk while maximizing treatment efficacy and outcomes. For example, one-stage endovascular coiling and microsurgical clipping of multiple remote aneurysms on opposite sides can be particularly effective when a middle cerebral artery (MCA) aneurysm, suitable for clipping, coexists with a difficult-to-reach contralateral aneurysm that is more amenable to coiling. In this context, microsurgical and endovascular techniques should be viewed as complementary rather than competing, and a combined management strategy can positively influence patient outcomes . Some experts advocate for one-stage or hybrid surgeries in the treatment of multiple IA, noting that the surgical risk for such procedures is only slightly higher than that for single aneurysms . 
For instance, the clipping of multiple ipsilateral MCA aneurysms in a single surgery remains a preferred procedure due to its low morbidity, mortality, and recurrence rates. However, the expansion of endovascular techniques provides robust alternatives for these aneurysms, especially in cases where surgical clipping is less suitable. Endovascular methods have proven to be a well-established alternative in such scenarios . One significant advantage of the endovascular approach is its superior ability to accurately determine the rupture site, thereby reducing the risk of misdiagnosis or incorrect localization. Moreover, endovascular procedures allow for the simultaneous occlusion of all aneurysms, regardless of their rupture status, within a single session, ensuring that the bleeding aneurysm is not left untreated . Although ischemic and hemorrhagic complications are more frequent in endovascular treatment compared to clipping, this approach is particularly suitable for high-risk aneurysms. As a result, endovascular treatment is recommended for contralateral aneurysms and those that are technically challenging to clip. 
Liquid embolization, initially used primarily for treating arteriovenous malformations and fistulae, has also been employed in the management of IA. Around the time when stand-alone coiling procedures first emerged, some clinicians began incorporating liquid embolization materials alongside other space-occupying embolic agents. This approach was informed by an evolving understanding of the mechanisms underlying aneurysm exclusion from intracranial circulation, moving beyond the previously supported electrothrombosis model. In the 1990s, a study showed 19 giant aneurysms treated using a combination of detachable balloons, occlusion coils, and ethylene vinyl alcohol copolymer liquid. However, concerns about the generation of emboli and the use of dimethyl sulfoxide (DMSO) in the cerebral vasculature led to the temporary abandonment of these techniques . The development of newer materials, such as the more viscous Onyx HD-500—composed of an ethylene vinyl alcohol copolymer dissolved in a DMSO solvent—marked a significant advancement. The Cerebral Aneurysm Multicenter European Onyx (CAMEO) trial, published in 2004, demonstrated that Onyx HD-500 embolization achieved superior occlusion rates compared to coil embolization in previously treated aneurysms of similar types . Despite these favorable results in the hands of skilled operators, the complexity and time-consuming nature of the procedure contributed to its decline. The procedure often involved repeated balloon inflations and deflations, necessitating delicate “seal tests” with contrast injection. These steps could significantly prolong the procedure, especially if contrast injection disrupted the embolic material. Balloon inflation had to be carefully managed to minimize the risk of cerebral ischemia, while inadequate inflation posed the risk of emboli formation and improper precipitation of the liquid embolic material. Additionally, balloon migration during the serial deflation/reinflation cycles often required retrieval and redeployment, further extending the duration of the procedure. In the modern era of flow diversion and other advanced treatments, liquid embolization remains a viable strategy for specific cases, particularly in patients with incomplete aneurysm occlusion, persistent high-risk features, or nickel allergies. However, the impact of nickel allergies on related complications has been a subject of ongoing debate . Although the transition from stand-alone coiling to stent-assisted coiling has become evident for the treatment of many aneurysms, no initial studies explicitly demonstrated the inferiority of intrasaccular liquid embolization compared to newer treatment strategies. However, long-term follow-up revealed that the durability of intrasaccular liquid embolization was suboptimal . In the current era of flow diversion and intrasaccular flow disruption, the use of intrasaccular liquid embolization has become limited and is no longer commercially available. This is despite its theoretical advantages, such as the potential to fill 100% of an aneurysm, provide a surface for endothelialization, and effectively occlude giant aneurysms . The limited applicability of flow diverters (FDs) in acutely ruptured or bifurcation aneurysms presents ongoing challenges in the endovascular treatment of these complex cases. 
The conventional methods, including balloon-assisted coiling (BAC) and stent-assisted coiling (SAC), though technically demanding, are associated with higher rates of incomplete occlusion, recanalization, retreatment, intraoperative and postoperative complications, and bleeding due to the necessity of antiplatelet therapy. These challenges underscore the need for enhanced operator skills and have driven increased research focus on the treatment of complex bifurcation aneurysms . The pCONus device, which has been extensively utilized, consists of a self-expanding, laser-cut stent with a distal crown of four petals that are deployed within the aneurysm and a base with six polyamide fibers at the neck. This design facilitates stable coil placement by providing a mechanical barrier at the aneurysm neck. The second-generation pCONus device, which features six distal petals and omits the polyamide fibers, is better suited to accommodate steep angles between the parent vessel and the aneurysm sac, offering greater metal coverage inside the sac to aid in coiling . Similarly, the eCLIPs device, a laser-cut, non-circumferential device, includes an “anchor” for the neck and a “leaf segment” with movable ribs that can be delivered through a coiling microcatheter. The leaf segment offers 23–42% metal coverage over the aneurysm neck, functioning as a flow disruptor and a scaffold for endothelial growth. The second-generation eCLIPs device is self-expanding, microcatheter-deliverable, fully retrievable, and self-orienting, offering a refined approach to managing complex aneurysms . Selecting the appropriate treatment for IA is a multifaceted decision that depends on numerous factors, including patient characteristics, the specifics of the aneurysm, available hospital resources, and the expertise of the surgeon. Age is particularly significant in determining the optimal management strategy. Prior to 1990, surgical clipping was the primary treatment for ruptured aneurysms. However, the introduction of Guglielmi Detachable Coils (GDC) provided a viable alternative. The International Subarachnoid Aneurysm Trial (ISAT) compared endovascular coiling with surgical clipping and found that coiling resulted in better disability-free survival at one year. Long-term follow-up data from ISAT indicated lower mortality rates with coiling, although there was a slightly higher risk of rebleeding . Despite these findings, the ISAT results are somewhat limited by selection bias, as the study excluded many aneurysm types and primarily focused on patients in good clinical condition, making it difficult to generalize the outcomes. The Barrow Ruptured Aneurysm Trial (BRAT) sought to provide a more accurate reflection of real-world conditions. This trial found that coiling was associated with fewer poor outcomes one year post-treatment, although this difference was not statistically significant after three years . For unruptured aneurysms, research also tends to favor coiling over clipping. For example, a study analyzed data from nearly 5000 patients and found that those who underwent surgical clipping faced higher risks of complications and less favorable outcomes compared to those treated with coiling. A large meta-analysis supported these findings, demonstrating higher rates of independence and lower mortality with coiling . The evolution of endovascular aneurysm treatments has been a gradual process, with each new device addressing previous challenges and improving safety and efficacy. 
Innovations in this field have made these treatments more accessible and effective. For example, many clinicians now favor the transradial approach over the traditional transfemoral method due to its lower complication rates. Advances in microcatheters and microwires have enabled access to previously unreachable blood vessels, expanding the scope of endovascular interventions. Enhanced antiplatelet therapies have also reduced complications associated with certain procedures, while optimized follow-up protocols have minimized unnecessary visits, thereby lowering patient risks and improving overall care. Looking ahead, future devices may leverage our growing understanding of aneurysm healing by incorporating active protein coatings designed to promote better healing within aneurysms. Endovascular coiling techniques continue to advance, with the development of new, low-profile stents and temporary bridging devices aimed at increasing the efficacy of coiling procedures. Innovations such as the Woven EndoBridge (WEB) device are showing great promise, and the use of 3D-printed vascular models is becoming increasingly popular for pre-procedural planning. These advancements are continually raising the standards for safety and effectiveness in aneurysm treatment. Moreover, it is important to note that cutting-edge technology must be integrated with thorough anatomical knowledge to achieve optimal treatment results. For example, liquid embolization, initially used for arteriovenous malformations, has also been applied to IA. Integrating neuroanatomical knowledge with cutting-edge techniques has been crucial to the evolution of this procedure. Early approaches combined embolization with coiling, but neuroanatomical complexities, such as vessel architecture and blood flow patterns, led to complications like emboli formation and challenges with DMSO use. The introduction of Onyx HD-500 improved occlusion rates, but the procedure's complexity underscored the importance of anatomical precision. Today, liquid embolization is reserved for specific high-risk cases, but newer devices like pCONus and eCLIPs offer enhanced control, thanks to their ability to navigate the neurovascular anatomy more precisely. These innovations, informed by a deep understanding of cerebral vasculature, improve the effectiveness of treating complex aneurysms, demonstrating the critical role of neuroanatomy in advancing surgical practices.
Safety of herbal medicine in the postpartum period of a Korean Medicine hospital and postpartum care centre: protocol of a registry study (SAFEHERE-PC)
0cd455c6-807d-48de-b61e-5cb57de76d32
11344528
Pharmacology[mh]
The postpartum period, as defined by the guidelines, refers to the first 6–8 weeks after childbirth. During this time, the body gradually returns to a state similar to that before pregnancy, as it recovers from the changes introduced by pregnancy and childbirth. Postpartum symptoms are common, with 47%–94% of women experiencing health problems even beyond 8 weeks after childbirth. The most common of these problems include fatigue, headaches, swelling and musculoskeletal pain. Postpartum depression is also highly prevalent. These physical and mental discomforts during the postpartum period affect the woman’s health and quality of life, as well as the health of the child. This can lead to potential complications in the neonate’s health, like delays in language development, the development of behavioural issues, and other cognitive or physical developmental delays ; therefore, the provision of appropriate treatment during the postpartum period is crucial. In South Korea, Korean medicine (KM) is commonly used alongside conventional medicine for postpartum care. KM treatment includes herbal medicine (HM), acupuncture, moxibustion, cupping therapy, KM physical therapy and other therapies. KM actively addresses various postpartum symptoms, such as pain, systemic symptoms, urogenital discomfort and mental-neurological symptoms. A survey of postpartum mothers who received traditional KM treatment showed that 78.68% expressed their intention to undergo the treatment again in future pregnancies. The satisfaction rate with HM treatment was remarkably high at 86.88%. However, some concerns about the safety of HM have been identified, and the drug utilisation review service of South Korea has not provided information regarding adverse reactions related to HM. Safety evidence for postpartum HM consumption in South Korea is primarily based on experimental studies focusing on the influence of herbal treatments on postpartum lactation and reports on safety and adverse reactions are insufficient. In order to address the lack of systematic information on adverse events (AEs) related to postpartum HM in Korea, we have established the following research questions: (1) What is the incidence rate of AEs after taking postpartum herbal medicine? (2) Which types of herbal medicines are associated with specific AEs? (3) What is the causality between the observed AEs and the herbal medicines? These questions focus on incidence rates, types of herbal medicines and their associated AEs, and the causality between the herbal medicines and the AEs. This data-driven and practical safety assessment aims to enhance patient safety and increase the applicability of these findings in clinical settings. To this end, this study will establish a registry of patients receiving HM treatment during the postpartum period and collect clinical data on treatments and AEs to build evidence evaluating the safety of HM use as follows: (1) collecting clinical data and information on AEs related to HM treatment in postpartum patients using a standardised protocol and (2) evaluating the causality of any AEs that occur during HM treatment using the WHO Uppsala Monitoring Centre (WHO-UMC) causality assessment and the Naranjo Algorithm Score. Registry design This is a prospective observational registry for HM use during the postpartum period in a KM hospital and postpartum care centre. We aim to enrol a total of 1000 eligible patients between March 2024 and June 2027. 
The study will focus on postpartum patients admitted to the postpartum care centre at Woosuk KM Hospital (the KM Gynaecology and Obstetric Department) who receive HM treatment and voluntarily consent to the collection of their clinical data. Woosuk KM Hospital has been operating postpartum wards and a postpartum care centre since 1999 and has also been involved in a regional public postpartum healthcare project. As Woosuk KM Hospital is the only KM hospital with a postpartum care centre in South Korea, over 90% of postpartum mothers using the centre use the KM gynaecology and obstetrics department for postpartum treatment and take HM. In clinical settings, decoction forms of HM are the most popular oral dosage forms administered in South Korea. In this registry, HM will be administered as decoctions as prescribed in Korean clinical practice, allowing for the addition or subtraction of herbs based on the patient's condition, without prescription restrictions. KM treatment includes acupuncture, moxibustion, cupping, Chuna manipulation and KM physical therapies and will adhere to the scope of routine medical care. Throughout the hospitalisation period at the postpartum care centre, demographic information, medical history, AEs and treatment details, including HM prescriptions and concomitant medication usage, will be collected for analysis. An online survey will be conducted 14 days (±3 days) postdischarge to assess potential adverse reactions for patients receiving additional HM prescriptions after discharge. Participating researchers will adhere to standard operating procedures for KM treatment of postpartum patients to ensure synchronisation and standardisation of the process prior to the initiation of the study. Eligibility criteria The inclusion criteria for this registry are as follows: (1) women aged over 19 years; (2) patients admitted to the postpartum care centre at Woosuk KM Hospital within 6 months of childbirth (based on the date of delivery) and receiving HM treatment for postpartum conditions at the obstetrics and gynaecology department and (3) patients who have received sufficient explanation before registration, voluntarily agree to participate in this study and sign a written informed consent form approved by the institutional review board. The exclusion criteria are as follows: (1) patients who do not agree to register and (2) participants judged inappropriate for participating in this study. Data collection (outcome measures) After admission to the postpartum care centre at Woosuk KM Hospital, participants' data will be collected orally, and written consent will be obtained from each participant at the first visit. The occurrence of AEs will be monitored daily during hospitalisation. shows data collection and follow-up schedules. Demographics: Demographic information, including date of birth and age, will be gathered at the admission date. Anthropometric data: Height, weight and extracellular water will be measured on admission and discharge. Maternal health and delivery details: We will collect information on the overall maternal health status, including date of delivery; mode of delivery; gestational weeks; number of pregnancies/term births/premature births/miscarriages/surviving children and factors related to high-risk pregnancy status, such as advanced maternal age (35 years and older), assisted in vitro fertilisation, gestational hypertension and gestational diabetes, at the admission date. Additionally, we will gather the demographic characteristics and feeding methods of neonates. 
HM treatment: For the observation and collection of authentic clinical data, no restrictions will be imposed on the type or frequency of HM treatment. Treatments will be prescribed by physicians in the postpartum care centre of the KM hospital, tailored to the participants’ condition as defined according to traditional Chinese medicine syndrome differentiation and treatment principles. Prescribed HM data will include the herbal drug name, dosage and duration of HM administration. AEs: The occurrence of adverse reactions will be checked daily during hospitalisation. If an AE occurs, we will collect details of the AE, including the AE start date, symptom descriptions, outcome, seriousness, action taken after the AE, AE reappearance after reintroduction of the suspected HM and treatment methods for the AE. This includes the collection of concurrent medication information. If the participants continue HM intake after discharge, we will check for adverse reactions associated with specific HM at 2 weeks after discharge. Data management and quality control We will use the myTrial Electronic Data Capture (NIKOM, Gyeongsan, Republic of Korea), an electronic case report form (eCRF) system validated by IT-KoM, for data collection. The designated investigator, a certified KM doctor, will enter and archive data in the eCRF system. Investigators access the data input form by entering the web server address ( http://ecrf.nikom.or.kr ) from a remote computer with internet connectivity and log in using the user identity and password furnished by the website administrator. Periodic reviews of each case entered into the registry will be conducted, and data queries will be generated as per the data monitoring manual. The data collected for this study will be securely housed on a dedicated collection server with strict regulation of data access permissions. Sample size calculation Based on a study by Lessing et al , which suggested that the spread of AE estimates decreases rapidly in sample sizes exceeding 1000, our study will aim to recruit a minimum of 1000 patients. The study is scheduled to commence in March 2024 and conclude in June 2027. The number of patients admitted to the postpartum care centre at Woosuk KM Hospital is approximately 300 per year; thus, we expect to be able to achieve our target sample size. Statistical analysis and causality assessment For continuous data, such as demographic characteristics, we will present the mean and SD. Categorical data will be reported as frequencies and percentages. All statistical analyses will be conducted using two-sided tests, with a significance level of 5%. To assess the magnitude of adverse reactions, we will present the number of AEs as an OR with a 95% CI. Any AEs will be graded using standard criteria for severity assessment. Additionally, the causality of AEs will be assessed using the WHO-UMC causality assessment and the Naranjo Algorithm Score, similar to other studies that employed these tests in comparable situations. This assessment will be conducted by an independent Adverse Reaction Assessment Committee under the Regional Drug Safety Centre, Korea Institute of Drug Safety and Risk Management. In a future follow-up study, we intend to cross-reference this information with sociodemographic and obstetric data sourced from the Korean National Health Insurance Database. 
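As a rough illustration of the planned OR with 95% CI reporting, the calculation can be sketched from a 2×2 table of AE counts; the function and the example counts below are hypothetical and are not part of the registry protocol.

import math

def odds_ratio_with_ci(a, b, c, d, z=1.96):
    # Odds ratio and Wald-type 95% CI from a 2x2 table.
    # a: exposed, AE present      b: exposed, AE absent
    # c: unexposed, AE present    d: unexposed, AE absent
    # A 0.5 continuity correction is applied when any cell is zero.
    if 0 in (a, b, c, d):
        a, b, c, d = (x + 0.5 for x in (a, b, c, d))
    or_value = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_value) - z * se_log_or)
    upper = math.exp(math.log(or_value) + z * se_log_or)
    return or_value, (lower, upper)

# Hypothetical counts, for illustration only
print(odds_ratio_with_ci(12, 188, 5, 195))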
Patient and public involvement Patients were not involved in the development of the protocol and will not be involved in the implementation of the study. 
This registry was approved by the Institutional Review Board of the Woosuk KM Hospital of Woosuk University, Jeonju, Republic of Korea (WSOH IRB H2311-03-01). Written informed consent will be obtained from all participants before enrolment in the study. The results will be disseminated in peer-reviewed journals and conference presentations.
Systematic review of age brackets in pediatric emergency medicine literature and the development of a universal age classification for pediatric emergency patients - the Munich Age Classification System (MACS)
9bec17d0-5efd-4b8d-8ae1-c6107d59274b
10369835
Pediatrics[mh]
Differentiation according to patient age is the most common method of distinguishing between pediatric and adult emergencies . Up to now no uniform and internationally valid standard for the classification of pediatric patients on the basis of their age has been established . While age classification has been well studied for clinical trials , there is no detailed review for the field of epidemiological health services research. As a result, it is difficult to compare the results of individual epidemiologic papers to date and, consequently, overarching meta-analyses are possible only on a limited basis. It is therefore imperative to agree internationally on an age classification that is as uniform as possible for future work. The goal of this review is therefore the identification of different age groups in pediatric emergency care. We first reviewed the classifications found in the literature and identified differences. Then, based on physiological and anatomical conditions, we created our proposal for a unified classification from the previously reviewed categories. Thus, the age classification presented in this text is intended to serve as an internationally uniform reference for further studies in the future. We conducted a systematic literature review using the PRISMA method . The research question was addressed using the PICO scheme as follows: Problem: inconsistent age classification of pediatric emergencies to date. Intervention: relevant articles were first identified based on an extensive literature search and the age classifications used were examined in more detail. The articles had to address the three aspects of "age as a differentiator," "health services research in emergency medicine," and "pediatrics" to be included in the literature selection process. For this purpose, various individual terms and so-called "MeSH terms" were combined into different queries of the PubMed (MEDLINE) database. "MeSH terms" are terms defined by the database to better categorize and classify individual articles. Table lists all queries that were used for the literature search and indicates how many hits were found and how many articles were included in the final evaluation. As Fig. shows, the initial query produced 6,226 hits. After duplicates were removed, the texts were checked for relevance and topicality based on the publication date, the title and the abstract. Accordingly, only 217 titles were evaluated as suitable for further consideration. All 217 titles dealt with a topic that contributes to answering the initial question and were published after 1980. Only texts that defined clear age limits were used for further analysis. Thus, from an initial 6,226 articles found, 115 could be filtered for final analysis. Comparison: results from the literature search will be compared with two particularly relevant, already existing proposals for age classification. The proposed age groupings of the National Association of Statutory Health Insurance Physicians for Germany and the Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD) for the English-speaking world were used as a reference . Outcome: development of the final age classification. In order to create a generally valid classification, selected developmental steps of each child and the associated physiological and anatomical changes in childhood are examined. The aim is to use suitable examples to show fundamental differences in the emergency medical care of children and adults as a function of age. 
These differences will be used to establish a consistent and well-reasoned age classification of pediatric emergencies based on the results of the literature review. Intervention: analysis of the identified articles The final 115 articles are evaluated and analyzed below. To get an overview of age limits already in use within pediatric emergency care, the age limits from the 115 articles were aggregated and examined according to their frequencies. Figure 2 reveals a separation into five groups: Group 1: ≤ 1–2 Years. Group 2: 3–6 Years. Group 3: 7–12 Years. Group 4: 13–17 Years. Group 5: ≥ 18 Years. It should be noted that the sum of the individual characteristics exceeds the article number of 115, since in several articles not just one age was considered as a limit, but there were staggered intervals with several subgroups. To illustrate this fact, articles that consider subgroups were analyzed separately. Figure illustrates the distribution of these subgroups graphically. Most of the articles do not form subgroups, but commit themselves to fixed age limits for differentiating between childhood and adulthood. 
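For readers who want to reproduce this kind of aggregation, a minimal sketch is given below; the age limits listed are placeholders, not the values extracted from the 115 articles.

from collections import Counter

# Placeholder upper age limits (in years) of the kind extracted from the articles
age_limits = [1, 1, 2, 2, 6, 6, 12, 12, 12, 16, 17, 18, 18]

def to_group(limit):
    # Map an age limit to the five groups observed in the review
    if limit <= 2:
        return "Group 1: <= 1-2 years"
    if limit <= 6:
        return "Group 2: 3-6 years"
    if limit <= 12:
        return "Group 3: 7-12 years"
    if limit <= 17:
        return "Group 4: 13-17 years"
    return "Group 5: >= 18 years"

# Frequency of each group across the placeholder data
print(Counter(to_group(a) for a in age_limits))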
Only 35 of the 115 articles examined considered further subdivisions in their work. Of these 35 articles, 15 in turn use only a single subdivision into two age groups. Figure primarily shows that no uniform approach can be identified with regard to age limits. It can be seen that patient ages of < 1, 2, 6, 12 and 18 years were used particularly often for classification. Comparison: comparison between the results of the literature review and national recommendations. The following classification is one of the most commonly used in Germany . Newborn: up to the completed 28th day of life. Infant: 29 days – 12 months. Toddler: 2–3 years. Child: 4–12 years. Adolescent: 13–18 years. Adult: from the beginning of the 19th year. The classification of the Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD), frequently used in the U.S. and Australia, is as follows: Infancy: Birth – 12 months. Toddler: 13 months – 2 years. Early childhood: 3–5 years. Middle childhood: 6–11 years. Early adolescence: 12–18 years. Late adolescence: 19–21 years. While the NICHD’s recommendation differentiates in some respects in more detail than the commonly used classification in Germany, significant similarities nevertheless emerge: First, the termination of infant status at 12 months is consistent. Similarly, the age limit of 18 years is present in both cases. The difference in middle childhood is interesting. The recommendation from the U.S. provides for separation at 6 years. The National Health Insurance Association includes this age group undifferentiated within the interval of 4–12 years. It is also not clear from Fig. whether differentiation is more common in the existing literature for 6- to 11-year-olds or for 4- to 12-year-olds. Outcome: development of the final age classification This part of our work addresses particularly relevant aspects in the treatment of pediatric emergency patients. Together with the basic physiological and anatomical characteristics presented below, the proposed new age classification was established. For the following reasons, we focus on the following three topics: The clinical picture of sepsis is found at the top of the most common causes of death in children worldwide . Fever as a leading symptom of sepsis in childhood serves as a motivation to go into more detail on the development of the immune system . Respiratory emergencies are among the most common emergency situations, especially in children . In 2014, 56,800 deaths related to traumatic brain injury were recorded in the USA, 2,529 of which involved children . Epidemiological studies for Germany showed that the incidence of traumatic brain injuries is above average, especially in patients under 16 years of age . Furthermore, it was found that patients who had not yet completed their first year of life had twice as high an incidence of traumatic brain injury compared to the general population . Table briefly summarizes the most important age limits from the selected examples. It can be seen that newborns represent a group of their own. Children up to 2 years of age also show some distinctive features. At the age of 5, the next clear developmental step can be seen, before puberty begins at around 11. It is clear that in a generally valid age classification a demarcation within the age of 4–12 years is indispensable. By the age of 18, most vital signs and anatomical conditions are at the level of an average adult. 
Our literature review shows that the studied populations have so far mostly been classified by age in an arbitrary and often insufficiently justified manner. The aim of this work was therefore to establish an internationally applicable age classification for pediatric emergencies. Although Clark et al. primarily referred to the exact terminology of the child within medicine and Williams et al. dealt with the age classification for clinical studies , the aim of this work was to clearly review the classifications used so far in the literature for the first time. The basic physiological and anatomical differences that are instrumental in differentiating patients, particularly in emergency medicine, were used to create and justify a reasonable classification. The following classification of the different pediatric ages, shown in Table , is proposed as the Munich Age Classification System: Particular attention should be drawn to the differentiation between early and late childhood. This subdivision is not found in the recommendation of the Association of Statutory Health Insurance Physicians. However, as can be seen in Table , we have been able to determine treatment-relevant, age-dependent differences precisely for this period. The immune system reaches a new physiological developmental stage (differentiation of B-lymphocytes) at around 5 years of age. This leads to the immune system being able to work more specifically and no longer having to respond to known pathogens with a generalized immune response. The anatomy of the skull changes in a way that results in relevant differences in treatment in case of trauma, and the mechanisms of accidents also differ from each other at around 6 years of age. Furthermore, the weight distribution of the MACS reveals significant differences in weight within the two age categories. As medication dosages are weight dependent, this provides further justification for stratification within the age range of 3–11 years. A differentiation is therefore strongly recommended at this point. Our research presents a unified classification based on the existing literature as well as selected anatomical and physiological peculiarities. Existing clinical recommendations are often described inconsistently and use different distinguishing features. The relevance of the MACS to individual clinical procedures (such as resuscitation, intubation, analgesia, ventilation, wound care, clinical imaging) represents a research prospect for further studies. It is recommended that patients be assigned to the appropriate category either according to the MACS age bands or according to the corresponding weight, as shown in Table . The greatest limitation of this work is the selective choice of topics with respect to physiological and anatomical differences. 
The focus on emergency medicine is evident in the selection of topics. Therefore, other parameters such as the onset of sexual maturity, the change in metabolism or the hormonal transition of the body were not addressed. Furthermore, it should be noted that the aggregation and categorization of patients based on age inevitably lead to inaccuracies, and not all details can be represented. In particular, individual characteristics or certain details of specific research questions cannot always be mapped with this. Williams et al. showed, at least for clinical studies, how a generally applicable age classification could best be adapted to individual research questions . Legal and administrative regulations, such as the age of compulsory education or the attainment of full legal capacity, also vary both nationally and internationally. It is therefore not realistic to map all relevant factors in a universally valid classification. However, it is much more important that a uniform classification is used despite these limitations - even if this does not describe all details. Only in this way is it possible to evaluate research results as efficiently as possible and without diminishing their significance, even across international boundaries. Consequently, it is not important that the classification used represents reality in its entirety, but rather that there is international agreement on a uniform standard. The age classification of this work can thus contribute to counteracting the current practice of strongly varying and often arbitrary classifications of patients.
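As an illustration of how such a classification might be applied in software, a minimal sketch follows; the category names and cut-offs used here are assumptions derived from the developmental steps discussed above (28 days, 1 year, about 5 years, about 11 years, 18 years), and the authoritative boundaries and weight bands are those given in the MACS table.

def age_category(age_years, age_days=None):
    # Illustrative age grouping; the exact MACS cut-offs are defined in the MACS table.
    if age_days is not None and age_days <= 28:
        return "newborn"
    if age_years < 1:
        return "infant"
    if age_years < 5:
        return "early childhood"
    if age_years < 11:
        return "late childhood"
    if age_years < 18:
        return "adolescent"
    return "adult"

# Example: a 7-year-old falls into the late-childhood band under these assumed cut-offs
print(age_category(7))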
Antibiotic resistance of bioaerosols in particulate matter from indoor environments of the hospitals in Dhaka Bangladesh
022d1095-a55f-4216-bfdf-0f78f4d983b3
11612278
Microbiology[mh]
Quantitative studies of airborne particulate matter (PM) have focused a great deal of attention on bioaerosols in recent years, with an increasing recognition of their massive significance and impact on health, climate, and environmental pollution concerns . Bioaerosols contain living as well as non-living elements, such as pollens, bacteria, fungi, and viruses . Therefore, they can consist of both pathogenic and non-pathogenic microbes, whether dead or alive . According to Humbal et al. (2019) , these bioaerosols comprise solids and semi-solids containing biotic and abiotic components whose size ranges from 0.001 μm to 100 μm. Sneezing, coughing, talking, washing, flushing the toilet, etc., can all cause biological particulate matter to become airborne . In hospitals, the scenario regarding bioaerosols is more complex compared to other commercial and residential buildings. The airborne form of bacteria can cause infections in patients and hospital staff, with heightened vulnerability observed in operating theaters, intensive care units (ICUs), and delivery rooms . Antibiotics are used extensively in hospitals, so pathogens are frequently exposed to them, resulting in high levels of antibiotic resistance . Antibiotic-resistant bacteria (ARB) create significant challenges in treating infectious diseases . A variety of chemicals produced by various items, such as antibacterial agents, sterilizers, laboratory materials, various medical operations, and therapeutic management, as well as medical wastes, might be significant indoor sources of pollution in hospitals . Air pollutants governed by regional sources and long-range transport (prevailing wind direction) may affect indoor air quality (IAQ), but localized air pollution sources, such as emissions from hospital parking lots and road traffic, may also contribute . Furthermore, outdoor and interior construction, humidifiers, contaminated carpets, and cooling towers can all contribute to hospital pollution . In particular, PM 2.5 and PM 10 , which are significant sources of airborne microorganisms, have been demonstrated to correlate with the concentration of bacterial colonies . Air movements can carry these airborne bacteria upward, as they are tiny enough to survive for a long time in the environment . Bacteria may be eliminated from the air by processes such as dry deposition and/or wet deposition (being removed by precipitation, i.e. rain or snow) . Depending on the amount of haze and the severity of the pollution, the influence of temperature on bioaerosols might differ. For instance, in the winter, hub bacteria were more prevalent in PM 2.5 on days with high pollution levels than on days with lower levels . However, during foggy days in the fall and early winter in Beijing, the airborne bacterial load and community structure were mostly influenced by relative humidity and particulate matter concentrations . These findings emphasize the complex connections among bioaerosols, temperature, humidity, and particulate matter in different environmental conditions. Bioaerosols have been linked to chronic health difficulties, respiratory disorders, and infectious diseases . Furthermore, there is a remarkable correlation between the Air Quality Index (AQI) and the quantity of airborne bacteria and fungi. The quantities of airborne fungi and bacteria rise steadily once the AQI reaches 200 . The interactions between PM and AQI, and subsequently, the types of bacteria present, depend significantly on the chemical and microbiological compositions . 
The composition and variety of bioaerosols have been reported to change significantly at high or extreme pollution levels . Exposure to biological agents, for example, fungi, bacteria, parasites, and viruses, causes infectious diseases, which can spread through indirect contact, such as coughing or sneezing, or direct contact, such as biting, touching, or licking . Mycobacterium tuberculosis , which infected individuals expel into the environment when they cough, sneeze, or talk, causes tuberculosis (TB) through air contamination . Bacillus anthracis causes anthrax, which individuals can contract through ingestion, inhalation, or skin contact with infected animals . Recognizing the potential health risks posed by bioaerosols in occupational settings is crucial, and industries where bioaerosol exposure is a concern should take appropriate measures to mitigate exposure and protect worker health. Bioaerosol components may threaten hospital facilities’ indoor air quality (IAQ). The microorganisms found in bioaerosols have the potential to affect patients and healthcare workers by increasing the prevalence of occupational illnesses and hospital-associated infections. The objectives of this research work are to estimate the concentrations of bacterial bioaerosols and particulate matter from different hospitals and two ambient locations in the greater Dhaka region, to identify the bacterial isolates obtained from the sampling sites in order to gain information about nosocomial infections and other health impacts in hospital environments, and to evaluate the antibiotic resistance of the identified bacterial isolates. Particulate matter (PM) concentrations The average and standard deviation of particulate matter concentrations (PM 1.0 , PM 2.5 , and PM 10 ) for all sampling locations are given in Supplementary Table and Fig. . Differences in average PM 1.0 , PM 2.5 and PM 10 concentrations were found to be statistically significant ( p < 0.05) across all study locations. The concentration of PM 1.0 was found to be much higher at all the hospital sites. The greatest values of PM 1.0 and PM 2.5 were found at sampling point I5 of Dhaka Medical College Hospital (DMCH) (80.46 ± 11.32 µg/m 3 , 220.60 ± 16.52 µg/m 3 ), while the highest PM 10 concentration was 1452.21 ± 189.78 µg/m 3 for sampling point I6 of DMCH. The PM 10 concentrations were highest near 1:00 p.m. at BSMMUH and KBMH (Supplementary Fig. ), which might be due to the increased activity of people coming to the hospitals and to the temperature elevation . 
The concentration of bacterial bioaerosols at the various sampling locations of the hospitals varied considerably based on the data acquired (Supplementary Table ). The hospitals considered include a mix of public and private institutions with different patient volumes, infrastructure, and air handling systems, providing a broad view of hospital indoor air conditions in greater Dhaka. We explored various environmental parameters affecting bioaerosol load and resistance. Following an 8-hour sampling period, location I5 of DMCH exhibited the highest concentration of culturable total aerobic bacterial colonies in PM (948.39 ± 84.14 CFU/m 3 ). The total concentration of bacterial colonies across the hospital sites ranged from 194.65 ± 22.48 CFU/m 3 to 948.39 ± 84.14 CFU/m 3 . The lowest mean concentrations of culturable bacterial bioaerosol were 59.37 ± 16.51 and 48.65 ± 18.47 CFU/m 3 , found at the control sites I9 and I10, respectively. Figure shows the bioaerosol concentration with standard deviation at the different sampling sites. Bacterial bioaerosol concentration was significantly lower at sampling location I3 of KBMH, which has an air conditioning system. A significant relationship ( p < 0.05) between bacterial concentration and particulate matter is shown in Fig. , in which an increase in bacterial concentration was observed with increasing concentration of particulate matter. The regression parameters in Fig. (c) show a modest connection (R 2 = 0.27) between the PM 10 concentration and the concentration of bacterial bioaerosol, although the relationship between microbial concentration and PM 10 is significant ( p < 0.05). Based on the information provided in Fig. (a) and (b), it can be deduced that there is a strong connection between bacterial bioaerosols and fine particles, especially PM 1.0 and PM 2.5 . The high R values for PM 1.0 and PM 2.5 (0.80 and 0.85) suggest a significant correlation between bacterial bioaerosols and these fine particulate matter fractions. The meteorological parameters (i.e. temperature, relative humidity) were measured simultaneously to study their influence on the number and growth of the bacterial bioaerosol (Supplementary Table S2). A substantial ( p < 0.05) association was found by the single-factor (one-way) ANOVA test between temperature and the number and growth of bacteria. Temperature showed a positive correlation with indoor microorganism variation (R 2 = 0.68, Fig. a). Relative humidity (RH) functions similarly to temperature and has a significant impact on airborne microbial concentration, diversity, and composition, among other factors. The R value was 0.51 (Fig. b), demonstrating a positive association between the concentration of bacterial bioaerosol and relative humidity. The colony features and structural morphology of these isolates were recorded (Supplementary Table S3 ). Eleven different bacterial colonies were isolated based on their apparent colony characteristics. Eight (isolates 2, 3, 5, 6, 7, 8, 9) were gram-negative (73% of the total isolated samples), and the other four (isolates 1, 4, 10, 11) were gram-positive (27% of the total isolated samples). Twenty-one different antibiotics were employed in the antibiotic susceptibility test. With the exception of isolate-11, every examined bacterial isolate showed resistance to the majority of drugs. Isolate-11 was sensitive to all the antibiotics, and it was obtained from the bioaerosol sample of one of the control sites (I10) (Table ; Fig. ). 
All bacterial isolates were sensitive to only 3 antibiotics (gentamicin, tigecycline, and vancomycin). Only one isolate each was found to be resistant to imipenem, meropenem, and colistin (isolate-9 to imipenem and meropenem, and isolate-6 to colistin). The isolates’ multi-drug resistance (MDR) was demonstrated by these findings. The highest antibiotic resistance was observed in the case of isolate-9, at about 71.43% of the total antibiotics used, and the lowest was observed in isolate-10 at 19.05% (Table ). Isolate-11 showed no resistance against the antibiotics used. Isolates from bioaerosol samples collected at BSMMUH (I1, I2) and DMCH (I5, I6) showed higher resistance than those from the other hospitals. Identification of the bacterial isolates The isolates obtained from the study were subjected to phylogenetic and 16 S rRNA sequence analyses, which revealed that they belong to various bacterial families, including Staphylococcaceae , Pseudomonadaceae , Bacillaceae , Acetobacteraceae , and Enterobacteriaceae . Distance tree analysis demonstrated that isolate-3 was very closely associated with P. stutzeri (Table ; Fig. ), with 100% similarity to the database’s reference sequence. Isolates-1 and 10 exhibited 99% resemblance to the standard sequence, indicating a close relationship with S. aureus . On the other hand, 98% closeness to the reference sequence indicated that isolates 7 and 9 were closely related to E. coli . Isolates-2 and 3 were P. aeruginosa and P. stutzeri , with 99% and 100% similarity, respectively. B. cereus , B. subtilis and B. aerius (isolates-4, 8 and 11) were species of the Bacillaceae family, also showing 99% similarity to the reference sequences. Isolate-6 was found to be P. vulgaris , with 99% similarity. Supplementary Fig. S2 shows the DNA sequencing chromatogram of isolate-3, which was generated using Chromas software (Version 2.6.6). The sequences generated by automated sequencers are displayed as a graph called a chromatogram, which contains a series of peaks in four different colors. The DNA sequences of the bacterial isolates were aligned with the reference sequences. According to Supplementary Fig. S3, in the isolate-3 sequence examined by Chromas software, all 643 nucleotide base pairs were matched with the reference P. stutzeri with no gaps observed. Bacterial species identified at different sampling locations In the hospital environments, S. aureus and E. coli were the most frequently detected bacteria (Table ). The majority of the bacterial species, which are primarily pathogens or opportunistic pathogens, were discovered at the Bangabandhu Sheikh Mujib Medical University Hospital and the Dhaka Medical College Hospital. Bacillus spp., which rarely create any health issues, were mostly found at the control sites. Thus, the hospitals contained many more harmful bacterial species than the other places. 
The values of particulate matter concentration at the control sites were significantly lower than those at the hospital locations, likely due to the absence of a source of particulate matter pollution . All the hospital sites had considerably higher PM 1.0 concentrations, a significant concern given that smaller particulate matter easily deposits in the human lower respiratory tract, leading to serious public health issues , . There were also significant variations in the mean values among the different hospitals. The National Ambient Air Quality Standards (NAAQS) for Bangladesh are 65 µg/m 3 for PM 2.5 (24-hour average) and 150 µg/m 3 for PM 10 (24-hour average). Thus, the concentration of particulate matter at the different hospitals exceeded the NAAQS values and also the WHO values, which are 15 µg/m 3 for PM 2.5 and 45 µg/m 3 for PM 10 (24-hour average) . The higher concentrations of bioaerosol found in this study might be attributable to a number of factors, including a high patient and visitor volume, inadequate air conditioning, and prolonged window closures. Artificial air cooling has been proven in several studies to have the potential to lower indoor bacterial counts . The highest values in I5 and I6 may also be associated with the building’s age, non-standard flooring, consumable materials, wall seams, a high percentage of outdated beds in the wards, natural ventilation, and a high patient density in the wards . The number of patient beds is one of the primary factors promoting the generation and release of airborne bioaerosols, according to several studies . Bacteria, particularly pathogenic bacteria, positively correlate with PM’s physical and chemical makeup . A moderate correlation (R² = 0.27) was found between PM 10 levels and bacterial bioaerosol concentrations, and the relationship was statistically significant ( p < 0.05). This might be because of the high concentration of fungi, which greatly contributes to the PM 10 concentration . 
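The R and R² values quoted here summarize simple linear relationships between particulate matter and bacterial counts; a minimal sketch of how such values can be obtained from paired measurements is shown below, using SciPy and placeholder numbers rather than the study data.

import numpy as np
from scipy import stats

# Placeholder paired measurements: PM2.5 (ug/m3) and bacterial bioaerosol (CFU/m3)
pm25 = np.array([60.0, 95.0, 120.0, 150.0, 210.0, 220.0])
cfu = np.array([190.0, 310.0, 420.0, 510.0, 880.0, 950.0])

# Linear regression gives the correlation coefficient R, its square R^2, and the p value
result = stats.linregress(pm25, cfu)
print(f"R = {result.rvalue:.2f}, R^2 = {result.rvalue ** 2:.2f}, p = {result.pvalue:.3g}")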
Bacterial bioaerosols and fine particulate matter fractions appear to be significantly correlated, as indicated by the high R values for PM 1.0 and PM 2.5 (0.80 and 0.85). It appears that the elements in the particulate matter provide ideal circumstances for the development of microorganisms. A study by Hoeksma et al. (2015) investigated the effects of temperature on E. coli , M. synoviae , and E. mundtii by observing bacterial decay. The results demonstrated that different microbial species exhibited varying abundances at different temperature ranges, with some thriving in high temperatures, others in low temperatures, and a majority occurring at moderate air temperature values. In an indoor environment at the University of Dhaka, RS et al. (2022) conducted a study that showed a positive correlation between bacterial concentration and temperature (R value 0.73), and an R value of 0.68 was found for the relationship between bioaerosol concentration and temperature in this study, which is consistent with that report. The growth of culturable microorganisms was more pronounced during the spring or winter compared to the summer season, primarily due to temperature fluctuations . Relative humidity (RH) functions similarly to temperature and has a significant impact on airborne microbial concentration, diversity, and composition, among other factors. The combined effects of relative humidity and temperature are likely to play a crucial role in shaping the behavior of airborne microorganisms . High relative humidity was advantageous for bacterial release and proliferation, but it may potentially lower bacterial viability . Another study, which was performed at the University of Dhaka, found a positive correlation between indoor bacterial concentration and relative humidity, with an R value of 0.68 , which supports the result obtained here (R 2 = 0.51). In contrast, relative humidity and bacterial concentration have also been reported to be negatively correlated . According to Knudsen et al. (2017) , various microorganisms react to relative humidity differently, and occasionally they don’t react at all. In a study at a hospital in Iran, it was found that the highest antibiotic resistance was to cefixime (45.8%), ceftazidime (30.2%), gentamicin (12%) and ciprofloxacin (12%) . One contributing factor to the rise of antibiotic resistance is the overuse of antibiotics, which can promote the development of antibiotic-resistant bacteria and antibiotic resistance genes . This overuse creates a selection pressure that favors the survival and proliferation of resistant strains, reducing the effectiveness of antibiotics and making it more challenging to treat bacterial infections effectively. P. aeruginosa , P. stutzeri , A. schindleri , and P. vulgaris were also identified as potential sources of nosocomial infections in immunocompromised patients within hospital settings . B. cereus and B. subtilis were also found in the hospital environment, according to the research. According to reports, E. coli and K. pneumoniae are the two most prevalent nosocomial pathogens in hospitals that cause urinary tract infections (UTIs) in Europe . Others have found similar patterns, with high incidence of Staphylococcus , Micrococcus , and Bacillus in various hospital settings , . Among the Gram-negative bacteria, Acinetobacter spp., P. aeruginosa , and E. aerogenes were detected on the plate surface, highlighting their potential association with healthcare-related infections through hospital indoor air . 
These findings underscore the importance of monitoring and understanding the presence of different bacterial species in hospital environments to implement effective infection control measures and protect the health of patients and healthcare workers. Coagulase-negative staphylococcus (CONS) is a common cause of nosocomial infections, particularly in neonatal and pediatric intensive care units, and is associated with significant patient mortality and morbidity . Similar findings were reported by Memon et al. (2016) , who observed a high prevalence of S. aureus in various hospital wards, highlighting its role as a notorious pathogen responsible for nosocomial infections in immunocompromised patients. S. aureus , P. aeruginosa , P. stutzeri , B. cereus , A. schindleri , P. vulgaris , B. subtilis , E. coli , and B. aerius were found in this study, all of which are associated with causing nosocomial infections or hospital-acquired infections (HAI) in patients and healthcare workers, except Bacillus aerius . Most of the bacterial species were found to be opportunistic pathogens. B. cereus and E. coli were identified as pathogens based on other studies. B. aerius is the only one with no pathogenicity reported so far. Among the bacteria responsible for causing a wide range of clinical infections, S. aureus is a major human pathogen. It is known to be a leading cause of various infections, including bacteremia, infective endocarditis, osteoarticular infections, skin and soft tissue infections, pleuropulmonary infections, and device-related infections . P. aeruginosa and P. stutzeri are also implicated in causing infections such as bacteremia, urinary tract infections (UTIs), and respiratory infections , . Additionally, A. schindleri can lead to nosocomial infections, with a predilection for aspiration pneumonia and catheter-associated bacteremia. These bacteria can pose significant health risks in hospital settings, particularly to immunocompromised patients and those with underlying medical conditions. Recent works have proposed hospitals as emission hotspots of antibiotic-resistant bacteria in urban environments , which is in accordance with this study. The World Health Organization (WHO) guideline for hospitals specifies 100 CFU/m 3 as the maximum bacterial concentration . Given that each patient and staff member has a different level of immunosuppression and susceptibility to infection, studying bioaerosol concentrations and evaluating bacterial resistance to antibiotics is crucial for the prevention of hospital-acquired infections (HAIs) or nosocomial infections, which may be influenced by ineffective management of these factors. Healthcare facilities and hospitals stand out among all building types for their link with pathogenic bacteria. Hospitalized patients are particularly prone to nosocomial infections, which affect 15% of inpatients . Antibiotic resistance will cause at least 700,000 deaths annually, and the rise in antibiotic resistance genes (ARGs) will cause 10 million fatalities annually by 2050 . A study estimated a global antibiotic consumption rate of 14.3 defined daily doses (DDD) per 1000 population per day in 2018, with a 95% uncertainty interval of 13.2 to 15.6 DDD. This amounted to a total of 40.2 billion DDD consumed worldwide in 2018. This represented a significant increase of 46% from the antibiotic consumption rate of 9.8 DDD per 1000 per day in 2000, with a 95% uncertainty interval of 9.2 to 10.5 DDD. 
The rise in antibiotic consumption over this period raises concerns about the potential impact on antimicrobial resistance and the need for appropriate stewardship and control measures to ensure responsible and effective use of antibiotics globally. These findings emphasize the need for national and international hospital infection control guidelines to address airborne antibiotic-resistant bioaerosol threats, especially in locations with limited resources. Methods Characteristics of the sampling sites and hospital building In the greater Dhaka region, the samples were collected from two public and two private hospitals (Fig. ). The Bangabandhu Sheikh Mujib Medical University Hospital (BSMMUH), with a capacity of 1900 beds, is Bangladesh’s first public and second largest hospital with numerous departments. The second hospital, Khwaja Badrudduja Modern Hospital (KBMH), is a compact healthcare facility comprising 20 beds that primarily caters to primary care needs. The third hospital was the Dhaka Medical College Hospital (DMCH). With 2600 beds and multiple departments, it is one of the largest and most established hospitals in Bangladesh. The fourth hospital was Monno Medical College Hospital, which is located in a rural region. The hospital is a significant medical facility equipped with 500 beds and multiple departments. As the ambient sites, Green Model Town Residential Area and Mukarram Hussain Khundker Bhaban were chosen. In the Green Model Town area, the living room space of a six-story building was chosen for the sampling point. The area is full of trees and has less traffic than other sampling places during sampling hours. At the Mukarram Hussain Bhaban, the sampling was conducted at the Atmospheric & Environmental Chemistry Research Laboratory, which was chosen because it has a proper air ventilation system and to avoid the potential confounding effect of traffic congestion. Supplementary Table S4 provides an extensive overview of the features of the office structure and sampling site. Sample collection During the pre-monsoon season (February to June of 2023), air samples in the hospitals were collected using UV-sterilized quartz filter paper (Gelman, Membrane Filters, Type TISSU Quartz 2500QAT-UP, 47 mm diameter) with a 4.0-minute hold period between each measurement. Particulate matter was collected using a low-volume air sampler, in which the airflow rate (16.7 L per minute) was recorded by an orifice plate inserted between the filter and the vacuum pump. This design employs a filter cassette set up in a single-filter tray and a Partisol FRM ® Model 2000 single-channel air sampler. The concentrations of particulate matter (PM 1.0 , PM 2.5 , and PM 10 ) were determined using the AEROCET-531 (USA) air quality monitoring instrument. For three days in a row, each hospital’s working hours (8:00 am to 4:00 pm) were selected for the purpose of sampling. Samples of bioaerosol were taken at a height of approximately 1.5 m in order to replicate the human breathing zone’s aspiration. An IGERESS air quality monitoring device was used to gather temperature and relative humidity data (Model: WP6930S, VSON Technology Co., Ltd, Guangdong, China). Conditioning of filter paper We used an ultraviolet irradiation process for 8 h to sterilize the blank quartz filter paper, either killing off any remaining microbiological particles or rendering them inactive. Autoclaved water was used to moisten the irradiated filter paper before it was immediately put in the low-volume air sampler’s filter holder. 
After the completion of sampling, a pre-sterilized anti-cutter was used to cut the filter papers into small pieces, which were then added to 100 mL of nutrient broth. The filter material was completely dispersed in the broth after being agitated for 30 to 40 min on a hot plate (37 °C) with a magnetic stirrer. Next, using a sterile bent glass rod, 25 µL of the material was spread over nutrient agar plates. The plates were then incubated at 37 °C for 24 h, and the total colony-forming units (CFU) were counted. Following sampling, the loaded filter paper was kept at 4 °C until further examination. Calculation of the bioaerosol concentration The concentration of bacteria in the bioaerosol was calculated by dividing the CFU by the measured air volume (CFU/m³) .

$$\text{Bioaerosol concentration (CFU/m}^{3}\text{)}=\frac{\text{Number of colonies}\times\text{Aliquot dilution factor}}{\text{Volume of total air sampled (m}^{3}\text{)}} \quad (1)$$

Identification of the bacterial bioaerosol species Obtaining pure culture Different bacterial colonies were preliminarily distinguished by observing their colony characteristics alone. Using sterile loops, each colony was picked, and a streak-plate culture was prepared on nutrient agar medium. The plates were then incubated at 37 °C for 24 to 48 h. After obtaining pure cultures, Gram staining, antibiotic sensitivity testing, and bacterial identification were performed. Gram staining method First, a smear of bacterial culture was made on a glass slide, heat-fixed, and stained with the primary stain (crystal violet) for 60 s. The slide was washed gently to remove the dye, and iodine solution was applied for 60 s. After the iodine was washed off, ethanol was applied for 15 s and the slide was gently washed again. Then the counterstain, safranin, was applied and left for 60 s. After washing, the slides were air-dried and observed under a light microscope at 100× magnification, and the morphology, arrangement, and distinguishing features of the bacterial cells were recorded . Antibiotic resistivity of the isolates The Kirby-Bauer disk diffusion method was used to assess the antibiotic susceptibility of the chosen bacterial isolates. Selected bacteria were cultured in liquid nutrient broth, and 100 µL of this culture was spread on a Mueller-Hinton agar (Difco, USA) plate. Antibiotic discs were then placed on the plate, which was incubated at 37 °C overnight. The sensitivity was evaluated by measuring the inhibition zone in millimeters and comparing it with the reference chart . Antibiotic discs (Oxoid, England) used for this experiment were tigecycline (15 µg), ciprofloxacin (5 µg), gentamicin (10 µg), imipenem (10 µg), azithromycin (15 µg), cloxacillin (1 µg), colistin (10 µg), chloramphenicol (30 µg), vancomycin (30 µg), cefepime (30 µg), cephalexin (30 µg), and meropenem (10 µg). The guidelines provided by the Clinical and Laboratory Standards Institute (CLSI) were followed in determining the antibiotics' susceptibility and resistance .
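To make the calculation in Eq. (1) concrete, the following minimal Python sketch derives the total sampled air volume from the stated pump flow rate (16.7 L/min) and the 8-h working-hour window, then applies the formula. The colony count, the aliquot dilution factor (here taken as 100 mL of broth divided by the 25 µL plated aliquot), and the assumption that the pump ran continuously for the full window are illustrative only and are not values reported in this study.

```python
# Minimal sketch of the bioaerosol concentration calculation (Eq. 1).
# Flow rate and working hours are taken from the text; the colony count,
# dilution factor and effective pump run time are hypothetical examples.

FLOW_RATE_L_PER_MIN = 16.7   # low-volume sampler flow rate (L/min)
SAMPLING_MINUTES = 8 * 60    # 8:00 am to 4:00 pm, assuming continuous operation


def sampled_air_volume_m3(flow_l_per_min: float, minutes: float) -> float:
    """Total air volume drawn through the filter, converted from litres to m^3."""
    return flow_l_per_min * minutes / 1000.0


def bioaerosol_concentration(colonies: int, dilution_factor: float,
                             air_volume_m3: float) -> float:
    """Eq. (1): CFU/m^3 = (colonies x aliquot dilution factor) / air volume (m^3)."""
    return colonies * dilution_factor / air_volume_m3


volume = sampled_air_volume_m3(FLOW_RATE_L_PER_MIN, SAMPLING_MINUTES)  # ~8.0 m^3
# Hypothetical example: 120 colonies from a 25 uL aliquot of 100 mL broth
# (dilution factor = 100 mL / 0.025 mL = 4000).
concentration = bioaerosol_concentration(colonies=120, dilution_factor=4000,
                                         air_volume_m3=volume)
print(f"Sampled air volume: {volume:.2f} m^3")
print(f"Bioaerosol concentration: {concentration:.0f} CFU/m^3")
```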
Identification of bacterial isolates using 16S rRNA sequencing For each isolate, a portion of the 16S rRNA gene was amplified via the polymerase chain reaction (PCR). Total genomic DNA was isolated using a kit (Invitrogen™ PCR Master Mix Starter, UK). Primers 27F (AGA GTT TGA TCM TGG CTC AG) and 1492R (TAC GGY TAC CTT GTT ACG ACTT) were used to amplify the target region of the 16S rRNA gene . The PCR was run for 30 cycles of denaturation at 94 °C for 1 min, annealing at 57 °C for 45 s, and primer extension at 72 °C for 2 min, with a final extension at 72 °C for 10 min. Gel electrophoresis (2% agarose gel) was performed to confirm PCR product amplification. The PCR products were then purified using a kit (FavorPrep™ GEL/PCR Purification Kit, Taiwan), and their concentration was determined using a NanoDrop instrument (Thermo Fisher Scientific, USA). The purified PCR products were subjected to Sanger sequencing (3500 Genetic Analyzer, Thermo Fisher Scientific, USA). Using the online BLAST interface, all sequences were compared with the 16S rRNA database of bacteria and archaea. The top 10 sequences from the BLAST results were used to construct a phylogenetic tree for each isolate with the maximum likelihood procedure in MEGA version 5.25 . The most closely related sequences for each isolate were identified by examining the resulting trees, and the alignment outcome was noted. Species names were assigned based on the best match. Statistical analysis All statistical analyses were performed using MS Excel 2019. Variations in particulate matter concentrations were examined using one-way ANOVA (analysis of variance). Statistically significant differences were determined using a paired t-test at the 95% confidence level (significance threshold p = 0.05). The R value was used to measure how strongly the bioaerosol concentration varied with the particulate matter concentrations and with the meteorological parameters. The supplementary section contains all the applicable ANOVA test equations.
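As a rough Python analogue of the statistical workflow just described (the study itself used MS Excel), the sketch below applies a one-way ANOVA across sites, a paired t-test at the 95% confidence level, and a Pearson correlation between bioaerosol and particulate matter concentrations using SciPy. All numbers are hypothetical placeholders, not measurements from this study.

```python
# Sketch of the statistical comparisons described above, using SciPy.
# All values below are hypothetical placeholders for illustration only.
import numpy as np
from scipy.stats import f_oneway, ttest_rel, pearsonr

# Hypothetical PM2.5 concentrations (ug/m^3) measured at three sites
site_a = np.array([85.0, 92.5, 78.3])
site_b = np.array([120.4, 110.2, 131.7])
site_c = np.array([45.1, 52.6, 49.8])

# One-way ANOVA: do mean PM concentrations differ across sampling sites?
f_stat, p_anova = f_oneway(site_a, site_b, site_c)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Paired t-test at the 95% confidence level (e.g., morning vs afternoon at one site)
morning = np.array([80.1, 95.3, 70.2, 88.0])
afternoon = np.array([91.4, 99.0, 84.5, 93.2])
t_stat, p_paired = ttest_rel(morning, afternoon)
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_paired:.4f} (significant if p < 0.05)")

# Correlation (R) between bioaerosol concentration and PM2.5
bioaerosol_cfu_m3 = np.array([350, 420, 610, 280, 500])
pm25 = np.array([60, 75, 110, 40, 95])
r, p_corr = pearsonr(bioaerosol_cfu_m3, pm25)
print(f"Pearson R = {r:.2f} (R^2 = {r**2:.2f}), p = {p_corr:.4f}")
```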
Below is the link to the electronic supplementary material. Supplementary Material 1
Real-world evidence in the reassessment of oncology therapies: payer perceptions from five countries
08670b63-0789-46df-ac42-e41e5a8de248
11441014
Internal Medicine[mh]
Study design A web-based survey was conducted among current and former HTA and payer decision-makers using the Market Access Transformation Rapid Payer Response (RPR) online portal. The RPR portal is a research platform with a current enrollment of approximately 2000 individuals who are or have been involved in value assessment on behalf of HTA agencies or payer organizations from around the world. For this study, experts were selected from countries representing different payer types. Participants from France, Germany, Spain, the UK and the USA were recruited based on their expertise in determining the coverage and reimbursement of oncology therapies at either a national or regional level. All participants were familiar with current HTA pricing and reimbursement mechanisms for oncology therapies in their country. All had a minimum of 5 years of experience in an HTA or healthcare payer organization and a minimum of 2 years of experience in assessing oncology therapies. Thirty current and former payer decision-makers in France (n = 5), Germany (n = 5), Spain (n = 5), the UK (n = 5) and the USA (n = 10) were invited to complete the survey, and all accepted the invitation within 3 days. Of the 30 participants, 23 had experience at the national level and seven reported regional experience. Because any new therapy is assessed by country-specific decision-making agencies via a public procedure in France, Germany, Spain and the UK, participants were selected based on a history of employment in relevant organizations at the national and regional decision-making level . In the USA, all participants were employed by private insurers, such as managed care organizations, pharmacy benefit management firms and integrated delivery networks. Participants completed the English-language, web-based survey in the 10-day period from 5 May 2022 to 15 May 2022. Participants were asked to reflect on their experience related to oncology therapies. The questions were based on information obtained from a targeted literature review of primary research focused on HTA agencies and payers and the information they used to inform decisions. Thus, questions were asked on the following issues: sources of evidence important to pricing and reimbursement decisions at reassessment; common uses of RWE by HTA agencies/payer decision-makers; and considerations related to RWE design, methods (including end points) and communications. Participants responded to the questions with answers in multiple-choice, open-text, Likert-type scale, or ranked-choice formats. The questions were designed to collect the opinions of the participants and variations of interpretation were expected. The results are not intended to capture the formal guidelines or the official positions of the organizations from which the participants gained their experience. Ethics Participants were compensated in line with a service-level agreement developed by the independent market research agency. Informed consent was obtained in the online questionnaire, which included a statement of voluntary participation and notification that the research was sponsored by a pharmaceutical company for market research purposes and was not promotional. The research was carried out within the Market Research codes of conduct and complied with the European Union (EU) General Data Protection Regulation (GDPR EU 2016/679) , the UK Data Protection law , and relevant laws for the conduct of market research .
Statistical analyses Descriptive analyses of the participants' survey responses were performed. Categorical variables were summarized using frequencies and percentages. Continuous variables were summarized using means.
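As an illustration of the descriptive approach described above, the short Python sketch below tabulates a multiple-choice survey item as counts and percentages and averages a 1-to-7 rating by country; the column names and responses are hypothetical and do not reproduce the actual survey data.

```python
# Minimal sketch of the descriptive summaries: counts/percentages for a
# categorical survey item and a mean for a 1-7 rating. Data are hypothetical.
import pandas as pd

responses = pd.DataFrame({
    "country": ["France", "Germany", "Spain", "UK", "USA", "USA"],
    "most_important_evidence": ["RCT", "RCT", "RCT", "Cost-effectiveness", "RCT", "RWE"],
    "rwe_design_rating": [6, 5, 6, 6, 4, 5],   # 1 = no value, 7 = highest value
})

# Categorical item: frequency and percentage of each answer
counts = responses["most_important_evidence"].value_counts()
percentages = (counts / len(responses) * 100).round(1)
print(pd.DataFrame({"n": counts, "%": percentages}))

# Likert-type item: mean rating overall and by country
print("Overall mean rating:", round(responses["rwe_design_rating"].mean(), 2))
print(responses.groupby("country")["rwe_design_rating"].mean().round(2))
```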
This paper explores insights related to the reassessment of oncology therapies in three categories: opinions about the current uses of RWE that are important to pricing and reimbursement decisions; the perceived value of RWE; and country-specific preferences for RWE design, methods and communications. When asked to rank different sources of evidence by importance to pricing and reimbursement decisions at the time of reassessment, most participants confirmed that clinical evidence from randomized controlled trials is most important. Overall, 23 of 30 participants (77%) ranked clinical evidence from trials as the most important source, including all ten participants from Germany and Spain, four of five in France, one of five in the UK and eight of ten in the USA. After clinical evidence from trials, RWE that provided new data on treatment effectiveness and characterized treatment patterns ranked higher than all other types of evidence, including systematic literature reviews, indirect treatment comparisons, budget impact analyses, cost–effectiveness analyses and patient/physician preference studies. RWE on effectiveness was ranked first or second by three of five participants in Germany, three of five in Spain, two of five in France, one of five in the UK and three of ten in the USA. Payers in the UK expressed a preference for cost–effectiveness analyses compared with other types of evidence for decision-making in their market (three of five participants ranked cost–effectiveness analysis first). Participants agreed that payer decision-makers most commonly use RWE to confirm whether efficacy and safety results from randomized controlled trials are reflected in real-world outcomes and to confirm the projected utilization of an oncology therapy. Most participants reported that RWE can support the reassessment of oncology therapies by demonstrating long-term effectiveness (27 of 30 [90%]) and addressing safety questions and concerns (24 of 30 [80%]). All participants from France, Spain and the UK and most from Germany (4 of 5 [80%]) and the USA (8 of 10 [80%]) reported that RWE can be used to confirm the effectiveness of an oncology therapy during the reassessment review. When asked if RWE can confirm safety data, all participants from France and Spain and 80% of participants from Germany (four of five) and the USA (eight of ten) agreed. In the UK, two of five participants (40%) responded that RWE could be used to confirm safety results . While the study found agreement across countries related to use of RWE for confirming effectiveness and safety, there was considerable geographic variability when considering other uses of RWE in reassessment. For example, all French and most Spanish and US participants found RWE useful in confirming the level of treatment utilization, but only one of five German participants and two of five UK participants agreed.
Similarly, four of five participants in Spain and in the UK would use RWE at reassessment to confirm the cost or economic benefit of an oncology therapy, while half of the US participants, three of five French participants and no German participants would do so. Similar results were found when participants were asked about the value of RWE in assessing healthcare resource use. Four of five UK participants (80%) agreed that RWE could be useful in assessing resource use, but only one of five German participants (20%) agreed. All five German participants (100%) agreed that RWE could confirm the patient perspective in terms of quality of life and patient-reported outcomes. In contrast, only one participant each in France and the USA reported that RWE could confirm the patient perspective in a way that would be useful for oncology reassessment . How RWE documents were presented mattered to study participants. Most participants ranked peer-reviewed publications ahead of value dossiers, conference abstracts and other forms of communication (29 of 30 [97%]). However, payers from Germany (four of five) expressed a slight preference for value dossiers . Participants were asked to rate several RWE study design/methodology options based on the potential impact on their decision-making, using a scale from 1 to 7, where 1 indicated no value and 7 the highest value. A mean score from 5.0 to 7.0 was considered high value. Prospective observational studies were rated high value by participants from France (mean score = 6.0), Germany (5.0), Spain (5.6) and the UK (6.0). Product/disease registries were rated high value by payers from France (mean score = 6.0) and the UK (5.6). Participants from the USA did not consider any specific study design to be high value . The preferred end points for inclusion in RWE studies were overall survival, adverse event rates, discontinuation rates and progression-free survival . The results of this survey suggest that payer decision-makers support the idea that RWE plays an important role in informing the reassessment decisions made by HTA agencies and payers when considering oncology therapies. The participants agreed that RWE has a place in evidence generation, but they had differing views on how best to use it. HTA agencies and payers value RWE that confirms clinical evidence from registration trials. The application of stringent methodological requirements is a key determinant of the value of RWE in reassessment decisions. Most participants reported that RWE was useful for confirming treatment utilization, economic impact and healthcare resource use. Our study confirmed that when RWE measures the impact of a treatment on survival, it can influence payer decision-making. Additionally, the participants in this study agreed that RWE draws on larger patient numbers than registration trials for evaluating the incidence of adverse events, which can lead to a better understanding of the safety of new therapies. The participants in this study valued safety data from RWE even for cancer therapies. RWE that captures the impact of a treatment on patient-reported outcomes and provides updated estimates for cost–effectiveness models has the potential to inform decision-makers in markets where that information is valued. Results from quality-of-life questionnaires have been shown to predict survival in patients with cancer . The potential exists for decision-makers to consider patient-reported outcome data as part of a larger body of evidence.
Regulatory agencies have issued guidance documents and accepted evidence from RWE to fill evidence gaps. In 2021, the FDA approved a modified dosing regimen for cetuximab based on RWE from Flatiron Health, an electronic health record-derived database . Additionally, the FDA and European Medicines Agency (EMA) shared high-level guidance documents describing the characteristics of external control arm (ECA) design . Several agencies have also included data from ECAs in their evaluations of oncology therapies, but their assessments of the same evidence lacked alignment . We believe that with the availability of resource use data, a larger patient population, and potentially longer follow-up periods, RWE will provide an important source of information for payer decision-makers and reimbursement agencies as well as for regulatory agencies . Some researchers have expressed concern that over-reliance on observational data at the expense of randomized controlled trials could fail to fill the information gaps that are critical to evaluating oncology therapies . The results of our questionnaire suggest that payer decision-makers see a place for RWE when reassessments of oncology therapies are needed. For all of the countries represented in this study, the availability of new evidence or changes in the market could trigger a reassessment of an oncology therapy . National agencies in France, Germany and Spain schedule reassessments based on the expected availability of results from additional and ongoing clinical trials , or reassessments could be triggered if the budget impact of a new therapy exceeds expectations . In France, the Transparency Committee conducts a formal reassessment of all therapies every 5 years or when significant new information becomes available . The drug manufacturer may request a reassessment of a therapy . Payers in France emphasized the need for RWE to confirm the safety of an oncology therapy during the reassessment review. RWE can be used to validate the risk-benefit ratio in resubmission dossiers and to confirm long-term effectiveness versus the standard of care . In Germany, data from randomized controlled trials are preferred, and RWE must come from well-designed studies to be considered useful. If those conditions are met, RWE could be used to demonstrate comparative clinical benefit in a target patient population or subpopulation for some therapies that require additional data. Use of RWE in reassessment is possible; however, German decision-making bodies, including the Gemeinsamer Bundesausschuss (G-BA) and the Institut für Qualität und Wirtschaftlichkeit im Gesundheitswesen (IQWiG), require evidence to meet robust methodological standards . Reassessments are not mandatory but may be requested by the G-BA or the manufacturer and typically occur at least 12 months after the initial assessment. The payer decision-makers in our study who stated a preference for value dossiers over abstracts may have been interested in seeing more detail on the research than is provided in a conference abstract. In Germany, value dossiers must be submitted for approval, as German payers expect to see dossiers and analyze the evidence themselves in advance of a decision. Payers in Spain may accept RWE and pay-for-performance schemes to corroborate long-term effectiveness and mitigate some safety concerns . RWE may be used to demonstrate the generalizability of efficacy results to local or real-world populations.
Reassessments are initiated at the national or regional level, although there is not a clearly defined formal process . Officially, regional agencies are not supposed to deny access to medicines with centrally approved reimbursement, but in practice, within their role of managing and paying healthcare providers, some regional payers apply further criteria . For this study, three of the five participants from Spain had experience in regional HTA roles. For UK payers, RWE has been used to provide data to populate cost–effectiveness analyses . A recent guideline was published to help researchers develop RWE studies to meet the requirements of the National Institute for Health and Care Excellence (NICE) . NICE has the option to recommend a managed access agreement called the Early Access to Medicines Scheme (EAMS) that allows patients to receive a new therapy while additional data are collected. The UK acknowledged that a higher threshold for cost–effectiveness was required for oncology therapies compared with other medications to meet the needs of patients, and the Cancer Drugs Fund (CDF) was established in 2011. The CDF provided a separate reimbursement path for oncology therapies that did not meet the cost–effectiveness standards for NICE. In 2016, the CDF was reformed to take a more sustainable approach to funding medicines and collecting data. A lack of survival data at the initial assessment of oncology therapies has been identified as an information gap for payer decision-makers in the UK. The CDF has been tasked with using the Systemic Anti-Cancer Therapy (SACT) database to answer some of these questions. The SACT is a UK-specific RWE dataset consisting of routinely collected data from cancer patients. Evidence suggests that the potential of this resource has not been fully realized in recent reimbursement decisions . In the UK, the CDF generally reassesses new therapies after 2 to 3 years. NICE typically initiates a review up to 3 years post-approval, especially with changes in the technology/evidence base or clinical practice . In the UK, a reassessment could be triggered if a new therapy is not as cost-effective as expected . The US FDA recently released a guidance document for the use of real-world data (RWD) and RWE to support regulatory decision-making that encouraged sponsors to engage with the agency early in the drug development process . US payers highlight the value of RWE in understanding long-term clinical effectiveness, safety and durability of effect versus standard of care and in helping understand the patient journey beyond registration trials . US payers are more likely to use RWE in the reassessment of oncology therapies accepted under accelerated FDA approvals, as well as for cell and gene therapies with high average wholesale prices. In the USA, payers rely heavily on the results of clinical practice guidelines, such as those from the National Comprehensive Cancer Network® (NCCN®), to inform coverage decisions . Reassessments are generally conducted annually but could be triggered by other factors, such as changes to the product label, updated guidelines, approval of other therapies, new safety data, unexpectedly high utilization, opportunities for value-based pricing or risk-based contracting or a request from clinicians and patient advocacy groups . While payer decision-makers agree that RWE can contribute to the reassessment of oncology therapies, the wide range of opportunities and diverse needs of different payers creates a challenge for drug developers .
Even large registration trials are unlikely to produce or report data from a sufficient number of patients in an individual country or region. RWE can provide decision-makers with the local data that they need and enhance the relevance of information by providing longer-term follow-up and context for local treatment patterns, as well as patient characteristics and end points of special interest. One recent example of RWE gathered from country-specific data to support the results from registration trials is from the UK. Nivolumab was made available to patients with gastric/gastroesophageal junction cancer in the UK under EAMS. Nivolumab had demonstrated clinically meaningful anti-tumor activity with favorable overall survival and safety results in two clinical trials (ATTRACTION-2 and CheckMate-032 ). An RWE study was designed to capture disease progression, overall survival and health-related quality of life in the patients who received the therapy under the EAMS . The 6-month results aligned with the results of the registration trials . Our study is not without limitations. The results presented in this paper focus on the reassessment of oncology therapies. The target markets were selected to capture a range of healthcare payer types. However, the number of participants per country was small. For the countries on the European continent, only five payers with expertise in each market were surveyed. The USA has a more diverse mix of payers, so twice as many payers from the USA were included. Thus, the overall survey results are not equally representative of the countries of interest. Some questions may have been interpreted differently by different participants. The results of this exploratory analysis of survey data provide an indication of the opinions of experienced payer decision-makers based on practices from different regions. Response bias is a potential concern for survey studies. For example, the participants may have felt obligated to respond more positively to some questions to please the researcher. However, the surveys were emailed instead of conducted in person to reduce response bias due to the presence of a researcher. High-quality RWE is required to minimize the risk of bias so decision-makers can trust the results. The application of rigorous research methods will improve the viability of RWE . Higher-quality RWE will use standardized methods and procedures to harmonize study designs and outcomes (e.g., STaRT-RWE template , EQUATOR network , PICOTS framework  and International Consortium for Health Outcomes Measurement [ICHOM] indicators ). Developing international standards for RWE data collection and data sharing has been identified as an important component of collecting meaningful RWE . The International Society for Pharmacoepidemiology (ISPE) and the Professional Society for Health Economics and Outcomes Research (ISPOR) recently published a consensus statement by their joint task force that describes a harmonized protocol template for RWE studies designed to evaluate a treatment effect and inform decision-making, called the HARmonized Protocol Template to Enhance Reproducibility (HARPER) . A survey of decision-makers in central European countries identified international collaboration and political support as potential solutions for overcoming the barriers to the use of RWE in HTA assessments . 
RWE studies that are conducted with input from payers and key opinion leaders can close the gap between what is known at initial assessment and the comprehensive informational needs of stakeholders . National and international organizations are forming to establish standards for developing, conducting, and reporting RWE that aim to address changing market dynamics and pharmaceutical innovations, including the entry of new competitors . A partial list from the countries considered in this study is included in the supplemental material (Supplementary Table). When standard study designs/methodologies for collecting RWE develop, studies can be replicated and validated across countries. Decision-makers in individual countries and markets will be more confident that the results of RWE studies describe their population, reflect the standard of care in their country, and cover a timeframe that is relevant to their patients of interest. Researchers should consider the areas of uncertainty that are of concern to payer decision-makers at the time of the initial assessment and design RWE studies that can help fill the information gap . Improvements in electronic medical record data capture will provide additional opportunities for developing RWE studies . With the creation of disease-specific registries, randomized prospective RWE studies should be designed and implemented in specific patient populations. Research objectives, well-designed hypotheses and statistical protocols should be developed in advance of the studies . If RWE is used to support reassessment of a new or conditionally approved oncology therapy, transparency is key to acceptance of the results. Creating a scientific committee that includes international and national experts to advise on publication strategy (e.g., sequence of topics, journals) is important for sharing both the plan for the research and the results. Protocols for RWE studies should be publicly posted to clinicaltrials.gov or other publicly available sources . Researchers must generate RWE to the highest possible standards to build trust and deliver credible information that provides value to payers, providers and patients. Researchers and manufacturers should work with key opinion leaders and payers to develop a comprehensive plan to provide information for reassessments at the appropriate intervals. The goal should be to identify the information that is lacking, develop RWE assessments to best fill the information gap, publicly share the study plans and results, and prepare the data package to deliver an up-to-date view of the therapy. The provision of consistently valuable and reliable RWE will help stakeholders better understand the role of RWE and increase its value in decision-making. Because of rapid market changes and opportunities for combination treatments, oncology therapies are well suited to lead the way on this important initiative to highlight the need for comprehensive RWE. RWE can play an important role in informing HTA and payer oncology therapy reassessment decisions. However, organizations generating and communicating RWE need to ensure that their RWE plans recognize the heterogeneity in how different HTA agencies and payers perceive the value of such evidence. A comprehensive RWE generation plan will be responsive to how different types of HTA agencies and payer organizations view the acceptability of RWE. 
Using a range of information to improve access to the most effective oncology therapies at approval and reassessment provides options for patients and providers but requires a comprehensive approach to collecting and communicating RWE. Supplementary Table
The impact of preoperative immunonutritional status on prognosis in ovarian cancer: a multicenter real-world study
1aa3a1e5-2508-4045-bcad-a92304202e2e
11831797
Surgical Procedures, Operative[mh]
As the most lethal gynecologic cancer, ovarian cancer is the eighth most common cancer among women, with approximately 324,398 new cases and 206,839 deaths globally in 2022 . Given the concealed position of the ovaries, coupled with early symptoms often being mild or atypical, and the absence of efficient techniques for early screening and diagnosis, approximately 70% of patients with ovarian cancer are diagnosed at advanced stages . Despite significant advances in the treatment of ovarian cancer, the 5-year overall survival rate remains less than 50% due to high recurrence rates and susceptibility to chemotherapy resistance . Therefore, there is an urgent need to evaluate potential prognostic indicators to guide treatment strategies and to identify patients at high risk of recurrence and death. In addition to clinicopathologic and therapeutic factors, immunonutritional status is a key host factor influencing the prognosis of patients with ovarian cancer . Up to 70% of ovarian cancer patients, especially those in advanced stages, experience malnutrition attributed to factors such as the high catabolic state, malignant intestinal obstruction, and loss of appetite . Malnutrition suppresses the immune response, increases the risk of postoperative infection, diminishes tolerance to chemotherapy, and worsens survival . The prognostic nutritional index (PNI), a composite index based on serum albumin concentration and peripheral blood lymphocyte count, is significantly associated with the prognosis of ovarian cancer patients . Furthermore, the systemic inflammatory response, a hallmark of cancer, is instrumental in the development and progression of cancer. It promotes tumor survival, proliferation, invasion, metastasis, and angiogenesis and enhances the risk of chemotherapy resistance in cancer patients . High neutrophil, monocyte, and platelet counts in peripheral blood are associated with poor prognosis , whereas low lymphocyte counts are associated with reduced antitumor response and decreased survival . High values of the neutrophil-to-lymphocyte ratio, platelet-to-lymphocyte ratio, and systemic immune-inflammation index (SII) indicate immunosuppression and correlate with higher tumor aggressiveness and poorer overall prognosis . Therefore, preoperative immunonutritional status markedly affects prognosis. Previous studies explored the association between these indicators and the prognosis of ovarian cancer patients, with the majority being small-sample, single-center studies . Larger, multicenter studies are warranted to demonstrate the effect of preoperative immunonutritional status on prognosis. Additionally, most studies only investigated the effect of nutritional status or inflammation on prognosis, rather than comprehensively investigating the influence of immunonutritional status on survival outcomes. In this study, we aim to investigate the effect of preoperative immunonutritional status on prognosis in ovarian cancer patients using multicenter data derived from the China Real World Gynecologic Oncology Platform. Patient selection and data collection Our study enrolled patients with ovarian cancer diagnosed at seven tertiary medical centers (as shown in Supplementary Table S1) between January 2012 and February 2023 from the China Real World Gynecologic Oncology Platform (NUWA).
The inclusion criteria comprised: (1) primary epithelial ovarian, peritoneal, and fallopian tube cancers diagnosed by pathologic examination; (2) underwent comprehensive staging surgery or primary debulking surgery; (3) available data on routine blood count and albumin within 7 days before surgery; (4) available prognosis data. The exclusion criteria included: (1) borderline tumor; (2) presence of diseases that interfere with laboratory examination results, such as hepatitis, nephropathy, autoimmune diseases, infectious diseases, and hematologic dysfunction; (3) multiple primary malignant neoplasms; (4) multiple cytoreductive surgeries. The demographic information, clinicopathological characteristics, first-line treatment, and prognostic information were obtained from the medical and follow-up records. Peripheral blood cells, serum albumin, and carbohydrate antigen 125 (CA125) were extracted from patients' laboratory examinations within 7 days before the surgery. Given the superior predictive power of PNI and SII as indicators of nutritional status and inflammatory immunity , we assessed the immunonutritional status of patients using both indices. PNI and SII were calculated as follows:

$$\text{PNI} = \text{serum albumin (g/L)} + 5 \times \text{total lymphocyte count } (10^{9}/\text{L})$$

$$\text{SII} = \frac{\text{neutrophil count } (10^{9}/\text{L}) \times \text{platelet count } (10^{9}/\text{L})}{\text{lymphocyte count } (10^{9}/\text{L})}$$

In this study, family history of cancer was defined as malignant tumors in first-degree relatives of patients . Comorbidities mainly included hypertension, chronic respiratory disease, hyperthyroidism and hypothyroidism. Postoperative residual lesions were categorized into R0, R1 and R2 according to their size : no visible residual lesion was classified as R0, residual lesions ≤ 1 cm as R1, and residual lesions > 1 cm as R2. Progression-free survival (PFS) was defined as the time from diagnosis to recurrence, progression, death, or last follow-up, whichever came first. Overall survival (OS) was defined as the time from diagnosis to death from any cause or last follow-up. Ethical approval for this study was obtained from the Medical Ethics Committee of Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology (TJ-IRB202401053). As this was a real-world retrospective study, a waiver of informed consent was requested, and the results were used for scientific research only. Statistical analysis Median and interquartile range (IQR) were used to describe the distribution of continuous variables, while frequency counts and percentages were used to describe the distribution of categorical variables.
The comparison of continuous variables was performed using the Mann–Whitney U test, whereas the comparison of categorical variables was performed by the chi-square test and Fisher's exact test. Kaplan–Meier (K-M) survival curves and Cox proportional hazards models (hazard ratio (HR) with 95% confidence interval (CI)) were used for prognosis analysis. The R package "survminer" was used to determine the optimal cut-off values of PNI and SII in relation to PFS or OS. The underlying principle is to use the K-M survival curve and log-rank test to find the candidate point with the smallest P value; the value corresponding to that point is taken as the optimal cut-off. A two-tailed P value < 0.05 was considered statistically significant. All statistical analyses were performed using R version 4.2.1.
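To illustrate the index definitions and the cut-off selection principle described above, the sketch below is a rough Python analogue of the R workflow (the study itself used the survminer package). It computes PNI and SII from laboratory values and scans candidate cut-offs for the smallest log-rank P value using the lifelines package; all patient values are simulated placeholders, not study data.

```python
# Python analogue of the index calculation and cut-off search described above.
# The study used R (survminer); this sketch uses lifelines and simulated data.
import numpy as np
import pandas as pd
from lifelines.statistics import logrank_test


def pni(albumin_g_l: float, lymphocytes_1e9_l: float) -> float:
    """Prognostic nutritional index: albumin (g/L) + 5 x lymphocyte count (10^9/L)."""
    return albumin_g_l + 5.0 * lymphocytes_1e9_l


def sii(neutrophils_1e9_l: float, platelets_1e9_l: float, lymphocytes_1e9_l: float) -> float:
    """Systemic immune-inflammation index: neutrophils x platelets / lymphocytes."""
    return neutrophils_1e9_l * platelets_1e9_l / lymphocytes_1e9_l


def optimal_cutoff(values: np.ndarray, time: np.ndarray, event: np.ndarray) -> float:
    """Return the candidate cut-off with the smallest log-rank P value,
    mirroring the principle used by survminer::surv_cutpoint."""
    best_cut, best_p = None, np.inf
    for cut in np.unique(values)[1:-1]:          # skip extremes to avoid one-patient groups
        high = values >= cut
        result = logrank_test(time[high], time[~high],
                              event_observed_A=event[high],
                              event_observed_B=event[~high])
        if result.p_value < best_p:
            best_cut, best_p = cut, result.p_value
    return best_cut


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 200
    df = pd.DataFrame({
        "pni": rng.normal(46, 6, n),              # simulated PNI values
        "pfs_months": rng.exponential(30, n),     # simulated follow-up times
        "progressed": rng.integers(0, 2, n),      # simulated event indicator
    })
    cut = optimal_cutoff(df["pni"].to_numpy(), df["pfs_months"].to_numpy(),
                         df["progressed"].to_numpy())
    print("Example PNI for albumin 40 g/L, lymphocytes 1.5e9/L:", pni(40, 1.5))
    print("Optimal PNI cut-off on simulated data:", round(cut, 2))
```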
Clinical characteristics of patients A total of 922 patients diagnosed with epithelial ovarian cancer were ultimately enrolled in this study (Fig. ). The median age at diagnosis of the 922 patients was 52 (IQR, 47–59) years (Table ). The median value was 45.9 (40.8–49.9) for PNI and 873.6 (559.3–1421.3) for SII. Notably, 597 patients (64.8%) were diagnosed with high-grade serous ovarian carcinoma, while 599 patients (65.0%) presented with International Federation of Gynecology and Obstetrics (FIGO) stage III or IV disease. Lymphadenectomy was performed in 686 (74.4%) patients, with 243 (35.4%) patients showing lymph node metastasis. Additionally, 750 (81.3%) patients achieved optimal tumor resection. Only 10 (1.1%) patients received Poly ADP-ribose Polymerase inhibitors (PARPi), and 4 (0.4%) patients were treated with bevacizumab. Cut-off values of PNI and SII The optimal cut-off values of PNI and SII relative to PFS and OS were determined for the early-stage (FIGO stage I-IIA) and advanced-stage (FIGO stage IIB-IV) patients, respectively. As displayed in Table , for the early-stage patients (224 individuals), the optimal cut-off values of PNI relative to PFS and OS were both 47.47, and the optimal cut-off values of SII relative to PFS and OS were 551.37 and 771.78, respectively. In the advanced-stage cohort (698 individuals), the optimal cut-off value of PNI was 47.76 relative to PFS and 46.00 relative to OS. The optimal cut-off value for SII was 720.96 relative to PFS and 1686.11 relative to OS. Prognosis analysis The median follow-up time was 55.1 months. Patients were categorized into high and low index groups according to the cut-off values of PNI and SII.
K-M survival curves indicated that both the median PFS and OS were significantly longer in the high PNI group and the low SII group (all P < 0.01), regardless of stage (Fig. ). Then, we performed univariate and multivariate Cox regression analysis to identify the factors influencing PFS and OS in both early- and advanced-stage patients. As presented in Table , the univariate analysis for early-stage patients revealed that both PNI and SII were significantly associated with PFS, while PNI, SII, and comorbidities were significantly associated with OS. In multivariate analysis, after controlling for confounders, high PNI was an independent protective factor for PFS (HR (95% CI) = 0.39 (0.20–0.76), P = 0.006) and OS (HR (95% CI) = 0.44 (0.20–0.97), P = 0.042). High SII was an independent risk factor for PFS (HR (95% CI) = 2.43 (1.23–4.81), P = 0.011) and a marginally unfavorable prognostic factor for OS (HR (95% CI) = 2.05 (0.96–4.39), P = 0.064). In the advanced population, univariate analysis indicated that PNI, SII, histology, stage, lymphadenectomy, and residual disease were significantly associated with both PFS and OS (all P < 0.05, Table ). However, in multivariate analysis, PNI and SII had no impact on PFS (P = 0.185 and P = 0.188, respectively). Both PNI (HR (95% CI) = 0.77 (0.60–0.99), P = 0.043) and SII (HR (95% CI) = 1.34 (1.01–1.78), P = 0.041) were independent prognostic factors for OS. Besides, histology, stage, and residual disease were associated with PFS and OS (all P < 0.001).
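For readers who want to retrace how the stage-specific cut-offs and the survival models reported above were obtained, the following is a minimal sketch of the procedure named in the Methods (survminer's maximally selected log-rank cut-point, Kaplan–Meier curves, and a Cox model). The data are simulated and every column name is hypothetical; this illustrates the workflow, not the study analysis itself.

```r
library(survival)
library(survminer)

# Simulated stand-in for the cohort: follow-up time in months, an event
# indicator (1 = progression or death, 0 = censored), PNI and a few covariates.
set.seed(1)
n  <- 200
df <- data.frame(
  pfs_months = rexp(n, rate = 0.03),
  pfs_event  = rbinom(n, 1, 0.7),
  PNI        = rnorm(n, mean = 46, sd = 5),
  age        = rnorm(n, mean = 55, sd = 8),
  residual   = factor(sample(c("R0", "R1", "R2"), n, replace = TRUE))
)

# surv_cutpoint() scans candidate cut-offs and keeps the one giving the
# smallest log-rank P value, i.e. the principle described in the Methods.
cut <- surv_cutpoint(df, time = "pfs_months", event = "pfs_event", variables = "PNI")
summary(cut)                                   # optimal PNI cut-off

df$PNI_group <- surv_categorize(cut)$PNI       # dichotomize into "high" / "low"

# Kaplan-Meier curves with the log-rank P value for the two groups
km_fit <- survfit(Surv(pfs_months, pfs_event) ~ PNI_group, data = df)
ggsurvplot(km_fit, data = df, pval = TRUE)

# Cox proportional hazards model adjusting for (hypothetical) covariates
cox_fit <- coxph(Surv(pfs_months, pfs_event) ~ PNI_group + age + residual, data = df)
summary(cox_fit)                               # HRs with 95% CIs
```

In the study this step would be repeated separately for the early and advanced cohorts and for PFS and OS, which is how the four sets of cut-off values above arise.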
In this study, we comprehensively investigated the effect of preoperative immunonutritional status, as captured by both PNI and SII, on prognosis in ovarian cancer patients. PNI and SII were both independent prognostic factors for PFS and OS in the early-stage patients (FIGO stage I-IIA). In the advanced cohort (FIGO stage IIB-IV), PNI and SII were significantly associated with OS but had no impact on PFS. Overall, preoperative immunonutritional status independently affects the prognosis of patients with ovarian cancer. Timely intervention in patients with suboptimal preoperative immunonutritional status may therefore facilitate improved survival outcomes. Recently, there has been a growing interest in the association between preoperative nutritional indicators and the prognostic outcomes of various cancers. Malnutrition associated with cancer is usually driven by the activation of systemic inflammation triggered by tumor advancement, which in turn compromises immune function and diminishes overall survival. In addition, patients with advanced ovarian cancer often develop malnutrition associated with intestinal obstruction caused by peritoneal dissemination. As a powerful nutritional indicator, PNI was observed to be significantly associated with prognosis in several ovarian cancer studies. Miao et al. found a significant association between PNI and PFS (HR 1.890, 95% CI: 1.396–2.560; P < 0.001) as well as OS (HR 1.747, 95% CI: 1.293–2.360; P < 0.001) in 344 epithelial ovarian cancer patients. Subsequently, Zhang et al. indicated that PNI was an independent prognostic factor for PFS (HR 2.10, 95% CI: 1.38–3.19; P = 0.001) and OS (HR 2.54, 95% CI: 1.76–3.68; P < 0.001) in stage III ovarian cancer. These findings are consistent with the results of our study. In patients with high-grade serous ovarian cancer, a continuous decrease in PNI significantly correlated with impaired OS (P = 0.021), whereas a dichotomous decrease did not (P = 0.346). The predictive effect of PNI on prognosis seems to differ in early and advanced ovarian cancer. Decreased PNI did not adversely impact PFS or disease-specific survival (DSS) in early-stage patients, but was significantly associated with an inferior PFS (P < 0.0001) and DSS (P < 0.0001) in advanced-stage patients. However, Yoshikawa et al.
revealed that high PNI had a significant independent favorable impact on OS (P = 0.010) but was not correlated with PFS (P = 0.220) in patients with early-stage ovarian clear cell carcinoma. In our study, we found that increased PNI was not associated with longer PFS in early-stage patients but with longer OS, which is compatible with the findings of the previous study. Interestingly, further analyses found that the high PNI group had a significantly longer post-relapse survival than the low PNI group. These findings indicated that PNI was more likely to reflect susceptibility to treatment than to predict time to relapse. The observation that decreased PNI effectively predicted platinum resistance further substantiates this viewpoint. In addition to preoperative nutritional status, inflammatory and related cells are vital to cancer formation and progression, thus affecting survival outcomes. Inflammatory cells secrete cytokines and chemokines, stimulate angiogenesis and proliferation, and promote metastasis. High SII, which incorporates neutrophil, platelet, and lymphocyte counts, was observed to be related to higher levels of circulating tumor cells in cancer patients, suggesting an increased risk of metastasis and recurrence. In a study by Nie et al., high SII was an independent risk factor for PFS (HR 7.61 (95% CI 3.34–17.35), P < 0.001) and OS (HR 6.36 (95% CI 2.64–15.33), P < 0.001). That study did not determine stage-specific SII cut-offs, nor separate cut-offs for PFS and OS, but applied a uniform SII cut-off across the whole population. In early-stage ovarian cancer (stage I-IIIA1), increased SII was associated with worse disease-free survival (DFS) and OS. However, Borella et al. found a significantly negative association between high SII and DFS (HR 6.84 (95% CI 1.30–35.9), P = 0.023), but no association between high SII and DSS in stage I epithelial ovarian cancer. In our study, SII was an independent prognostic factor for both PFS and OS in the early-stage patients (FIGO stage I-IIA). Differences in results among early-stage patients may be due to inconsistencies in the included populations. In the advanced cohort (FIGO stage IIB-IV), we found that PNI and SII were significantly associated with OS but had no impact on PFS. Unfortunately, we did not find any study on the efficacy of SII in patients with advanced ovarian disease. Beyond the primary treatment population, elevated SII was also an independent risk factor for prognosis in patients with platinum-sensitive recurrent epithelial ovarian cancer. Moreover, SII has been identified as a predictor of the therapeutic efficacy of neoadjuvant chemotherapy and bevacizumab. High SII was a predictor of inefficacy of neoadjuvant chemotherapy and a risk factor for death in patients with stage III ovarian cancer. High SII also impaired the efficacy of bevacizumab, resulting in no survival benefit in the chemotherapy plus bevacizumab group compared with the chemotherapy alone group. There are several advantages to our research. To the best of our knowledge, this is the largest multicenter study to comprehensively investigate the effect of preoperative immunonutritional status on prognosis in ovarian cancer patients. The data required for the PNI and SII indices can be easily obtained from routine blood and liver function tests, and both indices are cost-effective, practical, and reliable.
Furthermore, PNI and SII are better predictors of nutritional status and inflammatory immunity than other nutritional and immune indicators. Ultimately, the cut-off values for PNI and SII were determined separately for early- and advanced-stage disease and for PFS and OS, providing a more accurate representation of the immunonutritional status across various populations and enhancing prognostic predictions. Several limitations in our study should be noted. Our investigation centered on the association between preoperative PNI and SII (baseline PNI and SII) and patient survival outcomes. However, given that cancer progression and recurrence are dynamic, multistage processes, consistently low levels of PNI and high levels of SII may be more robust indicators of poor prognosis. Although our study is the largest multicenter study to comprehensively investigate the effect of preoperative immunonutritional status on prognosis in ovarian cancer patients, only 922 patients were included. Larger sample sizes are needed to allow for internal validation of cut-off values and to eliminate potential model overfitting. Variations in instruments or kits across hospitals can result in slight differences in laboratory test results, such as routine blood tests. However, since all included centers were top tertiary hospitals in China, these differences are relatively minor and within acceptable limits. In our study, patients with high-grade serous carcinoma (HGSC) made up the majority (64.8%), and the survival differences between high and low PNI and SII groups may largely stem from the HGSC patients. Histologic stratification further revealed that these survival differences were present in both HGSC and other histologic types (data not shown), indicating that varying immunonutritional status may affect survival across all histologic types, not just specific ones. Previous studies indicated PNI cut-off values between 42.9 and 50.4, which aligns with the values used in our study. Similarly, SII cut-off values were no greater than 1000, with most of our study's SII cut-off values falling within this range, except for those related to OS in advanced patients. The SII cut-off value for overall survival in advanced patients appears to be high. However, we obtained the same cut-off value using X-tile software, another recognized method for determining optimal survival cut-off values. To investigate whether the survival difference arises from the unequal group sizes of high and low SII, we analyzed survival using the median SII from the advanced stage population. Significant differences in PFS and OS were observed between high and low SII groups (all P < 0.05, data not shown). In conclusion, poor preoperative immunonutritional status has a deleterious effect on the prognosis of patients with ovarian cancer. The combination of PNI and SII can be used as simple and useful markers for predicting short-term and long-term survival of ovarian cancer patients. When patients show a poor preoperative immunonutritional status, timely intervention should be implemented to enhance their immunonutritional condition, thereby mitigating adverse prognostic outcomes. Supplementary Material 1: Supplementary Table S1. List of Study Sites.
The Change of Ciliary Muscle-Trabecular Meshwork-Schlemm Canal Complex after Phacoemulsification using Swept-Source-Optical Coherence Tomography
d35a3e2c-0f85-4a88-b263-c488e1c238b6
11844712
Surgical Procedures, Operative[mh]
Cataract surgery has been reported to have a reducing effect on intraocular pressure (IOP) in glaucomatous and non-glaucomatous eyes . This effect seems to be more noticeable in eyes with narrow angles (NAs) than in eyes with open angles (OAs) . However, the exact mechanism of this effect is still not fully understood. Upon cataract removal, the decrease in IOP may result from increasing the anterior chamber depth (ACD) and anterior chamber angle (ACA), including the trabecular-iris angle (TIA) , the angle opening distance , and trabecular-iris surface area . In addition, Schlemm canal (SC) expansion after cataract surgery has been positively correlated with decreased IOP . It is assumed that SC size would be improved due to widening of the drainage angle. However, few studies have examined the relationship between changes in SC size and ACA-related parameters following cataract surgery. Anatomically, the SC, trabecular meshwork (TM), and ciliary muscle (CM) are tightly connected to constitute a complete tension structure . The relaxation and contraction of the TM and SC tissues, which are regulated by the CM, adjust the outflow rate of aqueous humor . As a mean resistance point of the aqueous humor outflow pathway , smaller SC size was negatively correlated with higher IOP in non-glaucomatous and glaucomatous eyes . Qi et al. indicated that a smaller vertical diameter of SC and a thinner TM were associated with early IOP elevation after cataract surgery. Zhao et al. found a significant increase in SC diameter (SCD), SC-cross-sectional area (SC-CSA) of SC, and TM width (TMW) after cataract surgery. However, the specific correlation between morphological changes in the SC, TM, and CM after cataract surgery has not been investigated. Therefore, this study aimed to further investigate the effect of cataract surgery on the aqueous outflow pathway and the underlying mechanism wherein eyes with NAs show a greater decrease in IOP post-cataract surgery than that with OAs. Swept-source-optical coherence tomography (SS-OCT) was used to evaluate the relationship between SC-CSA and TM, CM, and TIA changes after cataract surgery and to compare the difference in SC-CSA changes in non-glaucomatous eyes with NAs and OAs. This prospective study was conducted at the Eye Hospital of Wenzhou Medical University, Hangzhou Branch, Southeast China, between April 2022 and September 2022. This study and the required data were prospectively approved by the Eye Hospital of Wenzhou Medical University’s Institutional Review Board (No. 2022-014-K-11) and complied with the 1975 Helsinki Declaration, as revised in 1983. Written informed consent was obtained from each participant after the nature of the procedures and possible risks were explained. This trial was registered at NIH ( clinicaltrial.gov ) on April 24, 2022 (NCT05352854). Patients requiring cataract surgery and having normal IOP and NAs/OAs were included. 
The exclusion criteria were as follows: (1) major intraoperative and postoperative complications (e.g., posterior capsule rupture or endophthalmitis); (2) previous intraocular surgery and penetrating or laser surgery, including peripheral iridotomy; (3) glaucoma (glaucomatous vision loss or optic neuropathy changes), uveitis, severe retinal diseases, pseudoexfoliation syndrome, and other serious eye diseases; (4) corneal and conjunctival abnormalities, including scarring, malnutrition, and corneal opacity, affecting SS-OCT imaging; (5) glaucoma medication; (6) SS-OCT images in which the SC could not be identified; and (7) failure to complete follow-up. Demographic information was recorded. Each patient underwent a comprehensive ocular examination, including visual acuity testing, slit-lamp biomicroscopy, fundus examination, IOP measurement (non-contact tonometer; TX-F; Canon), axial length (AL), and ACD measurement (IOL-Master 700). Slit-lamp biomicroscopy and fundus examinations were performed by a practicing ophthalmologist (L.Z.L.). Swept-Source-Optical Coherence Tomography SS-OCT (CASIA 2) was performed for all patients under the same indoor lighting conditions (the same illumination of approximately 500 lx of the indoor fluorescent lamp and room temperature between 20 and 26 °C). Scans were centered on the ACA structure in the temporal and nasal quadrants (at the 3 o'clock or 9 o'clock positions) using the angle HD (high definition) 2D line scanning mode. To ensure that the ACA was in the instrument's field of view, the patients were instructed to focus on the left or right built-in fixation lights during the test. Images with good corneal vertex reflections were captured. The same position was determined based on the texture of the vessels on the scleral and iris surfaces. At least three consecutive images were taken by the same experienced operator (J.X.P.) during each measurement, and the images with the best visibility of SC morphology were selected for analysis. The TIA at 500 µm from the scleral spur (TIA500) was then automatically processed using built-in software and measurement tools provided by the manufacturer ( a). The angle was graded using the Shaffer classification system. According to the preoperative TIA500, the patients were divided into two groups: NAs (TIA500 < 25°) and OAs (TIA500 ≥ 25°). The SC-, TM-, and CM-related parameters were manually measured using ImageJ software ( http://imagej.nih.gov/ij/ ). Each image was magnified by 200%, 150%, and 75% to measure SC, TM, and CM, respectively. All information and ocular characteristics of the patients were blinded during the measurements to avoid biases by the observer (J.X.P.). To assess the reproducibility of the image analysis using ImageJ, measurements were repeated twice preoperatively with an interval of 2 weeks in 15 randomly chosen eyes, and the coefficient of variation (CV) and intraclass correlation coefficient were calculated.
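As a rough illustration of the angle grouping and the reproducibility metrics mentioned above — the study's statistics were run in SPSS, so this is only a sketch with invented values and hypothetical column names — the same steps could be expressed as follows.

```r
# Hypothetical per-eye data: preoperative TIA500 (degrees) and two repeated
# ImageJ measurements of SC-CSA (µm^2) taken two weeks apart.
eyes <- data.frame(
  tia500    = c(18.2, 31.5, 24.9, 40.1),
  sc_csa_m1 = c(4100, 5200, 4650, 6100),
  sc_csa_m2 = c(4010, 5330, 4500, 6240)
)

# Angle grouping used in the study: NAs if TIA500 < 25 degrees, otherwise OAs
eyes$angle_group <- ifelse(eyes$tia500 < 25, "NAs", "OAs")

# One way to summarize test-retest variability: per-eye coefficient of variation
cv_per_eye <- apply(eyes[, c("sc_csa_m1", "sc_csa_m2")], 1,
                    function(x) sd(x) / mean(x) * 100)
mean(cv_per_eye)

# Intraclass correlation coefficient across the two measurement sessions
# (the psych package is one common option for computing ICCs)
psych::ICC(eyes[, c("sc_csa_m1", "sc_csa_m2")])
```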
Preoperative and Postoperative Parameters Evaluation The measured SS-OCT parameters and their definitions are as follows: SC-CSA was drawn as a long circular region inside the SC outline; SCD was the distance between the anterior and posterior ends of the SC-CSA; TMW was the distance between the scleral spur and the Schwalbe's line, which was defined as the boundary between the high-reflective inner corneal lining and the low-reflective TM; trabecular meshwork thickness (TMT) was the vertical distance between the posterior endpoint of the SC and the inner surface of the cornea; and the distance between the inner apex of the CM and scleral spur was defined as IA-SS. b illustrates these parameters. A single surgeon (Z.Y.E.) performed the surgeries, and all patients received topical anesthesia. No complications occurred during any of the surgeries. IOP and SS-OCT measurements were repeated according to the protocol when patients returned for postoperative care, 1-week post-surgery. The same ophthalmologist (J.X.P.) examined all the enrolled patients postoperatively. The main outcome measurements were changes in IOP, SC-CSA, SCD, TMW, TMT, TIA500, and IA-SS after phacoemulsification. Statistical Analysis All statistical analyses were performed using SPSS (v.21.0). The Kolmogorov-Smirnov test was used to assess normal distribution. Variables are expressed as mean ± standard deviation or median (interquartile range) based on normality. An independent sample t test or the Mann-Whitney U test was used to compare preoperative and postoperative variables and OA and NA groups, depending on the normality of data. Univariate analysis was used to assess the correlation between the changes in SC-CSA and related variables. Multiple linear regression was performed to determine the variables that were associated with SC-CSA expansion, including those with p < 0.10 in the univariate analysis. Statistical significance was set at p < 0.05.
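The statistical plan above was executed in SPSS; the sketch below only restates the same steps in R with simulated data and hypothetical column names, to make the univariate-then-multivariate structure of the SC-CSA analysis concrete.

```r
# Simulated stand-in for the per-eye dataset (values and names are hypothetical)
set.seed(2)
n <- 89
d <- data.frame(
  delta_sc_csa = rnorm(n, 2500, 1200),      # postoperative change in SC-CSA (µm^2)
  delta_tmw    = rnorm(n, 60, 40),          # change in TM width (µm)
  delta_tmt    = rnorm(n, 25, 15),          # change in TM thickness (µm)
  delta_tia500 = rnorm(n, 10, 6),           # change in TIA500 (degrees)
  delta_ia_ss  = rnorm(n, -15, 30),         # change in IA-SS (µm)
  sc_csa_pre   = rnorm(n, 4500, 900),       # preoperative SC-CSA (µm^2)
  angle_group  = factor(sample(c("NAs", "OAs"), n, replace = TRUE))
)

# Group comparison of the SC-CSA change (Mann-Whitney U test, as in the Methods)
wilcox.test(delta_sc_csa ~ angle_group, data = d)

# Univariate screen of each candidate variable, then the multivariate model
vars <- c("delta_tmw", "delta_tmt", "delta_tia500", "delta_ia_ss", "sc_csa_pre")
uni  <- lapply(vars, function(v) summary(lm(reformulate(v, "delta_sc_csa"), data = d)))

multi <- lm(delta_sc_csa ~ delta_tmw + delta_tmt + delta_tia500 +
              delta_ia_ss + sc_csa_pre, data = d)
summary(multi)   # coefficients correspond to the beta ± SE values in the Results
```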
The present study comprised 115 eyes of 92 patients who underwent cataract surgery, of which 14 eyes were excluded due to loss to follow-up, and 12 eyes were excluded because the SC could not be defined due to poor quality of SS-OCT images. Therefore, 75 patients (89 eyes), with an average age of 69.0 ± 9.3 years (range 36–86), were selected for the final analysis. shows the demographics and ocular characteristics of participants. The average ACD and AL before surgery were 2.89 ± 0.48 mm and 23.59 ± 1.81 mm, respectively. The measurements of SC-CSA, SCD, TMW, and TMT were reproducible with an intraclass correlation coefficient ≥0.8 (all p < 0.01). The coefficients of variation of the SC-CSA, SCD, TMW, and TMT were 34.4%/35.0%, 28.0%/28.2%, 17.2%/19.7%, and 28.4%/23.8%, in the nasal and temporal sections, respectively. summarizes the preoperative and postoperative ocular parameters measured by SS-OCT. Preoperatively, the average IOP was 15.4 ± 3.3 mm Hg, which dropped to 13.3 ± 3.3 mm Hg at 1 week postoperatively ( p < 0.001). The average SC-CSA, SCD, TMW, TMT, and TIA500 increased significantly after surgery (all p < 0.001) in both the nasal and temporal sections.
Conversely, the IA-SS score decreased after surgery, although the difference was not significant ( p = 0.094 in the nasal section and p = 0.063 in the temporal section). The results of the univariate linear regression analysis of the association between SC expansion and changes in related parameters, including preoperative SC-CSA, are shown in and . Changes in SC-CSA were associated with changes in TMW (β = 3.393 ± 1.103, p = 0.003 nasally; β = 3.560 ± 1.048, p = 0.001 temporally), temporal TMT (β = 14.253 ± 3.233, p < 0.001), and nasal TIA500 (β = 61.758 ± 14.647, p < 0.001). There was no association between preoperative SC-CSA, changes in IA-SS and changes in the SC-CSA in our cases. In multivariate linear regression analysis, after adjusting for preoperative SC-CSA and changes in IA-SS, changes in TMW (β = 3.726 ± 1.085, p = 0.001 nasally; β = 3.405 ± 0.945, p = 0.001 temporally), TMT (β = 5.224 ± 2.033, p = 0.012 nasally; β = 11.853 ± 3.059, p < 0.001 temporally), and TIA500 (β = 40.330 ± 15.100, p = 0.009 nasally; β = 35.453 ± 17.527, p = 0.047 temporally) were significantly associated with changes in SC-CSA after cataract surgery . shows the SC-CSA-related variables in the OAs and NAs groups. The preoperative SC-CSA was greater in OAs eyes than in NAs eyes, although the difference was not significant in the temporal section ( p = 0.17). After cataract surgery, the changes in SC-CSA were greater in NAs eyes compared with OAs eyes (3,363.2 ± 1,156.3 vs. 2,049.5 ± 1,095.0 in the nasal section, p < 0.001; 3,556.5 ± 1,703.8 vs. 2,088.1 ± 1,207.7 in the temporal section, p < 0.001). This study confirms previous reports of IOP reduction after cataract surgery . In our series of non-glaucomatous patients with either OAs or NAs, the average IOP dropped significantly 1 week after cataract surgery. Since decreased ACA and SC are fundamental mechanisms of glaucoma, it follows that the decrease in IOP may be a result of the increase in ACD , ACA , and SC after cataract surgery . However, previous studies focused only on the relationship between changes in IOP and the related parameters outlined above. Studies on SC-TM-CM changes and their relationship with ACA are rare. To our best knowledge, this is the first observation, using OCT, which shows the specific correlation between the changes in SC-CSA and changes in TMW, TMT, and TIA500 1 week after cataract surgery. This bridges the gap between studies that have shown an increase in SC-CSA following cataract surgery and OCT imaging of aqueous outflow structures. In the univariate analysis, we found that, in the nasal section, for a 1 µm increase in TMW and a 1° increase in TIA500, SC-CSA increased by approximately 3.39 µm 2 and 61.76 µm 2 , respectively. In the temporal section, for a 1 µm increase in both TMW and TMT, SC-CSA increased by approximately 3.56 µm 2 and 14.25 µm 2 , respectively. In terms of the structure of the SC-TM-CM complex , the CM diverges into external and internal branches. The external branches are inserted into the juxtacanalicular elastic net; the internal branches connect to the trabecular lamella, which is fixed in the scleral spur (SS) at the posterior TM. The cell processes of SC endothelial cells are attached to the cell processes of juxtacanalicular cells, whose cell processes are sequentially attached to the trabecular lamella, thus forming a hemidesmosome structure . 
In several clinical studies , CM contraction induced by pilocarpine stretched the TM and increased the SC-CSA, which indicated a possible relationship between SC-CSA, TM, and CM. A previous study demonstrated that increasing age and development of cataracts were associated with thickening of the lens, a steeper anterior curvature of the lens, and therefore an anteriorly located and smaller CM (smaller cross-sectional area of the CM and IA-SS), as well as a narrower SC. It follows that the alterations seen in the present study, including the increase in TIA500, could be a result of the posterior displacement of these anterior structures (including the CM, SS, and iris root), caused by lens exchange with phacoemulsification and intraocular lens implantation. In addition, Zhao et al. found that the anterior vault, which was defined as the distance between the posterior corneal surface and the horizontal line connecting the two SS, increased after cataract surgery, indicating posterior displacement of the SS. This movement would increase posterior traction on the SS and adjust the zonular tension vectors transmitted to the SS, CM, and TM , thus facilitating aqueous outflow by pulling the TM toward the center of the eye and enlarging the SC lumen, which were represented as increases in TMW, TMT, SCD, and SC-CSA. Conversely, the IA-SS decreased after cataract surgery, although the difference was not significant. It appeared to be associated with posterior displacement of the SS after cataract surgery. Therefore, cataract surgery seems to have a “chain reaction” on TIA500 and the aqueous humor outflow pathway. Since the SC acts as the primary resistance point in the outflow facility , we investigated the relationship between changes in SC-CSA and other related variables. However, no correlation between changes in SC-CSA and IA-SS was found in this study. This outcome may be owing to the lack of significant difference between the preoperative and postoperative IA-SS. Upon further investigation, the changes in SC-CSA were significantly associated with changes in TMW, TMT, and TIA500, as shown in the multivariate linear regression analysis. Cataract surgery has been shown to have a greater effect on lowering IOP in eyes with NAs than in OAs . Huang et al. found that increases in ACD and the angle opening distance were both greater in NAs eyes than in OAs eyes between any two points of follow-up time after cataract surgery. Therefore, these more significant increases in ACD and ACA, induced by cataract removal, may be the primary contributing factors for greater IOP change in eyes with NAs . In the present study, we found that the increase in SC-CSA was greater in NA eyes than in OA eyes after cataract surgery, which adds further evidence for the greater decrease in IOP seen in NAs eyes. It also agreed with the correlation between the change in SC-CSA and TIA500 found in this study. Our study had certain limitations. First, due to the poor quality of SS-OCT images and the inability to define the SC or TM, we have had to exclude 10.5% eyes. Although scans were centered in the temporal and nasal quadrants and located by the texture of the vessels on the scleral and iris surfaces, the SC and TM may not be in exactly the same position of the eye in the pre- and postoperative period. Second, we only examined one postsurgical timepoint. However, the parameters involved in this study have the potential to change over a longer period after surgery. 
Therefore, further investigations, with longer follow-up periods, are warranted. After cataract surgery, SC-CSA increased significantly, and this increase was accompanied by an increase in the TMW, TMT, and TIA500. Compared to OAs eyes, the increase in SC-CSA was greater in NA eyes, which may explain the greater decrease in IOP seen in NA eyes after cataract surgery. The authors express their gratitude to the department of cataracts of Eye Hospital of Wenzhou Medical University Hangzhou Branch for their cooperation and assistance. This study and the required data were prospectively approved by the Eye Hospital of Wenzhou Medical University’s Institutional Review Board (No. 2022-014-K-11) and complied with the 1975 Helsinki Declaration, as revised in 1983. Written informed consent was obtained from each participant. This trial was registered at NIH ( clinicaltrial.gov ) on April 24, 2022 (NCT05352854). The authors have no conflicts of interest to declare. This work was supported by research grants from the Basic scientific research projects in Wenzhou [Y20220145]; the Zhejiang Medical Health Science and Technology Project [No. 2023KY913]; the Science and Technology Department of the State Administration of Traditional Chinese Medicine – Zhejiang Province Joint Construction Project [GZY-ZJ-KJ-24089]; the “Pioneer” and “Leading Goose” R&D Program of Zhejiang [2022C03070]. All authors contributed to the study conception and design. Zhangliang Li, MD, provided assistance with experimental design, data analysis, and manuscript revision; Xueer Wu, MD, provided assistance with data analysis and manuscript writing; Xinpei Ji, MD, provided assistance with data collection and analysis; Zehui Zhu, MD, and Nan Zhe, MD, provided assistance with data collection; Yun-e Zhao, MD, provided assistance with experimental design and manuscript revision.
Knowledge, attitudes, and practices of paediatric medical officers and registrars on the developmental origins of health and disease in a tertiary women’s and children’s hospital
f2aa05d6-b947-4cbf-bc17-4ca155fd3851
11459700
Pediatrics[mh]
Developmental Origins of Health and Disease (DOHaD) conceptualises that environmental and maternal factors can shape the development and health of individuals, progressing from the early embryonic period to childhood and beyond . It is believed that some developmental processes can affect an individual’s susceptibility and resistance to diseases, including chronic conditions , with epigenetic modifications such as DNA methylation in genes related to metabolism and relevant pathways that have been associated with diseases such as type 2 diabetes mellitus and obesity . Given the increasing burden of obesity and related non-communicable diseases such as cardiovascular disease globally , efforts to tackle this epidemic have been gaining traction . However, the translation of DOHaD knowledge into healthcare is lacking globally , restricting its potential clinical impacts from preconception to postnatal care in the first 1000 days of a child’s life . Thus, efforts to assess the prevailing knowledge, attitudes, and practices of physicians are crucial, in order to identify barriers and individualize interventions to increase uptake of DOHaD practice. Locally in Singapore, the government first launched a nation-wide initiative to wage war against diabetes mellitus , followed by the launch of the Healthier SG Strategy . Multi-pronged measures including primary and secondary preventive methods were largely aimed at adopting a healthy lifestyle with the emphasis on going beyond healthcare to health . Subsequently, a similar approach has been developed in improving preventive efforts to promote maternal and child health in Singapore, underpinning the importance of DOHaD perspectives in modern day practice . However, while such models are being developed to advance care for maternal and child health at the policy level, there has been a paucity of formal studies conducted in Singapore to assess the knowledge and opinions regarding DOHaD among physicians involved in maternal and child health. A recent study by Ku et al. in Singapore found poor translation of DOHaD awareness into clinical practice among obstetrics and gynaecology residents, paediatric residents, and medical students . However, given a small representation of paediatric residents (9 out of 117 respondents) , generalizability and applicability of the results towards identifying gaps in post-graduate medical education and paediatric residency training may be limited. Furthermore, paediatricians play a vital role in integrating, shaping, and building the foundation for maternal and child health, together with their obstetric colleagues . Paediatric care has far-reaching benefits and consequences on health into adulthood. They must therefore be armed with the necessary knowledge and skills early in their training, with DOHaD principles ingrained and translated into clinical practice. Therefore, we aimed to perform a study targeting physicians who underwent training in the paediatric department, to assess their knowledge, attitudes, and practices (KAP) towards DOHaD. The overarching goal is to identify potential areas for improvement in our current training system for our next generation of doctors, and to develop a more robust system to improve translation of DOHaD concepts into clinical practice, thus enhancing maternal and child health as a cohesive unit in a tertiary women’s and children’s hospital. Moving forward, this can also encompass primary care, potentially providing seamless care across the health system. 
We conducted a cross-sectional online survey among medical officers and registrars from the department of paediatrics in KK Women's and Children's Hospital (KKH), Singapore, from June to August 2022. Inclusion criteria were medical officers who were doing their general paediatric posting during the study period with at least a year of clinical experience, junior residents, senior residents, and resident physicians. House officers (HOs) and consultants (Cs) were excluded from the survey. Medical officers doing their general paediatric posting and junior residents were grouped as "Medical Officers" (MOs); senior residents and resident physicians were grouped as "Registrars" (REGs). Information including their age, sex, year of graduation, and length of service was recorded. Items of our questionnaire were developed after focus group discussions. Content validity was determined by domain experts comprising obstetricians and gynaecologists (O&G), paediatricians and medical students. Development was in accordance with existing KAP guides . After its use to investigate KAP among medical students, O&G residents, and paediatric residents , some questions were revised to increase their applicability and relevance to daily paediatric practice. Supplementary Figure shows a sample of the questionnaire used. The first section of the questionnaire was on Knowledge , focusing on physicians' awareness of the definitions and clinical impacts of DOHaD concepts on a child's lifelong metabolic health. There were 15 questions in this section, evaluating understanding of how preconception health, in-utero conditions, and the first two years of a child's life can influence his or her lifelong cardiometabolic outcomes. They were also asked to appraise their own level of knowledge, as well as that of their colleagues. Their responses were based on a 5-point Likert scale, ranging from 'strongly disagree' to 'strongly agree', 'not' to 'very', and 'null' to 'excellent'. Higher scores within this section indicate a better understanding of the principles of DOHaD and their clinical relevance. The attitudes and opinions of the physicians on the significance and long-term clinical implications of DOHaD were assessed in the Attitudes section. Similarly, nine questions within this section were scored on a 5-point Likert scale, ranging from 'strongly disagree' to 'strongly agree'. Questions were posed on how much the physician prioritized the pivotal role of infant growth assessment, including the critical role of empowering parents, with proper counselling and guidance. The higher the scores, the stronger the degree of belief and ownership regarding the importance of counselling patients' parents about DOHaD concepts, given its long-term repercussions on cardiovascular diseases. The translation of DOHaD theories into a physician's clinical practice was evaluated in the Practice section. Here, five questions clarified whether a physician's management of a child incorporated knowledge about DOHaD, such as the importance of optimal weaning and nutrition practices for children as well as holistic growth assessment. A 5-point Likert scale was used, from 'never' to 'always', reflecting the translation of DOHaD concepts into clinical management. The questionnaire was administered electronically via the FormSG web-based platform , a secure digital government form. Sample size calculation There was a total of 95 medical officers, junior residents, resident physicians, and senior residents in the institution.
Using Slovin's formula, with a 95% confidence interval and 5% margin of error, a minimum sample size of 77 was required. Statistical analysis Statistical analyses were conducted with SPSS Version 25 and the R programming language. A two-sided p -value of less than 0.05 was considered statistically significant. To gain a holistic view of the performances in each section ( Knowledge , Attitudes and Practice ), the mean scores for each KAP domain were calculated. Independent t -tests were conducted to determine the differences in KAP mean scores between junior and senior doctors. Chi-square statistics were used to compare sex differences between the groups. The internal consistency for each section was determined by Cronbach's coefficient alpha, with an acceptable value taken to be 0.70 and higher . There was a total of 95 physicians who met inclusion criteria, comprising MOs ( n = 52) and REGs ( n = 43). There was a 100% response rate, and their baseline characteristics are shown in Table . The respondents were predominantly female ( n = 68, 71.6%). On average, MOs and REGs were 4 and 8 years post-graduation respectively, with a corresponding age gap of almost 4 years. Proportion of responses and their mean scores of selected questions for discussion are shown in Table . These questions were selected from their respective sections (Knowledge, Attitudes, or Practices) because (i) the overall mean scores were among the lowest, or (ii) the highest overall percentage of responses indicating "Agree", "Frequently", or "Yes". The complete breakdown of the responses of each individual question is shown in Figure (MOs) and Figure (REGs). Knowledge Few physicians ( n = 22, 23.2%) knew the term 'DOHaD'. Majority assessed their knowledge about DOHaD to be poor, with only 13.7% ( n = 13) rating their conceptual understanding as 'good' and 'excellent'. In addition, they generally rated their colleagues to be inadequately informed, as 88.4% ( n = 84) felt that their peers were not familiar with DOHaD. Although the majority demonstrated a grasp of how maternal well-being preconception and during pregnancy could affect the metabolic health of their children, they were unaware of the potential impact on the risk of non-communicable diseases of future grandchildren, with 44.2% ( n = 42) either unsure or disagreeing with transgenerational metabolic health influences. Majority were interested to have a deeper understanding of the early determinants of non-communicable diseases, with 88.4% ( n = 84) interested to receive training in topics related to DOHaD.
Awareness of the term DOHaD together with self and peer appraisal of knowledge regarding its concepts corresponded with seniority, as REGs had higher mean scores in the relevant questions in comparison with their junior colleagues ( p = 0.016). Attitudes One-third of physicians ( n = 32, 33.7%) were not confident in their abilities to counsel patients regarding the prevention of future metabolic diseases in children by initiating healthy eating practices from the start of weaning. On the other hand, they generally felt strongly ( n = 91, 95.8%) about the physician's responsibility in anticipatory guidance and health promotion regarding early childhood nutrition and eating habits, and the importance of holistic growth assessment during the first 2 years of life on the outcomes of lifelong metabolic health. They also recognized the significance of DOHaD concepts in clinical practice ( n = 94, 98.9%). The responses were similar between the senior (REGs) and junior ranks of physicians (MOs) ( p = 0.214). Practices The MOs and REGs had generally integrated some elements of the DOHaD principles into their practice, counselling parents about the appropriate weaning and nutrition practices for their children ( n = 86, 90.5%), providing them with lifestyle advice if the child was under or overweight ( n = 92, 96.8%), and all of them made onward referrals to a dietician if there were concerns about inadequate nutritional intake. Majority also understood the importance of evaluating weight in relation to length ( n = 92, 96.8%). However, a significant proportion did not emphasize the link between wellbeing during the first 2 years of life and lifelong risk of non-communicable diseases ( n = 46, 48.5%). The REGs reported providing counselling and advice on healthy lifestyles, feeding and nutritional habits for children to a greater extent than their junior colleagues ( p = 0.008). The REGs also had a higher mean Knowledge score of 4.51 (see Fig. ) as compared to their junior colleagues at 4.29 ( p = 0.016). Although there were also correspondingly higher scores in Attitudes ( p = 0.214) and Practices ( p = 0.138), these were not statistically significant. Internal consistency and reliability were good in the Knowledge and Attitudes sections, with a Cronbach's alpha value of 0.79 and 0.75 respectively, but it was 0.52 for the Practices section.
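The figures above follow the analysis plan in the Methods (domain mean scores, MO-versus-REG comparisons, chi-square for sex, and Cronbach's alpha). As a purely illustrative sketch — simulated responses and hypothetical object and column names, not the study data — the same pipeline can be expressed in R, which the authors used alongside SPSS:

```r
library(psych)   # provides Cronbach's alpha

# Simulated respondents: 52 MOs and 43 REGs, a sex variable, and 15 Likert-scored
# Knowledge items (k1..k15); all values and names here are hypothetical.
set.seed(3)
n <- 95
resp <- data.frame(
  grade = factor(rep(c("MO", "REG"), c(52, 43))),
  sex   = factor(sample(c("F", "M"), n, replace = TRUE, prob = c(0.72, 0.28)))
)
knowledge_items <- paste0("k", 1:15)
resp[knowledge_items] <- lapply(knowledge_items, function(i) sample(1:5, n, replace = TRUE))

# Domain mean score per respondent, then MO vs REG comparison (independent t-test)
resp$knowledge_mean <- rowMeans(resp[knowledge_items])
t.test(knowledge_mean ~ grade, data = resp)

# Sex distribution between the two groups (chi-square test)
chisq.test(table(resp$grade, resp$sex))

# Internal consistency of the section (Cronbach's coefficient alpha)
alpha(resp[knowledge_items])
```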
Key findings While good practices were demonstrated among REGs, self-assessed knowledge of the DOHaD principles in both groups was shown to be generally poor. This suggests an uncertainty regarding the clinical translation of DOHaD concepts into practice among physicians. Nevertheless, both groups expressed their interest in being properly equipped by learning more about DOHaD. Low DOHaD literacy among paediatric physicians Our study showed that REGs possessed greater self-assessed knowledge of DOHaD, compared to MOs. This is expected, given that REGs have likely received a longer duration of training in the field of paediatrics and may potentially have been more exposed to relevant concepts. However, self-assessed knowledge of DOHaD in both groups remained poor, which is consistent with a Canadian study that reflected a general lack of knowledge in DOHaD among various healthcare providers . Our results suggest that there may be insufficient formal training or exposure to DOHaD within the current training of paediatric physicians. These findings are similar to another qualitative study conducted in Japan and New Zealand, which showed poor awareness of DOHaD concepts among university healthcare students despite having prior exposure, indicating that current efforts to inculcate DOHaD principles and practices through medical education may be inadequate .
Given the rising prominence of DOHaD and its role in targeted interventions to reduce the risks of developing non-communicable diseases, more must be done to improve the literacy of DOHaD among physicians. By applying DOHaD principles, physicians can provide tailored guidance and counselling on childhood nutrition and lifestyle behaviours, especially to caregivers of children at risk of poor metabolic health. This approach not only facilitates identifying and managing high-risk groups, but also enhances downstream clinical management, potentially leading to better long-term health outcomes. Uncertainty regarding clinical translation of DOHaD concepts Interestingly, despite the poor self-assessment of knowledge, our study has shown that REGs have good practices, including providing appropriate nutrition and healthy lifestyle advice. We postulate three contributing factors to account for this. Firstly, it is possible that there is an under-reporting of self-assessed knowledge among the physicians about DOHaD, which may be assessed through on-site observations and interviews in future studies. Secondly, the practices performed may have been perceived to be ideal by physicians, but this may not be objectively true on the ground. The relevant concepts and latest evidence surrounding the early determinants of metabolic diseases may not have been sufficiently translated into clinical practice, a widely recognized occurrence . Lastly, it is plausible that while physicians may seem well-versed in such practices, they could have been influenced by other aspects of their training within paediatric medicine, such as observational and experience-based learning , rather than being firmly grounded using the DOHaD principles and thereafter applying them to their clinical practice. Our study thus indicates that more needs to be done to understand the association between the physicians’ knowledge and practice of DOHaD, including the knowledge barriers faced. Expression of interest in being more well-equipped Paediatric physicians, both junior and senior alike, expressed their interest in being properly equipped and aware of the DOHaD concepts through formal training. They recognized the relevance and importance to their practice in health promotion efforts to positively influence the risk of developing non-communicable diseases during the first 2 years of a child’s life. In addition, they conveyed feeling a sense of duty and responsibility to be the ones providing counselling and anticipatory guidance regarding this, given that they are the first point of contact whenever a child comes into their clinic for well-baby visits, or opportunistically during consultations for other medical reasons. This is inherent to being a paediatrician, serving as an advocate holistically for the child’s health and wellbeing . Recommendations Our findings demonstrated poor knowledge about DOHaD concepts in physicians (MOs and REGs), concurring with another local DOHaD KAP study by Ku et al. . However, the responses from this study are more representative of the needs from a paediatric postgraduate medical education perspective. The tailored questionnaire used in this study was also more applicable and reflective of DOHaD-related practices amongst paediatric physicians, including the uncertainty regarding clinical translation of relevant concepts. This fosters a stronger push towards bridging the gap between DOHaD research knowledge and clinical applicability. 
To our knowledge, there are no programmes dedicated to physicians regarding the education and integration of DOHaD concepts from bench to bedside in Singapore. This is not unique to Singapore, as barriers to clinical translation and knowledge acquisition among healthcare providers, including the lack of practice guidelines, have been similarly identified in other countries such as Canada . Till date, current literature also revealed that much international efforts for DOHaD knowledge translation have been focused on adolescents and university students . With these in mind, we recommend improving knowledge translation efforts locally through a multi-pronged approach: (i) restructuring formal undergraduate medical education and post-graduate training, (ii) publishing clinical practice guidelines which integrate DOHaD principles, and (iii) improving outreach and health promotion efforts within the community. DOHaD concepts should be incorporated into the curriculum of medical education, starting early from medical school. This can be reinforced further for those pursuing formal residency training in paediatric medicine. Routine assessments may be conducted, including organising theory tests to appraise knowledge of DOHaD concepts, and conducting practical sessions with standardised patients to assess communication and counselling skills. Embedding patient education and counselling skills into the curriculum are essential, as this could enhance the confidence and effectiveness of the medical professional in guiding healthy nutrition and lifestyle choices in their patients and caregivers. Providing continuing medical education and training programmes for DOHaD should also be considered. This may be available in the form of workshops, conferences, and seminars. In addition, although not examined in this study, such training should not be confined to paediatric residents, obstetrics and gynaecology residents, and medical students. To provide effective maternal and child healthcare in an institution, every healthcare provider, including doctors of all ranks, nurses, and allied health professionals, should be equipped with the proficiency and be well versed in DOHaD theories. The efficacy of such interventional educational policies may be analysed in further follow-up studies. From a policy point of view, it may be ideal to consider creating clinical practice guidelines grounded in DOHaD concepts to help translate these ideas into clinical practice and assist physicians when counselling patients. This would raise the standards of care rendered to women and children, making the transformation of DOHaD principles into routine practice. To increase the outreach of these guidelines and their relevant concepts, trained providers may hold sharing sessions for other physicians who work in a primary care setting. Accreditation programmes in DOHaD counselling can also be considered for these physicians practicing in the primary care setting, with the focus on health promotion and lifestyle behavioural changes. This will equip them with the necessary skills to counsel their patients in the community on an opportunistic basis. From a health promotion standpoint, such initiatives may ensure that relevant principles are readily accessible to the general public including parents and other members of the family nucleus. 
Online educational materials and modules catered to parents may also be developed, as seen in initiatives from other countries like First 1000 Days Australia , as well as Healthier Together in the United Kingdom . Strengths and limitations A major strength of this study is possessing a 100% response rate, which increases the reliability of our results. However, there was a lack of a validated questionnaire when assessing the KAP of physicians in a paediatric department. We acknowledge that there was no face, construct, or criterion validation by conducting a pilot evaluation among a smaller group of respondents. However, we ensured content validation by incorporating input from domain experts during the development of the questionnaire items, in accordance with existing KAP guidelines. In addition, the knowledge and sections of our questionnaire had good Cronbach’s Alpha indices of at least 0.7, suggesting good internal consistency. Nevertheless, this study can be further expanded to evaluate the KAP of other healthcare professionals involved in maternal and child health, including nurses and allied health professionals. Instead of solely administering questionnaires, conducting on-site evaluation of practices and interviews of these healthcare professionals can also be considered in future studies.
In conclusion, our study demonstrated the existing lack of awareness about DOHaD among paediatric physicians. Despite their close contact with patients and their opportunities to counsel and provide anticipatory guidance, physicians do not feel sufficiently competent or well-equipped to do so. However, it is a positive sign that both MOs and REGs, who interact with caregivers of patients during the first two years of life, express a desire to learn more about DOHaD principles, so that they are able to provide an improved level of care to impact maternal and child health. With this in mind, changes can be made to pragmatically enhance the understanding and practice of DOHaD within healthcare, and even beyond to the community. Below is the link to the electronic supplementary material. Supplementary Material 1 Supplementary Material 2 Supplementary Material 3
Suppression of mycotoxins production and efficient chelation of heavy metals using natural melanin originated from
5f2ed0ca-01a9-4f9f-b509-8e36f9d4ebb2
11724575
Microbiology[mh]
Heavy metals (HMs) and mold contamination pose significant risks to human health and the environment due to their toxicity, persistent presence, and capacity for bioaccumulation. Among the most hazardous HMs are chromium [Cr], cadmium [Cd], lead [Pb], and mercury [Hg], as they tend to accumulate in multiple organs. Consequently, mycotoxin and HM toxicity is a global health concern, and further research is needed to discover novel, non-conventional approaches to combat the harmful effects of these toxins. Numerous remediation techniques, including chemical precipitation, photocatalytic adsorption, destruction, and filtration, have been previously developed. The global popularity of safe materials for therapeutic purposes and biosorptive agents for hazardous environmental pollutants is steadily increasing. In this framework, natural products offer a promising prospect for effectively inhibiting mycotoxin release and for bioremediation of heavy metal toxicity. Adsorption is attractive because of its low cost and simple procedure, particularly for non-degradable HMs. In this respect, melanin is a natural agent with outstanding heavy metal removal properties. Melanin is a dark, high molecular-weight pigment synthesized through hydroxylation and polymerization of organic compounds. It is a potent ion exchange molecule that effectively chelates pollutants, chemicals, and HMs. In addition, melanin can chelate compounds and control their cellular uptake, thus functioning as a trap or storage site for metal ions. Further, melanin was found to possess mycotoxin detoxification properties. Despite these remarkable characteristics, melanin is still challenging to produce on a large scale, since chemical synthesis is expensive and environmentally unfriendly. Melanin production from plants and animals also has limitations imposed by the natural growth cycle and environment. Microbial melanin is found in the cell wall and is produced through enzymatic reactions involving tyrosinase and polyketide synthase, acting on DOPA and dihydroxynaphthalene (DHN), respectively. Current research now focuses on identifying new natural melanin-producing strains of filamentous fungi through fermentation technology. Therefore, the current study was designed to produce and refine a melanin pigment using A. flavus and A. carbonarius in submerged culture. The melanin was characterized using various techniques, including physical and chemical analysis, UV, HPLC, FT–IR, and NMR. Moreover, the antioxidant activity of the pigments, their suppression of AF–B1 and OTA production, and their removal of HMs were studied. Sample collection, isolation, and purification of fungi A total of 12 rhizosphere soil samples were collected from various regions in Sharkia Governorate, Egypt. The samples were transferred to the lab on the same day for the isolation of fungi. The serial dilution plating method was used to dilute the soil samples and minimize the number of fungal colonies in each soil dilution. The stock soil solution was prepared by dissolving 50 g of each soil sample (separately) in 100 mL of 0.85% NaCl, with thorough agitation. The solution was then diluted into a series of prepared vials labeled from 10⁻¹ to 10⁻⁶, each containing 9 mL of 0.85% NaCl. One milliliter of the soil stock solution was transferred to the first vial. Subsequently, another 1 mL of the solution from the first vial was transferred to the second vial, and the steps were repeated until the last dilution.
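A minimal sketch of the dilution arithmetic implied above, assuming the conventional ten-fold step (1 mL of sample transferred into 9 mL of diluent); the function name and stock value are illustrative only and are not part of the original protocol:

```python
# Hypothetical illustration of the ten-fold serial dilution described above.
# Stock: 50 g of soil in 100 mL of diluent = 0.5 g soil per mL.

def dilution_series(stock_g_per_ml: float, steps: int = 6):
    """Return (label, effective soil concentration in g/mL) for each ten-fold step."""
    series = []
    conc = stock_g_per_ml
    for step in range(1, steps + 1):
        conc /= 10.0  # 1 mL transferred into 9 mL of diluent = 1/10 dilution per step
        series.append((f"10^-{step}", conc))
    return series

for label, conc in dilution_series(0.5):
    print(f"{label}: {conc:.6f} g soil/mL")
```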
Czapek Dox Agar (CDA) (Hi Media Lab. Pvt. Ltd., Mumbai, India; Ref. GM075) plates were prepared, and 0.1 mL of each dilution was pipetted and spread on the prepared CDA plates. The plates were incubated at 28–30℃ for 5–7 days. In order to obtain pure fungal isolates, the colonies that appeared were then sub-cultured on sterile CDA plates and incubated for 5–7 days at 28–30℃. Morphological and molecular identification The obtained pure fungal isolates were identified based on their macroscopic and microscopic morphological characteristics. These isolates were further screened for their potential to produce melanin, and the most potent producers were confirmed by molecular identification of the 18–28S rRNA gene at SolGent Company in Daejeon, South Korea. The forward and reverse primers ITS1 (5′-TCCGTAGGTGAACCTGCGG-3′) and ITS4 (5′-TCCTCCGCTTATTGATATGC-3′) were used for PCR amplification of the 18–28S rRNA gene. The purified PCR amplicons were sequenced using the same primers. The obtained sequences were searched for similarities with nucleotide sequences in GenBank using the BLAST (BLASTn) search tool, and the sequences were deposited in GenBank to obtain accession numbers. MegAlign software, version 5.05 (DNASTAR Inc., Madison, WI, USA), was used to conduct a phylogenetic analysis of the sequences. Screening of melanin production The twenty fungal isolates were screened for their potential to produce melanin. The fungal isolates were inoculated separately in Czapek Dox Liquid Medium (Hi Media Lab. Pvt. Ltd., India, Ref. M1170A) and incubated at 30℃ for seven days. Then, the cultures were filtered, and the mycelia were used to extract melanin. As described by El-Sayyad et al., approximately 1 g of mycelial biomass of each isolate was treated with 1 N NaOH for 20 min at 121 °C and 1.5 bar. Subsequently, the mixture was centrifuged at room temperature for 10 min at 8000 rpm, and the supernatant containing melanin was collected. The melanin pigment was precipitated with HCl at pH 2 and collected by centrifugation at 10,000 rpm for 10 min at 4℃. Afterward, the pigment was washed three times with ethyl acetate:chloroform (2:3, v/v) and then centrifuged three times with distilled water. The collected melanin was dried in dehumidified air. Lastly, the pigments were dissolved in 1 M KOH to a final concentration of 0.01 µg/mL and analyzed via UV-Vis (T60 UV/Vis, 200–900 nm) with a scanning interval of 1 nm, where L-DOPA (PHR1271-500MG, Sigma, USA) served as a standard. The biomass dry weight of melanin was estimated as described by Kumar et al., El-Batal & Al Tamie, and Joshi et al. Physicochemical characterization and pathway mechanism for melanin production The extracted melanin pigments from A. flavus and A. carbonarius were identified based on their physical and chemical properties, including solubility in water, color observation, solubility in KOH, precipitation in 3N HCl, solubility in organic solvents (methanol, ethanol, hexane, chloroform, benzene, acetone, and ethyl acetate), reaction with H2O2, reaction with FeCl3 (a test for polyphenols), and reaction with sodium dithionite and potassium ferricyanide. The melanin pathway mechanism was elucidated by assessing the impact of inhibitors: kojic acid, which suppresses the DOPA pathway, and tricyclazole, which inhibits the DHN pathway. Kojic acid was solubilized in distilled H2O, while tricyclazole was dissolved in ethanol.
The inhibitors were added to autoclaved and cooled PDA medium at final concentrations of 1, 10, or 100 µg/mL. The cultures were incubated for seven days at 25 °C, after which pigmentation and growth were observed. HPLC analysis High-performance liquid chromatography (HPLC) analysis was used to identify the precursor molecules and intermediates involved in melanin synthesis. The purified melanin was analyzed by HPLC (Thermo Scientific HPLC system, Santa Clara, USA) using a C18 column (Eclipse Plus C18, 4.6 × 150 mm, 3.5 µm, Cat.# 959963-902) with an isocratic mobile phase of methanol and 1% acetic acid at a flow rate of 1.0 mL/min for 20 min. Prior to injection onto the column, the specimen was filtered using a 0.2 µm filter (Millipore, Amicon, Mumbai, India). Peaks were identified by comparison with chemical standards, based on retention time and the absorption maximum. The melanin was quantified using L-DOPA (PHR1271-500MG, Sigma, USA) as a standard. FTIR and NMR spectral analysis Fourier-transform infrared (FTIR) spectral analysis and nuclear magnetic resonance (NMR) spectroscopy of the purified fungal melanin were conducted at a micro-analytical lab at Cairo University, Egypt. The FTIR spectral analysis was performed at room temperature using a PerkinElmer FTIR spectrometer. The specimens were ground and compressed with KBr, then scanned over the range 650–4000 cm−1. The output spectra were recorded and analyzed using spectral software. For NMR spectroscopy of melanin, approximately 20 mg of purified melanin was mixed with 1 mL of deuterated dimethyl sulfoxide, filtered through a small cotton sleeve tightly packed into a Pasteur pipette, and transferred to the proton NMR (1H NMR) instrument. The purified sample was analyzed on an ECA-500 spectrometer with a cryoprobe operating at 500 MHz, with DMSO at 205 ppm. The specimens were prepared by dissolving 50 mg of the U7 melanin and of the standard in 3.5 mL of a deuterium oxide/ammonia mixture. The mixture was made by mixing 0.01 mL of aqueous ammonia (33%) with 10 mL of deuterium oxide at pH 10. Optimization of melanin production conditions To determine the optimal conditions for the production of melanin by the most potent strains, different media, incubation temperatures, incubation periods, pH levels, carbon sources, nitrogen sources, and heavy metals were assessed. During the optimization experiments, UV-Vis analysis was utilized to measure the concentration of melanin, using L-DOPA as a reference standard. Additionally, the dry weight was estimated in each experiment. Potato dextrose broth medium (PD), mineral salt medium (MS), and Czapek–Dox broth medium (CD) were used to test their effect on the production of melanin by A. flavus and A. carbonarius. Three sterilized replicates of each medium (pH 5.5) were inoculated with 1 mL of spore suspension (2.0 × 10⁶ spores/mL). Cultures were then incubated at 30 °C for seven days. Subsequently, melanin production was evaluated as described in the previous step. After the determination of the optimal medium for melanin production, the medium was adjusted to varying pH levels ranging from pH 2 to pH 8. Afterward, the pH-adjusted cultures were incubated at various temperatures, ranging from 20 to 40℃. The optimal incubation period was determined in the following manner.
The fungal cultures were incubated for a total period of 16 days, with melanin production measured on days 3, 5, 7, 10, 12, 14, and 16. In order to determine the best carbon source for melanin production, the main carbon source in the CD medium was replaced with each of the tested carbon sources at the same concentration as the original carbon source. Therefore, the CD medium, lacking its original carbon source, was supplemented with 3% (the concentration of the original carbon source in the CD medium) of soluble starch, glucose, lactose, fructose, maltose, or pectin separately. Subsequently, the carbon source giving the highest melanin yield was tested at various concentrations to determine the optimal concentration. Similarly, to determine the most effective nitrogen source for enhancing melanin production, the original nitrogen source of the CD medium was replaced with the tested nitrogen sources (each tested separately) at the original nitrogen concentration of the medium. Accordingly, the following nitrogen sources were added to the medium in amounts equivalent to the original nitrogen concentration: NaNO3 (3 g/100 mL), (NH4)2SO4 (2.33 g/100 mL), urea (1.12 g/100 mL), yeast extract (3.33 g/100 mL), NH4H2PO4 (4.2 g/100 mL), beef extract (1.6 g/100 mL), and peptone (2.19 g/100 mL). Subsequently, the nitrogen source giving the highest melanin production was tested at different concentrations to detect the optimal concentration. After setting up the optimal production medium, pH, incubation temperature, incubation period, carbon source, and nitrogen source, the production of melanin in the presence of 0.1, 0.2, 0.5, 1, 2, 3, and 5 mM of different heavy metal sources (CuSO4.5H2O, FeSO4.7H2O, potassium dichromate (K2Cr2O7), and CdSO4.4H2O) was also evaluated. The fungal dry weight and melanin concentration were determined in each experiment, as explained previously. DPPH radical scavenging efficiency The DPPH method was used with slight modifications to assess the antioxidant efficacy of the melanin compared to the standard L-DOPA. This method is based primarily on evaluating the potential of melanin to scavenge free radicals. About 1 mL of the tested specimen was dissolved in methanol and combined with 1 mL of DPPH (0.002% dissolved in methanol). The mixtures were thoroughly mixed and allowed to settle for 30 min. The absorbance of the mixtures was then measured at 517 nm. Ascorbic acid served as a standard. Antioxidant efficacy was assessed as the reduction in DPPH absorbance. DPPH radical scavenging efficiency was expressed as the EC50 value, the concentration at which 50% of the DPPH radicals were scavenged. Detecting the inhibition of mycotoxin production To detect the effect of purified fungal melanin on the production of AFB1 and OTA by A. flavus and A. carbonarius, respectively, Erlenmeyer flasks containing 100 mL of CD liquid medium were supplemented with varying amounts of pure melanin (50, 100, 200, 300, 400, and 500 mg). Each flask was inoculated separately with a spore suspension of A. flavus or A. carbonarius (1 × 10⁶ spores/mL). The flasks were incubated for ten days at 25 °C, the culture media were centrifuged at 10,000 rpm for 30 min, and the mycelia were removed. The concentrations of aflatoxin-B1 and ochratoxin-A were detected in the culture filtrate as follows. The AFB1 was isolated from A. flavus culture filtrate following the method described by Schuller et al., and the final extracts were purified according to the methodology described by Takeda et al.
The OTA was isolated from A. carbonarius culture filtrate by mixing it with an equal volume of methylene chloride, following the removal of fat with n-hexane. The culture filtrate was subsequently stirred for 30 min and left to stand for an additional 30 min in a separating funnel. Following filtration through anhydrous sodium sulfate, the methylene chloride layer was subjected to vacuum evaporation until it reached complete dryness. HPLC analysis was further performed to measure the concentrations of the mycotoxins AFB1 and OTA. This analysis was performed at the Animal Health Research Institute, Dokki, Giza, Egypt, using an Agilent Series 1200 quaternary gradient pump, autosampler, FLD detector, and HPLC 2D Chemstation software (Hewlett-Packard, Les Ulis, Germany). The chromatographic separation was performed using a reversed-phase column (Extend-C18, Zorbax column, 4.6 mm i.d., 250 mm, 5 μm, Agilent Co.). Heavy metal adsorption The fungal melanin was tested for its potential to adsorb HM ions using batch experiments as described by Nguyen et al. Approximately 20 mL of 0.2 mg/mL potassium dichromate (K2Cr2O7) and 0.2 mg/mL cadmium sulfate (CdSO4) solutions were prepared separately in 50-mL conical glass flasks. In the tests for detecting the impact of the initial HM ion concentration, Cr and Cd concentrations were adjusted to 5 mg/L. Throughout these experiments, melanin was used at a solid-to-liquid ratio of 0.5%, except for the experiment specifically designed to examine the impact of the solid-to-liquid ratio. The impact of initial pH was evaluated to select the optimal pH value for HM removal; the initial pH was then set at 4.0, which was optimal for the HM removal experiments. The mixture was thoroughly mixed on a shaker at 150 rpm for two hours at 25 °C. Afterward, the supernatant was filtered with a filter membrane (pore size 0.45 μm), and the HM concentration in the filtrate was evaluated using an Atomic Absorption Spectrophotometer (AAS) (Unicam 969) at the Central Laboratory, Faculty of Agriculture, Zagazig University. The removal efficiency was calculated using the following equation: $$\text{Removal efficiency}\ (\%) = \frac{C_0 - C_t}{C_0} \times 100$$ where C0 is the initial ion concentration and Ct is the ion concentration at time t. In addition, the residual HM chelated with melanin (in the precipitate) was also estimated. The precipitate (containing melanin-chelated HM) was dried in an oven for an hour. The resulting powder specimens (dry precipitate) were analyzed by FTIR spectral analysis and energy-dispersive X-ray spectroscopy (EDX). FTIR spectral analysis of the samples was performed at room temperature using a PerkinElmer FTIR spectrometer. The specimens were ground and compressed with KBr, then scanned over the range 650–4000 cm−1. The output spectra were recorded and analyzed using spectral software. EDX was performed at the Regional Center for Mycology and Biotechnology (RCMB), Al-Azhar University, Cairo, Egypt, to measure the percentage of melanin-chelated HM.
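The removal-efficiency formula above lends itself to a one-line computation; the following sketch uses hypothetical concentrations and is not part of the original protocol:

```python
def removal_efficiency(c0: float, ct: float) -> float:
    """Removal efficiency (%) = (C0 - Ct) / C0 * 100, as defined above."""
    if c0 <= 0:
        raise ValueError("Initial concentration C0 must be positive")
    return (c0 - ct) / c0 * 100.0

# Hypothetical example: an initial ion concentration of 5 mg/L reduced to 1.15 mg/L
print(f"Removal efficiency: {removal_efficiency(5.0, 1.15):.1f}%")  # 77.0%
```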
An X-ray detector (Ametet) operated at an accelerating voltage of 20 kV was used for semi-quantitative and qualitative elemental assessment by EDX spectrometry (Quanta FEG250). Based on the peak emission of X-rays produced by the interaction of each element of a compound with the electron beam, the EDX technique is useful for determining the composition of samples. An electron or X-ray beam is directed into the sample under study in order to induce the emission of distinctive X-rays from it. At rest, an atom in the sample contains ground-state (unexcited) electrons in discrete energy levels, or electron shells, bound to the nucleus. An electron in an inner shell may be excited by the incident beam, which would cause it to be ejected from the shell and leave an electron hole in its place. Following the filling of the hole by an electron from an outer, higher-energy shell, the energy difference between the higher- and lower-energy shells may be emitted as an X-ray. An energy-dispersive spectrometer can detect the quantity and energy of the X-rays released from a specimen. EDS enables the measurement of the specimen's elemental composition, since the X-ray energies are indicative of the energy difference between the two shells and the atomic structure of the emitting element. In-silico study Molecular docking study The study investigated molecular interactions between melanin and two fungal toxins through in silico approaches. The crystal structure of aflatoxin from Aspergillus flavus was obtained from the Protein Data Bank (PDB ID: 8hbs), while the ochratoxin structure from Aspergillus carbonarius was retrieved from UniProt (ID: AF-A0A1R3RGJ2-F1-model_v4). Three-dimensional ligand structures were generated using ChemBio Office software in conjunction with the DrugBank database. Ligand preparation involved removing water molecules and co-crystallized ligands using UCSF Chimera, followed by the addition of polar hydrogen atoms and assignment of partial charges. AutoDock Vina software was employed for molecular docking simulations to analyze binding poses and calculate interaction energies between the ligands and target toxins. Molecular dynamics simulations The toxin-ligand complexes from docking studies were placed in water boxes with counter ions for system neutralization. Energy minimization was performed to eliminate steric clashes before running molecular dynamics simulations in UCSF Chimera to assess complex stability over time. The analysis included evaluation of docking scores and hydrogen bonding patterns. Statistical analysis Data are presented as mean ± SD, based on triplicate measurements from three independent experiments. Different letters (a–h) indicate a statistically significant difference at P < 0.05 according to one-way ANOVA with LSD and Duncan tests; i indicates a statistically significant difference at P < 0.05 according to an independent t-test; * indicates a statistically significant difference at P < 0.05.
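As a minimal sketch of the significance testing described above, the following uses SciPy on hypothetical triplicate melanin yields; the LSD and Duncan post-hoc comparisons used in the study are not reproduced here:

```python
from scipy import stats

# Hypothetical triplicate melanin yields (µg/mL) for three treatments
treatment_a = [690, 702, 681]
treatment_b = [517, 530, 509]
treatment_c = [320, 335, 310]

# One-way ANOVA across the three treatments
f_stat, p_anova = stats.f_oneway(treatment_a, treatment_b, treatment_c)

# Independent t-test between two treatments
t_stat, p_ttest = stats.ttest_ind(treatment_a, treatment_b)

print(f"One-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")
print(f"Independent t-test (A vs B): t = {t_stat:.2f}, p = {p_ttest:.4f}")
print("Significant at P < 0.05" if p_anova < 0.05 else "Not significant at P < 0.05")
```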
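For completeness, the molecular docking step described in the in-silico methods above can be driven from Python by calling the AutoDock Vina command-line tool; the file names and search-box parameters below are hypothetical placeholders, not values from the study:

```python
import subprocess

# Hypothetical invocation of the AutoDock Vina CLI for one receptor-ligand pair.
# Receptor and ligand files must be prepared as PDBQT beforehand (e.g., with UCSF
# Chimera or AutoDockTools); the grid-box center and size here are placeholders.
cmd = [
    "vina",
    "--receptor", "toxin_target.pdbqt",
    "--ligand", "melanin_unit.pdbqt",
    "--center_x", "10.0", "--center_y", "12.5", "--center_z", "-3.0",
    "--size_x", "20", "--size_y", "20", "--size_z", "20",
    "--exhaustiveness", "8",
    "--out", "docked_poses.pdbqt",
]
subprocess.run(cmd, check=True)  # writes ranked poses and binding affinities (kcal/mol)
```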
Fungal isolates and melanin productivity As shown in the Supplementary data Figure , all the isolates tested exhibited the ability to produce melanin. However, A. flavus and A. carbonarius produced the highest amounts of melanin pigment, approximately 690 µg/mL and 517 µg/mL, respectively. The most efficient melanin producer, isolate A. flavus, was subsequently confirmed by 18S–28S rRNA gene sequencing. It showed the highest sequence similarity with the strain A. flavus ATCC16883 (NR_111041). The phylogenetic tree was constructed (Fig. ), and the sequences were deposited in GenBank ( https://www.ncbi.nlm.nih.gov/Genbank ) under accession No. MZ314535. Identification of melanin characteristics Physicochemical properties The melanin pigments extracted from A. flavus and A. carbonarius were primarily identified by their physicochemical properties compared to the standard L-DOPA. The extracted melanin pigments were insoluble in hot and cold distilled water and in various organic solvents, including methanol, ethanol, hexane, chloroform, benzene, acetone, and ethyl acetate (Table ). In addition, the melanin pigments dissolved rapidly in KOH at 100℃. The pigments underwent condensation at pH 3 and in 3N HCl, forming a flocculent brown condensate in the FeCl3 test. When the extracted melanin was exposed to oxidizing chemicals such as H2O2, NaOCl, KMnO4, and K2Cr2O7, these agents bleached it and it became colorless. In contrast, the pigments became decolorized and changed to brown when interacting with the reducing agents potassium ferricyanide and sodium dithionite (Table ). Melanin pathways Melanin formation was verified through the identification of the DOPA and DHN pathways in A. flavus and A. carbonarius (Fig. ). In the presence of the respective inhibitors, melanin production was effectively suppressed, as evidenced by the complete discoloration of the tested fungal strains. UV, HPLC, FTIR, and 1H-NMR analyses The UV-Vis analysis of the melanin produced from A. flavus showed an optical density of 3.5 at 260 nm, similar to that of the standard (L-DOPA), followed by the melanin produced by A. carbonarius (Fig. ). The HPLC analysis of both the fungal melanin and the standard L-DOPA revealed a distinct peak with a retention time of approximately 5.06 min in the upper region of the curve. This peak suggests that the purity of A. flavus and A. carbonarius melanin is comparable to that of the standard melanin, as depicted in Fig. A, B and C. Moreover, 1H-NMR spectroscopy was used to identify the melanin produced by A. flavus and A. carbonarius. The analysis showed that the pigments were detected within the range of 1.0 to 8.0 ppm (Fig. A, B and C).
The 1H-NMR spectrum of the extracted melanin exhibits distinct signals in both the aromatic and aliphatic regions. The region from 3.79 to 5.07 ppm displayed increased absorption, assigned to protons on carbons bonded to oxygen and/or nitrogen atoms. In the aliphatic region of the extracted melanin's 1H-NMR spectra, signals around 1.2 ppm can be assigned to CH3 groups of alkyl moieties, such as CH2CH3 and CH(CH3)2. The CH(CH3)2 group exhibits a coupling constant of 6.9 Hz. The shift observed in the 8.7 ppm region clearly indicates the presence of an indole ring, which appeared at 7.7 ppm in the L-DOPA spectrum. Furthermore, the results of FTIR analysis showed that the melanin pigments of A. flavus and A. carbonarius exhibited FTIR spectra identical to those of the standard L-DOPA (Fig. A, B and C). The peak at around 3428 cm−1 is primarily attributed to the presence of OH and NH groups, while the peaks at 2924 cm−1 and 2854 cm−1 correspond to CH2 or –CH3 groups. Furthermore, broad signals at 1710 and 1627 cm−1 indicate the presence of ketone or carboxylic acid (C=O) functional groups. The bands observed at 850, 786, and 708 cm−1 correspond to the stretching vibrations of C–H and =C–H and to out-of-plane bending vibrations of N–H or C–Cl. These bands indicate the presence of aliphatic compounds; the corresponding modes were absent from the standard curve. Optimization of melanin production conditions In order to maximize melanin production by A. flavus and A. carbonarius, the production conditions, including medium, temperature, pH, carbon source, nitrogen source, and incubation period, were optimized. The concentration of produced melanin was estimated in each experiment. Among the three tested media, CD was the best medium for melanin production at 30℃ after seven days of incubation. The amount of melanin produced by A. flavus and A. carbonarius at 25℃ was less than that produced at 30℃, and the quantity of produced melanin decreased as the incubation temperature increased above 30℃ (Fig. A). The pH of the growth medium was adjusted using citrate-phosphate buffer, and it was found that the production of melanin increased gradually with increasing pH until reaching pH 5, where both A. flavus and A. carbonarius exhibited the maximum dry weight and melanin concentration (12,000 µg/mL and 800 µg/mL for A. flavus, and 9100 µg/mL and 653 µg/mL for A. carbonarius, respectively). The dry weight and melanin concentration then decreased steadily with further increases in medium pH, reaching the lowest level at pH 8. By replacing the original nitrogen source of the medium with the tested nitrogen sources, it was found that melanin production increased to its maximum level in the presence of yeast extract as the nitrogen source. When diverse yeast extract concentrations were tested, the maximum melanin production by A. flavus was obtained at 0.2% yeast extract. Conversely, A. carbonarius exhibited the highest melanin production when the medium was supplemented with 0.3% yeast extract (Fig. B). Furthermore, the production of melanin pigment was tested separately with glucose, sucrose, fructose, maltose, lactose, soluble starch, cellulose, and pectin as the carbon source. Glucose was found to be the best carbon source for augmenting the production of melanin by A. flavus and A. carbonarius.
When several glucose concentrations were tested for their potential to enhance melanin production, the melanin concentration increased with increasing glucose concentration in the medium, reached its highest level when the medium was supplemented with 3% glucose, and diminished as the glucose concentration was increased further (Fig. C). The synthesis of melanin by A. flavus and A. carbonarius was also examined at various time points during cultivation. The melanin concentration produced by both strains increased gradually with extended incubation under the optimal conditions, reaching the maximum level after 14 days (1,500 µg/mL for A. flavus and 1,000 µg/mL for A. carbonarius). When the incubation period was extended to 16 days, the concentration of melanin remained constant (Fig. D). Melanin synthesis by the selected fungal strains was also assessed in the optimized medium using various concentrations of heavy metal (HM) sources. The UV-Vis analysis showed that A. flavus and A. carbonarius produced melanin at the highest concentration when the medium was supplemented individually with 0.5 mM CuSO4, 0.5 mM FeSO4, 0.1 mM K2Cr2O7, and 0.1 mM CdSO4. However, melanin production by both strains decreased when the heavy metal concentrations were increased above these levels (Table ).

Antioxidant activity

The antioxidant activity of fungal melanin was confirmed using the DPPH method. The purified melanin pigments exhibited significant capability to scavenge free radicals, as demonstrated by their DPPH scavenging efficacy (EC50 of 55.5 µg/mL), which was comparable to that of L-DOPA (EC50 of 59.5 µg/mL), particularly at a concentration of 100 µg/mL. In Table , L-DOPA and ascorbic acid were used at concentrations of 20, 40, 60, 80, 100, 120, and 140 µg/mL. Ascorbic acid displayed a remarkable ability to scavenge radicals, with an EC50 value of 40.6 µg/mL. The DPPH scavenging activity increased with increasing melanin concentration; according to the data in Table , the scavenging activity rises with concentration until it reaches a plateau at 100 µg/mL.

Suppression of mycotoxin production

The effect of fungal melanin on the production of mycotoxins was evaluated by HPLC analysis of the mycotoxins in each treatment. Our results, presented in Table , show that raising the concentration of pure melanin gradually decreases the concentration of the produced AF-B1 toxin and the growth (expressed as dry weight) of A. flavus. Supplementing the A. flavus culture with 0.3% pure melanin resulted in complete inhibition of AF-B1 toxin production with a significant diminution in A. flavus growth. In addition, the results demonstrate that increasing the melanin concentration above 0.3% leads to a continual decline in fungal growth. These findings were monitored by HPLC analysis of the produced AF-B1 toxin in each treatment compared with the standard toxin sample at the same retention time (5.55 min) (Fig. A, B and C). The effect of melanin on the production of OTA by A. carbonarius is illustrated in Table . Evidently, the increase of melanin in the culture medium results in a continuous decline in OTA and in fungal growth. By measuring the produced OTA in each experiment, we found that 0.4% melanin-enriched medium caused complete suppression of OTA production, and the increased melanin concentration in the medium of A.
carbonarius continued to reduce the fungal growth. Our findings were confirmed by HPLC analysis of OTA in each experimental output; the OTA produced in each treatment was analyzed in comparison with the standard OTA at the same retention time (5.55 min) (Fig. A, B and C).

Heavy metal chelation

The efficiency of purified fungal melanin in chelating the heavy metals Cd and Cr was determined using AAS analysis of the experimental solution filtrate and EDX and FTIR analyses of the melanin-chelated heavy metal powder. The results demonstrated a direct relationship between the concentration of melanin and the adsorption of Cr and Cd. The potential of melanin to adsorb heavy metals was calculated and expressed as a removal efficiency percentage. The addition of 1 mg/mL of purified melanin resulted in a removal efficiency of 49% for Cd and 63% for Cr. The removal efficiency of Cd increased to 56%, 57%, and 60% as the melanin concentration was raised to 5, 10, and 15 mg/mL, respectively, while the same melanin concentrations increased the removal efficiency of Cr to 64%, 67%, and 77%, respectively (Table ).

The FTIR analysis of melanin-chelated Cd, compared with the control sample of pure melanin, showed shifts of specific peaks as well as the appearance and disappearance of peaks (Table , Fig. A and B). Broad bands at 3736 and 3434 cm−1 indicate the strong presence of alcohol (O–H) groups. A new peak at 2853 cm−1 was also observed, representing the C–H bonds of alkane groups, which was absent in the control specimen (Fig. B). Another new peak at 1712 cm−1 implies the stretching mode of C=O bonds, attributable to either a ketone or a carboxylic acid. The intense peaks at 2853, 1710, 596, and 447 cm−1 correspond to alkanes, the C=O stretching mode of a ketone or carboxylic acid, and C–Br and C–I stretching, respectively. A shift of 50 cm−1 at 1456.03 cm−1 signifies a change in the C–H bending in CH3 or C=C groups (scissoring) or in aromatic –C=C stretching vibrations. The shift at 3408 cm−1 (Δ 26 cm−1) corresponds to the stretching of the alcohol –OH group (O–H), which is firmly and broadly H-bonded. The shift at 1632 cm−1 (Δ 10 cm−1) can be attributed to the stretching mode of C=C, N–H bending in a primary amine, or the C=O stretching mode (amide). The consecutive shifts at 2361, 1623, and 1260 cm−1 (Δ 2, 10, and 8 cm−1) are attributed to the stretching mode of C=O in carbonyl groups found in alcohols, esters, ethers, and carboxylic acids, as well as to the stretching mode of C≡N and N–H bending in a primary amine. The intensity of the peak at 1147 cm−1 diminished and was slightly displaced, corresponding to the C–O stretching mode (ether or alcohol). Furthermore, the peaks observed at 928 cm−1 and 815 cm−1 were no longer present in the melanin–Cd sample (Fig. B).

The FTIR spectral analysis of melanin-chelated Cr, compared with the pure fungal melanin and represented in Fig. A and C, revealed significant shifts in the wavenumbers at 3785 and 3408 cm−1 (Δ 49 and 26 cm−1, respectively). These shifts indicate the interaction of –OH groups with the adsorbed metal, as well as the asymmetric stretching of –NH2 groups in amines. The prominent peaks observed at 2853 cm−1 and 1710 cm−1 (Fig. C) correspond to the stretching vibrations of the C–H bonds of the alkane group (strong, intense) and the stretching vibrations of the O–H bonds of the carboxylic group.
These peaks also show the stretching vibrations of the C=O bonds in ketones and carboxylic acids. A further significant change was observed at 1456 cm−1 (Δ 50 cm−1) as a result of aromatic –C=C stretching vibrations and C–H bending in CH3 groups (scissoring). The significant shift at 1260 cm−1 (Δ 8 cm−1) is attributable to the C=O bending mode, the C–O–H bending mode, the C–O stretching mode (ether), the C–O stretching mode (alcohol), and the C–O of carboxylic acid. At 873 cm−1 (Δ 24 cm−1), another noticeable shift was identified, indicating C–S stretching or C–Br and C–I stretching (alkyl halides). In addition, there were changes in peak intensity and drops in the peaks at 1147 cm−1, 541 cm−1, and 471 cm−1 (C–O stretching mode of ethers or alcohols, and C–S stretching or C–Br and C–I stretching of alkyl halides), and the peak at 928 cm−1 present in the control disappeared. In sample c, a new C–S stretching peak appeared as a substantially shifted band at 447 cm−1 (Table ). The EDX spectral analysis demonstrated the presence of Cd and Cr on the melanin surface at 46.2% and 16.3%, respectively, compared with the control sample of pure fungal melanin (Fig. A, B and C).

The molecular docking studies revealed distinct binding characteristics between melanin and the two fungal toxins, aflatoxin and ochratoxin (Table ). The interaction between melanin and aflatoxin in A. flavus showed particularly strong binding, with a free binding energy of −9.5 kcal/mol. This complex was stabilized by a specific hydrogen–oxygen bond of 2.20 Å between the hydrogen of Tyrosine 180 and the ligand's oxygen atom, as clearly visible in the molecular visualization (Fig. A). This strong binding energy and the presence of a specific hydrogen bond suggest a stable and potentially biologically significant interaction. The aflatoxin backbone is shown in ribbon representation with key amino acid residues displayed in stick format, and the interaction diagram details the network of molecular contacts in the binding site, with different types of interactions color-coded according to their nature (Fig. A). The results also show the molecular docking visualization of melanin binding to aflatoxin in A. flavus and the binding pocket with key structural features, including the hydrogen–oxygen bond (2.20 Å) between Tyrosine 180 (TYR 180) and the melanin ligand (Fig. B). In contrast, the interaction between melanin and ochratoxin in A. carbonarius was notably weaker, with a binding energy of −5.4 kcal/mol. Interestingly, no hydrogen bonds were observed in this complex, as shown in the corresponding molecular visualization (Fig. A). The binding site analysis revealed several key amino acid residues, including VAL 130, ARG 129, GLU 36, and HIS 44, which may contribute to the overall binding stability through other types of molecular interactions despite the absence of hydrogen bonds. The ochratoxin is represented in ribbon format with varying colors indicating different secondary structure elements, while key residues are shown in stick representation; the accompanying interaction diagram maps the spatial arrangement of protein–ligand contacts in the binding site, although no hydrogen bonds are present in this complex (Fig. A). Furthermore, the results show the molecular docking visualization of melanin binding to ochratoxin in A. carbonarius as well as the binding pocket environment, with labeled amino acid residues including VAL 130, ARG 129, GLU 36, HIS 44, ILE 178, and SER 173 (Fig. B).
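To put the difference between the two docking scores in perspective, the predicted free energies can be converted into approximate dissociation constants using the standard relation ΔG = RT ln Kd. The short Python sketch below performs this conversion for the two values reported above; it assumes 298.15 K and treats the docking scores as true binding free energies, which is a simplification, and it is not part of the original docking workflow.

```python
import math

# Standard thermodynamic conversion: dG = RT * ln(Kd)  =>  Kd = exp(dG / RT).
# Illustrative only; assumes 298.15 K and ideal behavior.
R = 0.0019872  # gas constant in kcal/(mol*K)
T = 298.15     # temperature in K

docking_scores = {
    "melanin-aflatoxin (A. flavus)": -9.5,       # kcal/mol, reported above
    "melanin-ochratoxin (A. carbonarius)": -5.4,  # kcal/mol, reported above
}

for complex_name, dg in docking_scores.items():
    kd = math.exp(dg / (R * T))  # predicted dissociation constant in mol/L
    print(f"{complex_name}: dG = {dg} kcal/mol -> Kd ~ {kd:.2e} M")

# The ~4 kcal/mol difference corresponds to roughly a thousand-fold difference
# in predicted affinity, consistent with the much tighter melanin-aflatoxin
# interaction described in the text.
```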
Discussion

The toxicity of heavy metals and fungal contamination poses hazards to human health and the environment. When considering a solution to this growing environmental issue, melanin, a bio-polymeric pigment, is a promising candidate: it can effectively scavenge free radicals, chelate metal ions, and play a role in inhibiting the secretion of mycotoxins. Therefore, this study was designed to employ fungi as a natural source for melanin production. In this framework, the study began with the isolation of fungi from soil samples. All isolates were found to be melanin producers on the CDA medium; however, A. flavus and A. carbonarius were the most potent producers. Similar species have previously been reported as melanin producers.
The notable benefits of these two isolates lie in their ability to generate melanin at high concentrations: A. flavus produced 690 µg/mL of melanin from a fungal biomass dry weight of 9233 µg/mL, and A. carbonarius produced 517 µg/mL of melanin from a mycelial dry weight of 8770 µg/mL. The purified melanin from A. flavus and A. carbonarius showed typical melanin characteristics. The UV-Vis spectra showed maximum absorbance at 260 nm, identical to that of the standard (L-DOPA). This result is consistent with previous findings on the absorbance of various fungal melanins at wavelengths ranging from 200−300 nm. Furthermore, a plot of the logarithm of optical density against wavelength was linear with a negative slope. Fungal melanin was insoluble in water and organic solvents; nevertheless, it exhibited high solubility in alkaline solutions and precipitated in acidic solutions, forming a flocculent brown condensate with FeCl3. These properties have been reported previously for both synthetic and natural melanins. The properties of melanin detected in this study are attributed to its distinctive structure, which allows it to act as a proton donor or acceptor. In addition, the purified melanin gave positive results in all of the chemical tests. To further validate our results, the operation of the DOPA and DHN melanin pathways was demonstrated by employing various inhibitor compounds. The presence of kojic acid resulted in a lack of pigmentation in A. flavus and A. carbonarius. Kojic acid inhibits the activity of the tyrosinase enzyme that promotes the conversion of tyrosine to dopaquinone through the DOPA pathway and is therefore involved in the production of melanin pigment, as reported in previous studies, which also confirmed the presence of the DHN pathway in both A. terreus and A. tubingensis. Moreover, diethyldithiocarbamate, tricyclazole, pyroquilon, and thalide inhibit the synthesis of DHN melanin but do not affect the production of DOPA melanin. In addition to the confirmatory assays of fungal melanin, HPLC analysis was conducted to assess melanin production by A. flavus and A. carbonarius on the basis of physical and chemical data; the chemical profile of the A. flavus melanin was identical to that reported previously. The chemical composition of the melanin extracted from the tested fungal strains was also verified using 1H-NMR and FTIR spectroscopic analyses. The melanin of A. flavus and A. carbonarius was found to be identical to the standard synthetic melanin and was confirmed to be the same as the melanin reported previously by Kumar et al. The 1H-NMR spectrum of the isolated melanin revealed signals spanning both the aliphatic and aromatic regions. Emphasis has been placed on the methyl series of alkyl groups, where signals can be attributed to carbon or proton atoms connected to nitrogen and/or oxygen atoms; peaks can also be assigned to protons attached to substituted aromatic or heteroaromatic moieties, yielding results similar to those reported previously. The FTIR analysis of fungal melanin covered a broad range of intensities and confirmed the hydrogen bonding of the OH group and the stretching of the aromatic C=C structure; moreover, peaks corresponding to the OH and NH bonds, as well as those attributed to the presence of carbonyl bonds, were identified.
The distinctive characteristics of the infrared spectra of melanin were the same as those stated in previous literature. Among the different media, measurements of melanin concentration showed that the CD medium induced the highest melanin production by both fungal strains; earlier studies also demonstrated that CD culture significantly contributes to enhancing melanin production. Melanin production by A. flavus and A. carbonarius exhibited a gradual increase over 14–15 days of incubation at 30 ℃ and pH 5, similar to the findings of Raman et al., who reported an approximately two-fold increase in melanin production by Aspergillus fumigatus after optimization and a period of 5 days. On the other hand, Saleh et al. demonstrated a maximum melanin yield from a yeast strain of about 48.5 mg/L at pH 6.0 and 22 ℃. Additionally, the majority of the melanin pigment was synthesized after 10 days of incubation and was entirely produced by the end of the lag stage of fungal germination. Furthermore, a high yield of melanin was achieved at a concentration of 0.1 mM FeSO4. These results confirmed that Fe increases melanin production, as Fe is a substantial cofactor for melanin manufacture through the L-DOPA pathway. Alternatively, a previous study conducted by El-Batal and Al Tamie showed that copper (Cu) enhances melanin synthesis owing to its significant role as a catalyst for apo-tyrosinase, thereby potentially improving protein and enzyme efficacy. After determining the optimal melanin production conditions, the antioxidant activity of melanin was assessed. The results showed that increasing the melanin concentration leads to a gradual increase in DPPH scavenging activity, by capturing electrons and scavenging ROS while enhancing the release of catalytic minerals. The obtained data revealed that the melanin extracted from A. flavus has a high scavenging activity of 94% at 100 μg/mL. High concentrations of melanin effectively counteracted free radicals, which fits with previous findings. The ability of melanin to remove reactive oxygen species (ROS) has been highlighted, suggesting that melanin may protect pigment cells from oxidative stress caused by ROS. In addition, the potential of melanin to inhibit mycotoxin production was evaluated. The results show that increased melanin concentrations resulted in a continuous decrease in aflatoxin B1 and ochratoxin A and significant control of A. flavus and A. carbonarius growth, in accordance with previous studies. This inhibitory action of melanin is attributed to its phenolic nature: it inhibits certain early stages of the biosynthetic process, preventing the accumulation of toxic intermediate products formed in later stages. Melanin is also negatively charged and made up of polyphenolic chemicals and multifunctional polymers. Since phenolic chemicals are known to impede the synthesis of aflatoxin, they prevent the buildup of harmful intermediates generated in the later steps of the route by inhibiting one or more early steps rather than late ones. In the present study, the significant difference in binding energies (−9.5 vs −5.4 kcal/mol) between the two complexes suggests that melanin may have a stronger biological interaction with aflatoxin-producing A. flavus compared with ochratoxin-producing A. carbonarius. The presence of the specific hydrogen bond in the A.
flavus complex likely contributes to this enhanced stability and could be a key factor in the molecular mechanism of interaction between these compounds. These findings provide valuable insights into the potential differential effects of melanin on different fungal toxin systems and could have implications for understanding fungal biology and potentially developing targeted interventions. The molecular docking analysis reveals distinct mechanisms of action for melanin's interaction with aflatoxin in A. flavus and ochratoxin in A. carbonarius. In A. flavus, melanin demonstrates a strong binding mechanism characterized by a significant binding energy of −9.5 kcal/mol. The primary mechanism involves the formation of a specific hydrogen–oxygen bond between Tyrosine 180 (TYR 180) and the melanin molecule, with a precise bond length of 2.20 Å. This strong hydrogen bonding suggests that melanin likely stabilizes the protein structure through direct interaction with the TYR 180 residue. The highly negative binding energy indicates a spontaneous and thermodynamically favorable interaction, suggesting that melanin could effectively modulate the protein's function by maintaining a stable complex at the binding site. The presence of LEU 374 and other surrounding residues in the binding pocket appears to create a favorable microenvironment that enhances the stability of this interaction. In contrast, the mechanism of action with ochratoxin in A. carbonarius follows a different pattern, characterized by a moderately strong binding energy of −5.4 kcal/mol. The absence of hydrogen bonds suggests that the interaction relies primarily on other forces, possibly including van der Waals interactions and hydrophobic effects. The binding pocket, formed by residues including VAL 130, ARG 129, GLU 36, HIS 44, ILE 178, and SER 173, creates a specific spatial arrangement that accommodates melanin through these non–hydrogen bonding interactions. The presence of both polar (ARG 129, GLU 36, HIS 44, SER 173) and nonpolar (VAL 130, ILE 178) residues in the binding site suggests a complex interaction mechanism involving both hydrophilic and hydrophobic regions of the protein. This is similar to a previous report that apposition of the aromatic rings of ochratoxin and the indole nuclei of melanin may also give rise to van der Waals forces, and that the combination of these two types of forces may underlie the binding of ochratoxin to melanin. The primary building block of melanin is an indole nucleus, and the polymer is rich in negatively charged groups such as carboxyl groups and semiquinones; these groups, together with ionic and van der Waals interactions on the melanin polymer, are crucial for its binding affinity. Thus, the significant difference in binding energies and interaction mechanisms between the two systems suggests that melanin may have evolved different functional roles in these two fungal species. In A. flavus, the strong, specific hydrogen bonding mechanism indicates a potential regulatory role, possibly affecting aflatoxin production or metabolism. The weaker but still significant binding in A. carbonarius, mediated through non–hydrogen bonding interactions, suggests a more subtle modulatory effect on ochratoxin-related processes. These mechanistic insights could be particularly valuable for understanding how melanin might be used to differentially target these fungal species or their toxin production pathways. The stronger binding mechanism with aflatoxin-producing A.
flavus suggests that this might be a more promising target for melanin-based interventions. Comparison of the melanin-chelated heavy metal precipitate with pure melanin using EDX and FTIR analyses showed that fungal melanin exhibited significant concentration-dependent adsorption capacity. The AAS analysis also demonstrated that higher melanin concentrations result in greater removal efficiency of Cr6+ and Cd. The in vitro studies showed that the removal percentages of Cd and Cr reached 60% and 77%, respectively, when the melanin concentration was increased to 15 mg/mL. Comparable results were reported by Nguyen et al. Additionally, the ability of melanin to bind various metal ions is one of the most prevalent characteristics of melanin pigments. As a potent and rapid ion-exchange molecule, melanin acts as a radical sink by binding pollutants, chemicals, and heavy metals. The significance of this characteristic in biology lies in the ability of melanin to chelate chemicals and control their absorption into cells. Regarding the functional groups elucidated by FTIR analysis, almost all of them have been reported to chelate heavy metals by different mechanisms of action. Melanin exhibited abundant functional groups, such as C=O, –OH, –NH, and –COOH, in the FTIR spectra presented herein, which provide numerous binding sites for heavy metal ions. The carboxylic groups identified in this study were also shown in a previous study to effectively adsorb heavy metals, particularly cadmium, through coordination or chelation mechanisms. Additionally, metal ions bind predominantly to accessible carboxyl groups (acid–base complexation), and the free radicals produced by comproportionation reactions are also responsible for the complexation of metal ions on the melanin pigment. According to the physicochemical analysis presented herein, fungal melanin generated from the precursor L-DOPA should contain quinone, semiquinone, carboxyl, amine, and hydroxyl (phenolic) groups acting by a broadly similar mode of action. Previous literature has shown that the number of active centers, their accessibility, and their affinity for metal ions are some of the factors that determine a material's capacity for chemical sorption. The adsorption of metals at particular binding sites is mediated by distinct functional groups in melanin; for example, Pb2+ can bind to a variety of locations, such as carboxyl (COOH), amine (NH), and catechol (OH) groups, whereas the carboxyl (COOH) group is the particular binding site for Cd2+, Zn2+, and Ca2+. Furthermore, electronegativity may be the source of the variation in the binding affinities of the metal ions, since attraction to the negative charges of free radical intermediates plays a significant role in the binding of divalent metal ions. Finally, the order of binding affinities for Cd and Cr follows the order of metal electronegativity, according to the different mechanisms of action hypothesized herein and shown in a previous study.

Conclusion

This study provides significant evidence regarding a bioremediation pipeline, enabling a natural fungal approach for melanin production and utilizing melanin as a heavy metal-chelating agent. In addition, the study demonstrated the antioxidant potential of melanin and its capability to inhibit fungal growth and to suppress the mycotoxins aflatoxin B1 and ochratoxin A.
These findings suggest potential applications for fungal melanin in eliminating heavy metals from water resources, in the removal of heavy metals by fungal strains, and in preventing fungal and mycotoxin contamination in food. Furthermore, it will be necessary to explore its applications in industrial and agricultural settings.
Genome evolution following an ecological shift in nectar-dwelling Acinetobacter
The gammaproteobacterial genus Acinetobacter is diverse and includes taxa that inhabit a broad range of environments, such as soil and water. Some Acinetobacter lineages have also evolved to be host-associated or animal pathogens, with a notable example being the recently emerged human pathogen Acinetobacter baumannii. Taxa in the genus are phenotypically and genetically diverse and frequently adapt to new ecological niches. However, few direct connections have been made between specific genomic changes and ecological transitions within Acinetobacter or in bacteria more broadly. One poorly characterized habitat transition within the genus Acinetobacter is adaptation for growth in floral nectar. Several Acinetobacter species found in floral nectar appear to be most closely related to soil-dwelling relatives. Nectar represents a significant environmental shift compared with soil habitats, likely with different selective pressures. Genomic comparisons between Acinetobacter adapted to floral nectar versus other habitats could uncover how bacteria evolve to new environments and which genetic traits facilitate major ecological switches. The high genetic diversity and genomic plasticity within Acinetobacter may be driven by mechanisms facilitating horizontal gene transfer (HGT), including competence for natural transformation, conjugative abilities, and mobile elements, such as plasmids, prophage, and insertion sequences. Horizontally acquired genomic islands are commonly observed throughout Acinetobacter and can contain genes conferring beneficial phenotypes like antibiotic resistance. HGT is a source of evolutionary novelty in bacteria, but other sources of genetic diversity can also be important, such as error-prone DNA polymerases in A. baumannii, or gene duplication followed by divergence. Gene duplication can also potentially lead to increased gene expression, allowing for enhanced nutrient acquisition, temperature stress tolerance, and overall resistance to antibiotics, but it is unclear how broadly important this mechanism is for bacterial adaptation. Genetic novelty in bacteria can allow for the evolution of new traits and subsequent exploitation of new niches. In some cases, specific genes have been linked to habitat-specific fitness. For instance, A. baumannii has antibiotic resistance genes allowing for persistence in hospital settings. Antibiotic resistance is a common example of a novel trait resulting from a specific environmental selective pressure because it is easily observable and important in well-studied pathogen systems. In natural systems, specific traits have occasionally been connected to ecological changes in bacteria, but such connections can be difficult to infer. In other cases, traits that are linked to success in a specific environment may be known but not their genetic basis. For instance, the ability to access nutrients from pollen is a unique and potentially beneficial trait in floral nectar-dwelling Acinetobacter, but how this trait was gained is unknown. Floral nectar is a nutritional reward produced by flowers to attract pollinating animals. It is high in carbohydrates; the sugar content in floral nectar can reach 90% of nectar dry weight, and it is a resource for microbes as well. However, floral nectar habitats create several stresses for microbes, including limitation of nutrients other than sugar. Although nectar contains amino acids and lipids, it can contain limiting amounts of nitrogen for some microbes.
These factors make nectar a selective environment and can lead to strong priority effects, where early arriving microbes prevent subsequent colonization of flowers. Culture-dependent and independent methods have revealed diverse microbes that thrive in these conditions. The genus Acinetobacter makes up a high proportion of bacterial taxa in floral nectar and is prevalent and readily cultured from nectar environments. Acinetobacter is also frequently found associated with floral visitors. For example, Acinetobacter apis was isolated from the gut of the western honey bee, Apis mellifera, and bee pollen provisions and nests sometimes include Acinetobacter. However, it is unknown whether Acinetobacter found with pollinators are nectar-dwelling species being dispersed by floral visitors, or if they are specific associates of pollinators. For ease, we refer here to isolates from both nectar and floral visitors as nectar-dwelling. Previous phylogenomic analysis of Acinetobacter isolates from nectar and bees found that they were closely related to soil-dwelling species. This work suggested one evolutionary origin of nectar-dwelling/bee association but did not assess evolutionary patterns within this lineage. Here, we study the genome evolution of nectar-dwelling Acinetobacter in comparison to taxa isolated from other environments. We include genomes of three previously described species, A. apis, A. boissieri, and A. nectaris, newly sequenced A. nectaris isolates, and genomes of three recently described species, A. pollinis, A. rathckeae, and A. baretiae. For comparison, we included genomes from A. brisouii, which is isolated from soil and water and was previously found to be the closest relative of A. nectaris, as well as those from eight other environmental Acinetobacter species, chosen to represent broad-scale diversity within the genus. We hypothesized that the switch from soil to floral nectar would drastically change the selective pressures experienced by this Acinetobacter lineage, leading to changes in gene content. We used comparative genomics to understand which genes may become unnecessary or beneficial for bacteria in floral nectar, and to identify metabolic abilities that may have facilitated this environmental switch.

Phylogeny and genome characteristics of nectar-dwelling Acinetobacter

To understand the evolutionary history of nectar-dwelling Acinetobacter, we constructed a phylogenomic tree using genomes of Acinetobacter isolates from floral nectar and floral visitors. The isolates collected from floral nectar and pollinators form a clade, with bootstrap support of 100, separate from soil-, water-, and animal-dwelling Acinetobacter species. This confirms that there is one known evolutionary origin of nectar-dwelling within Acinetobacter and that this group evolved from a presumed soil-dwelling ancestor. The six species in the nectar clade do not appear to have been isolated from environments outside of floral nectar or pollinators, based on 16S rRNA sequence comparisons to GenBank databases. Several of these species, A. nectaris, A. boissieri, and A. pollinis, are abundant and common in floral nectar from locations worldwide, and our isolates came from both North America and Europe. This suggests that members of the clade are specialized for growth in floral nectar and/or associated with pollinators and are widely found in these habitats.
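Monophyly of a clade like this one can also be checked programmatically from a phylogenomic tree file. The sketch below is a minimal, hypothetical example using Biopython's Bio.Phylo; the tree file name and taxon labels are placeholders rather than the actual data used in this study.

```python
from Bio import Phylo

# Hypothetical inputs: a Newick tree with bootstrap supports and the set of
# taxa expected to form the nectar-dwelling clade. Names are placeholders.
TREE_FILE = "acinetobacter_core_genome.treefile"
NECTAR_TAXA = {
    "A_nectaris", "A_boissieri", "A_apis",
    "A_pollinis", "A_rathckeae", "A_baretiae",
}

tree = Phylo.read(TREE_FILE, "newick")

# Most recent common ancestor (MRCA) of the putative nectar clade.
mrca = tree.common_ancestor(list(NECTAR_TAXA))

# The clade is monophyletic if the MRCA's descendants are exactly the target taxa.
descendants = {leaf.name for leaf in mrca.get_terminals()}
is_monophyletic = descendants == NECTAR_TAXA

print(f"Nectar clade monophyletic: {is_monophyletic}")
print(f"Support value at MRCA: {mrca.confidence}")
```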
We used genomic comparisons between nectar-dwelling Acinetobacter and relatives living in distinct environments to uncover genomic patterns associated with nectar specialization. Relative to taxa found in other environments, isolates in the nectar-specialized Acinetobacter clade have smaller genomes and lower numbers of protein-coding genes . Among all complete reference Acinetobacter genomes in GenBank (92 total), genome sizes differ significantly between the nectar-specialist clade, with an average of 2.64 Mb, and the environmental clade, averaging 3.61 Mb (Welch's t-test, one-tailed; df = 17.78, T = −13.10, P < 0.000001). Only two non-nectar-dwelling species (out of 86 genomes) had genome sizes that overlapped with those in the nectar clade . Across the genomes of nectar-dwelling isolates and environmental isolates analyzed here, nectar isolates have 243–977 fewer protein-coding genes, a 10%–30% reduction in proteins. Genomic reduction can occur for various reasons. Genome streamlining is common for bacteria living in stable, nutrient-poor conditions, such as some soil and marine habitats , and is thought to be driven by selection and facilitated by large effective population sizes . Additionally, some environmental stresses may promote genome streamlining due to selection . Gene loss can also be degenerative and result from genetic drift, with extreme examples occurring in bacteria that are host-restricted and experience frequent population bottlenecks . Nectar-dwelling bacteria may experience population bottlenecks due to the transient nature of the floral environment. We therefore tested for evidence of genetic drift as is seen in host-restricted bacteria; such genomes often show high evolutionary rates, high numbers of pseudogenes, and low genome GC content . Evolutionary rate tests found a higher evolutionary rate for the nectar-dwelling clade (substitution rate relative to environmental taxa = 2.5; −lnL = 543,761.48) compared with the null hypothesis of a global clock across the nectar and environmental Acinetobacter phylogeny (−lnL = 542,853.29; likelihood ratio = 908.19; P < 0.000001) . Similarly, we found that nectar isolate genomes have slightly lower percent GC compositions relative to genomes of environmental taxa . We did not find evidence of genomic degeneration in the form of pseudogenes: the number of pseudogenes detected in nectar-dwelling Acinetobacter ranged from 125 to 294, while environmental Acinetobacter had a similar range of 187–283, and other Acinetobacter species fall within this range as well . We speculate that the reduced genome size in nectar-dwelling Acinetobacter could be due to a combination of relaxed selection on some genes after the shift to floral nectar, as well as a relative increase in genetic drift due to population bottlenecks. To further investigate this, we sought to define the gene content and functional capacities of nectar-dwelling species compared with soil-dwelling relatives.

Gene content evolution with the switch to nectar

To determine the content of predicted proteins among Acinetobacter clades, we performed an ortholog clustering analysis to identify shared orthologs, recent paralogs, and unique genes . This analysis identified 7,334 orthologs in total across all genomes, 1,076 of which were core orthologs present in every genome.
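As an illustration of the bookkeeping behind these ortholog counts (and the clade-restricted fractions discussed next), a minimal pandas sketch for tallying core and clade-restricted orthologs from an ortholog-by-genome presence/absence table is shown below; the file name, column layout, and genome-name prefixes are assumptions made for the example, not the actual output of the clustering analysis.

```python
# Minimal sketch: tally core and clade-restricted orthologs from a presence/absence
# matrix (rows = orthologs, columns = genomes, values 0/1). The file name and the
# genome-name prefixes are hypothetical placeholders.
import pandas as pd

matrix = pd.read_csv("ortholog_presence_absence.tsv", sep="\t", index_col=0)

nectar = [c for c in matrix.columns if c.startswith("nectar_")]
environmental = [c for c in matrix.columns if c.startswith("env_")]

in_nectar = matrix[nectar].any(axis=1)
in_env = matrix[environmental].any(axis=1)

core = (matrix > 0).all(axis=1).sum()            # present in every genome
nectar_only = (in_nectar & ~in_env).sum()        # restricted to the nectar clade
env_only = (in_env & ~in_nectar).sum()           # restricted to environmental taxa
total = len(matrix)

print(f"total: {total}, core: {core}")
print(f"nectar-only: {nectar_only} ({nectar_only / total:.0%}), "
      f"environment-only: {env_only} ({env_only / total:.0%})")
```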
About 40% of total orthologs were found only in environmental isolate genomes, whereas only 16% of orthologs were unique to the nectar-dwelling clade (Table S3), supporting a trend towards gene loss rather than gain in the nectar clade. To trace gene gain and loss events within the nectar-specialist clade, we performed a maximum-likelihood ancestral state reconstruction analysis. Overall, there have been dynamic gene gains and losses across the evolution of the group. Substantial loss (480 orthologs) occurred at the ancestral node of the nectar-dwelling clade and at the ancestral nodes leading to most nectar-dwelling species . Gene loss sometimes remained high (98–234 orthologs) even at tips and more recent nodes ( ; Fig. S1). This pattern is consistent with genome reduction as described above and also suggests that the process of gene loss is still ongoing in nectar-dwelling isolates. Gene gains became higher closer to the tips, with gains of ~200 orthologs at some nodes and gains as high as 159 orthologs at tips (Fig. S1). Gene gains identified in this analysis, particularly at tips, could be the result of divergence leading to novel orthologs as well as horizontal acquisition of new genes. Gene number and content changes in the nectar-dwelling clade compared with environmental relatives occurred across diverse functional categories . We note that these functions are putative, as they are predicted by homology and categorized by Rapid Annotation using Subsystem Technology (RAST), not confirmed within these taxa. The largest differences were reduced ortholog numbers in nectar-dwelling isolates relative to environmental relatives. Eight (out of 25) functional categories were reduced in nectar-dwelling isolates compared with environmental isolates, ranging from 29% to 77% reduction, and seven functional categories had significantly lower ortholog numbers in the nectar clade (ANOVA analysis, ). Ordered by relative reduction, these significantly reduced categories were metabolism of aromatic compounds, miscellaneous, nitrogen metabolism, fatty acid metabolism, iron acquisition and metabolism, carbohydrate metabolism, and respiration . We observed a relative increase in ortholog number less often across nectar-dwelling isolates, with only the category of phages and mobile elements and nine subcategories showing significant increases (ANOVA analysis, ). Some categories and subcategories showed high relative increases in ortholog numbers (ranging from a 43% increase to more than double), but these typically involved small actual numbers of orthologs (~1–20 mean orthologs per genome), and most subcategories (15 out of 20) were increased by less than 10% in nectar isolates . Additionally, three categories showed relatively low reductions (2%–11%) but were significantly increased when the data were normalized by total ortholog number, which we interpret as these categories showing less reduction than expected from chance gene loss. These categories were amino acid metabolism, cofactor and vitamin synthesis, and protein metabolism (ANOVA analysis, ). To understand the biological relevance of these differences, we investigated the specific orthologs present in nectar-dwelling versus environmental isolates for functional categories with a significant or high (greater than 30%) difference in ortholog numbers .

Metabolism of aromatic compounds

Nectar-dwelling isolate genomes had significantly fewer orthologs predicted to be involved in the metabolism of aromatic compounds ( ; Tukey's HSD test, P < 0.000001).
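The parenthetical ANOVA and Tukey HSD values reported here and in the following paragraphs come from per-category comparisons of ortholog counts (performed with rstatix in R, as noted in the methods). Purely as an illustration of that kind of comparison, the sketch below re-expresses it in Python with invented counts, including the normalization by total ortholog number described above.

```python
# Minimal sketch: per-category comparison of ortholog counts between nectar and
# environmental genomes, on raw counts and counts normalized by genome ortholog
# totals. All numbers are invented for illustration (the study used rstatix in R).
import pandas as pd
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.DataFrame({
    "group": ["nectar"] * 5 + ["environmental"] * 5,
    "count": [12, 14, 11, 13, 12, 25, 28, 24, 27, 26],          # orthologs in one category
    "total": [2300, 2400, 2250, 2350, 2300, 3300, 3500, 3200, 3400, 3350],
})
df["normalized"] = df["count"] / df["total"]

for column in ("count", "normalized"):
    nectar = df.loc[df.group == "nectar", column]
    env = df.loc[df.group == "environmental", column]
    _, p = f_oneway(nectar, env)
    print(f"{column}: ANOVA P = {p:.4g}")
    print(pairwise_tukeyhsd(df[column], df["group"]).summary())
```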
With a reduction of 77% compared with environmental isolates, this category showed the largest relative difference of any functional category. The prevalence of specific orthologs in this category was variable across genomes, and nectar-dwelling isolate genomes contained a subset of orthologs that were also found in some environmental genomes . Several of the orthologs in nectar isolates are involved in the metabolism of benzoate, an aromatic compound that is released by plants . However, nectar-dwelling Acinetobacter genomes were also missing several benzoate metabolism genes present in environmental Acinetobacter , so there was not a clear functional difference. The decrease in genes in this category suggests that nectar-dwelling Acinetobacter may encounter a limited diversity of aromatic compounds compared with species in other environments, but this could also be driven by nectar having less aromatic compound variability than soil or water habitats.

Nitrogen and amino acid metabolism

Gene content differences in nectar-dwelling isolates compared with environmental relatives suggest that shifts in nitrogen and amino acid metabolism strategies accompanied the switch to nectar dwelling. Nectar-dwelling isolate genomes had 44% fewer orthologs involved in nitrogen metabolism, a significant reduction ( ; Tukey's HSD test, P = 0.030). However, we note that this pattern was partly due to an apparent loss of redundancy . Orthologs that were missing from nectar isolate genomes, including glutamine and glutamate synthases and ammonium transporters, were present in environmental Acinetobacter as several distinct orthologs but were present in nectar-dwelling isolates as only one ortholog. Since nitrogen metabolism is interconnected with amino acid metabolism, we considered amino acid metabolism orthologs within this context. The category of amino acid metabolism showed a modest (4%) reduction in nectar isolates in the untransformed analysis but a significant increase in the normalized analysis ( ; Tukey's HSD test, P = 0.001), suggesting less loss than expected by chance. The subcategories of branched-chain amino acid (e.g., leucine) metabolism and histidine metabolism were decreased by 26% and 21%, respectively. We found that genes for degrading leucine and histidine, which were present in greater than 50% of environmental genomes, were absent from all nectar-dweller genomes . At the same time, four amino acid metabolism subcategories showed higher ortholog numbers in nectar genomes, including a 71% increase in the subcategory of proline metabolism . This difference was mainly due to an increase in genes for the transport of amino acids . Overall, we see a loss of amino acid degradation genes and redundant transporters, as well as a gain of additional transporters. Floral nectar is low in nitrogen relative to carbon , and the ability to assimilate nitrogen sources has been linked to competition and growth in floral nectar in both yeasts and Acinetobacter . A shift towards more diverse transport systems for nitrogen sources could be driven by selection for nitrogen scavenging. Additionally, under nitrogen limitation in floral nectar, the use of available amino acids in protein synthesis, rather than their degradation, may be selected for.

Carbohydrate metabolism

Genes involved in carbohydrate metabolism showed a significant decrease in nectar-dwelling isolates, which had 31% fewer orthologs than environmental isolates ( ; Tukey's HSD test, P = 0.002).
The pattern was driven by the subcategories of central carbohydrate metabolism and organic acid metabolism . Genes for monosaccharide metabolism were more than twice as numerous in nectar isolate genomes ( ; Tukey’s HSD test, P = 0.00707), although this category contained a small number of orthologs. Compared with environmental habitats, floral nectar consists of simple carbohydrates, including fructose, sucrose, and glucose . Many of the nectar-dwelling species can assimilate fructose, and some can assimilate glucose and sucrose . In comparison, non-nectar-dwelling species, such as A. baylyi , are often unable to utilize fructose, sucrose, or glucose as sole carbon sources, but can metabolize other diverse carbon sources . Consistent with a shift to monosaccharide utilization, we found that phosphotransferase system (PTS) genes specific to nectar sugars are more common in nectar-dwelling isolates compared with environmental isolates . PTS genes are a common method for bacteria to transport sugars into cells via a phosphorylation cascade . PTS can also be involved in sensing and regulation of physiological processes related to sugar, such as carbohydrate active enzymes, chemotaxis, and biofilm formation . These multicomponent systems are specific to distinct molecules, including fructose, mannitol, sucrose, and glucose. The sucrose-specific PTS enzyme complex (EIIABC) is present in four out of six nectar-dwelling species but absent from all environmental isolates. Fructose-specific EIIABC complexes were found in all nectar-dwelling isolates and only three environmental taxa. Together, these differences support a shift in carbohydrate usage from those found in soil to monosaccharides present in floral nectar. We investigated carbohydrate metabolic capabilities in more detail by comparing carbohydrate active enzymes identified using the CAZy database . Environmental genomes contained significantly more genes in the glycosyl transferase (GT) families (Fisher’s exact test, P = 0.001), with an average of 23 genes per environmental genome and 15 per nectar-dweller genome (Fig. S2). This difference is mainly driven by enzymes in the GT2 and GT4 families. The role of these specific genes within Acinetobacter species is unclear, but typically enzymes in these families are involved in the synthesis of cell wall, capsular, and extracellular biofilm polysaccharides , suggesting that some of these functions may be different in nectar-dwelling Acinetobacter . In support of this, several biofilm formation genes are found in the nectar-dwelling isolates, including pgaABCD genes of the poly-β-1,6- N -acetyl-D-glucosamine (PGA) operon responsible for the maintenance of biofilm stability . Of the 15 nectar-dwelling isolates, nine have the complete PGA operon, while only four out of the 11 environmental isolates have the full operon. A. pollinis isolates also have two to six times the number of copies of the pgaB gene, which is critical for export of PGA , suggesting that biofilm formation, surface attachment, or cell–cell attachment may be important traits in floral nectar or pollinator environments. Additionally, the subcategory of type IV secretion systems contains significantly (31%; ; Tukey’s HSD test, P = 0.0111) more orthologs in nectar-dwelling genomes than in environmental genomes. These increased orthologs are all involved in pilin and fimbrial biogenesis . 
For example, the fimT / pilVWXY gene cluster is absent in environmental genomes and present in most nectar isolate genomes, whereas the pilABCE and pilMNOPQ clusters were present in both groups. This could suggest additional importance of surface attachment for the nectar-dwelling clade. In contrast to the decrease in GT enzymes, nectar dwellers contain significantly more genes in glycoside hydrolase (GH) families (Fisher's exact test, P < 0.000001), averaging 15 orthologs per genome compared with 13 orthologs per environmental genome (Fig. S2). This pattern was mainly driven by genes in the GH28 family, which are involved in the breakdown of the polygalacturonic acid backbone of pectin . Pectin is a major component of plant cell walls, and we hypothesize, as discussed below, that the ability to degrade this polysaccharide may be beneficial in floral nectar.

Phage and mobile elements

The number of orthologs in the category of phage/prophage, transposable elements, and plasmids was nearly double in nectar-dwelling isolates compared with environmental isolates ( ; Tukey's HSD test, P = 0.023). This increase was driven by a doubled number of orthologs in the subcategory of phages and prophages ( ; Tukey's HSD test, P = 0.00041). Considering that environmental Acinetobacter are already thought to have high numbers of prophage, this increase is notable . Consistent with high levels of phage interaction, we also found a significant increase in the number of CRISPR–Cas system orthologs present in nectar isolates ( ; ANOVA, P = 0.000174). These were genes for Cascade proteins Cas1, Cas3, Csy2, Csy3, and Csy4, present in A. apis , A. rathckeae , and A. baretiae and also present in some environmental isolates, suggesting that CRISPR–Cas systems may be sporadic across Acinetobacter from both nectar and the environment . We hypothesized that HGT could be important in conferring novel functions on Acinetobacter switching to a new habitat, and so we screened nectar-dwelling genomes for genomic islands. We found genomic islands within all members of the nectar-dwelling clade, and this analysis also identified intact prophages. Gene counts from genomic islands ranged from 111 to 352 genes, with approximately 50% annotated as hypothetical and the remainder involved in plasmid or transposon mobilization, phage replication, or type I secretion components, suggesting that HGT is facilitated by mobile elements (Table S5).

Other significant differences

Additional ortholog categories saw significant shifts in nectar-dweller genomes compared with environmental genomes, but with unclear connections to function in nectar. For example, the category of iron acquisition and metabolism showed a significant reduction in nectar isolates (37% reduction; Tukey's HSD test; P = 0.000125), although the category had relatively few genes overall . Nectar-dwelling isolate genomes were lacking siderophore-related orthologs, specifically regulatory (sigma factor) and receptor uptake orthologs. However, siderophore uptake genes were common among the significantly increased membrane transport genes present in nectar dwellers but absent in environmental isolates ( ; Tukey's HSD test; P = 0.00099), suggesting reliance on different siderophores among nectar versus environmental isolates .
The fatty acid or respiration genes lost in nectar-dwelling isolates did not provide insight into the biological significance of this change . The miscellaneous orthologs absent from nectar-dweller genomes were predicted to catalyze the degradation of aromatic compounds, suggesting that the decrease in this category is related to the observed decrease in metabolism of aromatic compounds. The categories of cofactor and vitamin synthesis and protein metabolism showed modest decreases in untransformed analyses but significant increases in normalized data ( ; Tukey's HSD test; P = 0.001 and P = 0.00939). Within the category of cofactors, nectar-dwelling genomes were missing several orthologs involved in folate metabolism . At the same time, orthologs involved in pyridoxine metabolism were significantly increased in nectar isolates ( ; Tukey's HSD test; P = 0.0439). In the case of protein metabolism, orthologs in protein biosynthesis and degradation were significantly higher in nectar-dwelling genomes ( ; Tukey's HSD test, P < 0.000001, P = 0.000513). Additionally, the categories of DNA repair ( ; Tukey's HSD test, P = 0.00664) and nucleoside and nucleotide metabolism ( ; Tukey's HSD test, P = 0.00252) were significantly increased. However, these all showed very little change compared with environmental genomes (3%–7% relative increases), with changes of only 1–4 orthologs per category .

Acquisition and diversification of pectin enzymes

Among the orthologs present in nectar-dwelling isolates and not in environmental isolates were genes coding for pectin degradation enzymes, specifically PL1 family pectin lyases and GH28 family polygalacturonases. All species in the nectar-dwelling clade contain at least one of these genes, with several species possessing multiple copies of genes associated with the degradation of pectin (Table S6). Pectin is a recalcitrant polysaccharide that provides structural stability in plant cell walls and the outer layers of pollen grains . Among bacteria, enzymes for degrading pectin are commonly found in necrotrophic plant pathogens, which use them to digest plant tissue . These enzymes are notably absent among Acinetobacter genomes in GenBank, except for the orthologs found in the nectar clade. Sequences in GenBank with the highest similarity to nectar-clade orthologs of PL1 and GH28 genes are found outside of Acinetobacter in plant pathogens, such as Pectobacterium , Erwinia , and Dickeya . These genes were likely acquired by nectar-dwelling Acinetobacter by HGT, possibly from a necrotrophic plant pathogen in the Enterobacterales. Tracing the pattern of gains in pectin degradation genes onto the phylogeny of nectar-dwelling Acinetobacter suggests that at least one copy each of the pectin lyase and polygalacturonase genes was present in the common ancestor of the nectar-clade Acinetobacter analyzed here . In several isolates, these two orthologs are located next to each other on the chromosome, so they may have been gained together in one event. As seen in the gene trees for the pectin lyase and polygalacturonase orthologs, we determined that these genes experienced multiple duplication events with paralogs sister to each other . These duplications occurred within the species A. pollinis , which contains six copies of polygalacturonase and three copies of pectin lyase (within isolate SCC477, as an example). Additional horizontal transfers, losses, or duplication events may have occurred within the nectar clade, as some species have multiple copies within a gene tree ( A.
boissieri , ) or differences in topology between gene trees and species trees ( A. apis , ). Some of these copies are on contigs that are likely from plasmids (based on increased read depth and the presence of plasmid replication genes), which may have facilitated duplication and transfers of these genes. Duplication was more common for the polygalacturonase genes than for the pectin lyase genes, and the polygalacturonase genes were also the only example of multiple copies outside of A. pollinis . The fact that these genes have been maintained and duplicated within the nectar-dwelling clade suggests that they may serve an important ecological role for these bacteria. Gene duplication can increase production of protein products but also allows for functional divergence due to selection. To test for this, we performed branch-site tests for positive selection on pectin-degrading genes in nectar-dwelling Acinetobacter and unrelated outgroups. Amino acid substitutions in several pectin-degrading enzyme protein sequences show signatures of positive selection . Positive selection was detected at the nodes and tips of the polygalacturonase gene tree , particularly for A. pollinis (nine sites) and A. apis (seven sites) orthologs . The high number of duplication events of these genes in A. pollinis , together with signatures of positive selection, suggests that pectin-degrading enzymes may be functionally diversifying in this species. Both of these enzymes cleave linkages in the polygalacturonic acid backbone of pectin . Necrotrophic plant pathogens typically have diverse copies of these enzymes, with slight variations in catalytic ability, to effectively degrade pectin . This pattern may be convergently evolving in A. pollinis . To investigate the potential function of the amino acids under selection in Acinetobacter polygalacturonase enzymes, we generated predicted protein structures of representative orthologs from clades with sites experiencing positive selection at the tips or nodes . Protein structures were predicted with high confidence and were generally similar to structures of proteins from known plant pathogens (Fig. S3). All nectar isolate proteins included known conserved active motifs, the catalytic sites NTD and RIK, and the substrate-binding sites G/QDD and G/SHG in the predicted binding cleft of the enzyme . Several of the sites found to be under positive selection were also located around the binding cleft. For example, six sites found to be under positive selection in specific orthologs (tips) are predicted to be near the substrate-binding cleft in three orthologs . Additionally, three sites under selection at ancestral nodes, and therefore present in several orthologs, were also near the binding cleft in two orthologs . Most of the substitutions found to be under selection were between amino acids that vary in hydrophobicity due to their side chains or charge. Of the nine sites under selection near the binding cleft, five involved substitutions from hydrophilic to hydrophobic amino acids, one involved a substitution in the opposite direction, and three were between similarly hydrophobic amino acids.
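The branch-site tests summarized in this section reduce to likelihood ratio comparisons of nested codeml models; purely as an illustration of that arithmetic, the sketch below computes such a test with invented log-likelihoods and a Bonferroni-corrected threshold.

```python
# Minimal sketch: the likelihood ratio arithmetic behind a codeml branch-site test
# (Model A vs. Model A null), with a Bonferroni-corrected threshold across tested
# branches. Log-likelihoods and the number of tests are invented for illustration.
from scipy.stats import chi2

def branch_site_lrt(lnL_alt: float, lnL_null: float, df: int = 1) -> float:
    """P-value for the statistic 2*(lnL_alt - lnL_null) under a chi-square with df."""
    statistic = 2.0 * (lnL_alt - lnL_null)
    return chi2.sf(statistic, df)

n_tests = 20                       # hypothetical number of foreground branches/nodes tested
alpha = 0.05 / n_tests             # Bonferroni-corrected significance threshold

p_value = branch_site_lrt(lnL_alt=-15210.4, lnL_null=-15222.9)
print(f"P = {p_value:.3g}; significant after Bonferroni correction: {p_value < alpha}")
```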
It is not known how much nectar-dwelling Acinetobacter interact with major sources of pectin in plant tissue, but to our knowledge, they have not been observed to infect plants. However, microbes in nectar regularly interact with pollen grains, which are introduced into nectar by pollinator activity . In fact, some Acinetobacter can cause pollen grains to burst open or pseudogerminate . This ability is beneficial for Acinetobacter , as it is associated with increased growth in nectar when pollen is present . Floral nectar has been shown to be nitrogen limiting for both yeasts and bacteria , so the ability to access nitrogen from pollen in nectar could increase microbial fitness. However, pollen is protected by a resistant exine layer and is difficult to degrade . Pectin is an essential component of pollen cell walls and pollen tubes , and pectin-degrading enzymes have been hypothesized to be involved in pollen breakdown by bacterial gut symbionts of honey bees . We hypothesize that the pectin degradation enzymes in Acinetobacter could be involved in accessing nutrients from pollen, which could explain the apparent importance of genes coding for such enzymes in the clade. In support of this, we find that most of the GH28 and PL1 proteins in Acinetobacter have secretion signals , similar to secreted pectin-degrading enzymes in Pectobacterium , suggesting that they should act extracellularly. Furthermore, we find the most selection on these genes within A. pollinis and A. apis . The former shows strong impacts on pollen bursting and pseudogermination , and the latter was isolated from honey bees and is likely to encounter pollen regularly. We speculate that the ability to degrade pectin could be a key trait allowing Acinetobacter to thrive in nectar and in association with pollinators.

Conclusions

We found that the ecological shift from soil-dwelling to nectar-dwelling led to genomic reduction, followed by dynamic gene gains and losses underlying apparent metabolic shifts in diverse functions. The nectar clade had an increase in the number of genes involved in monosaccharide metabolism and transport, likely due to the high-sugar environment of nectar. We also found changes in nitrogen and amino acid metabolism genes, suggesting a switch toward nitrogen scavenging relative to environmental Acinetobacter , consistent with nitrogen limitation in floral nectar. Nectar-dwelling Acinetobacter species have acquired pectin-degrading enzymes, presumably through HGT from plant pathogens. We found duplication, diversification, and positive selection within pectin-degrading genes, supporting our hypothesis that these genes may provide an important ecological function. Overall, we find that gene loss, diversification, and HGT may all have contributed to the Acinetobacter habitat switch to floral nectar.
Phylogenetic analyses

A phylogenomic species tree was inferred using 26 Acinetobacter genomes and Pseudomonas syringae pv. tomato strain DC3000 as an outgroup . These included sequences from nine environmental Acinetobacter species. These genomes were chosen to represent all deep branching clades, with one genome per clade, from the phylogenetic analysis in Garcia-Garcera et al. .
Based on that analysis, we excluded clades containing the animal pathogens A. baumannii or A. parvus , as they were found to have undergone rapid and distinct evolutionary changes compared with soil-dwelling relatives . We also included a genome of A. larvae , which we reasoned might have convergent gene content similarity with pollinator-associated isolates because it originated from a moth larval gut , and all sequenced genomes of A. brisouii , which is the closest relative to previously sequenced nectar isolates . We included all previously sequenced genomes of Acinetobacter isolated from floral nectar or pollinators . Additionally, we included newly sequenced A. nectaris isolates (EC31, EC34, BB226, and BB362) collected by Rachel Vannette at the University of California, Davis main campus and Bee Biology research facility from the floral nectar of Epilobium canum , Scrophularia californica , and Penstemon heterophyllus . These isolates were grown using previous methods , and DNA was extracted using a Qiagen Blood and Tissue kit following the manufacturer's instructions. Nextera libraries were prepared from genomic DNA and run on a HiSeq 2500 platform (2 × 250 bp paired-end, Rapid Run mode) at the Cornell University Institute of Biotechnology Resource Center Genomics Facility. Genomes were assembled using Discovar de novo and checked for completeness using the Gammaproteobacteria set of 275 gene markers in CheckM (v1.0.18) . Most nectar-dwelling isolate genomes were found to be at least 99% complete , with only three having lower completeness scores (96%–98%). We also report genome completeness for the comparison environmental isolates , which was 96%–99%. These values were taken from GenBank and were generated using CheckM with the Acinetobacter marker set. All analyzed genomes, both those generated here and those from GenBank, were annotated using the RAST Server for consistency . Protein sequences were used in the PhyloPhlAn 3.0 pipeline to determine conserved proteins within the Acinetobacter genomes. PhyloPhlAn identified 399 conserved proteins; their nucleotide sequences were extracted and concatenated with PhyloPhlAn and aligned using MAFFT . Maximum-likelihood trees were reconstructed using IQ-TREE with bootstrapping set to 1,000 and a symmetric substitution model. Welch's t-test was used to determine the significance of differences in genome size between the nectar clade and environmental genomes. Tests of evolutionary rate were performed in PAML , using the rooted phylogenomic tree to generate likelihood values assuming a global clock (null hypothesis) versus a local clock (alternative hypothesis). The local clock allowed the nectar clade to have a different substitution rate than the rest of the tree. The difference between these likelihoods was tested with a likelihood ratio test in PAML. Gene trees were reconstructed using the polygalacturonase and pectin lyase genes from the Acinetobacter genomes. Outgroups were selected using nectar-dwelling Acinetobacter spp. pectin lyase and polygalacturonase genes as BLAST queries in GenBank. We found that all of the best BLAST hits for these genes were from necrotrophic plant pathogens, so we included the most similar (>65% identity and >50% query coverage) sequences in the analyses. Acinetobacter polygalacturonase and pectin lyase genes were identified in our newly sequenced genomes by RAST annotation, by BLAST of the genomes using plant pathogen orthologs as queries, and by comparison with the CAZy database to confirm that we had identified all orthologs.
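The outgroup selection step above filters BLAST hits on identity and query coverage; the sketch below shows one way such a filter over tabular BLAST output (-outfmt 6) might look. The file name and query-length lookup are hypothetical placeholders, and this is an illustration rather than the exact script used.

```python
# Minimal sketch: filter tabular BLAST hits (-outfmt 6) on the thresholds described
# above (>65% identity and >50% query coverage). The file name and the query-length
# lookup are hypothetical placeholders; coverage is computed from alignment length.
import csv

query_lengths = {"FNA3_polygalacturonase_1": 456}   # query lengths in residues (illustrative)

kept = []
with open("pectin_genes_vs_genbank.blastp.tsv") as handle:
    for row in csv.reader(handle, delimiter="\t"):
        query, subject = row[0], row[1]
        pct_identity = float(row[2])                 # outfmt 6 column 3: percent identity
        aln_length = int(row[3])                     # outfmt 6 column 4: alignment length
        coverage = 100.0 * aln_length / query_lengths[query]
        if pct_identity > 65.0 and coverage > 50.0:
            kept.append((query, subject, pct_identity, round(coverage, 1)))

for hit in kept:
    print(hit)
```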
Genes were aligned using MAFFT and were used for maximum-likelihood phylogenetic inference in IQ Tree using the TIM3 substitution model, which was selected using model finder. Ortholog analyses Orthologous protein sequence clustering was conducted using OrthoMCL using the following parameters: mode = 1, inflation = 2, pi cutoff = 50. To determine the ancestral state of orthologs across the Acinetobacter species tree, the software package Count was used , implementing Wagner Parsimony and gene gain penalty of 1.6. This analysis determines the genes gained and lost at each node of the Acinetobacter phylogenomic tree minimizing state changes and assuming all character states are reversible, and with the gain penalty gene losses are more likely than gene gains. This ratio of gene loss to gene gain was determined in Count using the rate model optimization tool, with a gain–loss duplication model and a Poisson prior distribution at the root. To estimate the number of pseudogenes present within the genomes, we used the program Pseudofinder . The algorithm identifies pseudogenes from GenBank files by analyzing average coding sequence (CDS) length, fragmented CDS, and intergenic pseudogenes, and alignment lengths are compared against homologs identified by blastp hits in the UniProt protein database. The following parameters were used to predict potential intergenic, fragmented, truncated, and long pseudogenes: intergenic length = 30, length pseudo = 0.65, shared hits = 0.5, hitcap = 15, intergenic threshold = 0.3. The webserver tools IslandViewer4 and Phaster were used to identify genomic islands and prophage. Ortholog enrichment within functional categories for nectar-dwelling versus environmental genomes was performed using ANOVA and Tukey honestly significant difference (HSD) tests using rstatix in R v.4.4.0 and R studio v.2024.4.1.748 . Functional category and subcategory classifications were obtained from RAST and only categories with greater than six total orthologs were included in analyses. Uncategorized orthologs were excluded. To account for potential effects of variation in genome size, we analyzed both actual ortholog numbers and numbers normalized by the total ortholog number for each genome. Tukey HSD significance tests were conducted for functional categories and subcategories for both untransformed data and normalized data. Categories and subcategories are presented as consistently reduced or increased (in both untransformed and normalized analyses), as reduced less than expected based on total ortholog number (decreased in untransformed, increased in normalized analyses), or increased less than expected based on total ortholog number (increased in untransformed, decreased in normalized analyses). Selection and protein analyses dN/dS (ω) values were estimated for the polygalacturonase and pectin lyase genes (Table S6) and IQ Tree gene tree maximum-likelihood phylogenies using codeml in the PAML v4.4 package, with gaps included . Loci with identical sequences between closely related isolates were removed for the analysis. For each tip branch and node in the phylogenies, a likelihood ratio test for positive selection was performed to compare nested branch-site models (Model Anull versus Model A) . These analyses allowed for independent comparisons of estimated ω among all branches and subclades (serving as the foreground branches) against the remainder of the phylogeny (background). 
Tips and nodes were reported as positive for selection if likelihood ratio test results were below the Bonferroni multiple testing correction cut off, and Bayes Empirical Bayes values were above 0.5. The AI protein prediction software Alphafold was used to predict the structure of pectin degradation enzymes. The Alphafold algorithm is a neural network that generates a multiple sequence alignment from the query protein sequence provided and extracts evolutionary information to generate protein predictions . A web version of Alphafold was used for these predictions . To determine if amino acid sites under selection were functionally important, we predicted the structure of polygalacturonase orthologs identified to be under positive selection from the PAML analysis, specifically genes from A. pollinis isolate FNA3 (GenBank locus tags I2F29_RS12745, I2F29_RS12925, and I2F29_RS11025) and A. apis (CFY84_RS01715). Additionally, we compared confidence values for these protein structures to the structure of a protein from a plant pathogen, Phaseolibacter flectans . Three-dimensional protein predictions were edited using ChimeraX to highlight sites under selection, as well as predicted active sites . Additional domains in the proteins were predicted using the Uniprot database .
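The branch-site screen described at the start of this subsection is essentially bookkeeping over many likelihood-ratio tests, one per foreground branch or node, judged against a Bonferroni-adjusted threshold before the Bayes Empirical Bayes site probabilities are inspected. The fragment below is a minimal sketch of that bookkeeping; the branch labels, lnL values and the single degree of freedom are assumptions for illustration, not codeml output from this study.

```python
# Sketch of the per-branch positive-selection screen: one likelihood-ratio
# test per foreground branch (Model A vs. Model A-null from codeml), judged
# against a Bonferroni-corrected alpha. Branch labels and lnL values are
# placeholders; sites on branches passing the LRT would then be checked for
# Bayes Empirical Bayes probabilities above 0.5 in the codeml output.
from scipy.stats import chi2

lnL_pairs = {                    # label: (null lnL, alternative lnL)
    "branch_1": (-8123.4, -8116.9),
    "branch_2": (-8123.4, -8120.1),
    "node_A":   (-8123.4, -8122.8),
}
alpha = 0.05
bonferroni_alpha = alpha / len(lnL_pairs)    # correct for testing every branch

for label, (lnL_null, lnL_alt) in lnL_pairs.items():
    lrt = 2 * (lnL_alt - lnL_null)
    p_value = chi2.sf(lrt, df=1)             # Model A adds one parameter class
    print(f"{label}: LRT = {lrt:.2f}, p = {p_value:.4f}, "
          f"significant = {p_value < bonferroni_alpha}")
```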
Non‐Surgical Treatment of Moderate Periodontal Intrabony Defects With Adjunctive Cross‐Linked Hyaluronic Acid: A Single‐Blinded Randomized Controlled Clinical Trial
085b4313-04ff-4013-8fbb-1da4db421c59
11743238
Dentistry[mh]
Introduction At present, robust evidence is available demonstrating that professional mechanical removal of subgingival plaque using machine‐driven or hand instruments represents a crucial step in the periodontal treatment and is able to re‐establish periodontal health without the need for additional periodontal surgery (Lang, Salvi, and Sculean ). Data from a systematic review (Suvan et al. ) reported a proportion of closed pockets (i.e., probing depth [PD] ≤ 4 mm) of 57% at 3 months, while it was 74% at 6 months. These results were independent of the non‐surgical periodontal protocols applied (i.e., quadrant‐wise or full‐mouth approaches) or the instruments used (i.e., sonic/ultrasonic devices, hand instruments or a combination of both). Although the majority of periodontal pockets are ‘closed’ following non‐surgical periodontal treatment, and there should be no difference in response to non‐surgical therapy based on the presence of intrabony defects (Tomasi, Leyland, and Wennstrom ), in certain clinical situations (i.e., deep periodontal pockets) pocket closure may not be achieved. After completion of active periodontal therapy, residual pockets (i.e., PD ≥ 5 mm) associated with intrabony defects present a risk factor for disease progression and may require additional surgical therapy (Matuliene et al. ; Papapanou and Wennstrom ). Usually, intrabony defects are considered candidates for periodontal surgical procedures using different regenerative biomaterials (Iorio‐Siciliano et al. ; Matarasso et al. ; Nibali et al. ) and flap designs (Windisch et al. ). However, in the last years several authors have proposed treatment of intrabony defects by means of a minimally invasive non‐surgical technique (MINST) based on the use of mini and micro instruments in combination with magnification loupes (Barbato et al. ). The MINST approach potentially reduces the post‐operative trauma and gingival recessions, thus preserving the aesthetics (Iorio‐Siciliano et al. ; Riberio et al. ) while yielding substantial clinical and radiographic improvements in intrabony defects (Nibali et al. , ). In addition, similar PD reduction and clinical attachment level (CAL) gain were noted when comparing MINST with minimally invasive surgical approaches without biomaterials in the treatment of intra‐osseous defects (Riberio et al. ). It has been suggested that these results depend on the accurate professional mechanical removal of subgingival plaque and formation of a stable blood clot. To enhance the clinical benefits of MINST in terms of blood clot stabilization and acceleration of healing processes, several authors have proposed the use of adjunctive therapies delivered alongside professional mechanical removal of subgingival plaque, such as the application of an enamel matrix derivative (EMD) (Graziani et al. ; Jentsch et al. ). However, the adjunctive use of EMD following non‐surgical subgingival professional mechanical plaque removal does not seem to additionally improve the clinical outcomes when compared to non‐surgical subgingival instrumentation alone (Roccuzzo et al. ). Likewise, Anoixiadou and co‐workers did not find any statistically significant changes in clinical and radiographic parameters 12 months after treatment of intrabony defects using MINST with or without the local application of EMD (Anoixiadou, Parashis, and Vouros ). 
In contrast, a systematic review reported a moderate benefit of local application of hyaluronic acid (HA) on the clinical outcomes following non‐surgical periodontal therapy (Eliezer et al. ). HA stimulates blood clot formation and shows a bacteriostatic effect on periodontal bacterial pathogens (Scully et al. ). Moreover, HA plays a crucial role in each phase of wound healing by stimulating cell proliferation (Olczyk et al. ) and inducing angiogenesis and osteogenesis (Bezerra et al. ). In recent years, a new formulation of cross‐linked hyaluronic acid gel of non‐animal origin with high molecular weight (xHyA) has been proposed to improve wound healing and regenerate the periodontal tissues (Mendes et al. ). A series of histological studies reported that intrabony defects, gingival recessions and furcation defects treated using xHyA gel showed a greater area of new cementum and new periodontal ligament (Shirakata, Imafuji, et al. ; Shirakata, Nakamura, et al. ; Shirakata et al. ). These preclinical observations have been corroborated by clinical studies indicating a substantial benefit of xHyA in the treatment of gingival recessions and intrabony defects (Pilloni, Rojas, et al. ; Pilloni, Zeza, et al. ; Pilloni et al. ). However, it is currently unknown to what extent the use of xHyA in conjunction with MINST may further improve clinical outcomes in intrabony defects compared to the use of MINST alone. Therefore, the aim of the present study was to clinically and radiographically evaluate the outcomes obtained at 6 months following the treatment of moderate intrabony defects using MINST with or without adjunctive delivery of xHyA. Materials and Methods 2.1 Study Design The study was designed as a superiority, parallel‐arm, single‐blinded randomized controlled trial (RCT) with a 6‐month follow‐up. The aim was to test the null hypothesis of no statistically significant difference with respect to PD change. In each patient, one intrabony defect was selected for the investigation. The intrabony defects were randomly assigned to the test or control procedure. Intrabony defects of the test group were treated by means of MINST with xHyA gel as an adjunct, while in the control group MINST alone was performed. The study was conducted at the Department of Periodontology, University of Naples Federico II, from January 2022 to March 2023. The research protocol was submitted to and approved by the Institutional Review Board (IRB) of the University of Naples Federico II (Approval Number: 141/21), and the study protocol was registered at ClinicalTrials.gov (No. NCT05188898). Furthermore, written consent was obtained from all patients before the investigation. The study is reported according to the CONSORT statement and was conducted in accordance with the principles of the Declaration of Helsinki on experimentation involving human subjects. 2.2 Patient Sample From the patient pool of the Department of Periodontology, University of Naples Federico II, patients diagnosed with periodontitis according to Tonetti, Greenwell, and Kornmann  were invited to participate in the study. After initial screening, a comprehensive periodontal examination was performed to confirm the diagnosis of periodontitis. Patients who met the eligibility criteria were enrolled in the study. The inclusion criteria were as follows:
– Males and females aged ≥ 18 years.
– Patients with a diagnosis of periodontitis (stage III or IV) (Tonetti, Greenwell, and Kornmann ).
– Single‐rooted and multi‐rooted teeth in both arches.
– Presence of interdental periodontal defects with PD ≥ 5 mm associated with an intrabony component ≥ 2 mm at single‐rooted teeth or at molars with ≤ class I furcation involvement.
– One intrabony defect was treated per patient. If multiple teeth presented pockets associated with an intrabony defect, only the site with the deepest PD was selected for the study. If two or more intrabony defects in the same patient had the same PD, the site with the deepest radiographic intrabony component was selected.
The exclusion criteria were as follows:
– Patients with systemic diseases.
– Pregnant or lactating females.
– Tobacco smokers (≥ 10 cigarettes per day).
– Multi‐rooted teeth with class II and class III furcation defects.
– Third molars.
– Teeth with grade III mobility.
– Peri‐apical pathology and acute abscess.
– Non‐surgical or surgical periodontal treatment in the past 12 months.
– Prolonged treatment with antibiotics or anti‐inflammatory agents within 6 months prior to periodontal therapy.
– Patients without an adequate level of oral hygiene following step 1 of periodontal therapy (full‐mouth plaque score [FMPS] ≥ 20%).
– Patients without an adequate level of oral hygiene at the 1‐, 3‐ and 6‐month follow‐up visits (FMPS ≥ 20%).
The initial periodontal screening took place from October 2021 to December 2021, while the trial was conducted from January 2022 to March 2023. 2.3 Clinical and Radiographic Outcome Measures 2.3.1 Primary Outcome The primary outcome was the change in PD, measured from the gingival margin to the bottom of the pocket. 2.3.2 Secondary Outcomes The following secondary clinical and radiographic outcomes were assessed:
– FMPS, representing the percentage of sites covered with plaque (O'Leary, Drake, and Naylor ).
– Full‐mouth bleeding score (FMBS), representing the percentage of sites with bleeding on probing (Claffey et al. ).
– CAL, measured from the cemento‐enamel junction (CEJ) to the bottom of the pocket.
– Gingival recession (GR), measured from the CEJ to the gingival margin.
– CEJ–bottom of the defect (CEJ‐BD), measured from the CEJ to the most apical extension of the bone defect.
– Defect fill (DF), calculated as the difference between CEJ‐BD at baseline and after 6 months.
– Radiographic defect angle (RDA), defined as the angle between the line connecting the CEJ of the tooth presenting the intrabony defect to the most apical point of the defect and the line connecting the most apical point of the defect to the point where the bone crest touched the neighbouring tooth (Steffensen and Webert ).
All clinical variables were recorded using a manual periodontal probe (PCP‐UNC 15, Hu‐Friedy, Chicago, IL, USA), applying a probing force of 0.2 N. The radiographic examination was performed at baseline and at the 6‐month follow‐up. Radiographs were acquired using a parallel cone technique with a Rinn holder, and radiographic measurements were performed using computer software (VistaSoft 2.4.3, Durr Dental Italia S.R.L). The distortion in the radiographic measurements was adjusted using the correction factor method (Tu et al. ). In brief, the correction factor was calculated from the vertical distance between the CEJ and the root apex (RA) at baseline and after 6 months:
Correction factor = (baseline CEJ‐RA) / (6‐month CEJ‐RA)
The corrected radiographic defect fill (DF) was then derived as
DF = baseline CEJ‐BD − (6‐month CEJ‐BD × correction factor)
In addition, information on gender, age and smoking habits was also collected.
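Because the two formulas above are easy to mis-read in running text, a short sketch of the correction may help. It assumes, as written, that the 6-month CEJ-BD measurement is multiplied by the correction factor before being subtracted from the baseline CEJ-BD; all measurement values below are invented for illustration.

```python
# Illustrative correction of a radiographic defect-fill measurement.
# Assumption: the 6-month CEJ-BD reading is rescaled to the baseline film
# via the CEJ-RA ratio before subtraction; all values are invented.
def corrected_defect_fill(cej_ra_baseline: float, cej_ra_6m: float,
                          cej_bd_baseline: float, cej_bd_6m: float) -> float:
    correction_factor = cej_ra_baseline / cej_ra_6m
    # Put the follow-up measurement on the baseline scale, then subtract.
    return cej_bd_baseline - cej_bd_6m * correction_factor

df_mm = corrected_defect_fill(cej_ra_baseline=12.0, cej_ra_6m=11.5,
                              cej_bd_baseline=6.5, cej_bd_6m=4.5)
print(f"Corrected defect fill: {df_mm:.2f} mm")   # ~1.80 mm in this example
```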
All data were collected in the Department of Periodontology, University of Naples Federico II. 2.4 Sample Size Calculation Sample size calculation was performed using computer software (IBM‐SPSS, IBM Inc.). Based on data presented in a previous study (Rajan et al. ), in order to detect a statistically significant difference of 1.12 ± 0.9 mm with a power of 0.80 for the primary outcome (PD change at 6 months) between test and control procedures, a sample size of 11 patients with one intrabony defect was required in each group. To avoid an underpowered sample due to an overestimation of the expected difference between the two procedures, a total of 21 patients with 21 intrabony defects were enrolled in each group. Potential dropouts were not included in the sample size calculation. 2.5 Investigator Calibration All parameters were recorded by two expert periodontists (B.A. and M.L.). Examiners attended a training and calibration session on a total of 40 patients (20 patients per examiner) not involved in the trial. PD measurements were repeated once by each examiner. Furthermore, the inter‐examiner variability for the radiographic measurement (RDD) was also checked. Twenty radiographs of patients not enrolled in the study were used for the calibration. A contingency coefficient (Cohen's kappa coefficient) was used to test the agreement between examiners. A value of 0.954 was obtained for the clinical variable (PD), while a value of 0.814 was found for the radiographic parameter (RDD). 2.6 Randomization and Blinding The patients were randomly assigned to test or control procedures by means of simple randomization without restrictions and using a 1:1 allocation ratio. Minimization and stratification were not used. Randomization was performed using a computerized random number generator ( Random.org ; www.random.org ). Allocation concealment was achieved by assigning even numbers to the test group and odd numbers to the control group. The cards with numbers were enclosed in opaque envelopes, and treatment allocation was performed after professional mechanical subgingival plaque removal of the intrabony defect selected for the study by opening the envelope containing the number. The random allocation sequence was generated by a clinician not involved in the investigation. The examiners of outcome measures were masked with respect to test and control procedures, while the periodontist performing the treatments and the patients were not masked. At baseline and after 6 months of follow‐up, radiographs were taken by two masked examiners (A.B. and L.M.).
All periodontal pockets in each quadrant were treated with subgingival professional mechanical plaque removal using an ultrasonic scaler under local anaesthesia. Only the periodontal pocket associated with the intrabony defect selected for the investigation was treated with the experimental procedure. MINST was performed by means of subgingival application of thin ultrasonic tips (Figure ). Additional subgingival instrumentation using Gracey mini‐curettes (Hu‐Friedy) was also performed to achieve biofilm removal in areas with difficult access (Figure ). Subgingival rinses were not performed, in order to achieve blood clot stabilization following subgingival instrumentation. All therapies were performed using ×4.0 magnification loupes (Univet, Italy) (Nibali et al. ). After completion of MINST, the defects were randomly assigned to the test group or the control group. In the patients of the test group, at the end of subgingival professional mechanical plaque removal, the pockets associated with intrabony defects were filled once using xHyA gel (Hyadent BG, Regedent AG, Zürich, Switzerland) (Figure ). The defects of the control group were treated only with the MINST approach (Figure ). After completion of both procedures, oral hygiene instructions were reinforced. No antiseptic mouthwashes or antibiotics were prescribed for either group. All clinical procedures were performed by the same expert operator (V.I.S.). 2.7.3 Post‐Operative Follow‐Up All patients were recalled at 1, 3 and 6 months following treatment for supragingival professional mechanical plaque removal and motivation. After 3 months, only clinical parameters were recorded, while at 6 months the final clinical and radiographic evaluations were performed (Figure ). During the follow‐ups at 1 and 3 months, no additional subgingival professional mechanical plaque removal or application of xHyA was performed. 2.8 Statistical Analysis All data were collected and analysed at the Department of Periodontology, University of Naples Federico II. Data analysis was conducted using a statistical software package (IBM‐SPSS, IBM Inc.); the statistician was not blinded with respect to the research protocol. Since each patient contributed only one intrabony defect to the study, the patient was considered the statistical unit. The variables PD, CAL, GR, CEJ‐BD and DF were expressed in millimetres, FMPS and FMBS were expressed in percentages, and the radiographic angles were reported in degrees. Means and standard deviations (SD) were calculated for each parameter. The assumption of normal distribution was checked for all parameters by means of the Shapiro–Wilk test, and parametric or non‐parametric tests were applied accordingly. A Chi‐square test was used to compare gender and smoking habits between test and control procedures, while the Mann–Whitney U test was used to evaluate age and tooth location (mandible/maxilla). The inter‐group and intra‐group analyses of FMPS and FMBS were carried out using an unpaired and a paired t‐test, respectively. An intra‐group analysis for the variables full‐mouth probing depth (FMPD), PD, CAL, GR, BOP, CEJ‐BD and defect angle was performed using the Wilcoxon test, while the inter‐group evaluation was made by means of the Mann–Whitney U test. The inter‐group analysis for DF was performed by means of the unpaired t‐test.
Intra‐group analysis of differences in number and percentages of sites with PD ≤ 4 mm was carried out using the lambda test, while the inter‐group evaluation was done by means of the McNemar test. To compare the frequency distribution of sites with residual PD and CAL gain between the test and control procedures, the lambda test was used. A p‐value < 0.05 was set to accept a statistically significant difference.
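For readers who want to retrace the sample-size figure and the normality-driven choice of tests, the sketch below uses statsmodels and SciPy rather than SPSS. The effect size comes from the 1.12 ± 0.9 mm difference cited above; a two-sided α of 0.05 is an assumption not stated explicitly in the text, and the PD vectors are invented placeholders illustrating only the decision rule.

```python
# Sketch of the sample-size estimate and the normality-driven test choice.
# Assumptions: two-sided alpha = 0.05; PD vectors are placeholders, not trial data.
import numpy as np
from scipy import stats
from statsmodels.stats.power import TTestIndPower

# Expected difference 1.12 mm with SD 0.9 mm -> Cohen's d of about 1.24,
# which corresponds to roughly 11 patients per arm at 80% power.
d = 1.12 / 0.9
n_per_group = TTestIndPower().solve_power(effect_size=d, alpha=0.05,
                                          power=0.80, alternative="two-sided")
print(f"Estimated patients per group: {n_per_group:.1f}")

# Intra-group comparison: Shapiro-Wilk on the paired differences, then a
# paired t-test if normality holds, otherwise the Wilcoxon signed-rank test.
pd_baseline = np.array([6.5, 7.0, 6.0, 8.0, 6.5, 7.5])   # placeholder PD (mm)
pd_6months = np.array([4.0, 4.5, 3.5, 5.0, 4.0, 4.5])
if stats.shapiro(pd_baseline - pd_6months).pvalue >= 0.05:
    stat, p = stats.ttest_rel(pd_baseline, pd_6months)
    test_used = "paired t-test"
else:
    stat, p = stats.wilcoxon(pd_baseline, pd_6months)
    test_used = "Wilcoxon signed-rank test"
print(f"Intra-group comparison ({test_used}): p = {p:.4f}")
```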
Results 3.1 Patient Accountability Seventy patients diagnosed with periodontitis (Tonetti, Greenwell, and Kornmann ) were invited to participate in the study. After initial screening, 20 patients not meeting the inclusion criteria were excluded, while 8 patients declined to participate in the study. Finally, 42 patients with 42 intrabony defects were included in the study. At 6 months, 38 patients with one intrabony defect each (a total of 38 intrabony defects) were available for the analysis. Four patients were lost to follow‐up. In the control group, two patients were excluded during the follow‐up based on an insufficient level of oral hygiene, while in the test group two patients declined to continue at 1 and 2 months, respectively (Appendix ). During the follow‐up, no adverse events were recorded and no teeth were lost. 3.2 Patient Characteristics The characteristics of the sample enrolled in the trial are given in Table . Fourteen females and 5 males (mean age 49.3 ± 11.6 years) and 10 females and 9 males (mean age 50.8 ± 10.8 years) with a diagnosis of generalized stage III, grade C periodontitis were allocated to the test group and the control group, respectively. Nine patients were tobacco smokers (four in the test group and five in the control group). Five intrabony defects in the mandible and 14 in the maxilla received the experimental procedure, while 8 intrabony defects in the mandible and 11 in the maxilla were treated by means of the control procedure alone. No statistically significant differences were found between the test and control groups ( p > 0.05) (Table ). 3.3 FMPS and FMBS All patients showed a statistically significant improvement in FMPS and FMBS after 6 months ( p < 0.05). At 6 months, FMPS and FMBS decreased from 58.6 ± 6.0% and 53.4 ± 6.6% to 18.7 ± 2.2% and 14.3 ± 3.6% in patients of the test group, while FMPS and FMBS changed from 59.5 ± 6.5% to 18.9 ± 1.8% and from 55.8 ± 6.6% to 14.7 ± 2.5% in the control group. Inter‐group comparison did not show statistically significant differences ( p > 0.05) (Table ). 3.4 FMPD and Number of Sites With PD > 4 mm and PD > 6 mm of the Entire Dentition At baseline, the mean FMPD was 3.9 ± 0.5 mm and 4 ± 0.4 mm in patients of the test and control group, respectively. After 6 months, FMPD was 3.3 ± 0.2 mm in the test group and 3.4 ± 0.2 mm in the control group. Intra‐group comparison (i.e., baseline to 6 months) showed statistically significant differences ( p < 0.05), while no statistically significant differences between the test and control groups were found ( p > 0.05). At 6 months, the number of sites with PD > 4 mm and ≤ 6 mm changed from 384 to 330 and from 480 to 322 in the test and control groups, respectively. At baseline, patients of the test and control groups had 546 and 490 sites with PD > 6 mm, respectively. No sites with PD > 6 mm were recorded after 6 months. Hence, overall, sites with PD > 4 mm varied from 930 to 330 in the test group and from 970 to 322 in the control group (Table ).
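The proportions reported in this and the following sections follow directly from the site counts; as a quick illustration, the fraction of whole-mouth sites with PD > 4 mm that resolved over 6 months can be recomputed from the totals above using only the numbers already reported.

```python
# Recomputing the whole-mouth reduction in sites with PD > 4 mm from the
# counts reported above (test: 930 -> 330; control: 970 -> 322).
site_counts = {"test": (930, 330), "control": (970, 322)}

for group, (baseline, month6) in site_counts.items():
    resolved = baseline - month6
    print(f"{group}: {resolved}/{baseline} sites resolved "
          f"({100 * resolved / baseline:.1f}%)")
```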
3.5 Changes in PD The primary outcome, namely PD, decreased statistically significantly at 3 and 6 months in both groups ( p < 0.05). At baseline, intrabony defects treated with the experimental procedure showed a PD of 6.7 ± 1.4 mm, while in the control group the PD was 6.8 ± 0.8 mm. At 3 months, PD was 3.3 ± 1.0 and 5.2 ± 0.7 mm in the test and the control group, respectively, while after 6 months, a PD of 4 ± 0.8 and 4.2 ± 0.8 mm was recorded for the test and the control procedure, respectively. No statistically significant differences between the test and control groups were recorded at baseline and after 6 months ( p > 0.05). However, a statistically significant difference was noted at 3 months ( p < 0.05) (Table ). 3.6 Changes in CAL Statistically significant changes were found between baseline, 3 months and 6 months in both groups ( p < 0.05). In the test group, CAL changed from 8.4 ± 2.8 to 4.9 ± 2.0 and to 5.5 ± 1.9 mm between baseline and 3 and 6 months, while the defects of the control group showed a CAL change from 8.2 ± 1.7 to 6.8 ± 1.9 and to 6 ± 2.4 mm. Inter‐group comparison showed no statistically significant difference at baseline and after 6 months ( p > 0.05). However, a statistically significant difference was noted at 3 months ( p < 0.05) (Table ). 3.7 Changes in GR At baseline, patients treated with the test procedure showed a GR of 1.6 ± 1.7 mm, while patients of the control group showed a GR of 1.4 ± 1.5 mm. At the 3‐month follow‐up, the GR was 1.6 ± 1.6 and 1.6 ± 1.9 mm in the test and the control group, respectively, while after 6 months these values were 1.5 ± 1.7 mm in the test group and 1.8 ± 2.2 mm in the control group (Table ). 3.8 Number and Percentage of BOP‐Positive Sites The number and percentage of sites with positive BOP at baseline and at the 3‐ and 6‐month follow‐ups are summarized in Table . At baseline, BOP‐positive sites were assessed at 12 defects (63.1%) of the test group and at 11 defects (57.9%) of the control group. After 3 months, the number (percentage) of BOP‐positive sites was 2 (10.5%) and 5 (26.3%) in the test and control groups, respectively. At 6 months, two (10%) sites in the test group and five (21%) in the control group were BOP‐positive. A statistically significant improvement was observed when the number and percentage of BOP‐positive sites were compared at baseline and after 3 months ( p < 0.05). Comparable results were observed when evaluating the presence of BOP‐positive sites between baseline and 6 months ( p < 0.05). No statistically significant changes were recorded between 3 and 6 months ( p > 0.05) (Table ). 3.9 Number and Percentage of Defect Sites With 'Pocket Closure' The number and percentage of sites that displayed a PD ≤ 4 mm without BOP (i.e., pocket closure) are shown in Table . After 3 months, the number of sites with pocket closure was 16 (84.2%) and 2 (10.5%) in the test and control group, respectively, while after 6 months, PD ≤ 4 mm was recorded in 15 (78.9%) test sites and 12 (63.1%) control sites. Statistically significant differences were observed between the test and control groups at 3 months ( p < 0.05), while at 6 months no statistically significant differences were recorded ( p > 0.05).
Intra‐group analysis showed no statistically significant changes between 3 and 6 months in the test group ( p > 0.05), while a statistically significant difference was observed in the control group ( p < 0.05) (Table ). 3.10 Frequency Distribution of Sites With Residual PDs and CAL Changes Table summarizes the frequency distribution of residual PDs with or without BOP and CAL changes after 3 and 6 months. A statistically significant improvement in residual PDs and CAL changes was seen after 3 months when xHyA gel was used ( p < 0.05), while no statistically significant changes were noted at 6 months ( p > 0.05) (Table ). 3.11 Radiographic Outcomes A statistically significant difference was noted between baseline and 6 months for both procedures when CEJ‐BD values were compared ( p < 0.05). CEJ‐BD was 6.3 ± 2.3 mm at baseline and 4.3 ± 2.3 mm at 6 months in the test group, while means of 6.9 ± 1.7 and 5.6 ± 1.9 mm were assessed in the control group. After 6 months, a statistically significant difference was found when comparing CEJ‐BD values of intrabony defects treated with MINST + xHyA with those treated with MINST alone ( p < 0.05). At 6 months, DF was 2.1 ± 0.9 and 1.4 ± 1.1 mm for the test and control procedures, respectively. A statistically significant difference was found ( p < 0.05). At baseline, an RDA of 36 ± 14.8° and 44.7 ± 12.9° was assessed in the test and control patients, respectively. At 6 months, the RDA was 53.9 ± 23.8° in the test group and 48.4 ± 19.8° in the control group. Intra‐group comparison (i.e., baseline to 6 months) showed a statistically significant difference ( p < 0.05) in the test group but no statistically significant difference in the control group ( p > 0.05). At 6 months, inter‐group analysis did not show statistically significant differences ( p > 0.05). No statistically significant differences ( p > 0.05) were found when inter‐group analysis was made. However, the intra‐group comparison (i.e., baseline to 6 months) showed a statistically significant change for patients of the control group ( p < 0.05) (Table ).
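As a rough plausibility check, the inter-group defect-fill comparison can be approximated from the reported summary statistics alone using an unpaired t-test computed from means, standard deviations and group sizes (n = 19 per arm, from the patient flow above). This takes the published values at face value and is not a re-analysis of the raw data.

```python
# Approximate re-computation of the inter-group defect-fill comparison from
# the reported summary statistics (2.1 +/- 0.9 mm vs. 1.4 +/- 1.1 mm,
# n = 19 per arm); illustrative only, not a re-analysis of the raw data.
from scipy.stats import ttest_ind_from_stats

t_stat, p_value = ttest_ind_from_stats(mean1=2.1, std1=0.9, nobs1=19,
                                       mean2=1.4, std2=1.1, nobs2=19,
                                       equal_var=True)
print(f"Unpaired t-test on DF: t = {t_stat:.2f}, p = {p_value:.3f}")  # p ~ 0.04
```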
Discussion The present study aimed at comparing the healing of intrabony defects treated by means of MINST + xHyA gel application with MINST alone during non-surgical periodontal therapy. After 6 months, a significant improvement of all clinical and radiographic parameters was observed in both groups. However, no statistically significant difference between test and control procedures was found. Therefore, the null hypothesis, namely no statistically significant difference between procedures, was accepted. These data seem to suggest that the adjunctive use of xHyA gel with MINST did not yield superior results when compared with MINST alone after 6 months. The results of the present study must be interpreted with caution, however, because the intrabony defects displayed moderate severity, both in terms of clinical and radiographic measurements. The clinical improvements of sites treated with the experimental procedure at 6 months (i.e., PD reduction of 2.7 mm and CAL gain of 2.8 mm) agree with those reported in previous studies. Diehl and co-workers achieved a PD reduction of > 2 mm associated with a reduction of the sites with BOP in suprabony defects (Diehl et al. ). Ramanauskaite and co-workers reported a PD reduction of 2.9 mm and a CAL gain of 2.6 mm in suprabony defects (Ramanauskaite, Machiulskiene, Dvyliene, et al. ). However, in our trial the control procedure (i.e., MINST alone) showed similar results when compared with local application of xHyA following MINST. These findings agree with those reported by Pilloni and co-workers on the efficacy of xHyA application as an adjunct to subgingival professional mechanical plaque removal of suprabony defects (Pilloni, Rojas, et al. ; Pilloni, Zeza, et al. ). The authors of that study noted that the use of xHyA showed a tendency for better results but there was no statistically significant difference between test and control procedures. On the contrary, statistically significant differences were found by Ramanauskaite and co-workers for all parameters investigated (i.e., PD reduction and CAL gain) when xHyA was added to MINST (Ramanauskaite, Machiulskiene, Shirakata, et al. ). Potential explanations may be related to the type of periodontal defects and the adjunctive delivery of sodium hypochlorite/amino acids. In the study by Ramanauskaite, Machiulskiene, Shirakata, et al. , the authors tested the combination of xHyA and sodium hypochlorite/amino acids as an adjunct to MINST in patients with primarily suprabony defects, while in the present study, only isolated intrabony defects were selected. It may thus be anticipated that the lack of difference between test and control groups at 6 months depends on the efficacy of MINST in the treatment of intrabony defects (Nibali et al. ) with or without the adjunctive use of biological agents. In the present study, the benefit of using xHyA gel as an adjunct on PD reduction and CAL gain was only noted during the first 3 months of healing. At 3 months, the differences in PD reduction and in CAL gain were statistically significant between procedures, and the sites treated with MINST + xHyA gel showed faster healing compared to those treated by means of MINST alone. The statistically significant improvement of the defects treated by MINST + xHyA seems to depend on the delayed response of the defects of the control group rather than on the experimental procedure. Certainly, intrabony defects treated by means of MINST alone require a healing period of > 3 months. 
However, one of the objectives of this study was also to evaluate the potential of xHyA in accelerating the healing process. For these reasons, the clinical parameters were also recorded at 3 months in both groups. It is important to point out that patients treated with the experimental procedure showed a higher percentage of pocket closure at 3 months (84.2% vs. 10.5%), and no sites with PD ≥ 6 mm were recorded. These results seem to indicate a possible stimulating effect of xHyA in the early healing phase of intrabony defects. Indirect biological evidence showed that xHyA accelerated wound healing by stimulating cell migration and proliferation (Olczyk et al. ) and by providing a sealing effect of the periodontal pocket. After local application, xHyA increased the dentin surface texture, and an increase in the number and an improvement in the spreading of periodontal ligament cells on this surface were then noted (Mueller et al. ). However, these observations should be interpreted with caution because histological evidence is lacking in humans. The statistically significant improvement in clinical parameters of test sites may also be related to the bacteriostatic effect of xHyA on the strains associated with periodontitis. Microbiological evidence has shown that the adjunctive application of xHyA prevents recolonization of the treated sites by periodontopathogens (Eick et al. ). In addition, the counts of Treponema denticola and Campylobacter rectus are significantly reduced in sites treated with xHyA, while Prevotella intermedia and Porphyromonas gingivalis increased in sites that did not receive xHyA gel (Eick et al. ). In all patients, both clinical procedures were delivered while attempting to minimize potential trauma to the gingival tissue. For these reasons, gingival curettage was avoided and changes in GR within 1 mm were observed in both groups. Both treatment modalities led to significant radiographic improvement as indicated by mean defect fill and defect angle changes, although the radiographic results were smaller in magnitude when compared with those of a previous study (Nibali et al. ). This aspect depends on the configuration of the intrabony defects enrolled in the present investigation, because the majority of the defects were shallow with a wide radiographic angle. At 6 months, the intrabony defects treated with the experimental procedure showed a statistically significant defect fill compared to sites provided with the control treatment (2.1 vs. 1.4 mm). The higher radiographic defect fill of sites treated with the experimental therapy may have been due to the fact that xHyA strongly induces the growth of osteoprogenitor cells and maintains their stemness, thus suggesting a potential regulatory effect of HA on the balance between self-renewal and differentiation during bone healing (Asparuhova et al. ). Radiographic reduction of the vertical intrabony component was associated with an increase in the radiographic defect angle. This increase is an expression of the substantial bone fill, which occurs in the most apical part of the defect. At baseline, the sites treated with xHyA showed a mean radiographic defect angle of ≥ 36°, which was associated with a CAL gain of ≤ 4 mm (i.e., 2.8 mm). This result agrees with the outcomes of a previous study (Tsitoura et al. ) reporting a significant association between the baseline radiographic defect angle and CAL gain. These authors (Tsitoura et al. 
) demonstrated that the probability of obtaining a CAL gain of ≥ 4 mm is higher when the radiographic defect angle is ≤ 22° than when it is ≥ 36°. The majority of the intrabony defects enrolled in the present study were shallow with a wide radiographic angle. Usually, wide defect angles are related to shallow intrabony defects (Kim et al. ) and these characteristics did not negatively influence the final results because the healing (i.e., percentage of CAL gain) of deep and shallow defects was reported to be similar after periodontal treatment (Cortellini et al. ). The lack of a customized bite to place the film holder and the relatively short time interval for the radiographic analysis are limitations of this study. Since the radiographs were not taken in a standardized way, the correction factor method (Tu et al. ) was used to correct the potential variation in radiographic images due to positioning. However, the primary objective of the present investigation was to evaluate the efficacy of xHyA as adjunct to MINST in PD reduction of intrabony defects during the second step of periodontal therapy. The radiographic examination can be considered as an adjunctive exam to verify the healing pattern of intrabony defect and to confirm the efficacy of xHyA in the healing process. Certainly, studies with a longer follow‐up are needed to confirm these results. Usually, CAL gain is set as the primary outcome measure when a biological agent is tested in the treatment of intrabony defects. Although regenerative properties (Mendes et al. ) of xHyA were observed in previous studies, in the present investigation the CAL gain was not considered as the primary outcome. The aim of present trial was to evaluate the efficacy of xHyA as an adjunct to MINST in the treatment of intrabony defects during step 2 of periodontal therapy. In other words, this trial investigated the possibility to resolve periodontal pockets associated with intrabony defects by means of non‐surgical periodontal treatment and to avoid adjunctive surgical therapies (i.e., step 3 of periodontal therapy). Hence, in agreement with a previous study (Loos and Needleman ), PD was set as the primary outcome measure. One intrabony defect was treated per patient. If multiple teeth presented pockets associated with an intrabony defect, only the site with the deepest PD was selected for the study. In case of the same PD in two or more intrabony defects per patient, the site with the deepest radiographic intrabony component was selected. A sample size of 11 patients with one intrabony defect was required in each group to reject the null hypothesis and obtain a statistically significant difference between test and control procedures (Rajan et al. ) with respect to the primary outcome (i.e., PD change). However, Rajan and co‐workers evaluated the adjunctive effect of local application of HA gel following conventional subgingival instrumentation (SRP) in patients with periodontitis, and no previous studies compared the local adjunct of xHyA to MINST with respect to MINST alone in the treatment of intrabony defects. For these reasons, 42 patients with 42 intrabony defects were enrolled to avoid an underpowered sample size due to an overestimation of the expected difference between both procedures. This aspect may be considered a bias because a larger number of patients than indicated by the sample size calculation could increase the probability of obtaining a statistically significant difference. 
However, the recruitment of a larger sample increases the probability of achieving valid statistical inference and of transferring the results to the population. To avoid bias related to the operator, all patients received the same therapy (i.e., MINST) and the randomization was done after completion of subgingival professional mechanical plaque removal of intrabony defects. Unfortunately, a placebo gel was not used to treat the sites of the control group. This fact could be considered a limitation of the present study because the operator and patients were not masked with respect to the test and control procedures. The placebo was omitted because the study was supported only by the authors' institution, and no suitable placebo gel could be obtained. Although the lack of a placebo gel may be considered a bias, it has to be kept in mind that the randomization envelope was opened only when the operator completed subgingival instrumentation. Furthermore, most trials evaluating the use of xHyA gel (Pilloni, Rojas, et al. ; Pilloni, Zeza, et al. ; Ramanauskaite, Machiulskiene, Shirakata, et al. ) did not use a placebo. Additionally, the lack of a placebo gel reflects the clinical reality, thus making the results more relevant for the clinician. The relatively short follow-up time (i.e., 6 months) may be considered another limitation. The aim of the present study was to test the efficacy of MINST with xHyA gel as an adjunct in the treatment of intrabony defects during the second step of periodontal treatment prior to periodontal re-evaluation. Data from the systematic review by Suvan et al. indicated a mean PD reduction of 1.4 mm and a proportion of pocket closure of 74% after 6-8 months. Hence, a follow-up of 6 months can be considered acceptable prior to the periodontal re-evaluation in daily practice. The lack of a split-mouth design was a limitation of this study. Since the experimental and control sites were treated as part of step 2 of periodontal therapy together with other sites, the healing of test sites may be attributed to the patient's general response and not exclusively to the adjunctive use of xHyA. In fact, the mean FMPD significantly improved in both groups after 6 months, and no statistically significant differences were found between the groups. However, adjunctive delivery of xHyA accelerated healing of the experimental sites in the first 3 months. Since the intrabony defects were treated by means of a non-surgical approach, the morphology of the intrabony component (i.e., number of residual walls) and its impact on the final outcomes were not evaluated. However, a recent study (Nibali et al. ) reported no evidence of an association between defect characteristics and the healing of intrabony defects following MINST. One of the limitations of the present trial was the lack of patient-reported outcomes (i.e., PROMs). Although all patients received a minimally invasive therapy, the additional use of xHyA may reduce post-operative discomfort. Another limitation of the present study was the sample population recruited in the trial. These outcomes relate to a sample enrolled following strict eligibility criteria; however, the majority of patients with severe periodontitis are smokers and may also suffer from systemic disorders (e.g., diabetes). Hence, the true efficacy of the experimental procedure cannot be generalized to the large majority of the population suffering from periodontitis. 
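As a rough illustration of the a priori sample-size estimate of 11 patients (one defect each) per group mentioned earlier in this discussion, the snippet below runs a generic two-sample t-test power analysis in Python. The effect size, alpha, and power used here are illustrative assumptions only; they are not the parameters reported for the trial or by Rajan and co-workers.

```python
# Generic two-sample power calculation of the kind used for sample-size planning.
# The inputs (effect size, alpha, power) are illustrative assumptions, not the
# values used in the present trial.
import math
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=1.2,          # assumed standardized difference in PD change
    alpha=0.05,
    power=0.80,
    ratio=1.0,
    alternative="two-sided",
)
print(math.ceil(n_per_group))  # defects required per group under these assumptions
```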
Conclusion Taken together, the present results indicate that (a) treatment of intrabony defects with MINST, with or without the application of xHyA gel, resulted in statistically significant improvements in the investigated clinical parameters at 3 and 6 months after therapy, and (b) although the adjunctive use of xHyA gel to MINST improved the clinical outcomes compared with MINST alone up to 3 months, statistically significant differences were not observed at 6 months. Vincenzo Iorio‐Siciliano: conceptualization, investigation. Andrea Blasi: data acquisition, statistical analysis. Leopoldo Mauriello: patient enrollment and data acquisition. Giovanni E. Salvi: co‐drafted the manuscript and data interpretation. Luca Ramaglia: co‐drafted the protocol and manuscript, project administration. Anton Sculean: co‐drafted the manuscript and data interpretation. All authors approved the final version. The authors declare no conflicts of interest. Appendix S1. Flow diagram.
Tularemia is a zoonotic disease caused by Francisella tularensis , a bacterium found in several animal species, most frequently occurring in rabbits and rodents. In 2023, tularemia occurred in a wildlife volunteer after exposure to a deceased, infected harbor seal, the first known report of tularemia acquired through contact with a marine mammal, and the first detection of F. tularensis in a marine mammal. Health care providers, public health investigators, and persons working with marine wildlife need to be aware of the potential risk for tularemia and other zoonotic diseases associated with harbor seal contact and adhere to established safety protocols. On October 20, 2023, a previously healthy woman aged 32 years who lived in Kitsap County, Washington, was evaluated by a primary care provider for a painful swelling on the left hand. The patient worked as a wildlife biologist for a nonprofit organization and reported nicking a finger with a scalpel on October 3, while performing a necropsy on a harbor seal ( Phoca vitulina ) that had been found deceased along South Puget Sound. The patient wore personal protective equipment including a surgical gown, laboratory goggles, an N-95 respirator, and surgical gloves during the necropsy; the cut occurred through the glove. Although the wound initially appeared to heal, it became inflamed and painful 2 weeks after the scalpel cut. Around this time, the patient experienced onset of subjective fever and ipsilateral axillary lymph node swelling, as well as cough and congestion. The patient was prescribed doxycycline and topical mupirocin on October 20, and fully recovered. Although tularemia was not suspected by the provider at that time, the wound exudate was collected and submitted to a local clinical laboratory where it was cultured and identified as suspected Francisella species. On November 3, 2023, the Washington State Public Health Laboratory received the isolate, where it tested positive for F. tularensis by bacterial culture, direct fluorescent antibody, and polymerase chain reaction (PCR). The seal necropsy report documented signs of possible infection of unspecified etiology in thoracic and abdominal organs without substantial wounds or other signs of trauma. Public health authorities partnered with the Washington Department of Fish and Wildlife to submit animal specimens to the Washington Animal Disease Diagnostic Laboratory for histopathology and F. tularensis PCR testing; six specimens tested positive by PCR, and three were forwarded to CDC's Division of Vector-Borne Diseases for confirmation. Molecular sequencing (six housekeeping genes; 4,107 base pairs) of the lung specimen performed by CDC identified F. tularensis type B (ssp. holarctica ), phylogenetically similar to type B strains previously found in the western United States . The clinical isolate from the human case was destroyed in accordance with the Tier 1 select agent handling protocol, with no sequence generated, precluding comparison with the sequence obtained from the seal specimen. This finding is the first known detection of F. tularensis in a marine mammal. Public health authorities identified one other wildlife volunteer present during the necropsy. Contact identification and symptom monitoring of this volunteer and of laboratory workers handling the clinical specimen was conducted by the respective local health jurisdictions. No ill persons or additional cases were identified. 
This activity was reviewed by CDC, deemed not research, and was conducted consistent with applicable federal law and CDC policy. Although most tularemia cases acquired in the northwestern United States are associated with environmental exposure or contact with rodents or lagomorphs, marine mammals should be considered as a potential source of infection. Health care providers, public health investigators, and persons working with marine wildlife need to be aware of the potential risk for tularemia and other zoonotic diseases associated with harbor seal contact and should wear appropriate personal protective equipment and adhere to established safety protocols .
Pharmacy Workload in Clinical Trial Management: A Preliminary Complexity Assessment Tool for Sponsored Oncology and Haematology Trials
The development of new drugs in recent years, as well as new regulations and guidelines for research, has led to important changes in the way research is conducted and the role of each professional . Cancer clinical trials are becoming increasingly complex, requiring the involvement of multiple disciplines and dedicated personnel to perform clinical, regulatory, and administrative activities and protocol patient-related procedures . Clinical trials are designed to produce results in a short time, speeding up the approval of new molecules . The total number and frequency of procedures, tests, data collections, and data elements specified in each clinical trial protocol affect the effort required by a trial site to ensure compliance with the protocol and regulatory requirements. According to good clinical practice (GCP), the principal investigator (PI) may/should delegate some or all of the investigator’s responsibilities for investigational medicinal products (IMPs) to a trained pharmacist or other appropriate person . This results in the lack of an internationally standardised professional profile, and the different roles that pharmacists play in the research team could make it difficult to predict the workload in another context. In Italy, the delegation of this responsibility to a pharmacist is allowed, but not mandatory. The Italian Medicines Agency requires the presence of a pharmacist responsible for the management of IMPs only for phase I trials . There are differences in the organisation and staffing of each pharmacy service. In some units, staff members working in the clinical trial area are involved in other hospital pharmacy activities during the working day, whereas in other centres, their involvement is exclusive . Most pharmacies provide basic services for research, such as dispensing and stock control, while more specialised activities are common at sites with a greater commitment to research . The responsibilities of the investigational drug services include, but are not limited to, the study feasibility, a site initiation visit, the training of the involved personnel, a receipt for the IMPs, accountability, storage, the calibration and maintenance of temperature-monitoring equipment, dose preparation (under sterile conditions and ensuring blinding, if required), dispensing, the recording of drugs returned by patients, the final disposition of drugs, the return to the sponsor, a close-out visit, and the hosting of monitors and auditors. All of these activities must be carried out while ensuring safety, compliance with legal and regulatory requirements, adherence to internal standard operating procedures and policies, compliance with the protocol, and the quality of the trial process. Most drug management tasks are considered “source documents” that must be available for monitoring visits, sponsor audits, and regulatory inspections , so it is crucial that they are performed and documented to ensure compliance. For these reasons, the role of the pharmacy is important in adding quality and value to clinical trials. It should also be noted that the management of IMPs in oncology clinical trials is becoming increasingly sophisticated, with complex preparations (high-risk compounding, blinding), complex dosing regimens, multiple drug regimens, a high toxicity potential, extensive sponsor training modules, and increasingly stringent sponsor requirements for storage and monitoring. 
In this context, investigational drug services are time- and effort-consuming, vary from trial to trial depending on their complexity, and need to be organised in a structured approach, especially for sites with a large number of ongoing clinical trials. In light of these considerations, the involvement of pharmacy staff needs to be measured and evaluated. Pharmacy costs are not routinely budgeted, nor do they need to be budgeted. This results in situations where the research pharmacy costs are absorbed by the institution or are covered by other means, such as a core grant . Alternatively, when pharmacy costs are budgeted, the most common system used is a fixed percentage for all clinical trials . A system based on the payment of pharmacy grants according to the complexity of the clinical trial from the perspective of the pharmacy service should be considered. The system should assess the complexity of a trial based on the involvement of the pharmacy in order to provide a consistent grant at the contractual stage between the sponsor and the research site. The complexity of a clinical trial should be assessed at an early stage, before the contract is signed, using a scoring tool. The numerical value of complexity scores can be difficult to interpret and apply in a meaningful way. Categorising the tool scores into ordinal complexity categories would be more intuitive and serve as a practical and effective tool for assessing the complexity of clinical trials and applying for consistent pharmacy grants . Grants should be recognised when the site is activated, regardless of whether patients are enrolled in the trial, and for each patient enrolled. In this way, revenue generated by clinical trials should be proportionate to the impact on the pharmacy service’s workload, which would result in a more accurate quantification than applying a simple percentage of the amount received by the principal investigator for each trial patient. There is limited research available that has measured the complexity of pharmacy services for clinical trials. Some of these works are based on the use of tools that measure the time spent on specific tasks or the resources of the pharmacists or professionals employed . The number of protocols a pharmacist can manage depends on many factors: the complexity of the protocols, the responsibilities of the pharmacy staff, the experience of the staff, and the organisation of the research centre . As a result, it is difficult to translate the complexity of clinical trials from the perspective of the pharmacy service into resources in terms of staff and time. In the work by Pagès-Puigdemont et al., routine activities were measured in time and then translated into value, but non-routine activities were voluntarily excluded. This resulted in a high value being assigned to activities that are always performed and time-consuming, but can be planned and, therefore, have less impact on a pharmacist’s work, such as site selection visits and site initiation visits. Values were not assigned to non-routine activities that, despite having less impact in terms of time, are outside the scope of routine activities and are more critical for the patient and/or for the proper conduct of the trial . Song et al. developed a systematic complexity scoring tool to assess pharmacy effort, but the development of the tool required application during the study period, limiting the ability of the tool to categorise the study in the preliminary phase and influence the contract . 
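As a toy illustration of the complexity-tiered grant model outlined above (an activation grant plus a per-enrolled-patient grant, scaled by the trial's complexity category rather than by a flat percentage of the investigator's budget), the sketch below shows one way such a schedule could be computed. All monetary values are hypothetical placeholders, not proposed amounts.

```python
# Toy illustration of a complexity-tiered pharmacy grant: an activation fee plus
# a per-enrolled-patient fee. All monetary values below are hypothetical.

GRANT_SCHEDULE = {            # category: (activation fee, fee per enrolled patient)
    "low":    (1000.0,  50.0),
    "medium": (2000.0, 100.0),
    "high":   (3500.0, 200.0),
}

def pharmacy_grant(category, patients_enrolled):
    """Grant due to the pharmacy for one trial, recognised at activation and per patient."""
    activation_fee, per_patient_fee = GRANT_SCHEDULE[category]
    return activation_fee + per_patient_fee * patients_enrolled

print(pharmacy_grant("high", 12))   # 3500 + 200 * 12 = 5900.0
```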
The literature suggests that the largest number of high-complexity clinical trials are in oncology due to the specific trial designs (basket and umbrella trials are clear examples), patients, and drug management. In addition, academic trials are often pragmatic and closely resemble daily clinical practice, with broader eligibility criteria, less frequent disease assessments, fewer data requests, less frequent monitoring visits, fewer bureaucratic constraints, fewer training documents, and a smaller training workload . Therefore, for an oncology research centre running multiple clinical trials simultaneously, it is necessary to build a tool with complexity categories that are calibrated to oncology-sponsored trials, thereby increasing the sensitivity of the tool. The aim of this study was to develop a tool to assess the complexity of pharmacy involvement in a sponsored oncology or haematology clinical trial. The tool score cut-off points for the complexity categories (low, medium, and high) were then identified. Categorisation into ordinal complexity categories will serve as a practical and effective tool for assessing the complexity of clinical trials for consistent pharmacy grant applications. The items of the tool and the individual item scores were agreed upon among the pharmacists at two different cancer research centres. It was decided that at least two pharmacists from each centre would be involved in the construction of the tool to ensure that all potential items of interest, regular activities, and extraordinary activities were brought to light. Disagreements regarding the definition of different items were discussed and a final version of the pharmacy complexity assessment tool (Pharm-CAT) was approved. Each of the selected items had a concrete impact on the workload of the pharmacy service or had an impact by deviating from standard procedures. For example, if a drug is not provided by the sponsor and has to be provided by the trial site, it is advisable to separate a single batch for use in the trial, label it in accordance with GCP, and store it in the clinical trial area to ensure the traceability of the drug. Activities that had an impact on the workload, but were common to all the sponsored clinical trials were deliberately excluded, e.g., feasibility assessments, a site initiation visit, monitoring visits, and training activities. Pharm-CAT consisted of 15 items divided into three sections: study design, drug management, and drug preparation. The score assigned to each item ranged from 0 to 3 points. Items 1, 3, 4, 5, 7, 12, and 13 received 1, 2, or 3 points; items 2 and 6 received 2 or 3 points; item 8 received 1 or 2 points; items 9, 10, and 11 received 1 or 3 points; item 14 received 0, 2, or 3 points; and item 15 received 0 or 3 points. In order to obtain a numerical complexity score, the total score for each clinical trial was calculated by adding the scores of the different items according to the scale shown in . The resulting score was a minimum of 15 points and a maximum of 44 points. The pharmacists were instructed to use Pharm-CAT to assign a score to each new sponsored trial. Each assessment generated by the compilation of Pharm-CAT for a new sponsored clinical trial from July 2023 to February 2024 was archived after being recorded in an Excel data collection table containing the following data: the research centre, trial acronym, trial type (oncology or haematology and phase I, II, or III), assessing pharmacist, and score. 
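To make the scoring rule concrete, the following minimal sketch (in Python, which is not part of the study itself) encodes the per-item point options listed above and sums them into a Pharm-CAT total; the example assessment at the end is purely hypothetical.

```python
# Minimal sketch of the Pharm-CAT total-score computation. The allowed point
# options per item follow the scale described in the text (total range 15-44);
# the example assessment at the bottom is hypothetical.

ALLOWED_POINTS = {
    **{i: (1, 2, 3) for i in (1, 3, 4, 5, 7, 12, 13)},
    **{i: (2, 3) for i in (2, 6)},
    8: (1, 2),
    **{i: (1, 3) for i in (9, 10, 11)},
    14: (0, 2, 3),
    15: (0, 3),
}

def pharm_cat_total(item_scores):
    """Validate the 15 item scores and return their sum (15 to 44 points)."""
    if set(item_scores) != set(ALLOWED_POINTS):
        raise ValueError("scores must be provided for all 15 items")
    for item, score in item_scores.items():
        if score not in ALLOWED_POINTS[item]:
            raise ValueError(f"item {item}: {score} not in {ALLOWED_POINTS[item]}")
    return sum(item_scores.values())

# Hypothetical trial scored at the lowest allowed option for every item -> 15
print(pharm_cat_total({i: min(opts) for i, opts in ALLOWED_POINTS.items()}))
```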
The Pharm-CAT scores were divided into 3 categories: low-complexity, medium-complexity, and high-complexity. To determine the cut-offs for the three complexity categories, we sorted the scores in ascending order and selected the cut-offs corresponding to the first and third tertiles, i.e., at 33.3% and 66.7% of the score distribution. Ensuring reproducibility is an important step in increasing the reliability and usefulness of Pharm-CAT. To verify reproducibility, Pharm-CAT was applied by two pharmacists independently for each new clinical trial to highlight any differences in the total score and in the assigned complexity category. Independent scoring is essential for establishing the accuracy of Pharm-CAT. To establish the complexity categories and check the reproducibility, we determined that at least 50 trials should be independently scored by two pharmacists to record 100 scores. Once the cut-offs were established, Cohen’s linear weighted kappa was used in the cross-classification as a measure of the agreement among pharmacists. The characteristics of the data contributed during the project period were summarised using percentages. Each pharmacist involved in the construction and use of the tool was a professional with proven experience in the management of experimental drugs in oncology and/or haematology clinical trials. Pharm-CAT took 3–5 min to complete. Sixty clinical trials were evaluated and a total of 120 scores were recorded. Centre 1 evaluated 40 trials (80 scores) and centre 2 evaluated 20 trials (40 scores). In 77% of the studies, the independent assessment by the two pharmacists resulted in the same score; in 14 studies, the score was different. The difference was only one point in 11 studies, two points in 2 studies, and four points in 1 study. The average score difference in 60 trials was 0.32 points. The result of the 120 scores determined the cut-offs, resulting in the three complexity categories. Low-complexity scores ranged from 0 to 19, medium-complexity scores ranged from 20 to 25, and high-complexity scores were 26 or higher. The average score recorded was 22.88 points. The lowest score recorded was 15 points and the highest was 33 points. The Cohen’s weighted kappa calculation was 0.98, confirming agreement among pharmacists. shows the categories of the evaluated clinical trials. Thirty-two per cent of the trials had a high complexity, 38% had a medium complexity, and 30% had a low complexity. Haematological trials had a higher mean score and a higher percentage of high-complexity studies. Phase I trials had a higher mean score and a higher percentage of high-complexity studies than phase 2 and phase 3 trials. No phase I studies fell into the low-complexity category. To the authors’ knowledge, this is the first specific tool for assessing the complexity of pharmacy involvement in a sponsored oncology or haematology clinical trial. Pharm-CAT was found to be easy and user-friendly. The inter-pharmacist reproducibility of the assessment was confirmed in 77% of cases, and in 13 of the 14 cases where there was a difference in the final score, the two assessments placed the trial in the same complexity category, as the two scores did not straddle the subsequently established cut-offs, and Cohen’s weighted kappa calculation confirmed agreement among the pharmacists. 
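The category assignment and the agreement check described above can be sketched as follows. The score vectors are invented for illustration; only the cut-offs (low ≤ 19, medium 20-25, high ≥ 26) come from the results reported in the text.

```python
# Sketch of the tertile cut-off derivation, category assignment, and inter-rater
# agreement check. Scores below are invented; only the cut-offs come from the text.
import numpy as np
from sklearn.metrics import cohen_kappa_score

all_scores = np.array([15, 16, 18, 19, 20, 21, 22, 24, 25, 26, 28, 33])  # illustrative
tertile_cuts = np.percentile(all_scores, [33.3, 66.7])  # candidate category boundaries

def categorise(total):
    """Map a Pharm-CAT total to the low/medium/high complexity category."""
    if total <= 19:
        return "low"
    return "medium" if total <= 25 else "high"

ORDINAL = {"low": 0, "medium": 1, "high": 2}
# Independent assessments of the same six (hypothetical) trials by two pharmacists,
# converted to ordinal codes so that adjacent disagreements are down-weighted.
pharmacist_a = [ORDINAL[categorise(s)] for s in (18, 22, 27, 25, 31, 16)]
pharmacist_b = [ORDINAL[categorise(s)] for s in (18, 23, 27, 26, 31, 17)]
print(tertile_cuts, cohen_kappa_score(pharmacist_a, pharmacist_b, weights="linear"))
```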
The average score of the phase I trials was higher than those of the phase II and III trials, which was certainly due specifically to item 1 in for which two extra points were awarded, but also to a general sensitivity of the tool. This emphasises the impact on the work of pharmacists when exceptions such as special drug preparation methods with dedicated devices or those not-yet compliant with the use of closed systems are encountered. We do not believe that there is a different sensitivity for Pharm-CAT in the evaluation of haematological and oncological studies; in fact, the differences found in the results were justified by a higher presence of haematological phase I studies (5 out of 21, or about 24%) than oncological studies (1 out of 39, or about 3%). The phase II trials recorded the lowest average score due to the standard trial design, which, in most cases, does not involve the administration of numerous experimental drugs or a blind, as there is no comparison arm. Now that the complexity categories have been established with an initial sample of 60 clinical trials, it is necessary to continue evaluating the trials to verify the sensitivity of the cut-offs established. In particular, a second sample of trials is needed to confirm a homogeneous distribution among the categories and avoid a disproportionate number of trials falling unjustifiably into the same category. Our future goal is to analyse a larger set of assessments from different cancer research centres to verify the statistical concordance of the cut-offs established by this initial work and to make this tool widely available. Differences in the regulatory requirements among countries may affect the generalisability, or there may be latent factors not captured by Pharm-CAT. Some activities were excluded because they were routinely performed (e.g., feasibility assessments, a site initiation visit, monitoring visits, and training activities). Users can modify Pharm-CAT to include items that are unique to their pharmacy service and exclude items that are not relevant or too specific. It is necessary to adapt the tool to the specific needs and resources of individual centres in order to measure the pharmacy workload in these settings. Pharm-CAT proved to be user-friendly. The determination of complexity category cut-offs based on oncology- and haematology-sponsored trials led to the development of a simple and sensitive evaluation method in this field. Pharm-CAT aims to assess the workload as a prospective consideration when evaluating new clinical trials for activation and contracting, and to justify and negotiate trial pharmacy grants. In addition, the information obtained from the use of Pharm-CAT in a pharmacy service over time and at regular intervals will make it possible to monitor the evolution of the workload and the level of complexity of the work performed. Prospective multicentre validation of Pharm-CAT is needed to verify and confirm its applicability.
Pharmacologic Treatment of Obesity in adults and its impact on comorbidities: 2024 Update and Position Statement of Specialists from the Brazilian Association for the Study of Obesity and Metabolic Syndrome (Abeso) and the Brazilian Society of Endocrinology and Metabolism (SBEM)
Obesity is a chronic and recurrent disease that causes or aggravates more than two hundred other diseases and is associated with increased morbidity, disability, and mortality. Following international epidemiological trends, data from the Brazilian Institute of Geography and Statistics (IBGE) show that obesity already affects a quarter of the adult population in Brazil; with its galloping rates, projections indicate that by 2035 up to 40% of the Brazilian population could be in the obesity range . In this scenario, it is unquestionable that every health professional must understand obesity, as even if they do not treat it directly, they will evaluate and treat people living with this disease. Welcoming the patient appropriately and discussing the consequences and therapeutic options for obesity and its related comorbidities in each consultation is essential. The pharmacologic treatment of obesity is undergoing a transitioning period but remains extremely stigmatized. Several reasons contributing to this stigmatization have been addressed in an editorial ; among them, the stigma of obesity itself stands out, as it is still viewed as solely dependent on “lifestyle.” Data shows that only 1% of individuals for whom medications for obesity are clinically recommended actually receive them, and many who do not have such recommendations end up using them for aesthetic purposes . Recently, the Brazilian Association for the Study of Obesity and Metabolic Syndrome (Abeso) and the Brazilian Society of Endocrinology and Metabolism (SBEM) produced a joint document emphasizing the importance of language in reducing stigma . Using the term “antiobesity medications” is always recommended, while the term “weight loss drugs” should be avoided. Treating obesity is much more than “losing weight,” as it includes maintaining the lost weight and the benefits beyond weight loss. Additionally, “losing weight” can be desired by anyone, not just those with a chronic disease. In this context, an important point that emerges is how pharmacologic treatments developed following the standards of evidence-based medicine, with approval from regulatory agencies and well-conducted clinical studies, are often confused and interchanged with treatments that have no scientific support and are potentially dangerous – from manipulated formulas to herbs sold on the internet, serums, and injections produced with no regard for sanitary or health concerns. Much of the stigma surrounding treatment has also been caused by older medications known for their poor risk-benefit ratio and various medications that have been discontinued in recent decades due to unacceptable side effects. An additional reason is the low efficacy of some drugs that had outcomes falling short of those desired by both physicians and patients. Thankfully, advancements in pharmacologic treatments have overcome previously described barriers . New medications, some already available on the market and others still under investigation, have the potential to achieve clinically relevant weight losses. Outcome studies show that their therapeutic targets extend far beyond weight loss, with clear objectives of improving health indicators and quality of life. The present document aims to compile the existing evidence on antiobesity medications already approved by the National Health Surveillance Agency (Anvisa), along with their main efficacy and safety data. Despite many advances, we are still far from ensuring proper access to medications for all individuals with obesity. 
The high cost of some medications is still a major barrier. The development of study protocols showing clear benefits and optimal pharmacoeconomic profiles may facilitate the future inclusion of some of these medications in the Unified Health System (SUS). Considering all the above, it becomes quite clear that Abeso and SBEM have the duty of producing a document explaining the therapeutic options approved by Anvisa for the treatment of obesity, to guide specialists and nonspecialists toward serious and ethical treatment and distance patients from dangerous, ineffective, and expensive treatments, which remain so common in our country. It is important to emphasize that obesity treatment goes far beyond medications, with lifestyle changes (LSCs) remaining the cornerstone. However, this document focuses on pharmacologic treatment, aiming to familiarize the reader with therapeutic options along with their effects on body weight, metabolic effects, and side effects. The treatment of obesity is complex, and many of its nuances and clinical questions will not be answered here. In the future, the two societies will collaborate on a broader and more comprehensive document to address, based on existing literature, practical questions aiming to facilitate treatment management. However, for the transformative time we are currently in, this data compilation – written by experts in the field who conducted an in-depth review of the literature for the most complete and current evidence – will serve as a guide to improve the care of people living with obesity. 1.1 Mechanism of action Sibutramine works by inhibiting the reuptake of norepinephrine and serotonin in the synaptic cleft and, to a lesser extent, by inhibiting the reuptake of dopamine. Its main effect is on regulating food intake, prolonging satiety rather than reducing hunger. Considering this pharmacologic characteristic, sibutramine should be classified as a satiety-inducing agent and not as an anorectic agent . 1.2 Dosage/usage instructions Sibutramine is commercially available in 10 mg and 15 mg tablets for daily use in patients aged ≥ 18 years. The prescription must be written on a controlled B2 prescription form, accompanied by a consent form completed by the physician and the patient in triplicate, in accordance with Anvisa standards . 1.3 Tolerability/side effects The main side effects of sibutramine are associated with its noradrenergic stimulation and sympathomimetic properties, the most common being xerostomia (29.2%), tachycardia (20.9%), constipation (18.9%), hypertension (17.5%), insomnia (17.2%), and headache (11.3%) . 1.4 Absolute contraindications Based on the results of the Sibutramine Cardiovascular Outcomes Trial (SCOUT; described in item 1.5.15, “Effects of sibutramine on cardiovascular diseases”), sibutramine is contraindicated in patients with type 2 diabetes mellitus (T2DM) with at least one additional risk factor (e.g., hypertension controlled by medication, dyslipidemia, active smoking, or diabetic kidney disease with evidence of microalbuminuria), coronary artery disease (CAD), stroke, arrhythmia, heart failure (HF), and inadequately controlled hypertension (levels above 145 x 90 mmHg) . 1.5 Efficacy 1.5.1 Effects of sibutramine on body weight In a systematic review of 29 clinical studies, sibutramine led to a weight loss of 2.8 kg in 12 weeks, 6 kg in 24 weeks, and 4.5 kg in 54 weeks of treatment. 
In studies with a duration of 44-54 weeks, the difference in the proportions of participants achieving 5% weight loss was 34% for sibutramine versus 19% for placebo, and for those achieving 10% weight loss, it was 31% for sibutramine versus 12% for placebo . Sibutramine led to improvement in anthropometric measurements. A Cochrane database systematic review of studies on weight loss medications with 12-18 months of follow-up assessed the effect of sibutramine on reducing weight, waist circumference, and body mass index (BMI). Patients using sibutramine lost 4.3% more weight than those using placebo in 10 of the evaluated studies. Five studies showed a BMI reduction of 1.5 kg/m 2 , and eight studies showed a waist circumference reduction of 4.0 cm . 1.5.2 Effects of sibutramine on weight loss maintenance Sibutramine has also proven effective in preventing weight regain when added after dietary interventions. The double-blind, randomized controlled trial (RCT) Sibutramine Trial on Obesity Reduction and Maintenance (STORM) showed the benefits of sibutramine on weight loss and maintenance over 2 years of treatment. In the study, 605 patients with obesity received sibutramine (10 mg/day) associated with a low-calorie diet for 6 months. Patients who achieved > 5% weight loss after 6 months were allocated to continue sibutramine or switch to placebo for 18 months. The sibutramine group had greater weight loss than the placebo group at 2 years (-4.0 kg [-2.4 to -5.6 kg]), reinforcing the importance of maintaining the medication for longer after the initial weight loss . Another systematic review including three studies evaluated weight loss maintenance with sibutramine. The analysis showed that after an initial dietary weight loss intervention lasting 1-6 months, individuals who achieved a body weight loss of at least 5% were randomized to treatment with placebo or sibutramine. The results demonstrated that 10%-30% more individuals treated with sibutramine compared with placebo were successful in maintaining the initial loss (defined as the maintenance of 80%-100% of the weight lost after 12-18 months of treatment) . 1.5.3 Effects of sibutramine on body composition The effects of sibutramine on body composition have been evaluated in very specific studies. The first RCT evaluated the effects of sibutramine 10 mg for 12 weeks in 24 adolescents. No difference in body composition was found in the sibutramine group compared with the placebo group . The second RCT, with the same duration as the first (12 weeks), evaluated sibutramine at doses of 10 mg and 15 mg in 181 individuals with obesity. The sibutramine group showed a trend toward a greater reduction in body fat percentage than the placebo group (p = 0.05). No difference in lean mass was observed between the groups . In the STORM study, a subgroup analysis showed a preferential reduction of visceral adipose tissue compared with subcutaneous tissue in body composition assessed by computed tomography . 1.5.4 Effects of sibutramine in patients with prediabetes/glucose intolerance We found no studies evaluating sibutramine in this population. 1.5.5 Effects of sibutramine in patients with type 2 diabetes mellitus A meta-analysis evaluating long-duration, high-quality studies on glycemic control in patients with T2DM showed that treatment with sibutramine is associated with a slight improvement in glycemic control . 
Specifically, an RCT with a 1-year duration evaluating 195 individuals with T2DM and daily sibutramine doses of 15 mg and 20 mg showed that patients using this medication experienced improved glycemic control in parallel with weight loss. Individuals with a 10% weight loss showed an average 1.2% reduction in glycated hemoglobin (HbA1c) level . 1.5.6 Effects of sibutramine on lipid metabolism A meta-analysis of 29 studies showed that treatment with sibutramine is associated with a slight improvement in low-density lipoprotein cholesterol (LDL-c) and triglyceride levels . Sibutramine was associated with a reduction in triglyceride levels, in addition to a slight increase in high-density lipoprotein cholesterol (HDL-c) levels compared with placebo, with better results observed in patients with greater weight loss . Another meta-analysis showed improvement in triglyceride levels compared with placebo (-7.7 mmol/L) and an increase in HDL-c (1.5 mg/L), while LDL-c levels did not differ between groups . 1.5.7 Effects of sibutramine on blood pressure and heart rate In a meta-analysis conducted by Rucker and cols., seven of the included studies showed that the use of sibutramine was associated with an increase in systolic (mean 1.7 mmHg, from 0.1 to 3.3 mmHg) and diastolic (mean 2.4 mmHg, from 1.5 to 3.3 mmHg) blood pressure (BP) and heart rate (mean 4.5 bpm, from 3.5 to 5.6 bpm) . Despite the observed increase in BP levels, BP control in individuals with previously controlled hypertension was not compromised when their antihypertensive medication was adjusted . Therefore, sibutramine must be used with regular heart rate and BP monitoring. Patients who present a significant increase in these parameters should have their treatment discontinued. Of note, the treatment can be carried out as long as the absolute contraindications are respected (item 1.4, “Absolute contraindications”). 1.5.8 Effects of sibutramine on obstructive sleep apnea syndrome An uncontrolled study of 87 patients with obesity using sibutramine 10 mg/day for 6 months showed that this treatment associated with LSCs resulted in a weight loss of 8.3 ± 4.7 kg, which was accompanied by reductions in neck circumference, obstructive sleep apnea syndrome (OSAS) severity (decrease in apnea-hypopnea index [AHI] of 16.3+/-19.4 events/h), and Epworth sleepiness scale score (decrease of 4.5+/-4.6) . A study compared the efficacy of weight loss induced by sibutramine versus continuous positive airway pressure (CPAP) treatment over a 1-year period in 40 patients with obesity, specifically evaluating their effects on sleep respiratory parameters. The sibutramine group had a body weight decrease of 5.4 ± 1.4 kg compared with the CPAP group, which had no weight loss. Treatment with CPAP improved all respiratory and sleep parameters, while sibutramine-induced weight loss improved only the nocturnal profile of arterial oxygen saturation . 1.5.9 Effects of sibutramine in patients with polycystic ovary syndrome An RCT including 40 women with obesity and polycystic ovary syndrome (PCOS) evaluated the efficacy of sibutramine therapy alone or combined with ethinylestradiol-cyproterone (EE-CPA) on clinical and metabolic parameters in women with obesity and PCOS. At the end of the study, there were significant decreases in BMI value, Ferriman-Gallwey hirsutism score, and serum total testosterone, free testosterone, and dehydroepiandrosterone sulfate (DHEAS) levels and a significant increase in sex hormone-binding globulin (SHBG) level in both groups. 
The sibutramine group had a greater reduction in body weight, waist/hip ratio, diastolic BP (DBP), and triglyceride levels, along with improvement in insulin sensitivity, which are important pathophysiological factors in PCOS.

A small, open-label, randomized study included 59 women with overweight or obesity and PCOS, who were divided into a group on a low-calorie diet plus sibutramine 10 mg for 6 months and another group on a low-calorie diet alone. Body weight decreased in both groups, but the decrease was greater with sibutramine. In both groups, all women with an abnormal oral glucose tolerance test (OGTT) at baseline had normal glucose tolerance at 6 months. The free androgen index, glucose area under the curve, and fasting triglyceride level decreased at 6 months only in the group using sibutramine.

1.5.10 Effects of sibutramine in patients with male hypogonadism

In a case report of a patient with obesity, hypogonadotropic hypogonadism, and a mutation in the melanocortin 4 receptor (MC4R) who was experiencing progressive weight gain, sibutramine led to the maintenance of body weight and improved body composition and obesity-related metabolic abnormalities.

1.5.11 Effects of sibutramine on metabolic dysfunction-associated steatotic liver disease

Only one small study has evaluated the effects of sibutramine on metabolic dysfunction-associated steatotic liver disease (MASLD). Thirteen individuals with obesity and nonalcoholic steatohepatitis (NASH) were evaluated over a 6-month period. There was a 10.2% decrease in body weight, along with a reduction of 47% in insulin resistance and declines of 41% in aspartate aminotransferase (AST), 59% in alanine aminotransferase (ALT), and 27% in gamma-glutamyl transferase levels. Ultrasonographic regression of steatosis was observed in 11 of the 13 patients using sibutramine. The study concluded that sibutramine-induced weight loss reduced insulin resistance and improved biochemical markers and ultrasonographic findings in patients with NASH.

1.5.12 Effects of sibutramine on quality of life

A pooled analysis of four RCTs evaluated 555 patients with obesity regarding the impact of sibutramine treatment on quality of life assessed by the Impact of Weight on Quality of Life (IWQOL) and Short Form Health Survey (SF-36) scales. The SF-36 is a questionnaire with 36 questions that cover several dimensions of physical and mental health, including physical function, pain, vitality, and mental health, among other aspects. The IWQOL questionnaire assesses health-related quality of life specifically in patients with obesity; it comprises 31 questions evaluating patients' perceptions of their weight, self-esteem, social life, physical activity, physical comfort, and other quality-of-life aspects. The study found that weight loss in the sibutramine group led to a significant improvement in quality of life, with the improvement being proportional to the weight loss.

1.5.13 Effects of sibutramine on osteoarticular diseases

We found no studies evaluating sibutramine in this population.

1.5.14 Effects of sibutramine in patients with chronic kidney disease

We found no studies evaluating sibutramine in this population.

1.5.15 Effects of sibutramine on cardiovascular diseases

The Sibutramine Cardiovascular Outcomes Trial (SCOUT) assessed cardiovascular outcomes in individuals with cardiovascular disease (CVD; coronary artery disease, stroke, or peripheral arterial occlusive disease), T2DM with one or more cardiovascular risk factors, or both.
In patients treated with sibutramine, the risk rate of cardiovascular events increased by 16%, with the risk of nonfatal acute myocardial infarction (AMI) increasing by 28% and the risk of nonfatal stroke increasing by 36%. Thus, sibutramine was associated with an increased risk of nonfatal cardiovascular events in this group of patients. The risks of death from cardiovascular causes and cardiorespiratory arrest were not different between the two groups.
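To put these relative figures in perspective (a rough illustration only, using a hypothetical baseline rather than the trial's actual event rates): if a control group had an event rate of 10%, a 16% relative increase would correspond to an event rate of approximately 10% × 1.16 ≈ 11.6%, that is, an absolute excess of about 1.6 percentage points. The absolute effect therefore depends on the baseline cardiovascular risk of the population treated.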
2.1 Mechanism of action

Orlistat acts in the gastrointestinal tract by decreasing the absorption of dietary fats. Its mechanism of action involves irreversibly inhibiting gastric and pancreatic lipases, which reduces the hydrolysis of triglycerides into fatty acids and monoglycerides and decreases the absorption of ingested fat by 30%.

2.2 Dosage/usage instructions

The recommended dose of orlistat is 120 mg three times daily, taken during or up to 1 hour after each of the three main meals.
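As a purely illustrative calculation of what this mechanism implies (the fat intake assumed here is hypothetical and not taken from any of the cited studies): for a patient consuming about 70 g of dietary fat per day, a 30% reduction in absorption corresponds to roughly 70 g × 0.30 ≈ 21 g of unabsorbed fat, or approximately 21 g × 9 kcal/g ≈ 190 kcal/day that is not absorbed.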
2.3 Tolerability/side effects

The main side effects associated with orlistat therapy are related to the gastrointestinal system. A meta-analysis including 16 studies found that over 80% of patients treated with orlistat had at least one gastrointestinal side effect. The most frequent gastrointestinal side effects were steatorrhea, urgency to defecate, and flatulence with fat elimination, each with frequency rates of 15%-30% in most studies. Notably, diarrhea and abdominal pain are commonly observed in individuals with low adherence to the diet. Due to its effect on reducing the absorption of intestinal fat, chronic use of orlistat results in decreased absorption of fat-soluble vitamins (A, D, E, and K). Additionally, unabsorbed dietary fat can bind to calcium in the intestinal lumen, preventing calcium from binding to intraluminal oxalate and thereby increasing the intestinal absorption of oxalate. The increase in circulating oxalate can lead to hyperoxaluria, a condition associated with the formation of kidney stones.

2.4 Absolute contraindications

Orlistat is contraindicated in pregnant and breastfeeding women. Patients with chronic malabsorption syndrome or cholestasis should also not use this medication.

2.5 Efficacy

2.5.1 Effects of orlistat on body weight

A meta-analysis including RCTs with a duration of at least 1 year and the use of orlistat 120 mg three times daily found greater weight reductions in the orlistat group compared with the placebo group. Patients treated with orlistat lost 2.9 kg (2.5-3.2 kg) more weight than those treated with placebo. A greater number of participants in the orlistat group achieved clinically significant weight loss, with 21% and 12% more participants than with placebo achieving 5% and 10% body weight loss, respectively. The same study showed a greater reduction in waist circumference with orlistat therapy (2.06 cm) compared with placebo.

2.5.2 Effects of orlistat on body weight maintenance

The Xenical in the Prevention of Diabetes in Obese Subjects (XENDOS) study was designed to evaluate the prevention of diabetes in individuals with prediabetes using orlistat. This 4-year RCT included 3,305 patients with obesity and without a T2DM diagnosis who had normal or impaired glucose tolerance. Intensive LSCs were recommended, associated with treatment with orlistat or placebo. Weight loss was significantly greater with orlistat than with placebo at 1 year (10.6 versus 6.2 kg, respectively) and remained significantly greater at the end of the fourth year of the study (5.8 versus 3.0 kg, respectively).

2.5.3 Effects of orlistat on body composition

Two small studies evaluated the effects of orlistat on body composition. The first, an RCT, compared the effects of 1 year of treatment with orlistat or placebo on body composition assessed by dual-energy X-ray absorptiometry (DXA). Interestingly, weight loss was significant in both the orlistat and placebo groups, but there was no significant difference between the two groups (11.2 ± 7.5 kg versus 8.1 ± 7.5 kg). There was also no significant difference between groups in relation to body composition parameters (fat-free mass [FFM], fat mass [FM], or percentage fat mass [FM%]), although both groups showed reductions in these three parameters. The second study included 72 patients who completed a 2-year RCT comparing orlistat versus placebo. Body composition (FM and FFM) was assessed using bioimpedance, and the FM/FFM ratio was calculated. After a 12-month period, both groups had a significant reduction in FFM, but the difference between the two groups was not significant.
In contrast, patients in the orlistat group had a greater reduction in FM (from 38.0 ± 7.6 kg to 29.1 ± 11.2 kg) than those in the placebo group (from 37.5 ± 8.5 kg to 32.3 ± 11.2 kg).

2.5.4 Effects of orlistat in patients with prediabetes/glucose intolerance

The effects of orlistat in individuals with prediabetes were evaluated in the XENDOS study (described previously). After 4 years of treatment, the cumulative risk of developing diabetes was 9.0% in the placebo group and 6.2% in the orlistat group, corresponding to a risk reduction of 37.3%. Among the 21% of individuals who had impaired glucose tolerance at baseline, the incidence of T2DM decreased by 45.0% over 4 years of orlistat therapy.

2.5.5 Effects of orlistat on glycemic control in patients with type 2 diabetes mellitus

A meta-analysis included 2,550 patients with obesity and T2DM who used orlistat 120 mg three times daily or placebo. Weight loss was 2.4 kg greater in the orlistat group than in the placebo group. Patients treated with orlistat had significantly greater reductions in mean fasting plasma glucose and HbA1c levels than those treated with placebo (1.39 mmol/L versus 0.47 mmol/L and 0.74% versus 0.31%, respectively). A systematic review analyzed 12 RCTs of orlistat associated with LSCs in individuals with T2DM. Orlistat, compared with lifestyle interventions alone, led to a greater mean weight loss (2.10 kg). A subgroup analysis of patients with T2DM from five studies included in a meta-analysis of orlistat for weight loss showed a 2.3% weight reduction, along with decreases in fasting blood glucose of 1.0 mmol/L (95% CI = 0.6-1.5 mmol/L) and in HbA1c of 0.4% (95% CI = 0.2%-0.6%).

2.5.6 Effects of orlistat on lipid metabolism

A systematic review and meta-analysis evaluated the effects of orlistat on different lipid profile parameters. It included 13 studies assessing the effects of orlistat on total cholesterol (n = 5,206), 13 on LDL-c (n = 5,206), 11 on HDL-c (n = 4,152), and 11 on triglycerides (n = 4,456). The results showed that orlistat promoted average reductions of 12.4 mg/dL in total cholesterol (10.8-14.3 mg/dL), 10.05 mg/dL in LDL-c (8.5-11.6 mg/dL), and 1.1 mg/dL in HDL-c (0.77-1.5 mg/dL). No significant effects were observed on triglyceride levels. A 24-week RCT evaluated the effects of orlistat 120 mg three times daily versus placebo on weight loss and serum lipids in patients with obesity and dyslipidemia. The mean percentage of weight loss was 6.8% in the orlistat group compared with 3.8% in the placebo group (p < 0.001). The orlistat group, compared with the placebo group, experienced a significant reduction in total cholesterol (11.9% versus 4.0%, respectively) and LDL-c (17.6% versus 7.6%, respectively; p < 0.001). At comparable degrees of weight reduction, the change in LDL-c level was more pronounced in the orlistat group, indicating a possible direct effect of orlistat on cholesterol reduction independent of weight loss.

2.5.7 Effects of orlistat on blood pressure and heart rate

A meta-analysis of 16 studies found a placebo-subtracted reduction in systolic BP (SBP) of 1.5 mmHg (0.9-2.2 mmHg) in 13 studies and a reduction in DBP of 1.4 mmHg (0.7-2.0 mmHg) in 12 studies.

2.5.8 Effects of orlistat on obstructive sleep apnea syndrome

One RCT compared orlistat versus placebo over a 2-year period in patients with obesity. Orlistat improved the quality of life among patients with OSAS, but its effect on AHI was not measured.
Another randomized study compared orlistat 120 mg three times daily versus placebo for 2 years in 743 patients with obesity. The use of orlistat to promote weight loss resulted in improved vitality among patients with OSAS, as measured by the SF-36, but the trial did not measure AHI or other sleep parameters.

2.5.9 Effects of orlistat in patients with polycystic ovary syndrome

A meta-analysis of eight studies evaluated the use of oral contraceptives (OCP) plus orlistat compared with OCP alone in patients with PCOS and overweight or obesity. The combined OCP plus orlistat treatment was more effective than OCP alone in reducing body weight and improving hormonal, lipid, and insulin metabolism parameters, as well as ovulation and pregnancy rates. A systematic review of six RCTs assessed the efficacy of orlistat versus metformin in women with obesity and PCOS and found significantly greater reductions in body weight, total cholesterol, and triglyceride levels in the orlistat group than in the metformin group.

2.5.10 Effects of orlistat in patients with male hypogonadism

There are no studies on the effects of orlistat in patients with male hypogonadism.

2.5.11 Effects of orlistat on metabolic dysfunction-associated steatotic liver disease

A meta-analysis of seven studies, of which only three were RCTs, evaluated the effects of orlistat in patients with MASLD and overweight or obesity. In all, 330 patients with hepatic steatosis or steatohepatitis were evaluated. Despite the improvement in laboratory parameters (transaminases), no improvement in steatosis, steatohepatitis, or fibrosis was observed. A systematic review of studies assessing weight loss medications evaluated the effect of orlistat 120 mg twice daily associated with LSCs in six studies lasting at least 24 weeks and including patients with MASLD and overweight or obesity. All studies found reduced hepatic fat content and/or reduced liver enzymes (ALT and AST) concomitant with a 5%-10% reduction in body weight. Additionally, three studies reported improvement in histopathological findings. The results suggest that the reduction in hepatic fat content was primarily due to weight loss, with no evidence of independent effects of orlistat on MASLD.

2.5.12 Effects of orlistat on quality of life

One RCT compared orlistat versus placebo over a 2-year period in patients with obesity. Patients treated with orlistat reported significantly greater satisfaction with their antiobesity medication than those receiving placebo at 1 and 2 years (p < 0.001 in the orlistat 120 mg group; p < 0.05 in the orlistat 60 mg group). Patients who used orlistat 120 mg experienced improved quality of life.

2.5.13 Effects of orlistat on osteoarticular diseases

A 6-month RCT including 50 women aged 45-60 years with obesity and Kellgren-Lawrence stage II-III knee osteoarthritis found that weight reduction was significantly greater in patients treated with orlistat (9.05%; average 9.5 kg) than in those who only followed a hypocaloric diet (2.54%; average 2.66 kg). Body weight reduction in patients using orlistat reduced joint pain by 52% and joint stiffness by 51%, and improved joint functional insufficiency by 51% and quality of life by 52%. A retrospective, non-placebo-controlled study analyzed the medical records of 10 women with overweight and knee osteoarthritis treated for 6 months with orlistat 120 mg three times daily, aerobic exercise, and exercise for muscle mass gain.
Osteoarthritis symptoms were assessed before treatment, at the end of treatment, and 6 months posttreatment. Scores reflecting knee pain, stiffness, and function improved significantly from before treatment to the end of treatment with orlistat (37 versus 21, 44.5 versus 28.3, and 45.5 versus 27.1, respectively), along with a reduction in BMI (32.9 kg/m² versus 29.5 kg/m², respectively). Although the mean BMI returned to the baseline value (31.1 kg/m²) after 6 months, the improvement in the other parameters persisted (23.9, p = 0.028; 27.1, p = 0.028; and 32.9, p = 0.037, respectively).

2.5.14 Effects of orlistat in patients with chronic kidney disease

We found no studies evaluating orlistat in chronic kidney disease (CKD).

2.5.15 Effects of orlistat on cardiovascular outcomes

We found no RCTs evaluating the cardiovascular safety of orlistat.
A 6-month RCT including 50 women aged 45-60 years with obesity and Kellgren-Lawrence stage II-III knee osteoarthritis found that weight reduction was significantly greater in patients treated with orlistat (9.05%; average 9.5 kg) than in those who only followed a hypocaloric diet (2.54%; average 2.66 kg). Body weight reduction in patients treated with orlistat reduced joint pain by 52%, joint stiffness by 51%, and joint functional insufficiency by 51%, and improved quality of life by 52%.

A retrospective, non-placebo-controlled study analyzed the medical records of 10 women with overweight and knee osteoarthritis treated for 6 months with orlistat 120 mg three times daily, aerobic exercise, and exercise for muscle mass gain. Osteoarthritis symptoms were assessed before treatment, at the end of treatment, and 6 months posttreatment. Significant improvement in scores reflecting knee pain, stiffness, and function was seen at the end of treatment with orlistat compared with pretreatment values (37 versus 21, 44.5 versus 28.3, and 45.5 versus 27.1, respectively), along with a reduction in BMI (32.9 kg/m² versus 29.5 kg/m², respectively). Although the mean BMI had returned toward the baseline value (31.1 kg/m²) after 6 months, the improvement in the other parameters persisted (23.9, p = 0.028; 27.1, p = 0.028; and 32.9, p = 0.037, respectively).

We found no studies evaluating orlistat in chronic kidney disease (CKD). We found no RCTs evaluating the cardiovascular safety of orlistat.

3.1 Mechanism of action

Liraglutide is a glucagon-like peptide-1 (GLP-1) analogue (GLP-1a) that shares 97% homology with native GLP-1. Structural modifications to the protein increased its circulating half-life from 1-2 minutes to 13 hours. Liraglutide acts on hypothalamic neurons involved in energy balance and on centers linked to pleasure and reward, stimulates pancreatic glucose-dependent insulin production, inhibits glucagon and somatostatin, and slows gastric emptying.

3.2 Dosage/usage instructions

Liraglutide 3.0 mg was approved by the US Food and Drug Administration (FDA) in 2014 for treating obesity; this dose is higher than the one previously approved for treating T2DM (1.8 mg). The medication should be introduced gradually to minimize side effects, which are commonly gastrointestinal in nature. Liraglutide comes with a delivery system containing 3 mL, capable of dispensing doses of 0.6 mg, 1.2 mg, 1.8 mg, 2.4 mg, or 3.0 mg. The treatment should begin with 0.6 mg/day subcutaneously and increase by 0.6 mg each week until reaching the maximum dose of 3.0 mg/day (illustrated schematically in the sketch below).

3.3 Tolerability/side effects

The most common adverse events are mainly related to the gastrointestinal system and affect more than 5% of patients. These side effects include nausea, vomiting, diarrhea, constipation, abdominal pain, and dyspepsia. In 94% of cases, these events are mild or moderate, usually related to the medication dose (hence the recommendation for gradual increase), transient, and rarely lead to treatment interruption. Serious adverse events affect more than 0.2% of patients and include a higher incidence of cholelithiasis and acute cholecystitis, attributed to both weight loss and reduced gallbladder contractility. The risk of pancreatitis was slightly higher in the liraglutide group (0.4%) than in the placebo group (0.1%), but this difference was not significant. The medication has an overall excellent safety profile, including in neuropsychiatric aspects, with no interaction with centrally acting medications, and demonstrates good efficacy.
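The weekly escalation described under Dosage/usage instructions above can be expressed as a simple schedule. The sketch below is only an illustration of that step-up (the function and variable names are ours, not taken from any prescribing resource) and is not intended as clinical guidance:

```python
# Illustrative sketch of the weekly liraglutide dose-escalation schedule
# described above (0.6 mg/day starting dose, +0.6 mg per week, 3.0 mg/day cap).
# Names and structure are illustrative assumptions, not an official algorithm.

START_DOSE_MG = 0.6
STEP_MG = 0.6
MAX_DOSE_MG = 3.0


def liraglutide_dose_for_week(week: int) -> float:
    """Return the daily dose (mg) for a given treatment week (week 1 = start)."""
    if week < 1:
        raise ValueError("week must be >= 1")
    dose = START_DOSE_MG + STEP_MG * (week - 1)
    return min(round(dose, 1), MAX_DOSE_MG)


if __name__ == "__main__":
    for week in range(1, 7):
        print(f"Week {week}: {liraglutide_dose_for_week(week)} mg/day")
    # Week 1: 0.6, Week 2: 1.2, Week 3: 1.8, Week 4: 2.4, Week 5 onward: 3.0
```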
3.4 Absolute contraindications

The few contraindications to liraglutide include pregnancy, breastfeeding, and hypersensitivity to the drug or its excipients. Caution is recommended when liraglutide is used by patients with a previous history of acute pancreatitis. Its use should be avoided by patients with a personal or family history of multiple endocrine neoplasia or medullary thyroid cancer, as the drug has been shown to induce thyroid C-cell hyperplasia in rodents.

3.5 Efficacy

3.5.1 Effects of liraglutide on body weight

Preliminary studies have shown significantly greater weight loss with liraglutide than with placebo or orlistat. Subsequently, a series of studies named Satiety and Clinical Adiposity – Liraglutide Evidence (SCALE) analyzed the use of liraglutide in the treatment of obesity and its complications. In the SCALE Obesity and Prediabetes study, 63.2% and 33.1% of the patients lost, respectively, more than 5% and 10% of their initial weight after 56 weeks. The study continued for another 2 years, to a total of 3 years, in patients with prediabetes. The proportions of patients randomized to liraglutide who achieved 5%, 10%, and 15% weight loss were 49.6%, 24.8%, and 11%, respectively. In the SCALE Maintenance study, patients with obesity who had lost 6% of their body weight with diet and physical activity were randomized to liraglutide 3.0 mg or placebo for 1 year. Those who used liraglutide had an additional loss of 6.1% compared with those who used placebo, reinforcing the importance of chronic and multidisciplinary treatment of obesity. A recent study evaluated patients who lost an average of 13.1 kg over 8 weeks on a low-calorie diet. Those who were subsequently randomized to a combination of liraglutide 3.0 mg and physical exercise achieved an additional weight loss of 3.4 kg, and at 1 year, 33% were able to maintain a weight loss of over 20% of their initial weight.

3.5.2 Effects of liraglutide on body weight maintenance

The effects of liraglutide on weight loss maintenance were evaluated in the SCALE Maintenance study described previously.

3.5.3 Effects of liraglutide on body composition

The study cited previously also analyzed body composition using DXA and reported a 3.9% reduction in absolute body fat percentage, more than double the decrease observed in the exercise group (1.7%). Another study published in the same year assessed the use of liraglutide 3.0 mg in decreasing visceral fat, evaluated using magnetic resonance imaging. At 36 weeks, there was an average 12.5% reduction with liraglutide compared with 1.6% with placebo.

3.5.4 Effects of liraglutide in patients with prediabetes/glucose intolerance

The effects of liraglutide in preventing the progression of prediabetes to T2DM and improving insulin resistance with weight loss are well established. However, studies with animal models suggest other complex direct actions of liraglutide in inhibiting the progression of prediabetes. Some clinical studies have evaluated the effects of liraglutide in individuals with prediabetes. Kim and cols. compared the effects of liraglutide doses up to 1.8 mg versus placebo in a group of patients aged 40-70 years with overweight or obesity and prediabetes. Weight loss associated with liraglutide was accompanied by a 29% reduction in peripheral insulin resistance, as assessed by the insulin suppression test. Additionally, 75% of the individuals on liraglutide achieved normal fasting plasma glucose compared with 19% of those on placebo.
The most important RCT was the SCALE Obesity and Prediabetes trial, in which 2,254 patients with overweight or obesity and prediabetes were randomized, in a 2:1 ratio, to liraglutide 3.0 mg or placebo, combined with a standardized diet and exercise. The study showed significant and sustained improvement in glycemic control, with reduced insulin resistance, in the context of 6.1% weight loss over 3 years in patients using liraglutide. Only 2% of the participants in the liraglutide group developed diabetes, compared with 6% in the placebo group. Liraglutide led to an approximately 80% reduction in T2DM risk, and the estimated time to onset of T2DM over 160 weeks was 2.7 times longer in the liraglutide group than in the placebo group. Furthermore, at 160 weeks, 66% of patients on liraglutide achieved normoglycemia, compared with 36% of those on placebo. An additional post hoc analysis was conducted at week 172 to address the lack of follow-up data for withdrawn participants, assuming that diabetes remained undiagnosed in 1% of the participants withdrawn from the liraglutide group and in 0% of those withdrawn from the placebo group. The results showed that the risk of T2DM remained 66% lower in the participants who received liraglutide.

3.5.5 Effects of liraglutide on glycemic control in patients with type 2 diabetes mellitus

Considering that controlling excess weight is one of the priorities in T2DM management, liraglutide has become one of the first-choice treatments for patients with T2DM and obesity due to its combination of direct hypoglycemic effects and body weight reduction.

The safety, tolerability, and efficacy of liraglutide were initially assessed in the treatment of T2DM through the Liraglutide Effect and Actions in Diabetes (LEAD) program. This program consisted of six RCTs that assessed liraglutide as a standalone treatment and in combination with oral antidiabetic drugs (OADs) at different stages of the disease. Levels of HbA1c decreased by 0.8%-1.6% from baseline with liraglutide at doses up to 1.8 mg. Rapid and sustained reductions in fasting plasma glucose level (up to 43.2 mg/dL) were observed from baseline to the end of each LEAD study. Liraglutide also effectively reduced postprandial glucose levels, with a mean reduction over three meals of up to 48.6 mg/dL across the six LEAD studies. These RCTs also confirmed a low risk of hypoglycemia with liraglutide, which is consistent with its glucose-dependent insulin secretion stimulating action.

The SCALE Diabetes study included 846 adults with overweight or obesity and with T2DM, randomized to receive liraglutide 3.0 mg, liraglutide 1.8 mg, or placebo for 56 weeks. Reductions in HbA1c level from baseline were 1.3%, 1.1%, and 0.3% in each group, respectively, and the percentages of individuals achieving HbA1c level of 6.5% or lower at the end of the study were 56.5%, 45.6%, and 15%, respectively. Liraglutide 3.0 mg was significantly superior to liraglutide 1.8 mg regarding glucose-related measures, including HbA1c values, fasting plasma glucose, fasting proinsulin, proinsulin-to-insulin ratio, and change in OAD association. However, the study authors advised caution in interpreting the comparison between the two doses, as the analyses were not controlled for multiplicity.
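A note on units: the glucose reductions quoted above from the LEAD program are reported in mg/dL, whereas the orlistat trials earlier in this document report glucose in mmol/L. The standard conversion for glucose (1 mmol/L ≈ 18 mg/dL) makes the figures directly comparable; the minimal helper below is ours and is included only for readers who want to move between the two conventions:

```python
# Simple glucose unit conversion (1 mmol/L of glucose ≈ 18 mg/dL).
# Illustrative helper only; the example values are the LEAD reductions quoted above.

GLUCOSE_MGDL_PER_MMOLL = 18.0


def mgdl_to_mmoll(value_mgdl: float) -> float:
    """Convert a glucose concentration from mg/dL to mmol/L."""
    return value_mgdl / GLUCOSE_MGDL_PER_MMOLL


def mmoll_to_mgdl(value_mmoll: float) -> float:
    """Convert a glucose concentration from mmol/L to mg/dL."""
    return value_mmoll * GLUCOSE_MGDL_PER_MMOLL


if __name__ == "__main__":
    print(round(mgdl_to_mmoll(43.2), 1))  # fasting glucose reduction: 2.4 mmol/L
    print(round(mgdl_to_mmoll(48.6), 1))  # postprandial glucose reduction: 2.7 mmol/L
```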
A systematic review published in 2016 included 43 studies conducted in Europe (n = 24), the United States (n = 5), and Asia-Pacific (n = 14), evaluating a total of 7,413 patients with T2DM treated with liraglutide as monotherapy or combined with hypoglycemic agents. The studies ranged in duration from 3 to 24 months (46.5%; n = 20 with ≥ 12 months) and assessed liraglutide doses between 0.9 and 1.8 mg. Liraglutide treatment resulted in HbA1c changes from -0.6% to -2.26% and reduced plasma glucose values, regardless of baseline HbA1c levels and follow-up duration. Overall, 29.3%-64.5% and 22%-41% of patients with T2DM treated with liraglutide achieved target HbA1c levels of 7% and 6.5%, respectively. Over time, treatment with liraglutide resulted in a mean change of -1.3 to -8.7 kg in absolute weight from baseline. Hypoglycemia occurred at a rate of ≤ 0.8% with liraglutide monotherapy and was more frequent in patients using liraglutide combined with hypoglycemic agents (0-15.2%).

A subsequent multicenter study conducted across 45 diabetes clinics in Italy included 1,723 patients who received liraglutide doses of up to 1.8 mg and were followed for up to 24 months. In all, 43.5% of the patients achieved a reduction in HbA1c of ≥ 1% in 12 months, and 40.9% reached the HbA1c target of ≤ 7% at 24 months with liraglutide monotherapy or combined with other hypoglycemic agents. Other studies in a “real-world” context have confirmed the glycemic control results observed under RCT conditions.

3.5.6 Effects of liraglutide on lipid metabolism

Studies in animals and humans suggest that liraglutide may have some effects on lipid metabolism, independent of weight loss. In rats, the effects of liraglutide have been shown to impact pathways involved in increased cholesterol efflux and in the expression of genes involved in the breakdown of lipoproteins containing apolipoprotein (apo) B-100, which is the main component of very-low-density lipoprotein cholesterol (VLDL-c), intermediate-density lipoprotein cholesterol (IDL-c), LDL-c, and lipoprotein (a) particles. In the same study, treatment of patients with T2DM with liraglutide 1.2 mg for 6 months significantly reduced plasma apo B-100 and fasting triglyceride levels and induced breakdown of triglyceride-rich lipoproteins (VLDL-c and IDL-c) and LDL-c. Taskinen and cols. observed specific effects of liraglutide 1.8 mg on postprandial chylomicron metabolism in a small group of individuals with T2DM. Liraglutide led to a marked decrease in apo B-48 production in the intestine, increased the size of postprandial chylomicrons in circulation, dramatically reduced the direct clearance of chylomicrons, and decreased the hepatic secretion of VLDL-triglycerides. In another study, liraglutide reduced postprandial hyperlipidemia by increasing apo B-48 catabolism and reducing apo B-48 production in patients with T2DM. In a Finnish study center, 22 patients with T2DM using metformin and a statin were randomized to receive liraglutide 1.8 mg or placebo for 16 weeks. At the end of the study, serum concentrations of triglycerides, chylomicrons, and large VLDL-c particles after a high-fat mixed meal were significantly lower in the liraglutide group but not in the placebo group, despite similar weight losses in both groups. Concentrations of apo C-III, a critical regulator of postprandial triglyceride metabolism, decreased markedly in the fasting and postprandial periods in the liraglutide group but not in the placebo group.
A meta-analysis of the results of the LEAD trials revealed significant reductions from baseline in total cholesterol (5.0 mg/dL), LDL-c (7.7 mg/dL), and triglycerides (17.7 mg/dL; p < 0.01 for all) among patients treated with liraglutide 1.8 mg, although these reductions were not significant compared with placebo or active comparators. In contrast, the SCALE Diabetes study showed that liraglutide 3.0 mg, but not liraglutide 1.8 mg, significantly improved total cholesterol, VLDL-c, HDL-c, and triglyceride levels compared with placebo; no effects were observed on levels of LDL-c or free fatty acids.

3.5.7 Effects of liraglutide on blood pressure and heart rate

Studies have confirmed the effect of liraglutide on reducing BP values. This effect was attributed not only to the associated weight loss but also to a combination of other mechanisms, such as the promotion of natriuresis and vasodilation. Notably, GLP-1as are generally associated with a slight increase in heart rate. Current data indicate that this effect does not result in increased cardiovascular risk, although a pronounced increase in heart rate may be associated with adverse clinical outcomes in patients with advanced HF.

A pooled analysis of the six LEAD RCTs, including data from almost 2,800 individuals with T2DM, showed that participants receiving liraglutide experienced significantly greater mean reductions in SBP values than those receiving placebo at 26 weeks relative to baseline. These reductions were noticeable after 2 weeks of treatment. Although the trials were not statistically powered to evaluate BP reduction, consistent reductions were observed in SBP values with liraglutide (1.8 mg or 1.2 mg once daily), with reductions of 2.1-6.7 mmHg from baseline to the end of the treatment period ( - weeks). Small and nonsignificant reductions from baseline in DBP values were observed with liraglutide in most of these trials. The SBP reductions observed in patients treated with liraglutide correlated weakly with weight loss. Liraglutide 1.2 mg and 1.8 mg were associated with a significant mean increase of 3 beats per minute (bpm) in pulse rate, compared with a mean increase of 1 bpm with placebo. A similar heart rate increase with liraglutide ( bpm) has also been found in the LEADER study, which will be detailed later.

Kumarathurai and cols. observed a significant increase in heart rate and reduction in heart rate variability (HRV) in patients with newly diagnosed T2DM and stable CAD who received liraglutide 1.8 mg for 12 weeks compared with placebo. This HRV reduction was not mediated by the increased heart rate observed after liraglutide therapy, suggesting a direct influence of liraglutide on sympathovagal balance.

In an RCT, the addition of liraglutide was associated with a significant SBP reduction compared with placebo in patients with T2DM already treated with multiple daily insulin injections. Although significant correlations were found between reductions in SBP and reductions in body weight and BMI, one in three liraglutide-treated patients who experienced a marked reduction in SBP did not have a substantial decrease in body weight. A greater SBP reduction was predicted by higher baseline DBP values and by lower baseline mean values of glucose regulation parameters. One explanation for this latter finding is that patients with higher mean values of glucose regulation parameters are more likely to experience blood glucose improvement with liraglutide, which decreases glycosuria and, thus, attenuates weight loss.
Therefore, from a BP perspective, some patients may benefit from the use of liraglutide despite not having improvements in other traditional metabolic risk factors.

Zhao and cols. evaluated the effect of liraglutide on BP in a meta-analysis of 18 RCTs. The authors observed that, compared with placebo, liraglutide reduced SBP by 3.18 mmHg but had no significant effect on DBP. Only three RCTs evaluated the effect of liraglutide at the doses of 2.4 mg and 3.0 mg. Although no RCTs have been published on liraglutide 3.0 mg specifically among patients with obesity and hypertension, a subgroup analysis defined by liraglutide dose, compared with placebo, showed significant SBP reductions with the doses of 2.4 mg/day (-5.01 mmHg) and 3.0 mg/day (-3.67 mmHg) and a DBP reduction (-1.46 mmHg) with the dose of 3.0 mg/day.

3.5.8 Effects of liraglutide on obstructive sleep apnea syndrome

Although the association of OSAS with both obesity and T2DM is well established, only a few studies have used polysomnography to directly measure the effects of liraglutide in patients with OSAS. The classic RCT SCALE Sleep Apnea evaluated the effects of liraglutide 3.0 mg in individuals with obesity and moderate or severe OSAS who were reluctant or unable to use CPAP therapy. After 32 weeks of treatment, a significantly greater reduction in mean AHI was observed in the treated group than in the placebo group, both of which also received monthly counseling on diet and exercise (-12.2 ± 1.8 events/h versus -6.1 ± 2.0 events/h, respectively). The improvement in OSAS outcomes was associated with the degree of weight loss at the end of the study.

A recently published study included individuals with T2DM and moderate or severe OSAS randomized to a control group or a liraglutide group. Both groups used CPAP and received drug treatment for T2DM; the liraglutide group additionally received liraglutide at a dose of up to 1.8 mg. After 3 months of follow-up, the mean BMI, AHI, and SBP values in the liraglutide group were lower than those in the control group, while minimum oxygen saturation was higher in the liraglutide group.

3.5.9 Effects of liraglutide in patients with polycystic ovary syndrome

The effects of liraglutide in women with PCOS were assessed in a series of studies, both as a standalone treatment and in combination with metformin, demonstrating significant weight loss and reduction in testosterone levels. The results were heterogeneous regarding insulin resistance and menstrual patterns. Most studies used liraglutide doses between 1.2 mg and 1.8 mg. Although few studies have evaluated fertility and gestational outcomes with GLP-1as, weight loss is known to be the most significant factor affecting the improvement of these parameters in PCOS. It is important to note that the liraglutide package insert recommends discontinuing the medication if the patient desires to become pregnant.

The effects of GLP-1as in women with PCOS have been evaluated in a meta-analysis of six studies with liraglutide (1.2-1.8 mg) and one with exenatide. Significant weight loss and a reduction in total testosterone levels were observed, but no effects were found on abdominal circumference, fasting insulinemia, homeostasis model assessment of insulin resistance (HOMA-IR) values, or SHBG level. Only one study evaluated hirsutism and menstrual cycles, and it found no significant changes after liraglutide treatment.
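For reference, HOMA-IR, cited in the meta-analyses above and below, is conventionally calculated from fasting measurements; the standard formula is reproduced here only as a reading aid (it is not taken from the studies discussed):

```latex
% Standard homeostasis model assessment of insulin resistance (HOMA-IR); glucose in mmol/L
\mathrm{HOMA\text{-}IR} \;=\; \frac{\text{fasting insulin}\ (\mu\text{U/mL}) \times \text{fasting glucose}\ (\text{mmol/L})}{22.5}
% Equivalent form with glucose in mg/dL: divide by 405 instead of 22.5
```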
A recent meta-analysis compared the effects of liraglutide (1.2-1.8 mg), metformin, and the combination of metformin + liraglutide in women with overweight or obesity and PCOS. Compared with the group treated with metformin alone, the metformin + liraglutide group showed greater weight loss and reduction in waist circumference, fasting blood glucose, and insulin levels, but no difference in HOMA-IR values. When the standalone treatments with metformin versus liraglutide were compared, liraglutide was only superior to metformin in terms of weight loss. There was no significant difference between metformin, liraglutide, and combined metformin plus liraglutide in improving total testosterone, free testosterone, or SHBG levels. Although two studies reported improvements in menstrual cycles with the combined therapy compared with metformin alone, they used different indicators, hindering a meta-analysis of these data.

The effects of liraglutide 1.8 mg on ovarian morphology, hormonal levels, and menstrual bleeding patterns were evaluated in a double-blind RCT including 72 women with overweight or obesity and PCOS. The group treated with liraglutide experienced a reduction in ovarian volume, along with an increase in SHBG level, reduction in free testosterone level, and improvement in bleeding rate.

While most studies evaluated lower doses of liraglutide, a double-blind RCT assessed the effects of liraglutide 3.0 mg for 32 weeks in 82 women with obesity and PCOS, reporting significant weight loss, improvement in hyperandrogenism, and restoration of menstrual cycles.

The pregnancy rates after in vitro fertilization were investigated in an open-label RCT including 28 women with obesity and PCOS, comparing the effects of metformin plus liraglutide 1.2 mg versus metformin alone for 12 weeks. The pregnancy rate per embryo transfer was significantly greater in the combined treatment group (85.7%) compared with the metformin alone group (28.6%), and the cumulative pregnancy rates over a 12-month period were 69.2% and 35.7%, respectively.

3.5.10 Effects of liraglutide in patients with male hypogonadism

Studies evaluating the effects of liraglutide in patients with male hypogonadism do not allow for definitive conclusions but suggest an improvement in testosterone levels and sexual function accompanying weight loss and improvement in metabolic parameters. It is unclear whether the effects of liraglutide in patients with male hypogonadism are mediated exclusively by the reduction in adiposity. In animal models, there is evidence of direct effects of central GLP-1 signaling on the gonadal axis. Intracerebroventricular GLP-1 injection induces an immediate luteinizing hormone (LH) surge in male rats.

A retrospective observational study has evaluated the effects of liraglutide added to testosterone replacement therapy (TRT), metformin, and LSCs on erectile function in men with obesity, T2DM, and hypogonadism. In the first year, all 43 patients (aged - years) received TRT, metformin, and LSC recommendations. In the second year, those who did not reach the target HbA1c value received additional liraglutide 1.2 mg daily. The group that received liraglutide showed additional weight loss and improvement in erectile function compared with the group that did not receive it.

A prospective, randomized, open-label study evaluated the effects of liraglutide 3.0 mg daily compared with testosterone 50 mg (1% transdermal gel) for 16 weeks in 30 men with a mean age of 46 years, obesity, and functional hypogonadism.
The weight loss was only significant in the group that received liraglutide. Both groups experienced improvements in total testosterone levels, libido, and sexual function. Follicle-stimulating hormone (FSH) and LH levels increased in the liraglutide group and decreased in the testosterone group.

In a prospective study, 110 young (aged - years) men with obesity and functional hypogonadism were divided according to their desire for fertility into three groups to receive gonadotropins, liraglutide 3.0 mg, or transdermal testosterone 60 mg for 4 months. The group that received liraglutide showed significant weight loss and higher levels of testosterone and gonadotropins, as well as improved erectile function and conventional sperm parameters relative to baseline levels and compared with the other groups.

3.5.11 Effects of liraglutide on metabolic dysfunction-associated steatotic liver disease

Liraglutide has demonstrated benefits in patients with MASLD, reducing liver fat content and improving steatohepatitis. In addition to its weight loss effect in reducing lipotoxicity, other mechanisms have been proposed, such as modification of portal and peripheral insulin and glucagon concentrations, and improvements in hepatocyte mitochondrial function and hepatic insulin sensitivity.

Four RCTs showed a reduction in liver fat content assessed by magnetic resonance imaging-based techniques after treatment with liraglutide 1.8 mg for 6 months. These studies evaluated adults with overweight or obesity and T2DM and women with overweight and PCOS. Other studies have reported similar results. The LEAN study, a double-blind RCT, examined the effects of liraglutide on steatohepatitis and fibrosis. Armstrong and cols. randomized 52 overweight patients with biopsy-proven steatohepatitis to receive liraglutide 1.8 mg or placebo for 48 weeks. The primary outcome of resolution of steatohepatitis occurred in 39% of patients in the liraglutide group versus 9% of those in the placebo group (p = 0.019), and progression of fibrosis occurred in 9% of patients receiving liraglutide and in 36% of those receiving placebo.

3.5.12 Effects of liraglutide on quality of life

Treatment with liraglutide resulted in improved quality-of-life parameters compared with placebo in an RCT. The benefits appeared to be associated with weight loss, as they were greater in individuals with greater weight loss, regardless of treatment arm. One of the secondary outcomes of the SCALE Obesity and Prediabetes study was health-related quality of life, assessed using the SF-36, IWQOL-Lite, and Treatment Related Impact Measure – Weight after 56 weeks of treatment with liraglutide 3.0 mg. Compared with the placebo group, the liraglutide group had higher SF-36 scores in the general physical and mental health domains, higher IWQOL-Lite total scores, and more favorable individual domain scores on both instruments. In the assessment with the Treatment Related Impact Measure – Weight, the total score was also higher in the liraglutide group, despite a lower score for the experience of side effects. The greatest benefits were observed in the physical aspects of the IWQOL-Lite and in self-esteem.

Quality of life was also assessed in the continuation of the SCALE Obesity and Prediabetes study for 160 weeks, which showed that the improvement demonstrated after 1 year of treatment with liraglutide 3.0 mg was generally maintained after 3 years.
3.5.13 Effects of liraglutide on osteoarticular diseases

While preclinical studies have suggested positive effects of GLP-1 receptor agonists in osteoarthritis, including direct effects on various joint cell types, an RCT found no significant benefits of liraglutide 3.0 mg for pain associated with knee osteoarthritis, despite a relatively small weight difference between the groups. Gudbergsen and cols. randomized 156 patients with overweight or obesity and knee osteoarthritis who had lost more than 5% of weight with dietary intervention for 8 weeks to receive liraglutide 3.0 mg or placebo for 52 weeks. At the end of the study, the difference in weight between the groups was 3.9 kg, and there was no significant difference in knee pain, as measured by a subscale of the Knee Injury and Osteoarthritis Outcome Score (KOOS). However, it is important to highlight that the average weight loss with the pre-randomization dietary intervention was 12.5 kg and that there was a significant improvement in symptoms during this period. Consequently, at the time of randomization, the patients had mild-to-moderate pain, which may have limited the potential of pharmacologic intervention to promote further improvement.

3.5.14 Effects of liraglutide in patients with chronic kidney disease

Kidney disease is one of the most important complications of T2DM, which in turn is the most common cause of CKD and end-stage renal disease (ESRD). A recent review looked at medications with established evidence for treating diabetic kidney disease. Among them are incretin-based therapeutic agents, including liraglutide, which have demonstrated vasotropic actions, suggesting a potential to reduce the risk of diabetic kidney disease.

In the Liraglutide and Cardiovascular Outcomes in Type 2 Diabetes (LEADER) study, liraglutide showed cardiovascular and renal benefits, particularly in participants with CKD. The results suggested that reductions in HbA1c and SBP values may moderately mediate the renal benefits of liraglutide. Potential benefits may be driven by other mediators or direct mechanisms.

A post hoc analysis evaluated the safety of liraglutide treatment in patients with CKD. A total of 9,340 patients with T2DM were randomized to receive either liraglutide or placebo, both in addition to standard treatment. Of these, 2,158 had CKD and 7,182 had no CKD (defined as an estimated glomerular filtration rate [eGFR] < 60 and ≥ 60 mL/min, respectively); 966 patients had macroalbuminuria, and 2,456 had microalbuminuria (urine albumin-to-creatinine ratio > 300 mg/g and ≥ 30 to ≤ 300 mg/g, respectively). At the beginning of the study, the mean eGFR was 46 ± 11 mL/min in patients with CKD and 91 ± 22 mL/min in those without CKD. The risk of severe hypoglycemia was significantly lower with liraglutide than with placebo in patients with CKD or with micro- or macroalbuminuria (hazard ratio [HR] = 0.63 and 0.57, respectively). The study concluded that the use of liraglutide in patients with CKD was safe, with no difference in safety between patients with and without CKD. No dosage adjustment is necessary for patients with mild or moderate renal impairment. Experience is limited in patients with severe renal insufficiency.

A study evaluated the safety and efficacy of liraglutide in patients with T2DM and ESRD dependent on dialysis.
Twenty-four patients with T2DM and ESRD and 23 control individuals with T2DM and normal renal function were randomized to receive 12 weeks of liraglutide (titrated to a maximum dose of 1.8 mg) or placebo as an add-on to ongoing antidiabetic treatment. Glycemic control improved in both groups treated with liraglutide, and the basal insulin dose decreased accordingly. Body weight also decreased in both groups treated with liraglutide. The plasma concentration of liraglutide was 49% higher in the ESRD group than in the control group. Nausea and vomiting occurred more frequently among liraglutide-treated patients with ESRD than among control individuals. The study concluded that reduced treatment doses and a prolonged titration period may be advisable, although liraglutide is currently not recommended in this population.

3.5.15 Effects of liraglutide on cardiovascular outcomes

Studies in animal models have shown liraglutide effects in reducing oxidative stress and inflammation and preventing apoptosis of endothelial cells; these effects were independent of glycemic control or weight loss and may contribute to the cardiovascular protective action of this drug. Beneficial effects in reducing inflammatory markers and neutralizing oxidative stress and endothelial dysfunction have also been described in individuals treated with liraglutide.

Cardiovascular outcomes of liraglutide were investigated in the LEADER study, in which 9,340 individuals with T2DM and high cardiovascular risk were randomized and followed for a median of 3.8 years. The group treated with liraglutide at a dose of up to 1.8 mg had a 13% reduction in the primary outcome (cardiovascular death, nonfatal AMI, or nonfatal stroke) compared with the placebo group. Mortality from cardiovascular causes was lower in the liraglutide group (4.7% versus 6.0% in the placebo group). Nonfatal myocardial infarction, nonfatal stroke, and hospitalizations due to HF were less frequent in the liraglutide group, but the differences compared with placebo were not significant.

A post hoc analysis of the LEADER study was performed to evaluate the treatment effect of liraglutide versus placebo on cardiovascular outcomes by baseline LDL-c level (< 50 mg/dL, 50-70 mg/dL, or > 70 mg/dL) and statin use at the beginning of the study. The results suggest that the benefits of liraglutide on mortality and cardiovascular outcomes appear consistent in patients with T2DM at high cardiovascular risk, independent of LDL-c level, and persist even in the setting of very low baseline LDL-c levels and concomitant statin use. These data suggest that the potential antiatherosclerotic effects of the medication are complementary to its effect in reducing lipids.

No RCTs have been conducted to assess the cardiovascular benefits of the 3.0 mg dose in patients with obesity without T2DM. However, a post hoc analysis was performed using pooled data from 5,908 participants from the five RCTs of the SCALE program (liraglutide versus placebo or orlistat). In that study, liraglutide 3.0 mg was not associated with increased cardiovascular risk. Since wide confidence intervals were found, and two retrospective studies were included in the analysis, it is not possible to claim cardiovascular protection with the medication, only noninferiority compared with placebo regarding this outcome.
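As an illustrative reading aid (our own back-of-the-envelope arithmetic, not an analysis reported by the LEADER investigators), the cardiovascular mortality figures quoted above can be expressed as an absolute risk reduction and an approximate number needed to treat over the median 3.8-year follow-up:

```latex
% Illustrative arithmetic using the LEADER cardiovascular mortality figures quoted above
\mathrm{ARR} = 6.0\% - 4.7\% = 1.3\ \text{percentage points}
\qquad
\mathrm{NNT} \approx \frac{1}{0.013} \approx 77\ \text{patients treated over the median 3.8-year follow-up}
```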
Structural modifications to the protein increased its circulation half-life from 1-2 minutes to 13 hours . Liraglutide acts on hypothalamic neurons involved in energy balance and centers linked to pleasure and reward, stimulates pancreatic glucose-dependent insulin production, inhibits glucagon and somatostatin, and slows gastric emptying . Liraglutide 3.0 mg was approved by the US Food and Drug Administration (FDA) in 2014 for treating obesity; this dose was higher than the one previously approved for treating T2DM (1.8 mg). The medication should be introduced gradually to minimize side effects, which are commonly gastrointestinal in nature. Liraglutide comes with a delivery system containing 3 mL, capable of dispensing doses of 0.6 mg, 1.2 mg, 1.8 mg, 2.4 mg, or 3.0 mg. The treatment should begin with 0.6 mg/day subcutaneously and increase by 0.6 mg each week until reaching the maximum dose of 3.0 mg/day. The most common adverse events are mainly related to the gastrointestinal system and affect more than 5% of patients. These side effects include nausea, vomiting, diarrhea, constipation, abdominal pain, and dyspepsia. In 94% of cases, these events are mild or moderate, usually related to the medication dose (hence the recommendation for gradual increase), transient, and rarely lead to treatment interruption . Serious adverse events affect more than 0.2% of patients and include a higher incidence of cholelithiasis and acute cholecystitis, attributed to both weight loss and reduced gallbladder contractility. The risk of pancreatitis was slightly higher in the liraglutide group (0.4%) than the placebo group (0.1%), but this difference was not significant . The medication has an overall excellent safety profile, including in neuropsychiatric aspects, with no interaction with centrally acting medications, and demonstrates good efficacy. The few contraindications to liraglutide include pregnancy, breastfeeding, and hypersensitivity to the drug or its excipients. Caution is recommended when liraglutide is used by patients with a previous history of acute pancreatitis. Its use should be avoided by patients with a personal or family history of multiple endocrine neoplasia or medullary thyroid cancer, as the drug has been shown to induce thyroid C-cell hyperplasia in rodents . 3.5.1 Effects of liraglutide on body weight Preliminary studies have shown significantly greater weight loss with liraglutide than placebo or orlistat . Subsequently, a series of studies named Satiety and Clinical Adiposity – Liraglutide Evidence (SCALE) analyzed the use of liraglutide in the treatment of obesity and its complications. In the SCALE Obesity and Prediabetes study, 63.2% and 33.1% of the patients lost, respectively, more than 5% and 10% of their initial weight after 56 weeks . The study continued for another 2 years, to a total of 3 years, in patients with prediabetes. The 5%, 10%, and 15% weight loss in patients randomized to liraglutide were 49.6%, 24.8%, and 11%, respectively . In the SCALE Maintenance study, patients with obesity who lost 6% of weight with diet and physical activity were randomized to liraglutide 3.0 mg or placebo for 1 year. Those who used liraglutide had an additional loss of 6.1% compared with those who used placebo, reinforcing the importance of chronic and multidisciplinary treatment of obesity . A recent study evaluated patients who lost an average of 13.1 kg over 8 weeks on a low-calorie diet. 
Those who were subsequently randomized to a combination of liraglutide 3.0 mg and physical exercise achieved an additional weight loss of 3.4 kg, and at 1 year, 33% were able to maintain a weight loss of over 20% of their initial weight . 3.5.2 Effects of liraglutide on body weight maintenance The effects of liraglutide on weight loss maintenance were evaluated in the SCALE Maintenance study described previously . 3.5.3 Effects of liraglutide on body composition The study cited previously also analyzed body composition using DXA and reported a 3.9% reduction in absolute body fat percentage, which was double the decrease observed in the exercise group (1.7%) . Another study published in the same year assessed the use of liraglutide 3.0 mg in decreasing visceral fat, evaluated using magnetic resonance imaging. At 36 weeks, there was an average 12.5% reduction with liraglutide compared with 1.6% with placebo . 3.5.4 Effects of liraglutide in patients with prediabetes/glucose intolerance The effects of liraglutide in preventing the progression of prediabetes to T2DM and improving insulin resistance with weight loss are well established . However, studies with animal models suggest other complex direct actions of liraglutide in inhibiting the progression of prediabetes . Some clinical studies have evaluated the effects of liraglutide in individuals with prediabetes. Kim and cols. compared the effects of liraglutide doses up to 1.8 mg versus placebo in a group of patients aged 40-70 years with overweight or obesity and prediabetes. Weight loss associated with liraglutide was accompanied by a 29% reduction in peripheral insulin resistance, as assessed by the insulin suppression test. Additionally, 75% of the individuals on liraglutide achieved normal fasting plasma glucose compared with 19% of those on placebo . The most important RCT was the SCALE Obesity and Prediabetes trial, in which 2,254 patients with overweight or obesity and prediabetes were randomized, in a 2:1 ratio, to liraglutide 3.0 mg or placebo, combined with a standardized diet and exercise. The study showed significant and sustained results of improved glycemic control with reduced insulin resistance in the context of 6.1% weight loss over 3 years in patients using liraglutide. Only 2% of the participants in the liraglutide group developed diabetes, compared with 6% in the placebo group. Liraglutide led to an approximately 80% reduction in T2DM risk, and the estimated time to onset of T2DM over 160 weeks was 2.7 times longer in the liraglutide group compared with the placebo group. Furthermore, at 160 weeks, 66% of patients on liraglutide achieved normoglycemia, compared with 36% of those on placebo. An additional post hoc analysis was conducted at week 172 to address the lack of follow-up data for withdrawn participants, assuming that diabetes was undiagnosed in 1% of the participants withdrawn from the liraglutide group and in 0% of those withdrawn from the placebo group. The results showed that the risk of T2DM remained 66% lower in the participants who received liraglutide . 3.5.5 Effects of liraglutide on glycemic control in patients with type 2 diabetes mellitus Considering that controlling excess weight is one of the priorities in T2DM management, liraglutide has become one of the first-choice treatments for patients with T2DM and obesity due to its mechanism of action of direct hypoglycemic effects and body weight reduction . 
The safety, tolerability, and efficacy of liraglutide were initially assessed in the treatment of T2DM through the Liraglutide Effect and Actions in Diabetes (LEAD) program. This program consisted of six RCTs that assessed liraglutide as a standalone treatment and in combination with oral antidiabetic drugs (OADs) at different stages of the disease. Levels of HbA1c decreased by 0.8%-1.6% from baseline with liraglutide at doses up to 1.8 mg . Rapid and sustained reductions in fasting plasma glucose level (up to 43.2 mg/dL) were observed from baseline to the end of each LEAD study. Liraglutide also effectively reduced postprandial glucose levels, with a mean reduction over three meals of up to 48.6 mg/dL across the six LEAD studies. These RCTs also confirmed a low risk of hypoglycemia with liraglutide, which is consistent with its glucose-dependent insulin secretion stimulating action . The SCALE Diabetes study included 846 adults with overweight or obesity and with T2DM, randomized to receive liraglutide 3.0 mg, liraglutide 1.8 mg, or placebo for 56 weeks. Reductions in HbA1c level from baseline were 1.3%, 1.1%, and 0.3% in each group, respectively, and the percentages of individuals achieving HbA1c level of 6.5% or lower at the end of the study were 56.5%, 45.6%, and 15%, respectively. Liraglutide 3.0 mg was significantly superior to liraglutide 1.8 mg regarding glucose-related measures, including HbA1c values, fasting plasma glucose, fasting proinsulin, proinsulin-to-insulin ratio, and change in OAD association. However, the study authors advised caution in interpreting the comparison between the two doses, as the analyses were not controlled for multiplicity . A systematic review published in 2016 included 43 studies conducted in Europe (n = 24), the United States (n = 5), and Asia-Pacific (n = 14), evaluating a total of 7,413 patients with T2DM treated with liraglutide as monotherapy or combined with hypoglycemic agents. The studies ranged in duration from 3 to 24 months (46.5%; n = 20 with ≥ 12 months) and assessed liraglutide doses between 0.9 and 1.8 mg. Liraglutide treatment resulted in HbA1c changes from -0.6% to -2.26% and reduced plasma glucose values, regardless of baseline HbA1c levels and follow-up duration. Overall, 29.3%-64.5% and 22%-41% of patients with T2DM treated with liraglutide achieved target HbA1c levels of 7% and 6.5%, respectively. Over time, treatment with liraglutide resulted in a mean change of -1.3 to -8.7 kg in absolute weight from baseline. Hypoglycemia with liraglutide monotherapy occurred at a ≤ 0.8% rate and was more frequent in patients using liraglutide combined with hypoglycemic agents (0-15.2%) . A subsequent multicenter study conducted across 45 diabetes clinics in Italy included 1,723 patients who received liraglutide doses of up to 1.8 mg and were followed for up to 24 months. In all, 43.5% of the patients achieved a reduction in HbA1c of ≥ 1% in 12 months, and 40.9% reached the HbA1c target of ≤ 7% at 24 months with liraglutide monotherapy or combined with other hypoglycemic agents . Other studies in a “real-world” context have confirmed the glycemic control results observed in RCT conditions . 3.5.6 Effects of liraglutide on lipid metabolism Studies in animals and humans suggest that liraglutide may have some effects on lipid metabolism, independent of weight loss. 
In rats, the effects of liraglutide have been shown to impact pathways involved in increased cholesterol efflux and in the expression of genes involved in the breakdown of lipoproteins containing apolipoprotein (apo) B-100, which is the main component of very-low-density lipoprotein cholesterol (VLDL-c), intermediate-density lipoprotein cholesterol (IDL-c), LDL-c, and lipoprotein (a) particles . In the same study, treatment of patients with T2DM with liraglutide 1.2 mg for 6 months significantly reduced plasma apo B-100 and fasting triglyceride levels and induced breakdown of triglyceride-rich lipoproteins (VLDL-c and IDL-c) and LDL-c . Taskinen and cols. observed specific effects of liraglutide 1.8 mg on postprandial chylomicron metabolism in a small group of individuals with T2DM. Liraglutide led to a marked decrease in apo B-48 production in the intestine, increased the size of postprandial chylomicrons in circulation, dramatically reduced the direct clearance of chylomicrons, and decreased the hepatic secretion of VLDL-triglycerides . In another study, liraglutide reduced postprandial hyperlipidemia by increasing apo B-48 catabolism and reducing apo B-48 production in patients with T2DM . In a Finnish study center, 22 patients with T2DM using metformin and statin were randomized to receive liraglutide 1.8 mg or placebo for 16 weeks. At the end of the study, serum concentrations of triglycerides, chylomicrons, and large VLDL-c particles after a high-fat mixed meal were significantly lower in the liraglutide group but not in the placebo group, despite similar weight losses in both two groups. Concentrations of apo C-III, a critical regulator of postprandial triglyceride metabolism, decreased markedly in the fasting and postprandial periods in the liraglutide group but not in the placebo group . A meta-analysis of the results of the LEAD trials revealed significant reductions from baseline in total cholesterol (5.0 mg/dL), LDL-c (7.7 mg/dL), and triglycerides (17.7 mg/dL; p < 0.01 for all) among patients treated with liraglutide 1.8 mg, although these reductions were not significant compared with placebo or active comparators . In contrast, the SCALE Diabetes study showed that liraglutide 3.0 mg, but not liraglutide 1.8 mg, significantly improved total cholesterol, VLDL-c, HDL-c, and triglyceride levels compared with placebo; no effects were observed on levels of LDL-c or free fatty acids . 3.5.7 Effects of liraglutide on blood pressure and heart rate Studies have confirmed the effect of liraglutide on reducing BP values. This effect was attributed not only to the associated weight loss but also to a combination of other mechanisms, such as the promotion of natriuresis and vasodilation . Notably, GLP-1as are generally associated with a slight increase in heart rate. Current data indicate that this effect does not result in increased cardiovascular risk, although a pronounced increase in heart rate may be associated with adverse clinical outcomes in patients with advanced HF . A pooled analysis of the six LEAD RCTs, including data from almost 2,800 individuals with T2DM, showed that participants receiving liraglutide experienced significantly greater mean reductions in SBP values than those receiving placebo at 26 weeks relative to baseline. These reductions were noticeable after 2 weeks of treatment. 
Although the trials were not statistically powered to evaluate BP reduction, consistent reductions were observed in SBP values with liraglutide (1.8 mg or 1.2 mg once daily), with reductions of 2.1-6.7 mmHg from baseline to the end of the treatment period ( - weeks). Small and nonsignificant reductions from baseline in DBP values were observed with liraglutide in most of these trials. The SBP reductions observed in patients treated with liraglutide correlated weakly with weight loss. Liraglutide 1.2 mg and 1.8 mg were associated with a significant mean increase of 3 beats per minute (bpm) in pulse rate, compared with a mean increase of 1 bpm with placebo . A similar heart rate increase with liraglutide ( bpm) has also been found in the LEADER study, which will be detailed later . Kumarathurai and cols. observed a significant increase in heart rate and reduction in heart rate variability (HRV) in patients with newly diagnosed T2DM and stable CAD who received liraglutide 1.8 mg for 12 weeks compared with placebo. This HRV reduction was not mediated by the increased heart rate observed after liraglutide therapy, suggesting a direct influence of liraglutide on sympathovagal balance . In an RCT, liraglutide was associated with a significant SBP reduction compared with placebo when added to patients with T2DM already treated with multiple daily insulin injections. Although significant correlations were found between reductions in SBP and reductions in body weight and BMI, one in three liraglutide-treated patients who experienced a marked reduction in SBP did not have a substantial decrease in body weight. A greater SBP reduction was predicted by higher baseline DBP values and by lower baseline mean values of glucose regulation parameters. One explanation for this latter finding is that patients with higher mean values of glucose regulation parameters are more likely to experience blood glucose improvement with liraglutide, which decreases glycosuria and, thus, attenuates weight loss. Therefore, from a BP perspective, some patients may benefit from the use of liraglutide despite not having improvements in other traditional metabolic risk factors . Zhao and cols. evaluated the effect of liraglutide on BP in a meta-analysis of 18 RCTs. The authors observed that, compared with placebo, liraglutide reduced SBP by 3.18 mmHg but had no significant effect on DBP. Only three RCTs evaluated the effect of liraglutide at the doses of 2.4 mg and 3.0 mg. Although no RCTs have been published on liraglutide 3.0 mg specifically among patients with obesity and hypertension, a subgroup analysis defined by liraglutide dose, compared with placebo, showed significant SBP reductions with the doses of 2.4 mg/day (-5.01 mmHg) and 3.0 mg/day (-3.67 mmHg) and DBP reduction (-1.46 mmHg) with the dose of 3.0 mg/day . 3.5.8 Effects of liraglutide on obstructive sleep apnea syndrome Although the association of OSAS with both obesity and T2DM is well established , only a few studies have directly measured with polysomnography the effects of liraglutide in patients with OSAS. The classic RCT SCALE Sleep Apnea evaluated the effects of liraglutide 3.0 mg in individuals with obesity and moderate or severe OSAS who were reluctant or unable to use CPAP therapy. 
After 32 weeks of treatment, a significantly greater reduction in mean AHI was observed in the treated group compared with the placebo group, both of which were also addressed with monthly counseling on diet and exercise (-12.2 ± 1.8 events/h versus -6.1 ± 2.0 events/h, respectively). The improvement in OSAS outcomes was associated with the degree of weight loss at the end of the study . A recently published study included individuals with T2DM and moderate or severe OSAS randomized to a control group or a liraglutide group. Both groups used CPAP and received drug treatment for T2DM, except for the first group, which received liraglutide at a dose of up to 1.8 mg. After 3 months of follow-up, the mean BMI, AHI, and SBP values in the liraglutide group were lower than those in the control group, while minimum oxygen saturation was higher in the liraglutide group . 3.5.9 Effects of liraglutide in patients with polycystic ovary syndrome The effects of liraglutide in women with PCOS were assessed in a series of studies, both as a standalone and in combination with metformin, demonstrating significant weight loss and reduction in testosterone levels. The results were heterogeneous regarding insulin resistance and menstrual patterns. Most studies used liraglutide doses between 1.2 mg and 1.8 mg. Although few studies have evaluated fertility and gestational outcomes with GLP-1as, weight loss is known to be the most significant factor affecting the improvement of these parameters in PCOS . It is important to note that the liraglutide package insert recommends discontinuing the medication if the patient desires to become pregnant. The effects of GLP-1as in women with PCOS have been evaluated in a meta-analysis of six studies with liraglutide (1.2-1.8 mg) and one with exenatide. A significant weight loss and reduction in total testosterone levels was observed, but no effects were found in abdominal circumference, fasting insulinemia, homeostasis model assessment of insulin resistance (HOMA-IR) values, or SHBG level. Only one study evaluated hirsutism and menstrual cycles, and this study found no significant changes after liraglutide treatment . A recent meta-analysis compared the effects of liraglutide (1.2-1.8 mg), metformin, and the combination of metformin + liraglutide in women with overweight or obesity and PCOS. Compared with the group treated with metformin alone, the metformin + liraglutide group showed greater weight loss and reduction in waist circumference, fasting blood glucose, and insulin levels, but no difference in HOMA-IR values. When the standalone treatments with metformin versus liraglutide were compared, liraglutide was only superior to metformin in terms of weight loss. There was no significant difference between metformin, liraglutide, and combined metformin plus liraglutide in improving total testosterone, free testosterone, or SHBG levels. Although two studies reported improvements in menstrual cycles with the combined therapy compared with metformin alone, they used different indicators, hindering a meta-analysis of these data . The effects of liraglutide 1.8 mg on ovarian morphology, hormonal levels, and menstrual bleeding patterns were evaluated in a double-blind RCT including 72 women with overweight or obesity and PCOS. The group treated with liraglutide experienced a reduction in ovarian volume, along with an increase in SHBG level, reduction in free testosterone level, and improvement in bleeding rate . 
While most studies evaluated lower doses of liraglutide, a double-blind RCT assessed the effects of liraglutide 3.0 mg for 32 weeks in 82 women with obesity and PCOS, reporting significant weight loss, improvement in hyperandrogenism, and restoration of menstrual cycles . The pregnancy rates after in vitro fertilization were investigated in an open-label RCT including 28 women with obesity and PCOS, comparing the effects of metformin plus liraglutide 1.2 mg versus metformin alone for 12 weeks. The pregnancy rate per embryo transfer was significantly greater in the combined treatment group (85.7%) compared with the metformin alone group (28.6%), and the cumulative pregnancy rates over a 12-month period were 69.2% and 35.7%, respectively . 3.5.10 Effects of liraglutide in patients with male hypogonadism Studies evaluating the effects of liraglutide in patients with male hypogonadism do not allow for definitive conclusions but suggest an improvement in testosterone levels and sexual function accompanying weight loss and improvement in metabolic parameters. It is unclear whether the effects of liraglutide in patients with male hypogonadism are mediated exclusively by the reduction in adiposity. In animal models, there is evidence of direct effects of central GLP-1 signaling on the gonadal axis. Intracerebroventricular GLP-1 injection induces an immediate luteinizing hormone (LH) surge in male rats . A retrospective observational study has evaluated the effects of liraglutide added to testosterone replacement therapy (TRT), metformin, and LSCs on erectile function in men with obesity, T2DM, and hypogonadism. In the first year, all 43 patients (aged - years) received TRT, metformin, and LSC recommendations. In the second year, those who did not reach the target HbA1c value received additional liraglutide 1.2 mg daily. The group that received liraglutide showed additional weight loss and improvement in erectile function compared with the group that did not receive it . A prospective, randomized, open-label study evaluated the effects of liraglutide 3.0 mg daily compared with testosterone 50 mg (1% transdermal gel) for 16 weeks in 30 men with a mean age of 46 years, obesity, and functional hypogonadism. The weight loss was only significant in the group that received liraglutide. Both groups experienced improvements in total testosterone levels, libido, and sexual function. Follicle-stimulating hormone (FSH) and LH levels increased in the liraglutide group and decreased in the testosterone group . In a prospective study, 110 young (aged - years) men with obesity and functional hypogonadism were divided according to their desire for fertility into three groups to receive gonadotropins, liraglutide 3.0 mg, or transdermal testosterone 60 mg for 4 months. The group that received liraglutide showed significant weight loss and higher levels of testosterone and gonadotropins, as well as improved erectile function and conventional sperm parameters relative to baseline levels and compared with the other groups . 3.5.11 Effects of liraglutide on metabolic dysfunction-associated steatotic liver disease Liraglutide has demonstrated benefits in patients with MASLD, reducing liver fat content and improving steatohepatitis. In addition to its weight loss effect in reducing lipotoxicity, other mechanisms have been proposed, such as modification of portal and peripheral insulin and glucagon concentrations, and improvements in hepatocyte mitochondrial function and hepatic insulin sensitivity . 
Four RCTs showed a reduction in liver fat content assessed by magnetic resonance imaging-based techniques after treatment with liraglutide 1.8 mg for 6 months. These studies evaluated adults with overweight or obesity and T2DM and women with overweight and PCOS. Other studies have reported similar results.

The LEAN study, a double-blind RCT, examined the effects of liraglutide on steatohepatitis and fibrosis. Armstrong and colleagues randomized 52 overweight patients with biopsy-proven steatohepatitis to receive liraglutide 1.8 mg or placebo for 48 weeks. The primary outcome of resolution of steatohepatitis occurred in 39% of patients in the liraglutide group versus 9% of those in the placebo group (p = 0.019), and progression of fibrosis occurred in 9% of patients receiving liraglutide and in 36% of those receiving placebo.

3.5.12 Effects of liraglutide on quality of life

Treatment with liraglutide resulted in improved quality-of-life parameters compared with placebo in an RCT. The benefits appeared to be associated with weight loss, as they were greater in individuals with greater weight loss, regardless of treatment arm. One of the secondary outcomes of the SCALE Obesity and Prediabetes study was health-related quality of life, assessed using the SF-36, IWQOL-Lite, and Treatment Related Impact Measure – Weight after 56 weeks of treatment with liraglutide 3.0 mg. Compared with the placebo group, the liraglutide group had higher SF-36 scores in the general physical and mental health domains, higher IWQOL-Lite total scores, and more favorable individual domain scores on both instruments. In the assessment with the Treatment Related Impact Measure – Weight, the total score was also higher in the liraglutide group, despite a lower score for the experience of side effects. The greatest benefits were observed in the physical aspects of the IWQOL-Lite and in self-esteem. Quality of life was also assessed in the continuation of the SCALE Obesity and Prediabetes study for 160 weeks, which showed that the improvement demonstrated after 1 year of treatment with liraglutide 3.0 mg was generally maintained after 3 years.

3.5.13 Effects of liraglutide on osteoarticular diseases

While preclinical studies have suggested positive effects of GLP-1 receptor agonists in osteoarthritis, including direct effects on various joint cell types, an RCT found no significant benefits of liraglutide 3.0 mg for pain associated with knee osteoarthritis, despite a relatively small weight difference between the groups. Gudbergsen and colleagues randomized 156 patients with overweight or obesity and knee osteoarthritis who had lost more than 5% of weight with dietary intervention for 8 weeks to receive liraglutide 3.0 mg or placebo for 52 weeks. At the end of the study, the difference in weight between the groups was 3.9 kg, and there was no significant difference in knee pain, as measured by a subscale of the Knee Injury and Osteoarthritis Outcome Score (KOOS). However, it is important to highlight that the average weight loss with the pre-randomization dietary intervention was 12.5 kg and there was a significant improvement in symptoms during this period. Consequently, at the time of randomization, the patients had mild-to-moderate pain, which may have limited the potential of pharmacologic intervention to promote further improvement.
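One way to read response rates such as those reported for the LEAN trial above (resolution of steatohepatitis in 39% of liraglutide-treated patients versus 9% with placebo) is to convert them into an absolute risk difference and an approximate number needed to treat. The sketch below is illustrative arithmetic applied to those published proportions, not an analysis performed by the trial investigators:

```python
def risk_difference_and_nnt(rate_treatment: float, rate_control: float) -> tuple[float, float]:
    """Absolute difference between two event proportions and the corresponding
    number needed to treat (1 / absolute difference)."""
    diff = rate_treatment - rate_control
    return diff, 1.0 / diff


# Proportions with resolution of steatohepatitis reported in the LEAN trial.
diff, nnt = risk_difference_and_nnt(0.39, 0.09)
print(f"absolute difference = {diff:.0%}, NNT = {nnt:.1f}")  # 30%, about 3.3
```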
3.5.14 Effects of liraglutide in patients with chronic kidney disease

Kidney disease is one of the most important complications of T2DM, which in turn is the most common cause of CKD and end-stage renal disease (ESRD). A recent review looked at medications with established evidence for treating diabetic kidney disease. Among them are incretin-based therapeutic agents, including liraglutide, which have demonstrated vasotropic actions, suggesting a potential to reduce the risk of diabetic kidney disease. In the Liraglutide and Cardiovascular Outcomes in Type 2 Diabetes (LEADER) study, liraglutide showed cardiovascular and renal benefits, particularly in participants with CKD. The results suggested that reductions in HbA1c and SBP values may moderately mediate the renal benefits of liraglutide. Potential benefits may be driven by other mediators or direct mechanisms.

A post hoc analysis evaluated the safety of liraglutide treatment in patients with CKD. A total of 9,340 patients with T2DM were randomized to receive either liraglutide or placebo, both in addition to standard treatment. Of these, 2,158 had CKD and 7,182 had no CKD (defined as an estimated glomerular filtration rate [eGFR] < 60 and ≥ 60 mL/min, respectively); 966 patients had macroalbuminuria, and 2,456 had microalbuminuria (urine albumin-to-creatinine ratio > 300 mg/g and ≥ 30 to ≤ 300 mg/g, respectively). At the beginning of the study, the mean eGFR was 46 ± 11 mL/min in patients with CKD and 91 ± 22 mL/min in those without CKD. The risk of severe hypoglycemia was significantly lower with liraglutide compared with placebo in patients with CKD or with micro- or macroalbuminuria (hazard ratio [HR] = 0.63 and 0.57, respectively). The study concluded that the use of liraglutide in patients with CKD was safe, with no difference between patients with and without CKD.

No dosage adjustment is necessary for patients with mild or moderate renal impairment. Experience is limited in patients with severe renal insufficiency. A study evaluated the safety and efficacy of liraglutide in patients with T2DM and ESRD dependent on dialysis. Twenty-four patients with T2DM and ESRD and 23 control individuals with T2DM and normal renal function were randomized to receive 12 weeks of liraglutide (titrated to a maximum dose of 1.8 mg) or placebo as an add-on to ongoing antidiabetic treatment. Glycemic control improved in both groups treated with liraglutide, and the basal insulin dose decreased accordingly. Body weight also decreased in both groups treated with liraglutide. The plasma concentration of liraglutide was 49% higher in the ESRD group compared with the control group. Nausea and vomiting occurred more frequently among liraglutide-treated patients with ESRD compared with control individuals. The study concluded that reduced treatment doses and a prolonged titration period may be advisable, although liraglutide is currently not recommended in this population.

3.5.15 Effects of liraglutide on cardiovascular outcomes

Studies in animal models have shown liraglutide effects in reducing oxidative stress and inflammation and preventing apoptosis of endothelial cells; these effects were independent of glycemic control or weight loss and may contribute to the cardiovascular protective action of this drug. Beneficial effects in reducing inflammatory markers and neutralizing oxidative stress and endothelial dysfunction in individuals treated with liraglutide have also been described.
Cardiovascular outcomes of liraglutide were investigated in the LEADER study, in which 9,340 individuals with T2DM and high cardiovascular risk were randomized and followed for a median of 3.8 years. The group treated with liraglutide at a dose of up to 1.8 mg had a 13% reduction in primary outcomes (cardiovascular death, nonfatal AMI, or nonfatal stroke) compared with the placebo group. Mortality from cardiovascular causes was lower in the liraglutide group (4.7% versus 6.0% in the placebo group). Nonfatal myocardial infarction, nonfatal stroke, and hospitalizations due to HF were less frequent in the liraglutide group, but the differences compared with placebo were not significant.

A post hoc analysis of the LEADER study was performed to evaluate the treatment effect of liraglutide versus placebo on cardiovascular outcomes by LDL-c level < 50 mg/dL, 50-70 mg/dL, and > 70 mg/dL and statin use at the beginning of the study. The results suggest that the benefits of liraglutide on mortality and cardiovascular outcomes appear consistent in patients with T2DM at high cardiovascular risk, independent of LDL-c level, and persist even in the setting of very low baseline LDL-c levels and concomitant statin use. These data suggest that the potential antiatherosclerotic effects of the medication are complementary to its effect in reducing lipids.

No RCTs have been conducted to assess the cardiovascular benefits of the 3.0 mg dose in patients with obesity without T2DM. However, a post hoc analysis was performed using pooled data from 5,908 participants from the five RCTs of the SCALE program (liraglutide versus placebo or orlistat). In that study, liraglutide 3.0 mg was not associated with increased cardiovascular risk. Since wide confidence intervals were found, and two retrospective studies were included in the analysis, it is not possible to claim cardiovascular protection with the medication, only noninferiority compared with placebo regarding this outcome.
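The LDL-c strata in the post hoc analysis above, like the other lipid values in this document, are expressed in mg/dL. For readers who work in SI units, the conversion uses standard molar factors; the snippet below is illustrative only:

```python
# Standard molar conversion factors: mg/dL divided by these gives mmol/L.
CHOLESTEROL_MG_DL_PER_MMOL_L = 38.67    # total, LDL-c, HDL-c, non-HDL-c
TRIGLYCERIDES_MG_DL_PER_MMOL_L = 88.57  # included for reference; not used below

def cholesterol_mg_dl_to_mmol_l(value: float) -> float:
    return value / CHOLESTEROL_MG_DL_PER_MMOL_L

# LDL-c strata quoted in the LEADER post hoc analysis above.
for cutoff in (50, 70):
    print(f"{cutoff} mg/dL is about {cholesterol_mg_dl_to_mmol_l(cutoff):.1f} mmol/L")
# 50 mg/dL ~ 1.3 mmol/L; 70 mg/dL ~ 1.8 mmol/L
```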
4.1 Mechanism of action

Semaglutide is a long-acting GLP-1a that mimics the effects of native GLP-1. Like other GLP-1as, semaglutide has effects in various locations and multiple actions, including reduced caloric intake, increased satiety, and decreased hunger, leading to weight loss. In animal models, GLP-1as act both on the hypothalamus, stimulating anorexigenic pathways, and on the mesolimbic system, influencing the reward system. In a study including 72 adults with overweight or obesity and comparing semaglutide 2.4 mg versus placebo, ad libitum energy intake was 35% lower with semaglutide than placebo (1,736 versus 2,676 kJ, respectively; estimated treatment difference -940 kJ). Semaglutide reduced hunger and potential food intake, and increased fullness and satiety compared with placebo. The CoEQ scores indicated better dietary control and reduced food cravings with semaglutide compared with placebo (p < 0.05). These effects resulted in a 9.9% reduction in body weight with semaglutide and 0.4% with placebo.

Semaglutide is 89% bioavailable after subcutaneous injection. Peak concentrations occur 3 days after injection, and a steady state is reached by week 5 when injected once weekly. Similar exposure was achieved in three subcutaneous administration sites: abdomen, thigh, and upper arm. More than 99% of semaglutide binds to plasma albumin, providing protection against degradation and renal clearance. Semaglutide is modified through the substitution of alanine at position 8 to protect it from natural degradation by dipeptidyl peptidase 4 (DPP-4). The elimination half-life of semaglutide is approximately 1 week; therefore, it remains in circulation for approximately 5-7 weeks after the last dose. Semaglutide is eliminated in urine and feces. No dosage adjustments are required based on hepatic or renal function.

4.2 Dosage/usage instructions

Weight loss with semaglutide is dose-dependent, with higher doses resulting in greater weight loss. The package insert recommends an initial subcutaneous dose of 0.25 mg once weekly, with no relation to meal times. The dose should be titrated every 4 weeks, increasing to 0.5 mg, 1.0 mg, 1.7 mg, and 2.4 mg, which is the maximum effective dose for weight loss. In patients with poor tolerance to dose titration, it is recommended to consider a 4-week "delay" in dose titration, i.e., to maintain the maximum tolerated dose for 4 weeks longer before attempting a new dose increase. The goal should be toward the maximum tolerated dose, although some patients in clinical practice are "hyper-responders" and experience significant weight loss with lower doses.

4.3 Tolerability/side effects

The most common side effects of semaglutide occur in the gastrointestinal tract, as with other GLP-1as. The Semaglutide Treatment Effect in People with Obesity (STEP) studies were a pivotal phase 3 clinical trial series evaluating subcutaneous semaglutide 2.4 mg weekly for weight loss.
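The dose-escalation scheme described in section 4.2 above can be written out week by week. The sketch below simply restates the package-insert schedule (0.25 mg starting dose, increases every 4 weeks, 2.4 mg maintenance dose), with an optional hold that mirrors the suggested 4-week delay for patients who tolerate titration poorly; the function and variable names are illustrative and not taken from any label:

```python
DOSE_STEPS_MG = [0.25, 0.5, 1.0, 1.7, 2.4]  # once-weekly subcutaneous doses

def titration_schedule(weeks: int, delayed_steps: frozenset[int] = frozenset()) -> list[float]:
    """Planned semaglutide dose for each week of treatment.

    Each escalation step normally lasts 4 weeks; steps listed in `delayed_steps`
    (0-indexed) are held for 8 weeks instead, mirroring the suggested 4-week
    delay for poor tolerance. After the last step, the 2.4 mg maintenance dose
    is continued.
    """
    schedule: list[float] = []
    for step, dose in enumerate(DOSE_STEPS_MG):
        schedule.extend([dose] * (8 if step in delayed_steps else 4))
    while len(schedule) < weeks:
        schedule.append(DOSE_STEPS_MG[-1])
    return schedule[:weeks]


print(titration_schedule(20))                  # reaches 2.4 mg at week 17
print(titration_schedule(24, frozenset({2})))  # the 1.0 mg step held for 8 weeks
```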
Data from these studies served as the basis for discussions regarding the weight loss efficacy, safety profile, tolerability, and effects of semaglutide on cardiometabolic parameters. In STEP 1, nausea, diarrhea, vomiting, and constipation occurred in 74.2% of participants in the semaglutide group and 47.9% of those in the placebo group. As a rule, events were mild to moderate in severity and transient, and most resolved without permanent discontinuation of treatment. Gallbladder-related disorders (mainly cholelithiasis) were reported in 2.6% and 1.2% of participants in the semaglutide and placebo groups, respectively. Three participants in the semaglutide group had mild acute pancreatitis (two had gallstones); all participants made a full recovery. Serious adverse events were reported in 9.8% of patients in the semaglutide group and 6.4% of those in the placebo group and included mainly severe gastrointestinal and hepatobiliary events. One death was reported in each group, and neither was considered related to the receipt of semaglutide or placebo, as assessed by an independent external event adjudication committee. There was no difference between groups regarding the incidence of benign or malignant neoplasms, cardiovascular events, acute renal failure, or hypoglycemia. The other STEP studies had a similar pattern of side effects.

In a meta-analysis of four studies, including three studies from the STEP series and with a total of 3,613 patients, Tan and colleagues found that the risk of gastrointestinal adverse events was 1.59 times higher with semaglutide (RR = 1.59, 95% CI = 1.34-1.88). The risk of discontinuation due to adverse events was twice as high in the semaglutide group (RR = 2.19), and the risk of severe adverse events (SAEs), particularly biliary tract diseases (cholelithiasis and cholecystitis) and acute pancreatitis, was 1.6 times higher in the semaglutide group.

In a large database analysis conducted in the United States (PharMetrics Plus) with approximately 16 million patients, Sodhi and colleagues compared users of the GLP-1as liraglutide and semaglutide with those using the combination of naltrexone and bupropion (N/B). They found that GLP-1a use was associated with an increased risk of pancreatitis (HR = 9.09), intestinal obstruction (HR = 4.22), and gastroparesis (HR = 3.67), but not of biliary disease (HR = 1.50, nonsignificant), differing from findings in the STEP studies. Two aspects of the study must be highlighted: first, the confidence intervals were very wide, suggesting that the sample size was inadequate; second, the indiscriminate use of GLP-1as may have increased the risk of side effects, among other consequences.

4.4 Absolute contraindications

The use of semaglutide is contraindicated during pregnancy and in cases of hypersensitivity to semaglutide or any of its excipients.

4.5 Efficacy

4.5.1 Effects of semaglutide on body weight

The efficacy of semaglutide for weight loss was initially demonstrated in a phase 2 study, in which patients with overweight or obesity and without T2DM were divided into seven groups: five using daily subcutaneous semaglutide at different doses (0.05 mg, 0.1 mg, 0.2 mg, 0.3 mg, and 0.4 mg), one using liraglutide 3.0 mg, and one using placebo. At the end of the study, the mean weight loss in patients using semaglutide was 6.0% (0.05 mg), 8.6% (0.1 mg), 11.6% (0.2 mg), 11.2% (0.3 mg), and 13.8% (0.4 mg), showing a clear superiority of semaglutide over placebo (2.3%).
Starting at a daily dose of 0.2 mg, which is equivalent to 1.4 mg per week, weight loss with semaglutide was greater than that with liraglutide (7.8%).

In STEP 1, a total of 1,961 patients with overweight or obesity and without T2DM were evaluated and followed up for 68 weeks. All participants were instructed to follow a hypocaloric diet with a 500 kcal/day deficit and practice 150 minutes of physical activity per week. At the end of the study, participants in the semaglutide 2.4 mg group lost on average 16.9% of weight, while those in the placebo group lost 2.4%. The nadir was reached at week 60.

The STEP 2 study evaluated 1,210 patients with T2DM with a BMI > 27 kg/m² and HbA1c levels between 7.0% and 10%. The patients were divided into three groups: semaglutide 2.4 mg, semaglutide 1.0 mg, and placebo. At 68 weeks, semaglutide 2.4 mg led to greater weight loss than semaglutide 1.0 mg. Patients in the semaglutide 2.4 mg group lost an average of 9.6% of their body weight, compared with 7.0% in the semaglutide 1.0 mg group and 3.4% in the placebo group. A comparison of the results of the STEP 1 and STEP 2 studies showed that the participants with T2DM lost less weight than those without T2DM, replicating the findings of studies conducted with other medications.

The STEP 3 study evaluated 611 patients and had a design virtually identical to that of STEP 1, differing only in the degree of LSCs, which were more intensive. At the end of the 68-week period, the intervention group lost an average of 16% of body weight, while the placebo group lost 5.7%.

The STEP 4 study was designed to assess the effects of continuing versus interrupting semaglutide treatment in individuals with overweight or obesity. A total of 902 patients received semaglutide in escalating doses of up to 2.4 mg/week, with an average weight loss of 10%. At week 20, half of the group was randomized to continue on semaglutide while the other half was switched to placebo. At the end of the study, at week 68, the semaglutide group had an additional weight loss of approximately 7.9%, with an average weight loss of 17.4%, while the group that interrupted treatment had an average weight regain of 6.9%, with an average weight loss of 5.0%. The results of this study highlighted the importance of maintaining pharmacologic treatment in patients with obesity.

The STEP 5 study was designed to evaluate the long-term effects of subcutaneous semaglutide 2.4 mg once weekly compared with placebo, as an add-on to behavioral intervention, on body weight and cardiometabolic risk factors in adults with overweight or obesity. At follow-up week 104, the mean decrease in body weight was 15.2% in the semaglutide group and 2.6% in the placebo group, demonstrating the long-term efficacy of the treatment.

A widely used way of assessing weight loss is by evaluating weight loss categories, i.e., classifying weight loss into different categories, generally based on the percentage of weight loss. The categorical weight loss observed in the STEP series studies is highlighted in the accompanying table.

4.5.2 Effects of semaglutide on weight maintenance

See the description of the STEP 4 study above.

4.5.3 Effects of semaglutide on body composition

Modification of body composition is an increasingly valued parameter in studies with antiobesity drugs. The therapeutic target is quality weight loss, i.e., weight loss at the expense of fat mass with preservation or minimal loss of lean mass.
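Both the categorical weight-loss outcomes referred to above and the percentage changes reported throughout the STEP program are simple proportions of baseline weight. The helper below is purely illustrative; the example weights are hypothetical, and the thresholds are the ≥ 5%, ≥ 10%, and ≥ 15% response categories commonly reported:

```python
def percent_weight_loss(baseline_kg: float, final_kg: float) -> float:
    """Weight loss expressed as a percentage of baseline body weight."""
    return (baseline_kg - final_kg) / baseline_kg * 100.0


def categories_met(loss_pct: float, thresholds=(5, 10, 15)) -> list[str]:
    """Categorical weight-loss responses, as commonly reported in the STEP trials."""
    return [f">={t}%" for t in thresholds if loss_pct >= t]


loss = percent_weight_loss(baseline_kg=110.0, final_kg=92.0)  # hypothetical patient
print(f"{loss:.1f}% loss; categories met: {categories_met(loss)}")
# 16.4% loss; categories met: ['>=5%', '>=10%', '>=15%']
```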
The effects of semaglutide on body composition were investigated in the STEP 1 study, where a subgroup of 140 participants underwent body composition analysis using DXA. Despite a decrease in lean mass in absolute terms (-5.26 kg in the semaglutide group versus -1.83 kg in the placebo group; difference -3.43 kg), there was a predominant reduction in fat mass (-8.36 kg in the semaglutide group versus -1.37 kg in the placebo group; difference -6.99 kg), which resulted in a decreased percentage of body fat at the end of the study.

4.5.4 Effects of semaglutide in patients with prediabetes/glucose intolerance

In a post hoc analysis of the STEP 1, 3, and 4 studies, including approximately 3,375 patients with overweight or obesity and prediabetes, the intervention group (semaglutide 2.4 mg) experienced improvement in all glycemic parameters after 68 weeks of treatment, with reductions in the risk of progression from prediabetes to T2DM between 84% and 89%, demonstrating the therapeutic potential of the drug. The STEP 10 study evaluated the effects of semaglutide 2.4 mg on reversing prediabetes to normoglycemia in patients with obesity. A total of 207 patients were randomized, including 138 to the semaglutide group and 69 to the placebo group. At 52 weeks, 81.1% of patients treated with semaglutide showed blood glucose normalization compared with 14.1% of those treated with placebo (OR = 19.8; p < 0.0001). Regarding HbA1c, the average level at baseline was 5.9%, and at week 52 the level was 0.5% lower in the semaglutide group compared with the placebo group.

4.5.5 Effects of semaglutide in patients with type 2 diabetes mellitus

The efficacy of semaglutide for glycemic control was well demonstrated in the SUSTAIN series of studies, where semaglutide was administered subcutaneously at a dose of 1.0 mg/week, and in the PIONEER series of studies, where it was administered orally at a dose of up to 14 mg/day. These two development programs included only patients with T2DM and will not be discussed in this document. In the STEP 2 study, the 2.4 mg dose was tested in overweight patients with T2DM. At the end of the 68-week follow-up period, the HbA1c level decreased by 1.6%, which was not significantly different from the 1.5% decrease with the 1.0 mg dose. In a meta-analysis assessing changes in cardiometabolic parameters, semaglutide treatment of patients with overweight or obesity without T2DM resulted in a 7.5% reduction in fasting blood glucose.

4.5.6 Effects of semaglutide on lipid profile

A meta-analysis evaluating changes in cardiometabolic parameters in patients with overweight or obesity and without T2DM found that semaglutide reduced serum levels of LDL-c by 6%, triglycerides by 18%, and non-HDL-c by 8%, but did not significantly change the HDL-c level.

4.5.7 Effects of semaglutide on blood pressure and heart rate

In a meta-analysis including 4,744 patients, semaglutide resulted in mean decreases of 4.83 mmHg in SBP and 2.45 mmHg in DBP among patients with obesity without T2DM. All GLP-1as increase heart rate, and this applies to semaglutide as well. Semaglutide leads to an average heart rate increase of 2-5 bpm. However, this effect appears to be caused by direct stimulation of the sinus node rather than reflex tachycardia due to stimulation of the autonomic nervous system and is not associated with an increased risk of adverse cardiac events.

4.5.8 Effects of semaglutide on polycystic ovary syndrome
Jensterle and colleagues randomized 25 women with obesity and PCOS (mean age 33.7 ± 5.3 years, BMI 36.1 ± 3.9 kg/m²) to receive semaglutide 1.0 mg or placebo for 16 weeks. The authors assessed the participants' tongues with regard to volume, fat tissue, and fat proportion using magnetic resonance imaging. Tongue fat tissue and fat proportion decreased significantly with semaglutide versus placebo (-1.94 ± 5.51 cm³ versus 3.12 ± 4.87 cm³ and 0.02 ± 0.07 cm³ versus 0.04 ± 0.06 cm³, respectively). Correlation analysis revealed that these reductions were associated with those in body weight, BMI, and waist circumference. This was the first study confirming the beneficial effect of semaglutide among women with obesity and PCOS.

Recommendations on PCOS were recently published by a global task force (Recommendations from the 2023 International Evidence-based Guideline for the Assessment and Management of Polycystic Ovary Syndrome). In the absence of adequate evidence, the consensus recommendations were prepared by the committee in collaboration with consumer organizations. Recommendation 4.5.1 states that "antiobesity medications, including liraglutide, semaglutide, both glucagon-like peptide-1 (GLP-1) receptor agonists and orlistat, could be considered, in addition to active lifestyle intervention, for the management of higher weight in adults with PCOS as per general population guidelines".

4.5.9 Effects of semaglutide on obstructive sleep apnea syndrome

No studies specifically on semaglutide and OSAS are currently available.

4.5.10 Effects of semaglutide in patients with male hypogonadism

No studies specifically on semaglutide and male hypogonadism are currently available.

4.5.11 Effects of semaglutide on metabolic dysfunction-associated steatotic liver disease

A phase 2 RCT included 320 patients with biopsy-confirmed NASH and liver fibrosis who were randomized to receive subcutaneous semaglutide at daily doses of 0.1 mg, 0.2 mg, or 0.4 mg or placebo for 72 weeks. The primary endpoint of NASH resolution without worsening fibrosis was achieved by 40%, 36%, and 59% of participants in the semaglutide 0.1 mg, 0.2 mg, and 0.4 mg groups, respectively, compared with 17% of those in the placebo group (p < 0.001). However, no difference between the groups was observed regarding improvement in fibrosis stage. In conclusion, semaglutide treatment of patients with NASH and fibrosis led to a significantly higher number of patients experiencing resolution of NASH compared with placebo treatment, with no difference in improvement in fibrosis stage.

Another phase 2, double-blind, placebo-controlled study included 71 patients with biopsy-confirmed NASH-related cirrhosis and BMI ≥ 27 kg/m². In all, 49 (69%) patients were female. The patients had a mean age of 59.5 years and a mean BMI of 34.9 kg/m²; 53 (75%) patients had T2DM. In total, 47 patients were randomized to the semaglutide group and 24 patients to the placebo group. After 48 weeks, there was no significant difference between the two groups regarding the proportion of patients with improvement in liver fibrosis of one stage or more without worsening of MASLD (5 [11%] of 47 patients in the semaglutide group versus 7 [29%] of 24 patients in the placebo group; HR = 0.28; p = 0.087). There was also no significant difference between groups in the proportion of patients achieving NASH resolution (p = 0.29).
Similar proportions of patients in each group reported adverse events (42 [89%] patients in the semaglutide group versus 19 [79%] patients in the placebo group) and SAEs (6 [13%] patients versus 2 [8%] patients, respectively). The most frequent adverse events were nausea (21 [45%] versus 4 [17%]), diarrhea (9 [19%] versus 2 [8%]), and vomiting (8 [17%] versus none). Liver and kidney functions remained stable. There were no events of hepatic decompensation or deaths. In conclusion, semaglutide did not significantly improve fibrosis or lead to NASH resolution compared with placebo among patients with NASH and compensated cirrhosis. An ongoing phase 3 study of semaglutide in individuals with MASLD/metabolic dysfunction-associated steatohepatitis (MASH) is scheduled to be completed in 2028.

4.5.12 Effects of semaglutide on quality of life

The quality of life of patients participating in clinical studies can be assessed using quality of life scores. In studies with semaglutide, the questionnaires used for this purpose were the SF-36 and IWQOL-Lite-CT. In the STEP studies, a significant improvement in quality of life was observed among patients using semaglutide when compared with placebo.

4.5.13 Effects of semaglutide on osteoarticular diseases

The STEP 9 RCT included individuals with obesity and a clinical diagnosis of knee osteoarthritis with radiological findings and pain (Western Ontario and McMaster Universities Osteoarthritis Index [WOMAC] pain subscale score ≥ 40). The individuals were randomized to semaglutide 2.4 mg (n = 271) or placebo (n = 136) and were followed for 68 weeks. In addition to weight loss, patients randomized to semaglutide experienced significantly greater reduction in the pain scale (-41.7 points) compared with those randomized to placebo (-25.5 points; difference -14.1 points; p < 0.001), along with improvement in the subscale assessing physical function and reduced use of analgesics.

4.5.14 Effects of semaglutide in patients with chronic kidney disease

In a real-world study of 122 patients with obesity and T2DM, treatment with semaglutide resulted in weight loss, reduced blood glucose levels, and a 50% decrease in albuminuria, with no impact on eGFR. The treatment withdrawal rate due to side effects was 5.9%, which is similar to that observed in studies carried out with patients without CKD. In a post hoc analysis of the STEP 1, 3, and 4 studies, the use of semaglutide also decreased albuminuria in patients with overweight or obesity and without diabetes, with no effects on eGFR.

A prespecified analysis of the SELECT study (described in item 4.5.15, "Effects of semaglutide on cardiovascular risk protection") evaluated the effects of semaglutide on renal outcomes. The outcomes assessed included death from renal causes, initiation of dialysis therapy or renal transplantation, development of eGFR < 15 mL/min/1.73 m², persistent reduction of over 50% in eGFR compared with baseline, and development of persistent macroalbuminuria. Patients randomized to semaglutide had a 22% reduction in this composite outcome (HR = 0.78; p = 0.02), with the endpoints determined primarily by persistent ≥ 50% reduction of eGFR and the onset of macroalbuminuria. Treatment with semaglutide also led to a smaller absolute reduction in eGFR compared with placebo (-0.86 mL/min/1.73 m² versus -1.61 mL/min/1.73 m², respectively) after 104 weeks and had an effect on reducing albuminuria.
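The composite renal endpoint in the SELECT analysis above combines several distinct events. The sketch below restates those criteria as given in the text; the field and function names are illustrative rather than taken from the trial protocol, and the requirement that eGFR changes be persistent (confirmed) is not modeled:

```python
from dataclasses import dataclass

@dataclass
class RenalFollowUp:
    renal_death: bool
    dialysis_or_transplant: bool
    egfr: float                    # current eGFR, mL/min/1.73 m2
    baseline_egfr: float           # eGFR at randomization, mL/min/1.73 m2
    persistent_macroalbuminuria: bool

def meets_composite_renal_endpoint(f: RenalFollowUp) -> bool:
    """Composite renal endpoint as described for the SELECT analysis above."""
    egfr_below_15 = f.egfr < 15.0
    drop_over_50_percent = f.egfr < 0.5 * f.baseline_egfr
    return (f.renal_death or f.dialysis_or_transplant or egfr_below_15
            or drop_over_50_percent or f.persistent_macroalbuminuria)


example = RenalFollowUp(renal_death=False, dialysis_or_transplant=False,
                        egfr=38.0, baseline_egfr=80.0,
                        persistent_macroalbuminuria=False)
print(meets_composite_renal_endpoint(example))  # True: eGFR fell by more than 50%
```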
Finally, the results of the FLOW study, which evaluated the effects of semaglutide 1.0 mg in patients with T2DM and CKD, have been published. The outcomes were similar to those previously described in the SELECT study, with cardiovascular death also included as a primary outcome. The study was interrupted prematurely due to efficacy, with the semaglutide 1.0 mg group demonstrating a 24% reduction in the primary outcome .

4.5.15 Effects of semaglutide on cardiovascular risk protection

The cardiovascular safety of semaglutide was investigated in patients with T2DM and high cardiovascular risk in the SUSTAIN-6 study, with 3,297 patients randomized to weekly subcutaneous semaglutide 0.5 mg or 1.0 mg or placebo, for 104 weeks. The primary composite outcome of cardiovascular death, nonfatal AMI, or nonfatal stroke occurred in 108 of 1,648 patients (6.6%) in the semaglutide group and in 146 of 1,649 patients (8.9%) in the placebo group (hazard ratio [HR] = 0.74; 95% CI, 0.58-0.95; p < 0.001 for noninferiority) .

The results of the SELECT study, the first study to demonstrate the cardiovascular benefit of a medication in individuals with obesity without diabetes, were published in 2023. In this multicenter, double-blind RCT designed to assess superiority, more than 17,000 patients with BMI ≥ 27 kg/m² and CVD were randomized to receive weekly 2.4 mg of subcutaneous semaglutide or placebo. During a median follow-up of 39.8 months, a primary event (cardiovascular death, nonfatal AMI, or nonfatal stroke) occurred in 569 of 8,803 patients (6.5%) in the semaglutide group and in 701 of 8,801 patients (8.0%) in the placebo group (HR = 0.80; 95% CI, 0.72-0.90; p < 0.001). The study concluded that semaglutide 2.4 mg was superior to placebo, leading to a 20% reduction in the incidence of cardiovascular events in patients with overweight or obesity and established CVD .

Another landmark study was the STEP-HFpEF, the first study evaluating the effects of a GLP-1a in patients with HF with preserved ejection fraction (HFpEF). The RCT evaluated the impact of 52 weeks of treatment with semaglutide 2.4 mg in 529 patients with HFpEF and obesity. The primary outcomes were symptom improvement (assessed using the Kansas City Cardiomyopathy Questionnaire Clinical Summary Score [KCCQ-CSS]) and body weight reduction. Secondary outcomes included changes in the 6-minute walk distance and reductions in high-sensitivity C-reactive protein (CRP), among others. Patients randomized to semaglutide had a significantly greater improvement in the KCCQ-CSS relative to placebo (+16.6 points versus +8.7 points, respectively), significant body weight loss (-13.3% versus -2.6%, respectively), and improvement in secondary outcomes (including a reduction in high-sensitivity CRP). In patients with obesity and HFpEF, treatment with semaglutide led to improvement in symptoms, physical limitations, and exercise capacity .
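For context, an illustrative calculation can be made from the crude event rates reported above (an approximation only, since both trials analyzed time-to-event data and these figures are not reported by the original publications): in SUSTAIN-6, the absolute risk reduction was 8.9% - 6.6% = 2.3 percentage points, corresponding to a number needed to treat (NNT) of approximately 1/0.023 ≈ 43 patients over 104 weeks to prevent one primary event; in SELECT, the absolute risk reduction was 8.0% - 6.5% = 1.5 percentage points, corresponding to an NNT of approximately 1/0.015 ≈ 67 patients over a median follow-up of 39.8 months.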
5.1 Mechanism of action

Bupropion is a dopamine and norepinephrine reuptake inhibitor recommended for the treatment of depression and smoking cessation. It has an anorectic effect related to the stimulation of pro-opiomelanocortin (POMC) neurons located in the arcuate nucleus (ARC) of the hypothalamus. These neurons release alpha-melanocyte-stimulating hormone (α-MSH), which acts on the melanocortin-4 receptor (MC4R), decreasing food intake and increasing energy expenditure. Despite the demonstration of this effect in animal models, clinical studies have shown only a modest weight-reducing effect of bupropion monotherapy, which did not meet the criteria for approval as monotherapy for obesity .

Naltrexone is an opioid receptor antagonist used primarily in the treatment of alcohol and opioid dependence. It is metabolized by the hepatic enzyme dihydrodiol dehydrogenase into its active metabolite 6β-naltrexol. Both naltrexone and 6β-naltrexol are competitive antagonists at μ- and κ-opioid receptors in the central nervous system (CNS). In POMC neurons, β-endorphin release exerts negative feedback by binding to μ-opioid receptors on the POMC neuron itself, decreasing α-MSH release. Although studies with naloxone (another opioid antagonist) have shown reduced food intake in rats, studies with naltrexone have been disappointing, as it led to minimal or no weight loss as monotherapy .

Combining bupropion with an opioid receptor antagonist to block this autoinhibitory feedback in POMC neurons of the ARC emerged as a strategy to enhance the anorectic effect of bupropion. This led to the development of the fixed-dose combination of naltrexone 8 mg and bupropion 90 mg (Contrave®), which has a synergistic effect .

5.2 Dosage/usage instructions

The dosage of the N/B combination should be titrated weekly.
The starting dose is one tablet in the morning for 7 days, followed by a progressive increase to one tablet every 12 hours in the second week, two tablets in the morning and one tablet at night in the third week, and two tablets every 12 hours from the fourth week onward. Tablets should not be broken, chewed, or crushed, and a total daily dose exceeding 32 mg/360 mg (naltrexone/bupropion) is not recommended. The tablets can be administered with meals but should not be taken with high-fat meals, which significantly increase systemic exposure to bupropion and naltrexone .

5.3 Tolerability and side effects

The most common adverse events, each affecting more than 4% of the individuals who used this medication, were nausea (32.5%), constipation (19.2%), vomiting (17.6%), and headache (10.7%), as well as dizziness, insomnia, xerostomia, diarrhea, anxiety, hot flushes, fatigue, and tremor . A meta-analysis evaluating study discontinuation due to adverse effects of antiobesity agents included four studies assessing the N/B combination. Of 2,044 participants in the N/B group, 501 discontinued because of adverse events, compared with 175 of 1,319 in the placebo group (OR = 2.6) . describes the SAEs and discontinuation rates observed in the phase 3 studies. The most frequent adverse reactions leading to discontinuation were nausea (6.3%), headache (1.7%), and vomiting (1.1%).

5.4 Absolute contraindications

The combination of N/B is contraindicated in the following clinical conditions: uncontrolled hypertension; epilepsy or history of seizures; severe hepatic impairment; grade 5 CKD; presence of CNS tumor; history of bipolar disorder, bulimia, or anorexia nervosa (increased risk of seizures); chronic use of opioid or opiate agonists or partial agonists, or acute withdrawal of opiates; abrupt discontinuation of alcohol, benzodiazepines, barbiturates, or antiepileptic drugs; concomitant administration of monoamine oxidase inhibitors (MAOIs; at least 14 days must elapse between MAOI discontinuation and treatment initiation); and known allergy to bupropion or naltrexone .

The N/B combination should be suspended 24-72 hours before small- and medium-size surgeries and 72 hours before major surgeries or procedures requiring intensive pain management with opioids, in order to eliminate the antagonistic effect of the medication on opioid analgesia, while bupropion should be continued. It is recommended to reintroduce N/B 7 days after cessation of opioids in the postoperative period.

5.5 Efficacy

5.5.1 Efficacy of bupropion/naltrexone on body weight

The weight loss and categorical weight loss percentages of 5% and 10% found in the main studies are summarized in . The clinical development program for the N/B combination was named Contrave Obesity Research (COR) and involved two phase 2 studies and four phase 3 studies: COR-I , COR-II , COR-BMOD (Behavior Modification) , and COR-Diabetes .

5.5.2 Effects on body composition

In a 24-week phase 2 study comparing placebo, naltrexone monotherapy, bupropion monotherapy, and one of three N/B dose combinations for efficacy and safety, a subgroup underwent body composition analysis using DXA and computed tomography. Eighty participants completed this subgroup analysis. The N/B combination resulted in weight loss and a greater reduction in body fat (-14.0 ± 1.3%) than placebo (-4.0 ± 2.0%), naltrexone monotherapy (-3.2 ± 2.5%), and bupropion monotherapy (-4.1 ± 2.9%; all p < 0.01).
The reduction in visceral adipose tissue mass was also greater with N/B (-15.0 ± 1.8%) than with placebo (-4.6 ± 2.7%), naltrexone monotherapy (-0.1 ± 3.5%), and bupropion monotherapy (-2.3 ± 4.2%; all p < 0.01). The reductions in body fat and visceral adipose tissue mass with N/B were proportional to weight loss, and weight loss with N/B was not associated with a greater relative reduction in lean mass than placebo or the monotherapies .

5.5.3 Effects on glycemic control in patients with type 2 diabetes mellitus

The COR-Diabetes study evaluated patients with T2DM who did not achieve the glycemic goal of HbA1c level < 7% with oral antidiabetic agents or with diet and exercise alone. In the combined population of the four COR phase 3 studies, 24% of participants had hypertension and 54% had dyslipidemia at baseline . presents the main results of the COR-Diabetes study.

5.5.4 Effects of bupropion/naltrexone in patients with prediabetes/glucose intolerance

The effects of the N/B combination in patients with prediabetes/glucose intolerance have not been evaluated.

5.5.5 Effects of bupropion/naltrexone on lipid metabolism

In the COR-Diabetes study, which evaluated patients with T2DM outside the HbA1c target, 54% had dyslipidemia at baseline. Participants treated with N/B had a mean reduction of 11.2% in triglycerides (versus a reduction of 0.8% in the placebo group) and an increase of 3.0 ± 0.5 mg/dL in HDL-c (versus a reduction of 0.3 ± 0.6 mg/dL in the placebo group), with no significant effect on LDL-c. The magnitude of these variations in the COR-I and COR-BMOD studies was similar.

5.5.6 Effects of bupropion/naltrexone on blood pressure and heart rate

The N/B combination may elevate SBP and/or DBP values and increase resting heart rate. Both BP and pulse should be measured prior to therapy initiation with the N/B combination and monitored at regular intervals consistent with usual clinical practice, particularly in patients with controlled hypertension prior to treatment. The N/B combination should not be administered to patients with uncontrolled hypertension. Among patients treated with the N/B combination in placebo-controlled clinical studies, mean SBP and DBP values were approximately 1 mmHg above baseline at weeks 4 and 8, similar to baseline at week 12, and approximately 1 mmHg below baseline between weeks 24 and 56. In contrast, among patients treated with placebo, the mean BP value was approximately 2-3 mmHg below baseline across the same time points, yielding statistically significant differences between groups at all assessments during this period. The largest mean differences between the groups were observed in the first 12 weeks (treatment difference +1.8 to +2.4 mmHg for SBP; +1.7 to +2.1 mmHg for DBP) .

5.5.7 Effects on obstructive sleep apnea syndrome

The effects of the N/B combination in patients with OSAS have not been evaluated.

5.5.8 Effects of bupropion/naltrexone in patients with polycystic ovary syndrome

The effects of the N/B combination in women with PCOS have not been evaluated.

5.5.9 Effects of bupropion/naltrexone in patients with male hypogonadism

The effects of the N/B combination in men with hypogonadism have not been evaluated.

5.5.10 Effects of bupropion/naltrexone on metabolic dysfunction-associated steatotic liver disease

There are limited data on the effects of the N/B combination in patients with MASLD.
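For reference, the fibrosis-4 (FIB-4) index referred to in the analysis below is a noninvasive score commonly calculated as FIB-4 = (age [years] × AST [U/L]) / (platelet count [10⁹/L] × √ALT [U/L]); higher values indicate a greater likelihood of advanced fibrosis.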
In a post hoc analysis of four RCTs, the N/B combination for 1 year resulted in an improvement in the fibrosis-4 index (FIB-4; a noninvasive index of liver fibrosis) independent of potential confounders, including weight change. The effect of the N/B intervention was independently associated with a decrease in ALT .

5.5.11 Effects of bupropion/naltrexone on quality of life

The N/B combination was evaluated in a multicenter, randomized, controlled, open-label study examining weight-related quality of life, control over eating behavior, and sexual function after 26 weeks of treatment plus a comprehensive LSC program (N/B + LSC, n = 153) or usual care (UC, n = 89), which included minimal lifestyle intervention. Participants in the N/B + LSC group and UC group lost, respectively, 9.46% and 0.94% of their initial body weight at week 26 (p < 0.0001). The participants in the N/B + LSC group had greater improvement in the total IWQOL score compared with those in the UC group (p < 0.0001). Among participants with moderate/severe scores on the binge eating scale, 91% of N/B + LSC participants versus 18% of UC participants experienced improvement. Among participants with sexual dysfunction defined by the Arizona Sexual Experiences Scale, 58% of N/B + LSC participants and 19% of UC participants no longer met the criteria for dysfunction at week 26 .

5.5.12 Effects of bupropion/naltrexone on osteoarticular diseases

The effects of the N/B combination in patients with osteoarthritis or other osteoarticular diseases have not been evaluated.

5.5.13 Effects of bupropion/naltrexone in patients with renal disease

The effects of the N/B combination in patients with renal disease have not been evaluated.

5.5.14 Effects of bupropion/naltrexone on cardiovascular diseases

The LIGHT study was designed to determine the cardiovascular safety of N/B compared with placebo in patients with overweight or obesity. The trial enrolled 8,910 patients with overweight or obesity who had increased cardiovascular risk, but after public disclosure by the sponsor of confidential interim data during the trial, the study's academic leadership recommended termination of the trial, and the sponsor agreed. Male participants were older than 45 years and female participants older than 50 years, and the mean age was 61.0 ± 7.3 years. In the 25% interim analysis, cardiovascular outcomes had occurred in 59 patients treated with placebo (1.3%) and in 35 patients treated with N/B (0.8%; HR = 0.59). After 50% of planned events, cardiovascular outcomes had occurred in 102 patients (2.3%) in the placebo group and in 90 patients (2.0%) in the N/B group .
The clinical development program for the N/B combination was named Contrave Obesity Research (COR) and involved two phase 2 studies and four phase 3 studies: COR-I , COR-II , COR-BMOD (Behavior Modification) , and COR-Diabetes .

5.5.2 Effects on body composition

In a 24-week phase 2 study comparing placebo, naltrexone monotherapy, bupropion monotherapy, and one of three N/B dose combinations for efficacy and safety, a subgroup underwent body composition analysis using DXA and computed tomography. Eighty participants completed this subgroup analysis. The N/B combination resulted in weight loss and greater reduction in body fat (-14.0 ± 1.3%) than placebo (-4.0 ± 2.0%), naltrexone monotherapy (-3.2 ± 2.5%), and bupropion monotherapy (-4.1 ± 2.9%; all p < 0.01). The reduction in visceral adipose tissue mass was also greater with N/B (-15.0 ± 1.8%) than with placebo (-4.6 ± 2.7%), naltrexone monotherapy (-0.1 ± 3.5%), and bupropion monotherapy (-2.3 ± 4.2%; all p < 0.01). The reductions in body fat and visceral adipose tissue mass with N/B were proportional to weight loss, and weight loss with N/B was not associated with a greater relative reduction in lean mass than placebo or monotherapies .

5.5.3 Effects on glycemic control in patients with type 2 diabetes mellitus

The COR-Diabetes study evaluated patients with T2DM who did not achieve the glycemic goal of HbA1c level < 7% with oral antidiabetic agents or with diet and exercise alone. In the entire population of these four studies, 24% of participants had hypertension and 54% had dyslipidemia at baseline . presents the main results of the COR-Diabetes study.

5.5.4 Effects of bupropion/naltrexone in patients with prediabetes/glucose intolerance

The effects of the N/B combination in patients with prediabetes/glucose intolerance have not been evaluated.

5.5.5 Effects of bupropion/naltrexone on lipid metabolism

In the COR-Diabetes study, which evaluated patients with T2DM outside the HbA1c target, 54% had dyslipidemia at baseline. Compared with placebo, participants treated with N/B had a mean reduction of 11.2% in triglycerides (versus a reduction of 0.8% in the placebo group) and an increase of 3.0 ± 0.5 mg/dL in HDL-c (versus a reduction of 0.3 ± 0.6 mg/dL in the placebo group), with no significant effect on LDL-c. The magnitude of these variations in the COR-I and COR-BMOD studies was similar.

5.5.6 Effects of bupropion/naltrexone on blood pressure and heart rate

The N/B combination may elevate SBP and/or DBP values and increase resting heart rate. Both BP and pulse should be measured prior to therapy initiation with the N/B combination and monitored at regular intervals consistent with usual clinical practice, particularly in patients with controlled hypertension prior to treatment. The N/B combination should not be administered to patients with uncontrolled hypertension. Among patients treated with the N/B combination in placebo-controlled clinical studies, mean SBP and DBP values were approximately 1 mmHg above those at baseline at weeks 4 and 8, similar to those at baseline at week 12, and approximately 1 mmHg below those at baseline between weeks 24 and 56. In contrast, among patients treated with placebo, the mean BP value was approximately 2-3 mmHg below the baseline value across the same time points, yielding statistically significant differences between groups at all assessments during this period.
The largest mean differences between the groups were observed in the first 12 weeks (treatment difference +1.8 to +2.4 mmHg for SBP; +1.7 to +2.1 mmHg for DBP) .

5.5.7 Effects on obstructive sleep apnea syndrome

The effects of the N/B combination in patients with OSAS have not been evaluated.

5.5.8 Effects of bupropion/naltrexone in patients with polycystic ovary syndrome

The effects of the N/B combination in women with PCOS have not been evaluated.

5.5.9 Effects of bupropion/naltrexone in patients with male hypogonadism

The effects of the N/B combination in men with hypogonadism have not been evaluated.

5.5.10 Effects of bupropion/naltrexone on metabolic dysfunction-associated steatotic liver disease

There are limited data on the effects of the N/B combination in patients with MASLD. In a post hoc analysis of four RCTs, the N/B combination for 1 year resulted in an improvement in the fibrosis-4 index (FIB-4; a noninvasive index of liver fibrosis, whose standard formula is shown at the end of this section) independent of potential confounders, including weight change. The effect of the N/B intervention was independently associated with a decrease in ALT .

5.5.11 Effects of bupropion/naltrexone on quality of life

The N/B combination was evaluated in a multicenter, randomized, controlled, open-label study examining weight-related quality of life, control over eating behavior, and sexual function after 26 weeks of treatment plus a comprehensive LSC program (N/B + LSC, n = 153) or usual care (UC, n = 89), which included minimal lifestyle intervention. Participants in the N/B + LSC group and UC group lost, respectively, 9.46% and 0.94% of their initial body weight at week 26 (p < 0.0001). The participants in the N/B + LSC group had greater improvement in the total IWQOL score compared with those in the UC group (p < 0.0001). Among participants with moderate/severe scores on the binge eating scale, 91% of N/B + LSC participants versus 18% of UC participants experienced improvement. In participants with sexual dysfunction defined by the Arizona Sexual Experiences Scale, 58% of N/B + LSC participants and 19% of UC participants no longer met the criteria for dysfunction at week 26 .

5.5.12 Effects of bupropion/naltrexone on osteoarticular diseases

The effects of the N/B combination in patients with osteoarthritis or other osteoarticular diseases have not been evaluated.

5.5.13 Effects of bupropion/naltrexone in patients with renal disease

The effects of the N/B combination in patients with renal disease have not been evaluated.

5.5.14 Effects of bupropion/naltrexone on cardiovascular diseases

The LIGHT study was designed to determine the cardiovascular safety of N/B compared with placebo in patients with overweight or obesity. The trial enrolled 8,910 patients with overweight or obesity who had increased cardiovascular risk, but after public disclosure by the sponsor of confidential interim data during the trial, the study’s academic leadership recommended termination of the trial, which was agreed to by the sponsor. Male participants were older than 45 years, female participants were older than 50 years, and the mean age was 61.0 ± 7.3 years. In the 25% interim analysis, cardiovascular outcomes occurred in 59 patients treated with placebo (1.3%) and in 35 patients treated with N/B (0.8%; HR = 0.59). After 50% of planned events, cardiovascular outcomes occurred in 102 patients (2.3%) in the placebo group and in 90 patients (2.0%) in the N/B group .
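For reference, the FIB-4 index used in the MASLD analysis of section 5.5.10 is a widely used noninvasive fibrosis score derived from age, aminotransferase levels, and platelet count. The standard formula, reproduced here for convenience (it is not reported in the cited post hoc analysis), is:

FIB-4 = [age (years) × AST (U/L)] / [platelet count (10⁹/L) × √ALT (U/L)]

Higher values indicate a greater likelihood of advanced fibrosis, so a decrease in FIB-4 is interpreted as a favorable change.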
6 Tirzepatide

At the time this document was prepared, tirzepatide was only approved in Brazil for the treatment of patients with T2DM (September 2023). However, it has already been approved in Europe (April 2024) and in the United States (November 2023) for the treatment of obesity. The application for approval has already been submitted to Anvisa, and the authors of the present document believe that approval in Brazil should be obtained in the near future. Until approval is granted, the use of tirzepatide for the treatment of obesity in Brazil will be considered off-label.

6.1 Mechanism of action

Tirzepatide, the first medication in the incretin class with a dual mechanism of action, is a synthetic peptide with dual agonist action on GLP-1 and glucose-dependent insulinotropic polypeptide (GIP) receptors. Notably, GIP is a peptide secreted by K cells in the duodenum and jejunum in response to nutrient intake. It regulates energy balance through cell surface receptor signaling in CNS cells and adipose tissue .
Tirzepatide was engineered from the native GIP sequence, and preclinical data show a proportionally higher affinity for GIP receptors than for GLP-1 receptors (1:5). GIP receptor activation appears to act synergistically with GLP-1 receptor activation to yield a greater weight reduction in mice than that achieved with GLP-1 receptor monoagonism . The exact molecular mechanisms involved in the therapeutic effects of tirzepatide on glycemic control and body weight are not yet fully understood. One hypothesis is that GLP-1 activity reduces glucose levels, facilitating the effects of GIP on resensitized beta cells. Tirzepatide also appears to act as a more potent coagonist compared with GLP-1, with little β-arrestin recruitment and receptor internalization, which could explain its superior activity in target cells .

6.2 Dosage/usage instructions

The initial dose of tirzepatide to begin titration is 2.5 mg applied subcutaneously once weekly. After 4 weeks, the dose should be increased to 5 mg. Subsequent increases of 2.5 mg can be made at intervals of at least 4 weeks, up to a maximum once-weekly dose of 15 mg. Based on the pharmacokinetics of tirzepatide, no dose adjustment is recommended based on age, gender, or body weight or in patients with hepatic or renal impairment (including those with ESRD) .

6.3 Tolerability/side effects

In the SURMOUNT-1 study, the most common adverse events were gastrointestinal in nature. Nausea was the most frequent side effect, observed in 24.6%-31% of patients, mainly during the dose titration period. Other reported effects were diarrhea and constipation (23% and 11.7%, respectively), all with mild-to-moderate severity, causing treatment discontinuation in a maximum of 7.1% of patients . Tirzepatide, at doses of 5 to 15 mg, was well tolerated during the SURPASS program: SAEs were reported in 1%-8% of participants with diabetes (SURPASS - ) and in 6%-17% of participants with more advanced diabetes (SURPASS - ); these SAE rates are similar to those reported in placebo and active comparator groups. The incidence of gastrointestinal adverse events was similar between tirzepatide, semaglutide, and dulaglutide. Most adverse events were mild to moderate and dose-dependent, occurred mainly during dose escalation, and subsided thereafter.

6.4 Absolute contraindications

The use of tirzepatide is contraindicated during pregnancy and in patients with a personal history of chronic pancreatitis or a personal or family history of medullary thyroid cancer or multiple endocrine neoplasia 2A or 2B.

6.5 Efficacy

6.5.1 Efficacy of tirzepatide on body weight

The phase 3 SURMOUNT-1 RCT compared the response to weekly tirzepatide at doses of 5 mg, 10 mg, or 15 mg versus placebo in 2,539 adults with obesity or with BMI > 27 kg/m² associated with at least one weight-related complication, excluding diabetes. The follow-up duration was 72 weeks, including the 20-week dose-escalation period . In this study, the average initial weight was 104.8 kg, and BMI was 38 kg/m². The mean reduction in body weight observed at week 72 with tirzepatide was 16.0% (16.8%-15.2%) with the 5 mg dose, 21.4% (22.2%-20.6%; equivalent to a 22.2 kg body weight reduction) with the 10 mg dose, and 20.9% (21.9%-19.9%, or 23.6 kg) with the 15 mg dose.

The SURMOUNT-2 RCT evaluated treatment with subcutaneous tirzepatide ( mg or mg) once weekly or placebo for 72 weeks in 1,514 adults with obesity and T2DM. The primary outcomes were the percent change in body weight from baseline and body weight reduction of 5% or more.
At baseline, the mean body weight was 100.7 kg (standard deviation ± 21.1 kg), BMI was 36.1 kg/m² (±6.6 kg/m²), and HbA1c level was 8.02% (±0.89%). The mean changes in body weight at week 72 with tirzepatide 10 mg and 15 mg were -12.8% (±0.6%) and -14.7% (±0.5%), respectively, and -3.2% (±0.5%) with placebo, resulting in estimated treatment differences versus placebo of -9.6% (95% CI = -11.1 to -8.1%) with tirzepatide 10 mg and -11.6% (95% CI = -13.0 to -10.1%) with tirzepatide 15 mg (all p < 0.0001) .

The SURMOUNT-3 RCT evaluated the impact of tirzepatide in individuals with obesity who had an adequate response to treatment with intensive LSCs. It included 579 individuals with BMI > 30 kg/m², or > 27 kg/m² with at least one comorbidity associated with obesity, who achieved a minimum weight loss of 5% after 12 weeks of intensive LSCs. After randomization, patients receiving tirzepatide for 72 weeks had a mean weight change of -18.5% compared with -2.5% in the placebo group . highlights the categorical weight loss observed in the SURMOUNT series studies.

6.5.2 Effects of tirzepatide on weight maintenance

The effects of tirzepatide on weight maintenance were evaluated in the SURMOUNT-4 RCT. This study enrolled 783 participants in an initial 36-week open-label period during which they received tirzepatide 10 mg or 15 mg. At week 36, a total of 670 participants were randomized to continue treatment with tirzepatide (n = 335) or switch to placebo (n = 335) for an additional 52 weeks. In the initial 36-week period, participants (mean initial weight 107.3 kg) lost an average of 20.9% of their body weight. From weeks 36 to 88, participants who remained on tirzepatide had an average additional weight loss of 5.5%, while the group randomized to placebo regained an average of 14.0%. In conclusion, withdrawal of tirzepatide led to a substantial regain of lost weight, whereas continuation of the medication not only maintained the weight lost but also led to additional weight loss .

6.5.3 Effects of tirzepatide on body composition

In the SURMOUNT-1 study, a subgroup of 160 participants underwent body composition analysis using DXA. The results showed greater fat mass reduction in the tirzepatide group compared with the placebo group (33.9% versus 8.2%, respectively; difference of -25.7%). Similarly, the ratio between total fat mass and lean mass decreased more in the tirzepatide group (from 0.93 to 0.70) than in the placebo group (from 0.95 to 0.88) from baseline to week 72 . A plethysmography analysis was conducted to compare body composition changes in 45 individuals with T2DM treated with tirzepatide 15 mg/week, 44 treated with semaglutide 1 mg/week, and 28 treated with placebo. At week 28, the tirzepatide-treated group experienced greater fat mass reduction than the placebo group (9.6 kg [12.4 to 6.9 kg]; p < 0.001) and the semaglutide group (3.8 kg; p < 0.002). Similarly, the reduction in FFM was greater in the tirzepatide group compared with the placebo group (1.5 kg; p < 0.001) and the semaglutide group (0.8 kg; p < 0.018) .

6.5.4 Effects of tirzepatide in patients with prediabetes/glucose intolerance

In the SURMOUNT-1 study, 95.3% of individuals with prediabetes at baseline reverted to normoglycemia with tirzepatide, compared with 61.9% in the placebo group . Treatment with tirzepatide significantly reduced the 10-year predicted risk of T2DM development compared with placebo in participants with obesity or overweight, independent of baseline glycemic status.
This was the finding of a post hoc analysis of the SURMOUNT-1 study, which used a cardiometabolic disease staging risk score to calculate the predicted 10-year risk of T2DM at baseline and at study weeks 24 and 72. At week 72, the mean absolute predicted risk score reductions for T2DM were significantly greater in the tirzepatide groups (5 mg, 12.4%; 10 mg, 14.4%; 15 mg, 14.7%) compared with the placebo group (0.7%). Participants with prediabetes had greater mean reductions in risk score from baseline (16.0%-20.3%) compared with those without prediabetes (10.1%-11.3%) .

6.5.5 Effects of tirzepatide in patients with type 2 diabetes mellitus

A recent meta-analysis evaluated 6,609 individuals with T2DM included in seven RCTs lasting at least 12 weeks to analyze the efficacy of different tirzepatide doses ( mg, mg, and mg) in reducing HbA1c levels compared with other antidiabetic agents or placebo. Tirzepatide was superior in reducing HbA1c levels in a dose-dependent manner, with mean differences ranging from -1.62% to -2.06% versus placebo, -0.29% to -0.92% versus GLP-1as, and -0.70% to -1.09% versus basal insulin regimens . The SURPASS-2 study included 1,876 patients with T2DM and compared tirzepatide 5 mg, 10 mg, and 15 mg versus semaglutide 1 mg in a 1:1:1:1 design for 40 weeks, with the primary outcome of reduction in HbA1c level. The mean reductions in HbA1c levels were 2.01%, 2.24%, and 2.30% with tirzepatide 5 mg, 10 mg, and 15 mg, respectively, and 1.86% with semaglutide 1.0 mg. The baseline mean HbA1c level was 8.28%. After 40 weeks, almost half of the patients who received tirzepatide 10 mg and 15 mg (40% and 46%, respectively) had HbA1c levels ≤ 5.7%. This was observed in 27% of the patients who received tirzepatide 5 mg and in 19% of those who received semaglutide 1 mg .

6.5.6 Effects of tirzepatide on lipid metabolism

In the SURPASS 1 to 5 study programs, treatment with tirzepatide at doses of 5 mg, 10 mg, and 15 mg resulted in reductions in serum triglyceride and LDL-c levels .

6.5.7 Effects of tirzepatide on blood pressure and heart rate

In the SURPASS 1 to 5 program studies, tirzepatide treatment of patients with T2DM resulted in mean reductions in SBP and DBP values of 6-9 mmHg and 3-4 mmHg, respectively. There was a mean reduction in SBP and DBP of 2 mmHg each in patients treated with placebo. In placebo-controlled phase 3 studies, treatment with tirzepatide resulted in a mean heart rate increase of 2-4 bpm compared with a mean heart rate increase of 1 bpm with placebo . In the SURMOUNT-1 study, individuals with obesity/overweight without diabetes had mean reductions of 7.2 mmHg in SBP and 4.8 mmHg in DBP with tirzepatide compared with mean reductions of 1 mmHg and 0.8 mmHg, respectively, with placebo .

6.5.8 Effects of tirzepatide on obstructive sleep apnea syndrome

A 52-week RCT (SURMOUNT-OSA) was conducted to evaluate the efficacy and safety of tirzepatide at the maximum tolerated dose ( mg or mg) versus placebo as an adjunct to diet and exercise in participants with moderate-to-severe OSAS (AHI ≥ 15). Patients treated with tirzepatide ( mg or mg weekly) experienced an AHI reduction of 27.4 events/hour compared with 4.8 events/hour in those treated with placebo. As a secondary outcome, tirzepatide led to a mean AHI reduction of 55% compared with 5.0% with placebo. Finally, the mean weight loss was 18.1% in the tirzepatide group compared with 1.3% in the placebo group .
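As a rough consistency check (our own arithmetic, not an analysis reported by the investigators), an absolute reduction of about 27.4 events/hour together with a relative reduction of roughly 55% implies a mean baseline AHI on the order of 27.4/0.55 ≈ 50 events/hour; in other words, participants on average entered the study with severe OSAS, and the absolute and relative reductions reported above are mutually consistent.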
6.5.9 Effects of tirzepatide in patients with polycystic ovary syndrome

Tirzepatide has not been evaluated for effects in women with PCOS.

6.5.10 Effects of tirzepatide in patients with male hypogonadism

Tirzepatide has not been evaluated for effects in patients with male hypogonadism.

6.5.11 Effects of tirzepatide on nonalcoholic fatty liver disease

A study used magnetic resonance imaging to evaluate the liver fat content, volume of visceral adipose tissue, and abdominal subcutaneous adipose tissue in 296 individuals with T2DM treated with tirzepatide or insulin degludec participating in the SURPASS-3 study. At week 52, the participants using tirzepatide (pooled tirzepatide mg and mg groups) experienced significantly greater mean reductions in liver fat content compared with those using insulin degludec (-8.1% versus -3.4%, from baseline liver fat contents of 15.7% and 16.6%, respectively) . At 52 weeks, participants treated with tirzepatide 5 mg, 10 mg, and 15 mg had significantly greater reductions in volume of visceral adipose tissue (-1.10 L, -1.53 L, and -1.65 L, respectively) and abdominal subcutaneous adipose tissue (-1.40 L, -2.25 L, and -2.05 L, respectively) compared with their respective baseline values of 6.6 L and 10.4 L. These reductions contrasted with the increases observed in the insulin degludec-treated group (0.38 L and 0.63 L) . Overall, 67%-81% of tirzepatide-treated participants achieved at least a 30% reduction in liver fat content. Another post hoc analysis evaluated the effects of tirzepatide on MASLD and fibrosis biomarkers in patients with T2DM compared with dulaglutide and placebo for 26 weeks and showed that the higher dose of tirzepatide significantly decreased MASLD-related biomarkers and increased adiponectin in these patients . A phase 2 RCT was conducted to evaluate the effects of tirzepatide treatment in individuals with biopsy-confirmed MASH and stage F2 or F3 fibrosis. The patients were randomized to placebo or tirzepatide 5 mg, 10 mg, or 15 mg (n = 190) and treated for 52 weeks, after which the biopsy was repeated. The percentage of patients who achieved the MASH improvement endpoint without fibrosis progression was 10% in the placebo group, 44% in the tirzepatide 5 mg group, 56% in the tirzepatide 10 mg group, and 62% in the tirzepatide 15 mg group. The percentage of patients who had improvement in at least one fibrosis stage (without worsening of MASH) was 30% in the placebo group, 55% in the tirzepatide 5 mg group, 51% in the tirzepatide 10 mg group, and 61% in the tirzepatide 15 mg group .

6.5.12 Effects of tirzepatide on quality of life

An exploratory analysis of the phase 3 SURPASS J-mono study assessed treatment satisfaction using the Japanese translation of the Diabetes Treatment Satisfaction Questionnaire (DTSQs) and the DTSQc version. After 52 weeks of treatment, there was a trend toward greater satisfaction among patients who received any dose of tirzepatide compared with those who received dulaglutide. The overall mean DTSQc scores at week 52 were significantly higher with tirzepatide 5 mg, 10 mg, and 15 mg versus dulaglutide 0.75 mg (11.5, 12.1, and 12.3, respectively, versus 8.9; p < 0.001).
Post hoc subgroup analyses demonstrated greater treatment satisfaction with tirzepatide compared with dulaglutide in the subgroups aged below 65 years (p < 0.001) and with baseline BMI ≥ 25 kg/m² (p < 0.01), along with similar treatment satisfaction across treatment arms in the subgroups aged 65 years or above and with BMI < 25 kg/m² .

6.5.13 Effects of tirzepatide on osteoarticular diseases

Tirzepatide has not been evaluated for effects in osteoarticular diseases.

6.5.14 Effects of tirzepatide in patients with chronic kidney disease

An exploratory post hoc analysis of SURPASS-4 showed that tirzepatide reduced the decline in eGFR and decreased the urine albumin-to-creatinine ratio (UACR) compared with insulin glargine in individuals with T2DM and high cardiovascular risk. At baseline, participants had a mean eGFR of 81 mL/min/1.73 m² and a median UACR of 15 mg/g (17% of participants had eGFR < 60 mL/min/1.73 m², 28% had microalbuminuria, and 8% had macroalbuminuria). The mean rate of eGFR decline was -1.4 mL/min/1.73 m² per year in the combined tirzepatide treatment groups versus -3.6 mL/min/1.73 m² per year in the insulin group. The UACR increased from baseline with insulin glargine (36.9%) but not with tirzepatide (-6.8%), with a between-group difference of -31.9%. Participants receiving tirzepatide had fewer occurrences of the composite renal outcome (time to first occurrence of eGFR decline of at least 40% from baseline, ESRD, death due to renal failure, or new-onset macroalbuminuria) compared with those receiving insulin glargine (HR = 0.58; 95% CI = 0.43-0.8). These findings were primarily driven by a reduced number of individuals developing new-onset macroalbuminuria .

6.5.15 Effects of tirzepatide on cardiovascular diseases

A meta-analysis of cardiovascular outcomes included seven RCTs with at least 26 weeks of follow-up comparing the time to occurrence of the first prespecified major adverse cardiac event (MACE; including cardiovascular death, AMI, stroke, and hospitalization for unstable angina) between participants using combined doses of tirzepatide (n = 4,887) and controls (n = 2,328). One-third of the participants had established CVD. In all, 142 participants experienced at least one MACE event after treatment for just over 1 year. The HRs comparing tirzepatide versus control were 0.80 (95% CI = 0.57-1.11) for MACE-4 (i.e., the four major adverse cardiac events considered in the analysis), 0.90 (95% CI = 0.50-1.61) for cardiovascular death, and 0.80 (95% CI = 0.51-1.25) for all-cause death . These results suggest that tirzepatide does not increase cardiovascular risk. However, the exact impact of tirzepatide on cardiovascular outcomes in individuals with T2DM and established CVD will be addressed in the SURPASS-CVOT trial, an ongoing study evaluating the noninferiority and superiority of tirzepatide versus dulaglutide 1.5 mg for cardiovascular safety in individuals with T2DM and atherosclerosis confirmed by prior CVD (ClinicalTrials.gov Identifier: NCT04255433).

presents the effects of the different medications approved for treating obesity in Brazil after an average treatment period of 1 year. Differences in methodology and statistical analysis among the studies hinder a direct comparison between the medications. In conclusion, historically, pharmacological treatments for obesity have been underutilized, with very few drug options available for a long time. Fortunately, this landscape is changing rapidly.
In recent years, several new drugs with varying mechanisms of action, efficacy, and safety profiles (see for a summary) have emerged in Brazil. This document aims to provide a comprehensive literature review of the available pharmacological options without establishing a definitive guideline, which is expected to be published in the near future. The goal is to familiarize healthcare providers with these options, whether they prescribe these medications themselves, care for patients already using them (who may need guidance), or refer patients for treatment. We hope this document can serve as a useful guide and also as a tool to reduce the stigma surrounding obesity pharmacotherapy.
6.5.1 Efficacy of tirzepatide on body weight The phase 3 SURMOUNT-1 RCT compared the response to weekly tirzepatide at doses of 5 mg, 10 mg, or 15 mg versus placebo in 2,539 adults with obesity or BMI > 27 kg/m 2 associated with at least one weight-related complication, excluding diabetes. The follow-up duration was 72 weeks, including the 20-week dose-escalation period . In this study, the average initial weight was 104.8 kg, and BMI was 38 kg/m 2 . The mean reduction in body weight observed at week 72 with tirzepatide was 16.0% (16.8%-15.2%) with the 5 mg dose, 21.4% (22.2%-20.6%; which was equivalent to 22.2 kg body weight reduction) with the 10 mg dose, and 20.9% (21.9%-19.9% or 23.6 kg) with the 15 mg dose. The SURMOUNT-2 RCT evaluated treatment with subcutaneous tirzepatide ( mg or mg) once weekly or placebo for 72 weeks in 1,514 adults with obesity and T2DM. The primary outcomes were the percent change in body weight from baseline and body weight reduction of 5% or more. At baseline, the mean body weight was 100.7 kg (standard deviation ± 21.1 kg), BMI was 36.1 kg/m 2 (±6.6 kg/m 2 ), and HbA1c level was 8.02% (±0.89%). The mean changes in body weight at week 72 with tirzepatide 10 mg and 15 mg were -12.8% (±0.6%) and -14.7% (±0.5%), respectively, and -3.2% (±0.5%) with placebo, resulting in estimated treatment differences versus placebo of -9.6% (95% CI = -11.1 to -8.1%) with tirzepatide 10 mg and -11.6% (95% CI = -13.0 to -10.1%) with tirzepatide 15 mg (all p < 0.0001) . The SURMOUNT-3 RCT evaluated the impact of tirzepatide in individuals with obesity who had an adequate response to treatment with intensive LSCs. It included 579 individuals with BMI > 30 kg/m 2 or 27 kg/m 2 (with at least one comorbidity associated with obesity) who achieved a minimum weight loss of 5% after 12 weeks of intensive LSCs. After randomization, patients receiving tirzepatide for 72 weeks had a mean weight change of -18.5% compared with -2.5% in the placebo group . highlights the categorical weight loss observed in the SURMOUNT series studies. 6.5.2 Effects of tirzepatide on weight maintenance The effects of tirzepatide on weight maintenance were evaluated in the SURMOUNT-4 RCT. This study enrolled 783 participants in an initial 36-week open-label period who received tirzepatide 10 mg or 15 mg. At week 36, a total of 670 participants were randomized to continue treatment with tirzepatide (n = 335) or switch to placebo (n = 335) for an additional 52 weeks. In the initial 36-week period, participants (mean initial weight 107.3 kg) lost an average of 20.9% of their body weight. From weeks 36 to 88, participants who remained on tirzepatide had an average additional weight loss of 5.5%, while the group randomized to placebo gained an average of 14.0%. In conclusion, withdrawal of tirzepatide led to a substantial regain of lost weight, while the continuation of the medication not only maintained the weight lost but also led to an additional weight loss . 6.5.3 Effects of tirzepatide on body composition In the SURMOUNT-1 study, a subgroup of 160 participants underwent body composition analysis using DXA. The results showed greater fat mass reduction in the tirzepatide group compared with the placebo group (33.9% versus 8.2%, respectively, difference -25.7%). Similarly, the ratio between total fat mass and lean mass reduced more in the tirzepatide group (from 0.93 to 0.70) than in the placebo group (from 0.95 to 0.88), from baseline to week 72 . 
A plethysmography analysis was conducted to compare body composition changes in 45 individuals with T2DM treated with tirzepatide 15 mg/week, 44 treated with semaglutide 1 mg/week, and 28 treated with placebo. At week 28, the tirzepatide-treated group experienced greater fat mass reduction than the placebo group (9.6 kg [12.4 to 6.9 kg]; p < 0.001) and semaglutide group (3.8 kg; p < 0.002). Similarly, the reduction in FFM was greater in the tirzepatide group compared with the placebo group (1.5 kg; p < 0.001) and semaglutide group (0.8 kg; p < 0.018) . 6.5.4 Effects of tirzepatide in patients with prediabetes/glucose intolerance In the SURMOUNT-1 study, 95.3% of individuals with prediabetes at baseline reverted to normoglycemia with tirzepatide, compared with 61.9% in the placebo group . Treatment with tirzepatide significantly reduced the 10-year predicted risk of T2DM development compared with placebo in participants with obesity or overweight, independent from baseline glycemic status. This was the finding of a post hoc analysis of the SURMOUNT-1 study, which used a cardiometabolic disease staging risk score to calculate the predicted 10-year risk of T2DM at baseline and at study weeks 24 and 72. At week 72, the mean absolute predicted risk score reductions for T2DM were significantly greater in the tirzepatide groups (5 mg, 12.4%; 10 mg, 14.4%; 15 mg, 14.7%) compared with the placebo group (0.7%). Participants with prediabetes had greater mean reductions in risk score from baseline (16.0%-20.3%) compared with those without prediabetes (10.1%-11.3%) . 6.5.5 Effects of tirzepatide in patients with type 2 diabetes mellitus A recent meta-analysis evaluated 6,609 individuals with T2DM included in seven RCTs lasting at least 12 weeks to analyze the efficacy of different tirzepatide doses ( mg, mg, and mg) in reducing HbA1c levels compared with other antidiabetic agents or placebo. Tirzepatide was superior in reducing HbA1c levels in a dose-dependent manner, with mean differences ranging from -1.62% to -2.06% versus placebo, -0.29% to -0.92% versus GLP-1as, and -0.70% to -1.09% versus basal insulin regimens . The SURPASS-2 study included 1,876 patients with T2DM and compared tirzepatide 5 mg, 10 mg, and 15 mg versus semaglutide 1 mg in a 1:1:1:1 design for 40 weeks, with the primary outcome of reduction in HbA1c level. The mean reductions in HbA1c levels were 2.01%, 2.24%, and 2.30% with tirzepatide 5 mg, 10 mg, and 15 mg, respectively, and 1.86% with semaglutide 1.0 mg. The baseline mean HbA1c level was 8.28%. After 40 weeks, almost half of the patients who received tirzepatide 10 mg and 15 mg (40% and 46%, respectively) had HbA1c levels ≤ 5.7%. This was observed in 27% of the patients who received tirzepatide 5 mg and in 19% of those who received semaglutide 1 mg . 6.5.6 Effects of tirzepatide on lipid metabolism In the SURPASS 1 to 5 study programs, treatment with tirzepatide at doses of 5 mg, 10 mg, and 15 mg resulted in reductions in serum triglyceride and LDL-c levels . 6.5.7 Effects of tirzepatide on blood pressure and heart rate In the SURPASS 1 to 5 program studies, tirzepatide treatment of patients with T2DM resulted in mean reductions in SBP and DBP values of 6-9 mmHg and 3-4 mmHg, respectively. There was a mean reduction in SBP and DBP of 2 mmHg each in patients treated with placebo. In placebo-controlled phase 3 studies, treatment with tirzepatide resulted in a mean heart rate increase of 2-4 bpm compared with a mean heart rate increase of 1 bpm with placebo . 
In the SURMOUNT-1 study, individuals with obesity/overweight without diabetes had mean reductions of 7.2 mmHg in SBP and 4.8 mmHg in DBP with tirzepatide compared with mean reductions of 1 mmHg and 0.8 mmHg, respectively, with placebo . 6.5.8 Effects of tirzepatide on obstructive sleep apnea syndrome A 52-week RCT (SURMOUNT-OSA) was conducted to evaluate the efficacy and safety of tirzepatide at the maximum tolerated dose ( mg or mg) versus placebo as an adjunct to diet and exercise in participants with moderate-to-severe OSAS (AHI ≥ 15). Patients treated with tirzepatide ( mg or mg weekly) experienced an AHI reduction of 27.4 events/hour compared with 4.8 events/hour in those treated with placebo. As a secondary outcome, tirzepatide led to a mean AHI reduction of 55% compared with 5.0% with placebo. Finally, the mean weight loss was 18.1% in the tirzepatide group compared with 1.3% in the placebo group . 6.5.9 Effects of tirzepatide in patients with polycystic ovary syndrome Tirzepatide has not been evaluated for effects in women with PCOS. 6.5.10 Effects of tirzepatide in patients with male hypogonadism Tirzepatide has not been evaluated for effects in patients with male hypogonadism. 6.5.11 Effects of tirzepatide on nonalcoholic fatty liver disease A study used magnetic resonance imaging to evaluate the liver fat content, volume of visceral adipose tissue, and abdominal subcutaneous adipose tissue in 296 individuals with T2DM treated with tirzepatide or insulin degludec participating in the SURPASS-3 study. At week 52, the participants using tirzepatide (pooled tirzepatide mg and mg groups) experienced significantly greater mean reductions in liver fat content compared with those using insulin degludec (-8.1% versus -3.4%), respectively, from a baseline liver fat content of 15.7% and 16.6%, respectively . At 52 weeks, participants treated with tirzepatide 5 mg, 10 mg, and 15 mg had significantly greater reductions in volume of visceral adipose tissue (-1.10 L, -1.53 L, and -1.65 L, respectively) and abdominal subcutaneous adipose tissue (-1.40 L, -2.25 L, and -2.05 L, respectively) compared with their respective baseline values of 6.6 L and 10.4 L. These reductions contrasted with the increases observed in the insulin degludec-treated group (0.38 L and 0.63 L) . Overall, 67%-81% of tirzepatide-treated participants achieved at least a 30% reduction in liver fat content. Another post hoc analysis evaluated the effects of tirzepatide on MASLD and fibrosis biomarkers in patients with T2DM compared with dulaglutide and placebo for 26 weeks and showed that the higher dose of tirzepatide significantly decreased MASLD-related biomarkers and increased adiponectin in these patients . A phase 2 RCT was conducted to evaluate the effects of tirzepatide treatment in individuals with biopsy-confirmed MASH and stage F2 or F3 fibrosis. The patients were randomized to placebo or tirzepatide 5 mg, 10 mg, or 15 mg (n = 190) and treated for 52 weeks, when the biopsy was then repeated. The percentage of patients who achieved the MASH improvement endpoint without fibrosis progression was 10% in the placebo group, 44% in the tirzepatide 5 mg group, 56% in the tirzepatide 10 mg group, and 62% in the tirzepatide 15 mg group. The percentage of patients who had improvement in at least one fibrosis stage (without worsening of MASH) was 30% in the placebo group, 55% in the tirzepatide 5 mg group, 51% in the tirzepatide 10 mg group, and 61% in the tirzepatide 15 mg group . 
6.5.12 Effects of tirzepatide on quality of life An exploratory analysis of the phase 3 SURPASS J-mono study assessed treatment satisfaction using the Japanese translation of the Diabetes Treatment Satisfaction Questionnaire (DTSQs) and the DTSQc version. After 52 weeks of treatment, there was a trend toward greater satisfaction among patients who received any dose of tirzepatide compared with those who received dulaglutide. The overall mean DTSQc scores at week 52 were significantly higher with tirzepatide 5 mg, 10 mg, and 15 mg versus dulaglutide 0.75 mg (11.5, 12.1, and 12.3, respectively, versus 8.9; p < 0.001). Post hoc subgroup analyses demonstrated greater treatment satisfaction with tirzepatide compared with dulaglutide in the subgroup with ages below 65 years (p < 0.001) and baseline BMI ≥ 25 kg/m 2 (p < 0.01), along with similar treatment satisfaction across treatment arms in the subgroup with ages 65 years or above and with BMI < 25 kg/m 2 . 6.5.13 Effects of tirzepatide on osteoarticular diseases Tirzepatide has not been evaluated for effects in osteoarticular diseases. 6.5.14 Effects of tirzepatide in patients with chronic kidney disease An exploratory post hoc analysis of SURPASS-4 showed that tirzepatide reduced the decline in eGFR and decreased the urine albumin-to-creatinine ratio (UACR) compared with insulin glargine in individuals with T2DM and high cardiovascular risk. At baseline, participants had a mean eGFR of 81 mL/min/1.73 m 2 and median UACR of 15 mg/g (17% of participants had eGFR < 60 mL/min/1.73 m 2 , 28% had microalbuminuria, and 8% had macroalbuminuria). The mean rate of eGFR decline was -1.4 mL/min/1.73 m 2 per year for the combined tirzepatide treatment groups versus -3.6 mL/min/1.73 m 2 per year in the insulin group. The UACR increased from baseline with insulin glargine (36.9%) but not with tirzepatide (-6.8%), with a between-group frequency difference of -31.9%. Participants receiving tirzepatide had fewer occurrences of the composite renal outcome (time to first occurrence of eGFR decline of at least 40% from baseline, ESRD, death due to renal failure, or new-onset macroalbuminuria) compared with those receiving insulin glargine (HR = 0.58; 95% CI = 0.43-0.8). These findings were primarily driven by a reduced number of individuals developing new-onset macroalbuminuria . 6.5.15 Effects of tirzepatide on cardiovascular diseases A meta-analysis of cardiovascular outcomes included seven RCTs with at least 26 weeks of follow-up comparing the time to occurrence to the first prespecified major adverse cardiac event (MACE; including cardiovascular death, AMI, stroke, and hospitalization for unstable angina) between participants using combined doses of tirzepatide (n = 4,887) and controls (n = 2,328). One-third of the participants had established CVD. In all, 142 participants experienced at least one MACE event after treatment for just over 1 year. The HRs comparing tirzepatide versus control were 0.80 (95% CI = 0.57-1.11) for MACE-4 ( i.e. , the four major adverse cardiac events considered in the trial), 0.90 (95% CI = 0.50-1.61) for cardiovascular death, and 0.80 (95% CI = 0.51-1.25) for all-cause death . These results suggest that tirzepatide does not increase cardiovascular risk. 
However, the exact impact of tirzepatide on cardiovascular outcomes in individuals with T2DM and established CVD will be addressed in the SURPASS-CVOT trial, an ongoing study evaluating the noninferiority and superiority of tirzepatide versus dulaglutide 1.5 mg for cardiovascular safety in individuals with T2DM and atherosclerosis confirmed by prior CVD (ClinicalTrials.gov Identifier: NCT04255433). presents the effects of the different medications approved for treating obesity in Brazil after an average treatment period of 1 year. Differences in methodology and statistical analysis among the studies hinder a direct comparison between the medications. In conclusion, historically, pharmacological treatments for obesity have been underutilized, with very few drug options available for a long time. Fortunately, this landscape is changing rapidly. In recent years, several new drugs with varying mechanisms of action, efficacy, and safety profiles (see for summary) have emerged in Brazil. This document aims to provide a comprehensive literature review of the available pharmacological options with out establishing a definitive guideline, which is expected to be published in the near future. The goal is to familiarize healthcare providers with these options, whether they prescribe them as medical doctors or simply receive patients in use (who could need guidance) or refer them to treatment. We hope this document can serve as an useful guide and also a tool to reduce stigma surrounding obesity pharmacology. The phase 3 SURMOUNT-1 RCT compared the response to weekly tirzepatide at doses of 5 mg, 10 mg, or 15 mg versus placebo in 2,539 adults with obesity or BMI > 27 kg/m 2 associated with at least one weight-related complication, excluding diabetes. The follow-up duration was 72 weeks, including the 20-week dose-escalation period . In this study, the average initial weight was 104.8 kg, and BMI was 38 kg/m 2 . The mean reduction in body weight observed at week 72 with tirzepatide was 16.0% (16.8%-15.2%) with the 5 mg dose, 21.4% (22.2%-20.6%; which was equivalent to 22.2 kg body weight reduction) with the 10 mg dose, and 20.9% (21.9%-19.9% or 23.6 kg) with the 15 mg dose. The SURMOUNT-2 RCT evaluated treatment with subcutaneous tirzepatide ( mg or mg) once weekly or placebo for 72 weeks in 1,514 adults with obesity and T2DM. The primary outcomes were the percent change in body weight from baseline and body weight reduction of 5% or more. At baseline, the mean body weight was 100.7 kg (standard deviation ± 21.1 kg), BMI was 36.1 kg/m 2 (±6.6 kg/m 2 ), and HbA1c level was 8.02% (±0.89%). The mean changes in body weight at week 72 with tirzepatide 10 mg and 15 mg were -12.8% (±0.6%) and -14.7% (±0.5%), respectively, and -3.2% (±0.5%) with placebo, resulting in estimated treatment differences versus placebo of -9.6% (95% CI = -11.1 to -8.1%) with tirzepatide 10 mg and -11.6% (95% CI = -13.0 to -10.1%) with tirzepatide 15 mg (all p < 0.0001) . The SURMOUNT-3 RCT evaluated the impact of tirzepatide in individuals with obesity who had an adequate response to treatment with intensive LSCs. It included 579 individuals with BMI > 30 kg/m 2 or 27 kg/m 2 (with at least one comorbidity associated with obesity) who achieved a minimum weight loss of 5% after 12 weeks of intensive LSCs. After randomization, patients receiving tirzepatide for 72 weeks had a mean weight change of -18.5% compared with -2.5% in the placebo group . highlights the categorical weight loss observed in the SURMOUNT series studies. 
The effects of tirzepatide on weight maintenance were evaluated in the SURMOUNT-4 RCT. This study enrolled 783 participants in an initial 36-week open-label period who received tirzepatide 10 mg or 15 mg. At week 36, a total of 670 participants were randomized to continue treatment with tirzepatide (n = 335) or switch to placebo (n = 335) for an additional 52 weeks. In the initial 36-week period, participants (mean initial weight 107.3 kg) lost an average of 20.9% of their body weight. From weeks 36 to 88, participants who remained on tirzepatide had an average additional weight loss of 5.5%, while the group randomized to placebo gained an average of 14.0%. In conclusion, withdrawal of tirzepatide led to a substantial regain of lost weight, while the continuation of the medication not only maintained the weight lost but also led to an additional weight loss . In the SURMOUNT-1 study, a subgroup of 160 participants underwent body composition analysis using DXA. The results showed greater fat mass reduction in the tirzepatide group compared with the placebo group (33.9% versus 8.2%, respectively, difference -25.7%). Similarly, the ratio between total fat mass and lean mass reduced more in the tirzepatide group (from 0.93 to 0.70) than in the placebo group (from 0.95 to 0.88), from baseline to week 72 . A plethysmography analysis was conducted to compare body composition changes in 45 individuals with T2DM treated with tirzepatide 15 mg/week, 44 treated with semaglutide 1 mg/week, and 28 treated with placebo. At week 28, the tirzepatide-treated group experienced greater fat mass reduction than the placebo group (9.6 kg [12.4 to 6.9 kg]; p < 0.001) and semaglutide group (3.8 kg; p < 0.002). Similarly, the reduction in FFM was greater in the tirzepatide group compared with the placebo group (1.5 kg; p < 0.001) and semaglutide group (0.8 kg; p < 0.018) . In the SURMOUNT-1 study, 95.3% of individuals with prediabetes at baseline reverted to normoglycemia with tirzepatide, compared with 61.9% in the placebo group . Treatment with tirzepatide significantly reduced the 10-year predicted risk of T2DM development compared with placebo in participants with obesity or overweight, independent from baseline glycemic status. This was the finding of a post hoc analysis of the SURMOUNT-1 study, which used a cardiometabolic disease staging risk score to calculate the predicted 10-year risk of T2DM at baseline and at study weeks 24 and 72. At week 72, the mean absolute predicted risk score reductions for T2DM were significantly greater in the tirzepatide groups (5 mg, 12.4%; 10 mg, 14.4%; 15 mg, 14.7%) compared with the placebo group (0.7%). Participants with prediabetes had greater mean reductions in risk score from baseline (16.0%-20.3%) compared with those without prediabetes (10.1%-11.3%) . A recent meta-analysis evaluated 6,609 individuals with T2DM included in seven RCTs lasting at least 12 weeks to analyze the efficacy of different tirzepatide doses ( mg, mg, and mg) in reducing HbA1c levels compared with other antidiabetic agents or placebo. Tirzepatide was superior in reducing HbA1c levels in a dose-dependent manner, with mean differences ranging from -1.62% to -2.06% versus placebo, -0.29% to -0.92% versus GLP-1as, and -0.70% to -1.09% versus basal insulin regimens . The SURPASS-2 study included 1,876 patients with T2DM and compared tirzepatide 5 mg, 10 mg, and 15 mg versus semaglutide 1 mg in a 1:1:1:1 design for 40 weeks, with the primary outcome of reduction in HbA1c level. 
The mean reductions in HbA1c levels were 2.01%, 2.24%, and 2.30% with tirzepatide 5 mg, 10 mg, and 15 mg, respectively, and 1.86% with semaglutide 1.0 mg. The baseline mean HbA1c level was 8.28%. After 40 weeks, almost half of the patients who received tirzepatide 10 mg and 15 mg (40% and 46%, respectively) had HbA1c levels ≤ 5.7%. This was observed in 27% of the patients who received tirzepatide 5 mg and in 19% of those who received semaglutide 1 mg . In the SURPASS 1 to 5 study programs, treatment with tirzepatide at doses of 5 mg, 10 mg, and 15 mg resulted in reductions in serum triglyceride and LDL-c levels . In the SURPASS 1 to 5 program studies, tirzepatide treatment of patients with T2DM resulted in mean reductions in SBP and DBP values of 6-9 mmHg and 3-4 mmHg, respectively. There was a mean reduction in SBP and DBP of 2 mmHg each in patients treated with placebo. In placebo-controlled phase 3 studies, treatment with tirzepatide resulted in a mean heart rate increase of 2-4 bpm compared with a mean heart rate increase of 1 bpm with placebo . In the SURMOUNT-1 study, individuals with obesity/overweight without diabetes had mean reductions of 7.2 mmHg in SBP and 4.8 mmHg in DBP with tirzepatide compared with mean reductions of 1 mmHg and 0.8 mmHg, respectively, with placebo . A 52-week RCT (SURMOUNT-OSA) was conducted to evaluate the efficacy and safety of tirzepatide at the maximum tolerated dose ( mg or mg) versus placebo as an adjunct to diet and exercise in participants with moderate-to-severe OSAS (AHI ≥ 15). Patients treated with tirzepatide ( mg or mg weekly) experienced an AHI reduction of 27.4 events/hour compared with 4.8 events/hour in those treated with placebo. As a secondary outcome, tirzepatide led to a mean AHI reduction of 55% compared with 5.0% with placebo. Finally, the mean weight loss was 18.1% in the tirzepatide group compared with 1.3% in the placebo group . Tirzepatide has not been evaluated for effects in women with PCOS. Tirzepatide has not been evaluated for effects in patients with male hypogonadism. A study used magnetic resonance imaging to evaluate the liver fat content, volume of visceral adipose tissue, and abdominal subcutaneous adipose tissue in 296 individuals with T2DM treated with tirzepatide or insulin degludec participating in the SURPASS-3 study. At week 52, the participants using tirzepatide (pooled tirzepatide mg and mg groups) experienced significantly greater mean reductions in liver fat content compared with those using insulin degludec (-8.1% versus -3.4%), respectively, from a baseline liver fat content of 15.7% and 16.6%, respectively . At 52 weeks, participants treated with tirzepatide 5 mg, 10 mg, and 15 mg had significantly greater reductions in volume of visceral adipose tissue (-1.10 L, -1.53 L, and -1.65 L, respectively) and abdominal subcutaneous adipose tissue (-1.40 L, -2.25 L, and -2.05 L, respectively) compared with their respective baseline values of 6.6 L and 10.4 L. These reductions contrasted with the increases observed in the insulin degludec-treated group (0.38 L and 0.63 L) . Overall, 67%-81% of tirzepatide-treated participants achieved at least a 30% reduction in liver fat content. Another post hoc analysis evaluated the effects of tirzepatide on MASLD and fibrosis biomarkers in patients with T2DM compared with dulaglutide and placebo for 26 weeks and showed that the higher dose of tirzepatide significantly decreased MASLD-related biomarkers and increased adiponectin in these patients . 
A phase 2 RCT evaluated the effects of tirzepatide treatment in individuals with biopsy-confirmed MASH and stage F2 or F3 fibrosis. The patients (n = 190) were randomized to placebo or tirzepatide 5 mg, 10 mg, or 15 mg and treated for 52 weeks, after which the biopsy was repeated. The percentage of patients who achieved the MASH improvement endpoint without fibrosis progression was 10% in the placebo group, 44% in the tirzepatide 5 mg group, 56% in the tirzepatide 10 mg group, and 62% in the tirzepatide 15 mg group. The percentage of patients who had improvement in at least one fibrosis stage (without worsening of MASH) was 30% in the placebo group, 55% in the tirzepatide 5 mg group, 51% in the tirzepatide 10 mg group, and 61% in the tirzepatide 15 mg group. An exploratory analysis of the phase 3 SURPASS J-mono study assessed treatment satisfaction using the Japanese translation of the Diabetes Treatment Satisfaction Questionnaire (DTSQs) and the DTSQc version. After 52 weeks of treatment, there was a trend toward greater satisfaction among patients who received any dose of tirzepatide compared with those who received dulaglutide. The overall mean DTSQc scores at week 52 were significantly higher with tirzepatide 5 mg, 10 mg, and 15 mg versus dulaglutide 0.75 mg (11.5, 12.1, and 12.3, respectively, versus 8.9; p < 0.001). Post hoc subgroup analyses demonstrated greater treatment satisfaction with tirzepatide compared with dulaglutide in the subgroups with age below 65 years (p < 0.001) and baseline BMI ≥ 25 kg/m² (p < 0.01), along with similar treatment satisfaction across treatment arms in the subgroups aged 65 years or above and with BMI < 25 kg/m². Tirzepatide has not been evaluated for effects in osteoarticular diseases. An exploratory post hoc analysis of SURPASS-4 showed that tirzepatide reduced the decline in eGFR and decreased the urine albumin-to-creatinine ratio (UACR) compared with insulin glargine in individuals with T2DM and high cardiovascular risk. At baseline, participants had a mean eGFR of 81 mL/min/1.73 m² and a median UACR of 15 mg/g (17% of participants had eGFR < 60 mL/min/1.73 m², 28% had microalbuminuria, and 8% had macroalbuminuria). The mean rate of eGFR decline was -1.4 mL/min/1.73 m² per year for the combined tirzepatide treatment groups versus -3.6 mL/min/1.73 m² per year in the insulin group. The UACR increased from baseline with insulin glargine (36.9%) but not with tirzepatide (-6.8%), with a between-group difference of -31.9%. Participants receiving tirzepatide had fewer occurrences of the composite renal outcome (time to first occurrence of eGFR decline of at least 40% from baseline, ESRD, death due to renal failure, or new-onset macroalbuminuria) compared with those receiving insulin glargine (HR = 0.58; 95% CI = 0.43-0.8). These findings were primarily driven by a reduced number of individuals developing new-onset macroalbuminuria. A meta-analysis of cardiovascular outcomes included seven RCTs with at least 26 weeks of follow-up comparing the time to first occurrence of a prespecified major adverse cardiac event (MACE; including cardiovascular death, AMI, stroke, and hospitalization for unstable angina) between participants using combined doses of tirzepatide (n = 4,887) and controls (n = 2,328). One-third of the participants had established CVD. In all, 142 participants experienced at least one MACE after treatment for just over 1 year.
The HRs comparing tirzepatide versus control were 0.80 (95% CI = 0.57-1.11) for MACE-4 (i.e., the four major adverse cardiac events considered in the trial), 0.90 (95% CI = 0.50-1.61) for cardiovascular death, and 0.80 (95% CI = 0.51-1.25) for all-cause death. These results suggest that tirzepatide does not increase cardiovascular risk. However, the exact impact of tirzepatide on cardiovascular outcomes in individuals with T2DM and established CVD will be addressed in the SURPASS-CVOT trial, an ongoing study evaluating the noninferiority and superiority of tirzepatide versus dulaglutide 1.5 mg for cardiovascular safety in individuals with T2DM and atherosclerosis confirmed by prior CVD (ClinicalTrials.gov Identifier: NCT04255433). The accompanying table presents the effects of the different medications approved for treating obesity in Brazil after an average treatment period of 1 year. Differences in methodology and statistical analysis among the studies hinder a direct comparison between the medications. In conclusion, pharmacological treatments for obesity have historically been underutilized, with very few drug options available for a long time. Fortunately, this landscape is changing rapidly. In recent years, several new drugs with varying mechanisms of action, efficacy, and safety profiles (see the summary table) have emerged in Brazil. This document aims to provide a comprehensive literature review of the available pharmacological options without establishing a definitive guideline, which is expected to be published in the near future. The goal is to familiarize healthcare providers with these options, whether they prescribe these medications themselves, care for patients already using them (who may need guidance), or refer patients for treatment. We hope this document can serve as a useful guide and also as a tool to reduce the stigma surrounding the pharmacological treatment of obesity.
In vitro antimicrobial and antioxidant activities of bioactive compounds extracted from
The constant evolution of microbial resistance to antibiotics today represents a critical challenge to global health security. The alarming figures underscore the scale of this issue and emphasize the imperative for immediate action. According to data from the World Health Organization (WHO), approximately 700,000 people die every year from drug-resistant diseases, a figure projected to escalate to 10 million by 2050 if substantial measures are not taken. Antimicrobial resistance (AMR) is escalating, posing severe threats to public health. Statistical modeling forecasts estimate that bacterial AMR was associated with 4.95 million deaths in 2019, of which 1.27 million were directly attributable to bacterial AMR. The all-age mortality rate attributable to resistance peaked in Western Sub-Saharan Africa, with 27.3 deaths per 100,000 people, while Australasia exhibited the lowest rate at 6.5 deaths per 100,000 people. Escherichia coli, followed by Staphylococcus aureus, Klebsiella pneumoniae, Streptococcus pneumoniae, Acinetobacter baumannii, and Pseudomonas aeruginosa, were the primary pathogens contributing to resistance-related deaths, accounting for 929,000 AMR-attributable deaths and 3.57 million AMR-associated deaths in 2019. These alarming figures from statistical models and forecasts underscore the gravity of the situation, with significant consequences for public health. Antibiotics, once considered miracle cures, are increasingly losing their efficacy due to their excessive and misguided use. Studies indicate that 30-50% of antibiotic prescriptions are unnecessary or do not conform to medical guidelines. Additionally, the extensive application of antibiotics in agriculture exacerbates bacterial resistance. According to the Food and Agriculture Organization of the United Nations (FAO), around 80% of antibiotics manufactured globally are employed in the agricultural sector, accelerating the development of resistant bacterial strains. Given the borderless nature of bacterial resistance, a coordinated global approach is imperative. Governments, healthcare professionals, the pharmaceutical industry, and the general public must collaborate to implement prevention strategies, promote responsible antibiotic use, and invest in research into novel therapeutic solutions. Among promising approaches, the use of Actinobacteria is emerging as a compelling avenue for combating bacterial infections. Actinobacteria, soil microorganisms, are known for their ability to produce antibacterial compounds, some of which serve as the foundation for antibiotics commonly used in medicine. These natural antimicrobial agents, such as streptomycin and chloramphenicol, revolutionized the treatment of infections in the twentieth century. By harnessing the potential of Actinobacteria, scientists aim to discover new antibiotics or develop enhanced derivatives to counter bacteria resistant to existing drugs. Extreme environments, such as inhospitable soils, harbor rich reservoirs of Actinobacteria diversity, offering a valuable source of new antimicrobial molecules. Our study aims to isolate Actinobacteria from two soil samples collected in two distinct areas, followed by the assessment of their antimicrobial properties against various strains, some of which display multiple antibiotic resistance and pathogenicity in humans.
Simultaneously, we are undertaking a qualitative and quantitative investigation of the antioxidants produced by these Actinobacteria to evaluate their antioxidant activity, thereby contributing to the battle against antimicrobial resistance.

Physico-chemical analysis of soil samples

The soil analysis of the KG site (Kenzi's Garden) revealed alkalinity, with a pH of 8.05, and non-salinity, with an electrical conductivity (EC) of 0.62 dS/m. According to the texture diagram, this soil is classified as loamy, consisting of 71% silt, with relatively low proportions of clay and sand (14% and 15%, respectively). Moreover, X-ray fluorescence analysis of soil minerals identified the presence of exchangeable cations such as potassium (K), magnesium (Mg), aluminum (Al), calcium (Ca), iron (Fe), and manganese (Mn). Some elements, like phosphorus (P), calcium (Ca), and iron (Fe), were found in high concentrations, whereas others, like aluminum (Al), sulfur (S), chlorine (Cl), copper (Cu), and zinc (Zn), were present in lower quantities (Table ). Similarly, the soil from the FG site (FST's Garden) exhibited alkaline properties, with a pH of 8.19, and a loamy texture comprising 60% silt. Its electrical conductivity was also low, measured at 0.53 dS/m. Although this soil demonstrated relatively low clay and sand content, it stood out for its low content of mineral elements, except for calcium (Ca) and iron (Fe), which were present in moderate concentrations (Table ).

Isolation of Actinobacteria isolates

A total of 6 presumptively phenotypically distinct Actinobacteria isolates were recovered from the two soil samples, with 4 isolates originating from site A and 2 isolates from site B (Table ). Isolation was attempted on four selective media (M2, Bennett, GLM, and GA); the highest number of isolates was obtained on M2 medium (n = 4), followed by GLM (n = 2), whereas the Bennett and GA media did not yield any isolates (Table ).

Phenotypic characteristics and genotypic identification

The E2 strain, from garden soil, exhibited abundant growth on various culture media such as ISP1, ISP2, ISP7, Bennett, and GYEA, as well as on the Actinobacteria isolation media (M2 and GLM), after incubation for 7–14 days at 28 °C. The strain showed strong growth on ISP2, Bennett, and GYEA media, moderate growth on ISP1 and ISP7, but no growth on GA agar. Observation of a 10-day culture on ISP2 agar revealed an abundance of well-developed, unfragmented aerial and vegetative hyphae. Growth occurred over a temperature range of 4–37 °C (optimum 28 °C), with tolerance to NaCl concentrations ranging from 1 to 5% (optimum 1–2%) and pH between 5.0 and 10 (optimum pH 7.8–8.06). The E2 isolate was able to assimilate various carbohydrates as carbon sources, including cellobiose, sucrose, mannose, fructose, and glucose, but it did not assimilate mannitol, xylose, or starch (Table , Fig. ). A partial sequence of the 16S rRNA gene (1407 bp) was determined and deposited in the GenBank database under accession number PP731514. Analysis of this sequence showed that isolate E2 is most closely related to Streptomyces species, with a similarity rate of 97.51%. The phylogenetic position of this isolate in relation to the closest species of the genus Streptomyces is shown in Fig. and Table .

Primary and secondary screening of Actinobacteria isolates

After identification, the isolates were subjected to primary screening, as illustrated in Figs. and .
Of the six Actinobacteria isolates tested, 4 revealed antimicrobial activity against Gram-negative bacteria (Pseudomonas aeruginosa ATCC 27853, Escherichia coli ATCC 25922), Gram-positive bacteria (Staphylococcus aureus ATCC 25923), and the fungal strain Candida albicans ATCC 60193. Of the 4 active isolates, E2 and E6 were selected for secondary metabolite extraction because of their high antimicrobial activity compared with the other 2 isolates, E3 and E4. These 2 bioactive isolates were then subjected to secondary screening against various clinically pathogenic and multidrug-resistant bacteria (Listeria monocytogenes, Klebsiella pneumoniae 20B1572, Proteus sp. 19K1313, Escherichia coli 19L2418, Klebsiella pneumoniae 19K929, Proteus vulgaris 16C1737, and Escherichia coli 16D1150) (Table and Fig. ). The table summarizes the antimicrobial activities of the extracts obtained with n-hexane, dichloromethane, and ethyl acetate.

Minimum inhibitory concentration

The ethyl acetate extract of isolate E2 was evaluated to determine its minimum inhibitory concentration (MIC) against multidrug-resistant bacterial strains (Fig. ). The results show that this extract exhibits notable antibacterial activity, with an MIC of 0.0625 mg/mL against Klebsiella pneumoniae 20B1572 and an MIC of 0.125 mg/mL against Klebsiella pneumoniae 19K929. These results highlight the efficacy of the ethyl acetate extract, particularly against the two Klebsiella pneumoniae strains, which were selected for their largest inhibition zones and their multidrug resistance.

Determination of phenolic and flavonoid compounds in the ethyl acetate extract

The total phenol and flavonoid contents of the ethyl acetate extract of the E2 strain were determined at different concentrations (ranging from 0.1 to 1 mg/mL) (Table ). Absorbance values were measured using the Folin-Ciocalteu reagent for phenols and the aluminum chloride reagent for flavonoids, and were converted using the calibration curves of gallic acid (y = 0.0009x − 0.0201; R² = 0.9982) for phenols and quercetin (y = 0.0005x + 0.0702; R² = 0.9977) for flavonoids. Phenol content ranged from 0.472 ± 0.004 to 0.628 ± 0.012 GAE/g of extract, while flavonoid content varied from 0.023 ± 0.010 to 0.249 ± 0.013 mg QE/mg of extract across the tested concentrations.

Antioxidant activity

DPPH and ABTS assays

To assess the antioxidant power of the ethyl acetate extract derived from the E2 strain, two free radical scavenging assays, DPPH and ABTS, were employed. The scavenging activity was quantified as the percentage inhibition of DPPH and ABTS, along with the corresponding IC50 values. The figure illustrates the average percentages of DPPH and ABTS free radical scavenging across various concentrations of the ethyl acetate extract of the E2 strain. The results show that the radical scavenging activities of the ethyl acetate extract of the E2 strain and of the standards (ascorbic acid and Trolox) are directly proportional to their concentration. The ethyl acetate extract showed a moderate DPPH inhibition of 43.61 ± 0.30% at a dose of 1 mg/mL, compared with the standard antioxidant ascorbic acid (AA), which showed a significant DPPH inhibitory power of 65.32 ± 1.01% at the same concentration. Similarly, the ethyl acetate extract showed an ABTS inhibition of 39.21 ± 2.11% at the same dose, whereas the standard antioxidant Trolox showed a significant ABTS inhibitory potency of 63.44 ± 0.85% at 1 mg/mL. The IC50 values calculated for the DPPH and ABTS assays corroborate these results.
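The percentage-inhibition figures and IC50 values above follow the standard scheme for these assays: inhibition is computed from the control and sample absorbances, and the IC50 is read off the inhibition-versus-concentration curve. As a minimal illustration (linear interpolation between the two points bracketing 50%; the authors' exact fitting procedure is not stated, and the data below are made up), a Python sketch:

def percent_inhibition(a_control: float, a_sample: float) -> float:
    """Standard radical-scavenging formula: (A_control - A_sample) / A_control * 100."""
    return (a_control - a_sample) / a_control * 100.0

def ic50_by_interpolation(concentrations, inhibitions):
    """Estimate the IC50 by linear interpolation between the points bracketing 50% inhibition.

    Expects concentrations sorted in increasing order with matching inhibition values;
    returns None if 50% inhibition is never reached in the tested range.
    """
    points = list(zip(concentrations, inhibitions))
    for (c1, i1), (c2, i2) in zip(points, points[1:]):
        if i1 <= 50.0 <= i2:
            return c1 + (50.0 - i1) * (c2 - c1) / (i2 - i1)
    return None

# Hypothetical dose-response points (mg/mL vs % inhibition), for illustration only
concs = [0.1, 0.25, 0.5, 1.0, 2.0]
inhib = [8.0, 18.0, 30.0, 43.6, 62.0]
print(ic50_by_interpolation(concs, inhib))  # about 1.35 mg/mL with these made-up values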
The ethyl acetate extract of the E2 strain shows higher IC50 values (1.16 ± 0.01 mg/mL for DPPH and 1.22 ± 0.003 mg/mL for ABTS) than AA (0.58 ± 0.01 mg/mL for DPPH) and Trolox (0.69 ± 0.02 mg/mL for ABTS). A higher IC50 indicates lower radical scavenging activity and antioxidant potential. Compared with the synthetic standards AA and Trolox, the ethyl acetate extract of the E2 strain therefore shows moderate antioxidant capacity.

Ferric reducing antioxidant power (FRAP)

In this experiment, the ferric reducing antioxidant power (FRAP) of the ethyl acetate extract of the E2 strain and of the ascorbic acid standard was shown to be concentration-dependent; an increase in absorbance indicates an increase in ferric-reducing capacity. The results revealed significant ferric reduction activity for the ethyl acetate extract across the different concentrations tested (p < 0.0001) (Fig. ). This activity ranged from 0.786 ± 0.007 to 1.164 ± 0.012 mg AAE (ascorbic acid equivalent) per mg of extract.

Correlation between total phenol and flavonoid contents and antioxidant activity

A Pearson correlation was established between the phenolic compound content and the antioxidant activity measured by the three assays (DPPH, ABTS, and FRAP) for the ethyl acetate extract of the E2 strain (Fig. ). The results revealed a highly significant positive correlation (p < 0.0001) between total phenolic and flavonoid contents and antioxidant capacity in all three tests. The highest correlation was observed between flavonoid content and DPPH, ABTS, and FRAP (r² = 0.97) (Fig. b,d,f), followed by total phenolic content and ABTS (r² = 0.81), total phenolic content and FRAP (r² = 0.80), and total phenolic content and DPPH (r² = 0.78) (Fig. a,c,e).

GC–MS analysis of the E2 ethyl acetate extract

The chemical composition of the ethyl acetate extract obtained from the E2 strain was analyzed by gas chromatography-mass spectrometry (GC–MS). This analysis revealed the presence of 6 compounds eluted within the time range of 1.736 to 16.025 min: disulfide, dimethyl (1); S-methyl methanethiosulfonate (2); dimethylsulfoxonium formylmethylide (3); maltol (4); 2,4-dithiapentane (5); and eugenol (6) (Table , Fig. ). These compounds have been noted for various biological activities such as antimicrobial, antifungal, antioxidant, and antitumor properties (Table ).

HPLC–UV/Vis analysis of the E2 ethyl acetate extract

The HPLC–UV/Vis chromatogram (high-performance liquid chromatography with UV–visible detection) revealed the presence of six phenolic compounds in the crude ethyl acetate extract of the Streptomyces sp. E2 strain. Each of these compounds exhibited a retention time similar to that of the corresponding standard (Fig. and Table ).
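As a side note on the phenolic and flavonoid quantification reported above, the calibration equations can be inverted to express a sample absorbance as gallic acid or quercetin equivalents. A minimal sketch (assuming y is the measured absorbance and x the standard concentration, in whatever units the calibration curve was built with; the example readings are hypothetical):

def equivalent_concentration(absorbance: float, slope: float, intercept: float) -> float:
    """Invert a linear calibration curve y = slope * x + intercept.

    Returns the concentration (in the units of the standard curve) that would give the
    measured absorbance; assumes the reading falls within the curve's linear range.
    """
    return (absorbance - intercept) / slope

# Calibration parameters reported above (gallic acid for phenols, quercetin for flavonoids)
GALLIC_ACID = (0.0009, -0.0201)  # slope, intercept
QUERCETIN = (0.0005, 0.0702)     # slope, intercept

# Hypothetical absorbance readings, for illustration only
print(equivalent_concentration(0.45, *GALLIC_ACID))  # gallic acid equivalents
print(equivalent_concentration(0.19, *QUERCETIN))    # quercetin equivalents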
Since the discovery of the first antibiotic, penicillin, by Alexander Fleming in 1928, antibiotics have been crucial in treating infections caused by pathogenic microorganisms. However, over-the-counter availability and misuse, along with their extensive use in agriculture, have led to the emergence of new pathogens that have acquired resistance.
The main concern is the rise of multidrug-resistant bacteria (MDR), which are responsible for a majority of nosocomial infections and can cause severe, difficult-to-treat illnesses, sometimes resulting in prolonged illness and increased risk of death. Given this alarming situation, investment in the research of new bioactive compounds has become essential to combat antibiotic resistance. Recent studies have demonstrated that the Streptomyces genus represents the most effective source for discovering powerful and broad-spectrum medications due to the biosynthetic genes responsible for producing unexpressed secondary metabolites carried within the Streptomyces genome . In our study, we isolated a strain of Actinobacteria E2 from the soil of a terrestrial ecosystem collected from the garden located in the Settat Casablanca region in Morocco. Previous studies have shown that the physicochemical properties of soil such as texture, pH, salinity, mineral elements, and vegetation can influence the population of Actinobacteria – . Actinobacteria are known to be sensitive to the acidic pH of soil , except for some genera that are neutrophilic acid-tolerant or strict acidophiles . Indeed, Actinobacteria generally thrive in neutral to slightly alkaline conditions (between 7 and 8) . Furthermore, soil pH plays a crucial role in influencing microbial communities and their metabolic activities, and Actinobacteria appear to favor slightly alkaline environments . The results of our study, showing an average pH of 8.12, indicate slightly alkaline soil, which is favorable for Actinobacteria growth. Soil electrical conductivity reflects salinity levels in the soil and has a direct impact on microbial populations. Actinobacteria are sensitive to saline stress, so high electrical conductivity can harm their populations . Our samples are considered non-saline soil since the salinity range is below 6000 µS/cm . Other studies have suggested that Actinobacteria are significantly and positively correlated with clay and silt particles. These particles provide a conducive habitat for Actinobacteria due to their ability to retain water and nutrients, as well as absorb organic carbon. Soils rich in silt, as observed in the two studied sites (71% and 60% respectively), promote Actinobacteria growth by offering a larger surface area for colonization and providing more favorable conditions for their development (M. Annabi, 2005). In contrast, sand particles have lower cation exchange capacity and reduced nutrient content, making them less conducive to Actinobacteria growth. Several studies have shown that the availability of minerals such as potassium (K), phosphorus (P), and magnesium (Mg) is an important factor for Actinobacteria growth . Potassium is an essential macronutrient for microbial growth and metabolism, and its availability in soil can affect microbial community composition . Actinobacteria can benefit from potassium availability, making them more abundant in potassium-rich soils. Phosphorus is an essential nutrient for microbial growth, and its availability often limits microbial activities in the soil . However, Actinobacteria may have different strategies for phosphorus acquisition or may not be as influenced by available phosphorus levels as other microorganisms . The isolation results on four different culture media indicate that the M2 and GLM media are more conducive to the growth of Actinobacteria in both soil samples. 
This could be explained by the presence of starch (a macromolecule) in these two media. This macromolecule is catabolized by most Actinobacteria, making the M2 and GLM media favorable for their growth. Additionally, the richness in carbon and nitrogen sources also promotes optimal growth of these bacteria. In contrast, the other two culture media used (Bennett and GYEA) only yielded low to moderate growth, which could be attributed to a lack of essential nutrients for Actinobacteria growth. We isolated a total of 6 Actinobacteria isolates from two types of Moroccan soil. One isolate, coded as strain E2, was selected based on its broad-spectrum antimicrobial activity for further characterization and bioactivity evaluation. The strain was identified as belonging to the genus Streptomyces and designated Streptomyces sp. strain E2 under GenBank accession number PP731514. Our results demonstrated that strain E2 produces one or more secondary metabolites with various bioactivities, including antimicrobial, antioxidant, anticancer, and other biological activities. Reliable taxonomy of prokaryotes, particularly of the genus Streptomyces, requires data from both DNA-based methods and phenotypic characterization. Studies have shown that closely related strains of Streptomyces can differ in terms of biochemical profiles and carbon source utilization. Therefore, to enhance our understanding and complement the phylogenetic analysis of strain E2, we conducted a detailed micro-morphological, biochemical, and physiological characterization. Our data revealed that the E2 strain belonged to the genus Streptomyces and could thrive across different temperature ranges (4 to 37 °C), pH levels (4 to 10), and NaCl concentrations (1 to 10%, w/v). These findings suggest that the E2 strain is able to withstand NaCl concentrations of up to 100 g/L (10%). Tian et al. concluded that saline or hypersaline environments warrant special attention as they may offer new avenues for the discovery of natural molecules. Actinobacteria capable of growing from 4 to 60 °C have been reported; however, the optimum growth temperature verified for Actinobacteria was 28 °C. In addition, some Actinobacteria are able to grow in culture media with pH values of 3 and 13; however, the pH range is most often 4 to 10, with the optimum pH for Actinobacteria growth lying in the neutral region, particularly around 8. Microscopic observations, both in the fresh state and after Gram staining, indicated that the E2 strain belongs to the filamentous Gram-positive bacteria, displaying a branched, filamentous appearance. In primary screening, our results demonstrated markedly higher antimicrobial activity than that reported by Aouiche et al. For example, in that study the Streptomyces sp. strain PAL111 isolate showed inhibition zones of 10 mm for Staphylococcus aureus and 20 mm for E. coli, whereas our isolates produced larger zones of inhibition. Furthermore, our study revealed larger zones of inhibition against Candida albicans ATCC 60193 than the previous study by Aouiche et al., where inhibition zones were weaker, ranging from 7 to 13 mm. These results suggest that our primary screening identified Streptomyces strains with potentially more potent antimicrobial activities. The Streptomyces genus is widely recognized for its capacity to produce a diverse array of secondary metabolites, which possess a wide range of bioactivities.
These bioactivities include but are not limited to antitumor, antiviral, antioxidant, antihypertensive, immunosuppressive, and especially antimicrobial properties. These metabolites serve as essential defense mechanisms against competing microorganisms in the natural environment, allowing Streptomyces species to thrive and survive in various ecological niches . Among these bioactive compounds, antimicrobial metabolites produced by Streptomyces strains have garnered particular attention due to their potential applications in medicine and agriculture . E2 Strain, in particular, has demonstrated significant antimicrobial activity against both gram-positive and gram-negative bacteria, as well as yeast. This broad-spectrum antimicrobial activity suggests that the metabolites produced by E2 strain may hold promise as potential candidates for the development of novel antimicrobial agents. Understanding the mechanisms behind the antimicrobial activity of Streptomyces metabolites, such as those produced by E2 strain, is crucial for harnessing their therapeutic potential. Further research into the bioactive compounds produced by Streptomyces strains and their mode of action could lead to the discovery of new antibiotics to combat the growing threat of antibiotic resistance. Free radicals, like reactive oxygen and nitrogen species, play crucial roles in various biological processes such as cell signaling and immunity. However, excessive free radicals can lead to oxidative stress , damaging important molecules and causing various diseases . The body’s natural antioxidative mechanisms, involving enzymes like superoxide dismutase and nonenzymatic compounds, usually control free radical levels. But under stress, these mechanisms can become overwhelmed, leading to oxidative stress. Consumption of exogenous antioxidants helps reduce oxidative stress and prevent diseases like cardiovascular disorders and rheumatoid arthritis . Natural antioxidants from microorganisms, particularly Actinobacteria like Streptomyces , are safer alternatives to synthetic chemicals . Streptomyces sp. strain E2 extract has shown promising antioxidant activity in various assays (DPPH, ABTS and FRAP), indicating potential health benefits. To accurately assess antioxidant potential, multiple assays indicating different mechanisms should be employed, and results should be standardized to allow comparison between studies. In this study, we evaluated Streptomyces sp. strain E2 extract using various assays and expressed results as equivalents of ascorbic acid, providing valuable insights into its antioxidant capacity. The application of Gas Chromatography-Mass Spectrometry (GC–MS) analysis has been instrumental in the bioprospecting of natural products derived from Streptomyces bacteria – . Maltol and eugenol are the major compounds present in the extract of the strain Streptomyces sp. E2. Maltol has been found in various plants , and isolated as a bacterial metabolite from Streptomyces sp. strain GW3/1538 and Streptomyces sp. strain SBT348 derived from marine sponges . As a chelator of metal ions , maltol shows promising applications, particularly in protecting nerve cells against oxidative damage caused by reactive oxygen species (ROS), thereby contributing to the maintenance of normal cellular functions and the reduction of oxidative stress associated with diabetes and irreversible kidney damage . The second main compound identified in the ethyl acetate extract of E2 strain is eugenol. 
Eugenol exhibits a diverse range of properties, including antimicrobial activity, antioxidant properties, anesthetic potential, anticarcinogenic effects, and anti-inflammatory action, as well as demonstrated efficacy in the treatment of diabetes and the reduction of blood lipid levels , . The World Health Organization (WHO) generally recognizes eugenol as safe (GRAS) and non-mutagenic. There are several methods for isolating eugenol, including solvent extraction, steam distillation, and hydrodistillation . Eugenol is known for its effects on the cell membrane and cell wall of Gram-negative and Gram-positive bacteria, leading to their lysis and the leakage of their intracellular contents, including lipids and proteins . Previous studies, such as that conducted by Gülçin , have highlighted eugenol’s strong antioxidant properties and its ability to scavenge free radicals. It has exhibited antibacterial effects against various species, including Staphylococcus aureus , Pseudomonas aeruginosa , and Escherichia coli . The HPLC–UV/vis analysis of the ethyl acetate extract from Streptomyces sp. strain E2 revealed a rich presence of phenolic acids, including gallic acid, chlorogenic acid, vanillic acid, trans-ferulic acid, ellagic acid, and cinnamic acid. These compounds are well-known for their potent antioxidant and antimicrobial properties . Gallic acid and chlorogenic acid are particularly effective at neutralizing free radicals and protecting cells from oxidative stress, which is crucial in preventing diseases such as cardiovascular diseases and cancer – . Vanillic acid and trans-ferulic acid, on the other hand, have demonstrated significant antibacterial and antifungal activities, indicating the potential of the E2 strain as a natural antimicrobial agent , . Similarly, the biological properties of ellagic acid and cinnamic acid produced by Streptomyces species have been extensively studied. Ellagic acid exhibits anti-carcinogenic properties, inducing apoptosis in cancer cells and inhibiting tumor cell proliferation , . Additionally, it was reported that cinnamic acid could reduce inflammation, combat microbial infections, and protect against oxidative damage , . The present study highlights the discovery of a novel actinobacterial isolate, designated as strain E2, originating from the soil of a terrestrial ecosystem in the Settat Casablanca region of Morocco. This strain, identified as belonging to the genus Streptomyces , exhibited broad-spectrum antimicrobial activity, along with promising antioxidant properties. The findings of this study underscore the significance of soil microbial biodiversity, particularly actinobacteria, as a potential source of new bioactive molecules. Specifically, Streptomyces sp. strain E2 holds particular interest due to its ability to produce secondary metabolites with antimicrobial and antioxidant activities. These results pave the way for further research avenues, including the comprehensive characterization of the mechanisms of action of metabolites produced by Streptomyces sp. strain E2, as well as their in vivo evaluation to determine efficacy and safety. Additionally, studies on modulating culture conditions to enhance the production of bioactive metabolites could be explored. Moreover, exploring the potential applications of the compounds identified in the ethyl acetate extract of strain E2, in various fields, including as antimicrobial and antioxidant agents, and even as candidates for the development of novel drugs, warrants investigation. 
Furthermore, to achieve a more precise taxonomic resolution for strain E2, additional research involving the sequencing of multiple conserved genes and conducting a multi-gene phylogenetic analysis is warranted. Lastly, these findings underscore the importance of preserving natural ecosystems, such as soils, as reservoirs of valuable microbial biodiversity, and the need to continue research efforts to explore their biotechnological potential in the context of combating antibiotic resistance and promoting human and environmental health.

Soil sample collection

Soil samples were aseptically collected from terrestrial ecosystems in April 2023. The specific locations were the Kenzi garden (GPS: 32° 59′ 15.1″ N, 7° 36′ 16.7″ W) and the Faculty of Science and Technology garden in Settat (GPS: 33° 00′ 37″ N, 7° 61′ 83″ W), located in the Casablanca-Settat region, Morocco. To ensure diversity and avoid duplication in the isolation of Actinobacteria, four distinct sampling points were designated for each site. Sampling involved the careful removal of a 5-cm layer from the soil surface using a sterile spatula, followed by the extraction of 150 to 200 g of the underlying layer. The soil samples were then transferred under aseptic conditions to sterile 'Stomacher' bags, where they were blended and standardized to create a uniform soil mixture. Following this, they were conveyed to the Microbiology laboratory for storage at 4 °C until subsequent analysis.

Physical–chemical analysis of the soil samples

The pH, electrical conductivity (EC), mineral content, organic matter (OM), and soil texture of each soil sample were assessed using the methodologies outlined in our previous research. Minerals including carbon (C), oxygen (O), magnesium (Mg), silicon (Si), iron (Fe), potassium (K), and calcium (Ca) were examined using a scanning electron microscope (SEM) (JEOL model JSM-IT500HR), as reported by Stefaniak et al. Analysis of zinc (Zn), manganese (Mn), chlorine (Cl), aluminum (Al), phosphorus (P), copper (Cu), and sulfur (S) was conducted through the energy-dispersive X-ray fluorescence method, with the Epsilon 3XLE instrument from PANalytical, France, following the methodology described by Thirion-Merle.

Pretreatment of soil samples, isolation, purification, and preservation of Actinobacteria isolates

To increase the recovery of Actinobacteria, a pre-treatment involving drying the soil for at least a week at room temperature was performed. Subsequently, the dried soil samples were ground with a mortar to eliminate debris and stone particles, and then stored in sterile tubes at 4 °C. To isolate Actinobacteria, 10 g of each soil sample underwent serial dilution to 10⁻⁴, and 100 μL of each dilution was spread onto selective culture media (M2, Bennett, GLM, GA) containing 50 mg/L actidione (cycloheximide) to inhibit fungal growth. Petri dishes were then incubated at 28 °C for 1 week, with daily monitoring. Following the incubation period, colonies exhibiting Actinobacteria characteristics based on macroscopic and microscopic observations were sub-cultured on ISP2 medium using the streak method to obtain pure cultures. For short-term preservation, these cultures were kept in inclined tubes at 4 °C, while longer-term preservation involved storage in 20% glycerol at −20 °C. This preservation approach maintains the stability and viability of Actinobacteria strains.
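The study reports isolate counts rather than CFU densities, but the serial-dilution plating just described (dilution to 10⁻⁴, 100 µL spread per plate) lends itself to the usual back-calculation of culturable abundance. A small sketch with made-up colony counts:

def cfu_per_gram(colony_count: int, dilution_factor: float, volume_plated_ml: float) -> float:
    """Back-calculate CFU per gram of soil (or per mL of the initial suspension).

    CFU/g = colonies / (dilution factor x volume plated), assuming 1 g of soil per mL
    of the initial suspension; adjust the ratio if the starting suspension differs.
    """
    return colony_count / (dilution_factor * volume_plated_ml)

# Hypothetical plate: 32 colonies on the 10^-3 dilution, 0.1 mL (100 uL) plated
print(f"{cfu_per_gram(32, 1e-3, 0.1):.2e} CFU/g")  # 3.20e+05 with these made-up numbers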
Assessment of the antimicrobial activity of actinobacterial isolates by in vitro screening

Test microorganisms

The antimicrobial effectiveness of the Actinobacteria isolates was assessed against a range of test microorganisms. This panel included Staphylococcus aureus ATCC 25923, Pseudomonas aeruginosa ATCC 27853, Escherichia coli ATCC 25922, and Candida albicans ATCC 60193 (a pathogenic fungus). These test strains were obtained from the Pasteur Institute of Morocco in Casablanca. Additionally, six clinically multi-resistant strains were employed, consisting of Listeria monocytogenes, Klebsiella pneumoniae 19K929, Proteus sp. 19K1313, Klebsiella pneumoniae 20B1572, Proteus vulgaris 16C1737, and Klebsiella pneumoniae 20B1572. These strains were sourced from the Pasteur Settat Medical Analysis Laboratory in Morocco.

Primary screening

Primary screening of Actinobacteria was conducted on ISP2 agar medium (composition: yeast extract, 4 g; glucose, 4 g; malt extract, 10 g; agar, 16 g; distilled water, 1 L; pH adjusted to 6.51) using the double-layer method. A pure Actinobacteria isolate was centrally inoculated onto each sterile agar plate, and the plates were incubated at 28 °C for 10 days. A second layer, consisting of 5 mL of Mueller–Hinton medium (MHM) weakly agarized with 0.7% (w/v) agar and previously inoculated with the test microorganisms (the reference bacteria Staphylococcus aureus ATCC 25923, Pseudomonas aeruginosa ATCC 27853, and Escherichia coli ATCC 25922, as well as the pathogenic fungus Candida albicans ATCC 60193), was then poured over the plates. These microbial strains were sourced from the Pasteur Institute in Casablanca, Morocco. Following an incubation period of 24 h at 37 °C for bacteria and 48 h at 28 °C for fungi, zones of inhibition surrounding the Actinobacteria colonies were visually inspected and measured using a caliper.

Secondary screening

Fermentation and extraction of secondary metabolites from actinobacterial isolates

Following the identification of Actinobacteria isolates E2 and E6, which exhibited notable antimicrobial properties in the preliminary screening, an extensive investigation into their secondary metabolites was conducted. Fermentation and extraction processes were employed to isolate these bioactive compounds. In this study, 500 mL Erlenmeyer flasks, each containing 100 mL of ISP2 culture medium, were used to ferment the active Actinobacteria isolates. The cultures were subjected to constant agitation at 150 rpm and maintained at 28 °C. Subsequently, the Actinobacteria cultures underwent centrifugation at 10,000 g for 20 min to remove the mycelial mass, and the resulting supernatant was collected. To extract the secondary metabolites, the supernatant was subjected to organic solvent extraction with solvents of increasing polarity: it was mixed successively with hexane, dichloromethane, ethyl acetate, and butanol. The organic extracts obtained were then evaporated at 45 °C to remove solvent residues. Finally, the dry extracts, along with the residual aqueous phases, were dissolved in dimethyl sulfoxide (DMSO) to facilitate concentration determination, following established protocols.

Evaluation of antimicrobial activity using the disc diffusion method

To assess the antimicrobial activity of the liquid extracts, we employed the paper disk technique as described by Badji et al.
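Before the discs are applied, the inoculum is standardized to a target optical density (the OD values are given in the next paragraph); the dilution arithmetic is a simple C1V1 = C2V2 rearrangement. A small helper with hypothetical readings:

def dilution_volumes(od_measured: float, od_target: float, final_volume_ml: float):
    """Volumes of culture and sterile diluent needed to reach a target OD.

    Simple C1*V1 = C2*V2 rearrangement; assumes OD is proportional to cell density
    over this range and that od_measured >= od_target.
    """
    culture_ml = od_target * final_volume_ml / od_measured
    return culture_ml, final_volume_ml - culture_ml

# Hypothetical overnight culture at OD625 = 1.20, target OD625 = 0.10, 10 mL needed
culture, diluent = dilution_volumes(1.20, 0.10, 10.0)
print(f"mix {culture:.2f} mL culture with {diluent:.2f} mL diluent")  # 0.83 mL + 9.17 mL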
Sterile filter paper discs, 6 mm in diameter, were impregnated with 25 µL of each extract, with DMSO and nalidixic acid serving as negative and positive controls, respectively. After impregnation, the discs were allowed to dry briefly near a Bunsen burner before being placed onto the surface of Mueller–Hinton agar (MHA) previously inoculated with the test bacteria by swabbing. Before antimicrobial testing, bacterial cells were harvested and adjusted to an optical density (OD) of 0.08 to 0.13 at 625 nm, roughly equating to 10⁶ colony-forming units per milliliter (CFU/mL), using a spectrophotometer (Selectra VR2000, Barcelona, Spain). Similarly, for assessing antifungal activity, inoculum optical densities were maintained within the range of 0.18 to 0.20 at 623 nm, corresponding to a concentration of approximately 10⁶ spores/mL. Subsequently, the plates were refrigerated at +4 °C for 2 h to allow diffusion of the molecules before being incubated at 37 °C for 24 h. Following incubation, the diameters of the zones of inhibition were measured in mm.

Determination of minimum inhibitory concentration

The minimum inhibitory concentration (MIC) values were determined using a liquid culture medium microdilution method. Briefly, 100 µL of Mueller–Hinton broth (MHB) were added to each well of a 96-well plate. Then, 100 µL of each sample's stock solution (1 mg/mL) were mixed into the first column, and serial twofold dilutions were performed up to column 10, resulting in a concentration range from 1 mg/mL to 0.001 mg/mL of the ethyl acetate extract of the E2 isolate. The bacterial culture was adjusted to an absorbance equivalent to 0.5 McFarland, and 10 µL were added to each well except those in column 12, which served as the negative control (MHB without inoculum). Column 11 served as the positive growth control (tested strain in MHB). Each test was conducted in triplicate. After incubation, 20 µL of a 2,3,5-triphenyltetrazolium chloride (TTC) aqueous dye (Merck, Germany; CAS No. 298-96-4) were added to the wells and incubated for 3 h. The MIC was determined as the lowest concentration showing no microbial growth; growth was indicated by a color change from yellow to pink.

Characterization of Actinobacteria isolates: cultural, micro-morphological, biochemical and physiological characteristics

Cultural characteristics, including growth intensity, surface pigmentation, colony morphology, and the presence of diffusible pigments in agar, were observed on different media such as Bennett, ISP1, ISP7, and GYEA. These media were inoculated using the streaking method, and the plates (90 mm in diameter) were incubated at 28 °C and monitored daily for 10 days. Microscopic characteristics of pure isolates were examined using light microscopy (Olympus CX43RF), both in the fresh state and after Gram staining. Additionally, the physiological and biochemical characteristics of the Actinobacteria isolates were evaluated using established methodologies described in previous studies. These assessments included the evaluation of melanoid pigment production, tolerance to varying concentrations of sodium chloride (NaCl) and pH levels, growth at different temperatures, and carbohydrate assimilation.

Genotypic identification of isolates through 16S rRNA sequencing

Genotypic identification was carried out via 16S rRNA sequencing to verify the species identification of our isolates.
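Returning briefly to the broth microdilution described above: the layout halves the extract concentration at each of the first ten columns. Assuming a strict twofold series (100 µL carried over into 100 µL of broth at every step), column n holds stock/2ⁿ before the inoculum is added, which reproduces the ~0.001 mg/mL endpoint quoted above (note that the first well then holds 0.5 mg/mL rather than the full 1 mg/mL stock). A short sketch:

def microdilution_series(stock_mg_per_ml: float = 1.0, n_columns: int = 10):
    """Concentrations of a twofold broth-microdilution series.

    Assumes 100 uL of stock (or of the previous well) is mixed into 100 uL of broth at
    every step, so column n holds stock / 2**n before the inoculum is added.
    """
    return [stock_mg_per_ml / 2 ** n for n in range(1, n_columns + 1)]

for column, concentration in enumerate(microdilution_series(), start=1):
    print(f"column {column:2d}: {concentration:.5f} mg/mL")
# column  1: 0.50000 ... column 10: 0.00098 (close to the 0.001 mg/mL endpoint quoted above)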
DNA extraction from the strain was performed using an automated system, specifically the Mag Purix Bacterial DNA Extraction Kit, in accordance with the manufacturer's instructions. The amplification reaction followed a previously described protocol. The 16S rRNA gene was amplified using the universal bacterial primers Fd1 (5′-AGAGTTTGATCATGGCTCAG-3′) and rP2 (5′-ACGGTTACCTTGTTACGACTT-3′), yielding an amplicon of 1,500 bp under established conditions. Subsequently, all amplified products were subjected to sequencing for identity validation. Both strands of the purified amplicons were sequenced on a 3130xl Genetic Analyzer, employing the same primers used for PCR amplification. The sequences obtained were assembled into contigs using DNA Baser Assembler software version 5.15.0 and saved in FASTA format. Multiple alignment of the E2 strain 16S rRNA gene sequence with representative sequences of related Streptomyces strains was conducted using MEGA X software. This alignment was used to construct a phylogenetic tree via the neighbor-joining method within the same software. Evolutionary distances were computed using Kimura's two-parameter model, and tree topologies were evaluated through bootstrap analyses employing Felsenstein's method with 1,000 resamples.

Determination of phenolics and flavonoids in the ethyl acetate extract

For the determination of total phenolics, the Folin-Ciocalteu method, as detailed by Bensadón et al., was employed. This involved combining 3 mL of diluted Folin-Ciocalteu reagent (1:10) with 500 μL of sample or standard (1 mg/mL prepared in methanol), followed by the addition of 3 mL of Na₂CO₃ (6%). After one hour of incubation at room temperature, protected from light, absorbance was recorded at 760 nm. Similarly, quantification of flavonoid content was carried out using the aluminum chloride method, according to the protocol described by Bahorun et al. Briefly, 1 mL of sample or standard (prepared in methanol) was mixed with 1 mL of AlCl₃ solution (2% in methanol). After a 10-min incubation period, absorbance was measured at 415 nm.

Antioxidant activity of the E2 ethyl acetate extract

The antioxidant capacity of the ethyl acetate extract from Streptomyces sp. strain E2 was evaluated using three different assays. First, the DPPH (2,2-diphenyl-1-picrylhydrazyl) free radical scavenging activity was determined according to the method described by Blois, with absorbance measured at 517 nm using an ELISA microplate reader; ascorbic acid served as the positive antioxidant control. Second, the 2,2′-azino-bis(3-ethylbenzothiazoline-6-sulphonic acid) (ABTS) assay was conducted following the protocol developed by Re et al. The resulting absorbance was measured at 743 nm, with a decrease indicating a reduction in the amount of radical; Trolox was employed as a positive control. Finally, the ferric reducing antioxidant power (FRAP) of the extracts was assessed as outlined by Oyaizu, with absorbance measured at 700 nm against a blank in which distilled water replaced the extract; ascorbic acid was used as the positive standard in this assay.

Analysis of the E2 ethyl acetate extract by GC–MS

Gas chromatography-mass spectrometry (GC–MS) analysis was conducted following the procedure outlined by Chakraborty et al. The analysis utilized an Agilent 7890A Series gas chromatography (GC) system coupled with mass spectrometry (MS), comprising a multimode injector and an HP-5MS capillary column (30 m × 0.250 mm × 0.25 μm).
Solubilized extracts were introduced into the column using helium as the carrier gas (1.7 mL/min) in a 1:4 fractionated injection mode. The ion source and quadrupole temperatures were maintained at 230 and 150 °C, respectively. The temperature program ranged from 60 to 360 °C. Compound identification was achieved by comparing the obtained mass spectra with data available in the NIST MS 2017 library . HPLC–UV/visible analysis The HPLC–UV/vis analysis was performed using a Shimadzu HPLC system equipped with an SPD-20A UV/absorbance detector. Separation was achieved using a Waters reverse-phase (RP) Symmetry C-18 column (150 × 3.9 mm, 5 μm) at ambient temperature. The mobile phase consisted of deionized water with trifluoroacetic acid (TFA) (pH 2.5) as solvent A and 99.99% methanol as solvent B. A gradient elution method was employed: initially, 100–50% solvent A over 0 to 20 min, followed by 50–40% solvent A from 20 to 30 min, and finally, 40–100% solvent A from 30 to 40 min. The flow rate of the mobile phase was maintained at 1 mL/min, and the detector was set at 280 nm . Phenolic compounds were identified by comparing their retention times and UV–Visible spectra with those of previously injected standards (resorcinol, caffeic acid, syringic acid, vanillin, p-coumaric acid, sinapic acid, ferulic acid, epicatechin, quercetin, gallic acid, chlorogenic acid, vanillic acid, trans-ferulic acid, ellagic acid, and cinnamic acid) . Statistical analysis The antioxidant and antimicrobial activities, along with the determination of total phenolic and flavonoid compounds, were carried out in triplicate for each test. Data collected were analyzed using GraphPad Prism 8.4.3 software and presented as mean ± standard deviation (SD). Significant differences between groups were determined via one-way analysis of variance (ANOVA) followed by Tukey’s multiple comparisons test. A significance level of p ≤ 0.05 was adopted for all data analyses in this study. Pearson correlation analysis was conducted using GraphPad Prism 8.4.3 software to evaluate the relationship between total phenolic and flavonoid compounds and antioxidant activity. Soil samples were aseptically collected from terrestrial ecosystems in April 2023. The specific locations were the Kenzi garden (GPS: 32° 59′ 15.1″ N, 7° 36′ 16.7″ W) and the Faculty of Science and Technology garden in Settat (GPS: 33° 00′ 37″ N, 7° 61′ 83″ W), located in the Casablanca-Settat region, Morocco. To ensure diversity and avoid duplication in the isolation of Actinobacteria , four distinct sampling points were designated for each site. Sampling necessitated the careful removal of a 5-cm layer from the soil surface utilizing a sterile spatula, followed by the extraction of 150 to 200 g of the underlying layer . These meticulously acquired soil samples were then transferred under aseptic conditions to sterile ‘Stomacher’ bags, where they were blended and standardized to create a uniform soil mixture. Following this, they were conveyed to the Microbiology laboratory for storage at 4 °C until subsequent analysis . The pH, electrical conductivity (EC), mineral content, organic matter (OM), and soil texture of each soil sample were assessed using the methodologies outlined in our previous research , . Minerals including carbon (C), oxygen (O), magnesium (Mg), silicon (Si), iron (Fe), potassium (K), and calcium (Ca) were examined using a scanning electron microscope (SEM) (JEOL model JSM-IT500HR), as reported by Stefaniak et al. . 
On the other hand, analysis of zinc (Zn), manganese (Mn), chlorine (Cl), aluminum (Al), phosphorus (P), copper (Cu), and sulfur (S) was conducted through the energy-dispersive X-ray fluorescence method, with the Epsilon 3XLE instrument from PANalytical, France, following the methodology described by Thirion-Merle . Actinobacteria isolates To increase the number of Actinobacteria , a pre-treatment involving drying the soil for at least a week at room temperature was performed . Subsequently, the dried soil samples underwent grinding with a mortar to eliminate debris and stone particles present in the soil samples. The soil samples were then stored in sterile tubes at 4 °C. To isolate Actinobacteria from the soil samples, 10 g of each soil sample underwent serial dilution to 10⁻⁴, and 100 μL of each dilution was spread onto selective culture media (M2, Bennett, GLM, GA) containing 50 mg/L actidione to inhibit fungal growth . Petri dishes were then incubated at 28 °C for 1 week, with daily monitoring. Following the incubation period, colonies exhibiting Actinobacteria characteristics based on macroscopic and microscopic observations were sub-cultured on ISP2 medium using the streak method to obtain pure cultures. For short-term preservation, these cultures were kept in inclined tubes at 4 °C , while longer-term preservation involved storage in 20% glycerol at − 20 °C . This preservation approach maintains the stability and viability of Actinobacteria strains. Test microorganisms The antimicrobial effectiveness of Actinobacteria isolates was assessed against a range of test microorganisms. This panel included Staphylococcus aureus ATCC 25923, Pseudomonas aeruginosa ATCC 27853, Escherichia coli ATCC 25922 and Candida albicans ATCC 60193 (a pathogenic fungus). These test strains were obtained from the Pasteur Institute of Morocco in Casablanca. Additionally, six clinically multi-resistant strains were employed, consisting of Listeria monocytogenes , Klebsiella pneumoniae 19K 929, Proteus sp. 19K1313, Klebsiella pneumoniae 20B1572, Proteus vulgaris 16C1737, and Klebsiella pneumoniae 20B1572. These strains were sourced from the Pasteur Settat Medical Analysis Laboratory in Morocco. Primary screening of Actinobacteria was conducted on ISP2 agar medium (composition: yeast extract 4 g, glucose 4 g, malt extract 10 g, agar 16 g, distilled water 1 L, pH adjusted to 6.51) utilizing the double layer method . On a sterile agar medium, a pure Actinobacteria isolate was centrally inoculated onto each plate. Subsequently, the plates were incubated at 28 °C for a duration of 10 days.
A second layer, consisting of 5 mL of Mueller–Hinton medium (MHM) softened with 0.7% (w/v) agar and previously inoculated with the test microorganisms (reference bacteria such as Staphylococcus aureus ATCC 25923, Pseudomonas aeruginosa ATCC 27853 and Escherichia coli ATCC 25922, as well as the pathogenic fungus Candida albicans ATCC 60193), was then poured over the plates. These microbial strains were sourced from the Pasteur Institute in Casablanca, Morocco. Following an incubation period of 24 h at 37 °C for bacteria and 48 h at 28 °C for fungi, zones of inhibition surrounding the Actinobacteria colonies were visually inspected and measured using a caliper. Fermentation and extraction of secondary metabolites from actinobacterial isolates Following the identification of Actinobacteria isolates E2 and E6 exhibiting notable antimicrobial properties in the preliminary screening, an extensive investigation into their secondary metabolites was conducted. Fermentation and extraction processes were employed to isolate these bioactive compounds. In this study, 500 mL Erlenmeyer flasks, each containing 100 mL of ISP2 culture medium, were utilized for fermenting the active Actinobacteria isolates. The cultures were subjected to constant agitation at 150 rpm and maintained at 28 °C. Subsequently, the Actinobacteria cultures underwent centrifugation at 10,000 g for 20 min to remove the mycelial mass, and the resulting supernatant was collected. To extract the secondary metabolites, the supernatant was subjected to organic solvent extraction with solvents of increasing polarity: it was mixed successively with hexane, dichloromethane, ethyl acetate, and butanol. The organic extracts obtained were then evaporated at 45 °C to remove solvent residues. Finally, the dry extracts, along with the residual aqueous phases, were dissolved in dimethyl sulfoxide (DMSO) to facilitate concentration determination, following established protocols , , .
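The broth microdilution series described earlier in this section (100 µL of broth per well, 100 µL of the 1 mg/mL stock added to column 1, then cascade-diluted through column 10) is easier to follow with the arithmetic written out. The sketch below is purely illustrative and assumes the standard equal-volume two-fold scheme implied by that description; it is not part of the published protocol. It shows that the in-well series runs from 0.5 mg/mL in column 1 down to roughly 0.001 mg/mL in column 10, the lower end of the 1–0.001 mg/mL range quoted for the extract.

```python
# Illustrative only: nominal in-well concentrations for the two-fold broth
# microdilution described above (assumption: equal 100 µL volumes are mixed
# and carried over at each step; not taken from the published paper).

STOCK_MG_PER_ML = 1.0    # extract stock concentration
N_DILUTION_WELLS = 10    # columns 1-10; column 11 = growth control, column 12 = sterility control

def two_fold_series(stock_conc, n_wells):
    """Return the nominal concentration (mg/mL) in each dilution well."""
    series = []
    current = stock_conc / 2.0          # 100 µL stock into 100 µL broth in column 1
    for _ in range(n_wells):
        series.append(current)
        current /= 2.0                  # 100 µL carried into the next well's 100 µL broth
    return series

if __name__ == "__main__":
    for column, conc in enumerate(two_fold_series(STOCK_MG_PER_ML, N_DILUTION_WELLS), start=1):
        print(f"column {column:2d}: {conc:.4f} mg/mL")
    # column 1 -> 0.5000 mg/mL, column 10 -> ~0.0010 mg/mL, i.e. the lower end
    # of the 1 to 0.001 mg/mL range quoted above for the ethyl acetate extract.
```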
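The statistical workflow described earlier in this methods section (triplicate measurements compared by one-way ANOVA with Tukey's multiple comparisons test, plus a Pearson correlation between total phenolic/flavonoid content and antioxidant activity) was run in GraphPad Prism. The following sketch only illustrates the same sequence of tests in open-source form with SciPy and statsmodels; the numerical values are placeholders, not data from the study.

```python
# Illustrative re-creation (not the original GraphPad workflow) of the
# statistics described above: one-way ANOVA with Tukey's post-hoc test on
# triplicate measurements, and a Pearson correlation. Values are placeholders.
import numpy as np
from scipy.stats import f_oneway, pearsonr
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# hypothetical triplicate DPPH scavenging values (%) for three samples
groups = {
    "extract": [61.2, 63.5, 60.8],
    "ascorbic_acid": [92.1, 93.0, 91.7],
    "blank": [4.9, 5.3, 5.1],
}

f_stat, p_value = f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

values = np.concatenate([np.asarray(v, dtype=float) for v in groups.values()])
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))

# hypothetical paired measurements: total phenolics (mg GAE/g) vs DPPH scavenging (%)
phenolics = [12.4, 15.1, 18.3, 21.0, 24.6]
dpph = [40.2, 47.5, 55.9, 63.1, 70.4]
r, p = pearsonr(phenolics, dpph)
print(f"Pearson r = {r:.3f}, p = {p:.4f}")
```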
Limitations of biopsy-based transcript diagnostics to detect T-cell-mediated allograft rejection
13abf50b-8e79-405e-9734-774d530a8e7c
11852332
Surgical Procedures, Operative[mh]
Kidney transplant biopsies remain the cornerstone of diagnosing acute allograft rejection in kidney transplant recipients (KTRs) and are performed when alloimmune injury is suspected . Although the introduction of the Banff classification as a guideline for histological rejection diagnosis offered more objective criteria based on defined lesions (i-, t-, v-, g-, ptc-, etc.), the semiquantitative assessment remains subject to interobserver variability . T-cell-mediated rejection (TCMR), as one modality of rejection, is diagnosed based on three histological Banff lesions: interstitial inflammation (i-lesion), tubulitis (t-lesion) and intimal arteritis (v-lesion) . Management decisions vary according to the severity of TCMR: treatment protocols for fully developed TCMR include pulsed steroids and/or anti-thymocyte globulin (ATG), as well as an optimization of maintenance immunosuppression. These protocols were established many years ago . However, management in cases of borderline changes or isolated TCMR lesions remains unclear. Additionally, several important lesions of the TCMR continuum (i-, t-, v-lesions) have confounding factors, especially when they occur in isolation: i- and t-lesions are found in acute kidney injury (AKI) and other overlapping pathologies (BK nephropathy, pyelonephritis, acute interstitial nephritis) . Intimal arteritis (v-) lesions are not specific for TCMR or antibody-mediated rejection (AMR), and their relevance for clinical outcomes as isolated lesions remains equivocal . In fact, isolated v-lesions in the early phase after transplantation could be representative of ischemic changes rather than acute vascular rejection . Biomarkers other than the Banff classification, such as donor-derived cell-free DNA (DD-cfDNA), the Banff Human Organ Transplant Panel (B-HOT) or quantitative polymerase chain reaction (qPCR), are currently under discussion and used to support the exclusion or confirmation of rejection, and have proven useful, especially in reclassifying biopsies suspicious for AMR . The development and validation of the Molecular Microscope Diagnostic System (MMDx), a microarray-based mRNA assessment, also offered value in the diagnosis of TCMR and AMR . However, although MMDx is independent of individual bias by the pathologist, limitations such as discrepancies with histology and uncertainty in early stages of alloimmune injury or in overlapping pathologies have been addressed in previous research, and its validation in clinical practice in such cases is still needed . A recent study, using a cohort independent of the validation cohort, showed no identification of mAMR in cases with donor-specific antibodies (DSAs) but without microvascular inflammation (MVI) . Earlier studies also showed variable results of molecular diagnosis in cases of histological uncertainty regarding TCMR-suspicion; yet it was suggested that MMDx may be able to reclassify molecular TCMR in a subset of patients with isolated or combined i-, t- and v-lesions that do not meet all Banff criteria for the diagnosis of TCMR . In our cohort from the University Hospital of Zurich, we investigated the added value of MMDx in ambiguous histologic lesions of the TCMR continuum, i.e. isolated tubulitis (i0, t1–3, v0), borderline changes (according to Banff 2022) and isolated arteritis lesions (no borderline changes, v1), with prior exclusion of influencing factors such as overlapping pathologies, pre-treatment or concurrent chronic-active TCMR (caTCMR).
Study population This single-center study consists of 249 kidney transplant biopsies (21 protocol, 228 for cause) from 219 KTRs, consecutively and unselectively conducted between 2021 and 2023 at the University Hospital of Zurich. All patients gave general consent for using clinical data, including biopsy results and MMDx analysis. The study was approved by the cantonal ethics commission review board of Zurich, Switzerland (BASEC 2020-02 817) and complied with the Declaration of Helsinki. Rejection terminology In the context of explaining rejections based on histology throughout the manuscript, the prefix ‘h’ is appended (e.g. hAMR). However, when referring to molecular phenotypes of rejections, the prefix ‘m’ is used (e.g. mAMR). Biopsy process and histologic categorization All biopsies were evaluated at the bedside for adequacy (sufficient cortex) during the procedure. Local pathologists assigned histologic diagnoses according to the 2018 Reference Guide to the Banff Classification and The Banff 2019 Kidney Meeting Report . Our study team reclassified all biopsies according to the Banff update 2022 . The study cohort ( n = 214) was categorized into isolated tubulitis ( n = 101), borderline changes ( n = 9), isolated arteritis ( n = 37) and a group without suspicion for TCMR (i0, t0, v0; no inflammation, n = 67), the latter most importantly for comparison of molecular (Classifier) and rejection phenotype scores (R scores). Thirty-five cases were included as a positive control cohort consisting of either hTCMR (divided into TCMR IA/IB, n = 9 or TCMR IIA/IIB, n = 18; total n = 27) or mixed histologic rejection (hAMR/TCMR; n = 8). Inclusion criteria for all 249 biopsies were: biopsies with a complete histologic examination based on relevant Banff lesions (i-, t-, v-, g-, ptc-lesions) according to the Banff classification and corresponding MMDx analysis. Exclusion criteria for the study cohort ( n = 214) were hTCMR (including caTCMR) or hAMR/TCMR according to the Banff classification, and alternative histopathologic findings competing with T-cell-specific Banff lesions or glomerulitis (BK nephropathy, pyelonephritis, acute interstitial nephritis and glomerulonephritis; n = 67). Patients receiving anti-rejection therapy shortly before biopsy were also excluded ( n = 8). Isolated v1 (formally TCMR IIA regardless of i- and t-lesions) was an exception, as it was classified as isolated arteritis. As the main subcategorization of all subgroups was presence or absence of microvascular inflammation (MVI), hAMR cases were also included. The detailed deduction is demonstrated in Fig. . Definitions according to the Banff classification Isolated tubulitis was defined as no foci of inflammation (i0) or arteritis (v0) but foci of tubulitis (t1–3). Borderline changes followed the diagnostic criteria of the Banff update 2022 . Importantly, while a formal classification would designate the v1-lesion as TCMR IIA regardless of i- and t-lesions, it was categorized as isolated arteritis in this study, if no criteria for borderline changes (i > 0 and t > 0) were present. hTCMR criteria in the positive control cohort also followed the diagnostic criteria of Banff 2022. MVI at or above threshold was defined as g+ptc ≥2, with g >0; ptc2 only in cases without borderline changes or hTCMR ( n = 3). g+ptc >0, but <2 was considered MVI below threshold and g+ptc = 0 negative. C4d was counted as negative in ABO-incompatible transplantations (total n = 19) and counted as positive if >1 (tested by immune fluorescence). 
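The MVI and threshold rules just stated are compact enough to be written out as a small decision helper. The sketch below merely restates the definitions given in this section for illustration; it is not a validated scoring tool, the handling of a discounted isolated ptc2 (returned here as below threshold) is an interpretation of the rule above, and the example scores are hypothetical.

```python
def mvi_category(g: int, ptc: int, borderline_or_htcmr: bool) -> str:
    """Restate the MVI definitions above (illustrative only).

    At/above threshold: g + ptc >= 2 with g > 0, or an isolated ptc2 (g = 0)
    only when neither borderline changes nor hTCMR are present.
    Below threshold: 0 < g + ptc < 2 (and, by interpretation, a discounted
    isolated ptc2 accompanied by borderline changes/hTCMR).
    Negative: g + ptc = 0.
    """
    total = g + ptc
    if total == 0:
        return "negative"
    if (total >= 2 and g > 0) or (g == 0 and ptc == 2 and not borderline_or_htcmr):
        return "MVI at/above threshold"
    return "MVI below threshold"

# hypothetical examples
print(mvi_category(g=1, ptc=1, borderline_or_htcmr=False))  # MVI at/above threshold
print(mvi_category(g=0, ptc=2, borderline_or_htcmr=True))   # MVI below threshold (ptc discounted)
print(mvi_category(g=0, ptc=1, borderline_or_htcmr=False))  # MVI below threshold
print(mvi_category(g=0, ptc=0, borderline_or_htcmr=False))  # negative
```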
Further subcategorization based on the new AMR continuum presented in the Banff update 2022 consisted of: findings below threshold, probable hAMR (DSA+, C4d–, g+ptc <2 but >0), DSA-negative and C4d-negative MVI (g+ptc ≥2), and hAMR . MMDx analysis Small tissue samples (3–4 mm) were obtained from one of the at least two biopsy cores and stored in RNAlater ® solution. MMDx analysis was performed in Kashi Clinical Laboratories (Portland, OR, USA) per protocol. The following molecular rejection phenotypes were assessed: mTCMR, mAMR/TCMR, minor mAMR (rejection score “normal,” but AMR score “mild/moderate”) and mAMR. Rejection score “mild/moderate,” but AMR and TCMR scores “normal” ( n = 6), and rejection score “normal,” but TCMR score “mild” ( n = 1) were counted as “no molecular rejection” since there is currently no molecular phenotype for these entities. Diagnostic criteria for phenotypes followed the descriptions from: https://cloudfront.ualberta.ca/-/media/medicine/institutes-centres-groups/atagc/report_description.docx . Donor-specific antibodies The presence of donor-specific antibodies (DSAs) was assessed by OneLambda single antigen beads. An adjusted mean fluorescence intensity of ≥500 was considered significant for DSA relevance. Preformed (pDSA) and de novo DSA (dnDSA) were analyzed. All individuals received full re-typing for relevant HLA antigens (including HLA-C, HLA-DQ, HLA-DP) when needed for interpretation of HLA antibodies occurring in the HLA-Luminex single antigen class I or II assays (DSA vs non-DSA HLA antibodies). Statistical analysis Statistical analysis was conducted with IBM SPSS Version 29 (SPSS, Chicago, IL, USA), Microsoft Excel and GraphPad Prism Version 9.5.1. Non-parametric tests such as the Mann–Whitney U or Kruskal–Wallis test were performed to compare continuous variables. Fisher's exact or the chi-squared test (χ²) were used to compare categorical variables, as appropriate. Probability values and confidence intervals were two-sided. Continuous variables were described using the median and interquartile range (IQR; Q1–Q3) regardless of distribution. Categorical variables were presented as numbers ( n ) and percentages . A P -value of <.05 was considered significant.
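The counting rules for the molecular phenotypes above (minor mAMR and the two score combinations grouped as "no molecular rejection") can be summarized as a small mapping. The sketch below is only a restatement of those definitions for clarity; it is not the MMDx classifier itself, and any score combination not described in the text is passed through as reported in the MMDx sign-out.

```python
def study_category(rejection: str, amr: str, tcmr: str) -> str:
    """Group report-level MMDx calls into the working categories described above.

    Only the combinations explicitly defined in this section are remapped;
    everything else is left to the phenotype assigned in the MMDx report
    (mAMR, mTCMR, mAMR/TCMR or no rejection).
    """
    if rejection == "normal" and amr in ("mild", "moderate", "mild/moderate"):
        return "minor mAMR"
    if rejection in ("mild", "moderate", "mild/moderate") and amr == "normal" and tcmr == "normal":
        return "no molecular rejection"   # n = 6 in this cohort
    if rejection == "normal" and tcmr == "mild":
        return "no molecular rejection"   # n = 1 in this cohort
    return "as reported in the MMDx sign-out"

# hypothetical report calls
print(study_category(rejection="normal", amr="mild/moderate", tcmr="normal"))  # minor mAMR
print(study_category(rejection="mild/moderate", amr="normal", tcmr="normal"))  # no molecular rejection
```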
Basic and biopsy characteristics Table displays basic and biopsy characteristics for all groups—additional characteristics (including DSA characteristics) are presented in . A greater proportion of isolated arteritis (59.5%) underwent biopsy within the first year post-transplantation (<1 year), compared with the other subgroups in the study cohort ( P < .001). However, this difference did not significantly affect molecular rejection rates when comparing biopsies conducted <1 year and those after the first year post-transplantation (>1 year) ( P = .27 for the whole study cohort, P = .711 for isolated arteritis; ). Percentages of MVI at/above threshold (14.9%) and subsequent hAMR (11.9%) were significantly lower in no inflammation, compared with TCMR-suspicion (40.8% and 30.6%, respectively) and the positive control cohort (62.9% and 22.9%, respectively; P < .001 for MVI and P = .03 for hAMR; ). The whole cohort ( n = 249) exhibited a high prevalence of DSA at the time of the biopsy (overall 51%). Molecular rejections and histologic differentiation of all biopsies The rates and differentiation of all molecular rejections are demonstrated as an overview in Table and Fig. and with more histology details in . Molecular rejection rates were 37/147 (25.2%; 32 with MVI) in TCMR-suspicion, 6/67 (9%; 4 with MVI) in no inflammation and 30/35 (85.7%; 19 with MVI) in the positive control cohort. As visible in Fig. b, 32/60 cases (53.3%) with TCMR-suspicion and MVI showed molecular rejections (2 mAMR/TCMR, 5 minor mAMR and 25 mAMR) compared with 4/10 cases (40%) with no inflammation but MVI (1 mAMR/TCMR and 3 mAMR; P = .508). However, 5/87 cases (5.7%) only with TCMR-suspicion but MVI below threshold/negative showed molecular rejections (2 minor mAMR and 3 mAMR) compared with 2/57 cases (3.5%) with no inflammation and MVI below threshold/negative (1 minor mAMR and 1 mAMR; P = .704). Twenty-two of 32 (68.9%) molecular rejection cases with TCMR-suspicion and MVI had hAMR (31.1% did not), compared with 4/4 cases (100%) with no inflammation but MVI . Only 3 mAMR/TCMR (1.4%) and no pure mTCMR were detected in the study cohort, compared with 12 mAMR/TCMR (34.3%) and 10 mTCMR (28.6%) in the positive control cohort ( P < .001).
Molecular rejection rates in the positive control cohort were similar when subdivided based on the presence/absence of MVI; however, all cases of pure mAMR in this group ( n = 7) had MVI at/above threshold ( and ). When subdividing the positive control cohort based on TCMR1 or TCMR2 phenotypes, all pure mTCMR-cases ( n = 10) were among TCMR2, while TCMR1 cases exhibited non-significantly, but slightly more mAMR (33.3%) and mAMR/TCMR (38.9%) than TCMR2 (5.9% and 29.4%; P = .088 and P = .725, respectively; ). Molecular rejections and histologic differentiation among subgroups of TCMR-suspicion The highest molecular rejection rate was observed in borderline changes with MVI (80%), followed by isolated tubulitis with MVI (52.8%; Fig. ). Rejection rates were significantly lower if MVI was below threshold or negative ( P < .001 for all groups with MVI at/above threshold vs below threshold/negative). Molecular (Classifier) and rejection phenotype scores for TCMR TCMR-specific molecular (Classifier) scores were different between TCMR-suspicion (median 0.010, IQR 0.010–0.020) and no inflammation (median 0.010, IQR 0.000–0.010; P = .005) and between TCMR-suspicion and the positive control cohort (median 0.27, IQR 0.040–0.83; P < .001; Fig. ). However, only 6/147 (4.1%) cases from TCMR-suspicion and 1/67 (1.5%) cases from no inflammation showed a TCMR-specific Classifier score ≥0.1 ( P = .438). As expected, the number of cases with a TCMR-specific molecular (Classifier) score ≥0.1 was significantly higher in the positive control cohort (23/35, 65.7%; P < .001 vs TCMR-suspicion). Looking at rejection phenotype scores relevant for mTCMR or mAMR/TCMR (R2 and R3), there was no differentiation between TCMR-suspicion and no inflammation ( P = .157 for R2 and 0.121 for R3), but a clear differentiation between TCMR-suspicion and the positive control cohort overall, and above a score of 0.1 ( P < .001 for all comparisons). Subthreshold findings, suggested as probable TCMR (pTCMR) with TCMR-specific molecular scores >0.1 and R2 + R3 >0.2 but not fulfilling mTCMR criteria were scarce ( n = 4) and corresponded to mAMR/TCMR in two cases, mAMR in one case and no molecular rejection in one case. Detailed MMDx scores in all subgroups are described in Table . Molecular mixed phenotypes All mAMR/TCMR cases ( n = 15) among the whole cohort ( n = 249) are visible in Table . All 3/3 cases (100%, case numbers 1–3) from the study cohort ( n = 214) had MVI/hAMR compared with only 3/12 cases (25%) and 4/12 cases (33.3%) from the positive control cohort (case numbers 4–15), respectively. R score patterns showed heterogeneity with higher all AMR (R7 >50%) and lower all TCMR (R8 <50%) scores in case numbers 1–3 compared with R scores in case numbers 4–15 (visualized in ), yet, ultimately leading to the same molecular phenotype. Patterns of Banff lesions and DSA among all molecular rejections Throughout all molecular rejections from the whole cohort (73/249, 29.3%), 8/10 cases (80%) with mTCMR exhibited i >1 and t >1 by histology compared with 10/15 cases (66.7%) of mAMR/TCMR and only 1/39 cases (2.6%) of mAMR ( P < .001; Table and Fig. ). v >0 was not significantly different between the different molecular rejection groups ( P = .551). However, presence of MVI at/above threshold was highest in mAMR (35/39, 89.7%). DSA presence (pDSA or dnDSA) was 30% in mTCMR, 80% in mAMR/TCMR, 55.6% in minor mAMR and 61.5% mAMR ( P = .085, Table ). 
The time point of first detection of dnDSA was significantly earlier in mTCMR and mAMR/TCMR compared with minor mAMR or mAMR ( P < .001; also Table ). Detailed individual Banff lesions for different subgroups are demonstrated in .
Transplant physicians hope that MMDx will help to distinguish TCMR presence or absence in vague histologic findings of i-, t- or v-lesions below the threshold of hTCMR. Such a biomarker is crucial since intensifying immunosuppression in these vulnerable patients causes morbidity (e.g. infections) . In contrast, the presence of even subclinical hTCMR is associated with an increased risk of developing alloimmunity, interstitial fibrosis and tubular atrophy, and reduced graft survival . In our biopsy cohort we observed that MMDx does not identify pure mTCMR along the continuum of TCMR-suspicion regardless of MVI presence or absence. However, MMDx differentiates between mAMR/TCMR and pure mAMR in the patient cohort with TCMR-suspicious lesions and the presence of MVI. All three mAMR/TCMR cases within the study cohort fulfilled Banff criteria for hAMR. All other molecular rejections in the study cohort were mAMR (either minor or full), with MVI as the driving force.
A recent study investigating a transitional B-cell-based risk stratifying biomarker in the setting of borderline rejection suggested that especially cases with borderline changes in combination with moderate MVI (g+ptc ≥2) had poorer long-term outcomes, but the graft survival in individuals without MVI was similar to KTRs without suspicious findings in compared surveillance kidney biopsies . In our study, the occurrence of MVI in all TCMR-suspicion subgroups was strongly associated with mAMR but not mTCMR, suggesting a microvascular disease that usually spares other components of the parenchyma. Therefore, MMDx in TCMR-suspicion may be useful to identify concomitant mAMR or mAMR/TCMR, particularly since Banff 2022 maintains to discount isolated ptc in the presence of tubulitis or borderline changes . Four patients in our cohort showed g0, ptc2 with at least borderline (i and t) lesions, from which three had mAMR/TCMR. Furthermore, almost one-third of biopsies with either minor mAMR, mAMR or mAMR/TCMR in the group with TCMR-suspicion and MVI did not fulfill hAMR criteria. While MMDx can reclassify biopsies with histologic suspicion of AMR to mAMR, it does not provide the same added diagnostic benefit in the TCMR continuum as presented in earlier studies. Reeve et al . even observed a lower rate of mTCMR in cases of hTCMR . The agreement between pathologists in our hospital and MMDx was comparable (28.6% for mTCMR and 34.3% mAMR/TCMR in the positive control cohort). One possible explanation for this phenomenon is that MVI takes precedence over lower levels of i- and t-lesions in the diagnosis of mAMR or mAMR/TCMR. This observation was consistent with the findings of the current study, where cases of mTCMR in the positive control cohort consistently exhibited significant i + t-scores (sum >2). Additionally, all calls of mAMR within the positive control cohort ( n = 7) were associated with MVI at/above threshold. This underlines the opinion of many experts that MMDx sets the threshold to diagnose mTCMR—or to detect TCMR-specific lesions as relevant—too high, whereas the threshold for mAMR is considerably lower. While the TCMR-specific molecular score differentiated between TCMR-suspicion and no inflammation on subthreshold levels, the R score relevant for mTCMR (R2) did not. R2 scores higher than 0.1 in this study ( n = 13) corresponded to mAMR in two cases and no molecular rejection in 11 cases. Regardless of subthreshold differentiation, the Classifier activity seemed to be too low to generate a mTCMR phenotype. Beyond this generally low sensitivity for TCMR-specific lesions, diversion from other histologic findings, such as infections or the recurrence of a primary disease, could be an explanation. However, we excluded cases with overlapping pathological findings (e.g. BK nephropathy) from the study cohort since minor molecular findings (normal rejection but abnormal TCMR score or vice versa) were observed, especially in overlapping pathologic findings (data not shown). mTCMR has been recently differentiated into two TCMR classes. TCMR1 shows higher mTCMR activity and mAMR activity (mixed rejection), including v-lesions, and TCMR2 shows less mTCMR activity but more atrophy fibrosis . Interestingly, in this work, the authors show only 12 cases out of a total cohort of 1679 cases (INTERCOMEX study) with borderline changes by histology and mTCMR, with 2 cases attributed to TCMR1 (formerly mixed) and 10 cases to TCMR2. 
However, in this very small group of 10 cases with borderline changes and TCMR2, it remains unclear how many actually had MVI (18% of cases with TCMR2 showed mAMR/TCMR) or fulfilled the histologic criteria for caTCMR (although a debated entity) because of the higher proportion of atrophy fibrosis. Also, based on the data presented by the INTERCOMEX study, it remains open to what extent confounding diseases were present. In our study we found TCMR1 cases consisted of more mAMR and mAMR/TCMR whereas pure mTCMR phenotypes were only observed in the TCMR2 subgroup. Of course, the accuracy of MMDx is not guaranteed—discrepancy rates for TCMR and AMR detection (histology vs molecular diagnostics) are known . In this respect, our data do not allow conclusions on whether the molecular diagnosis is correct or incorrect in our cohort. However, a biomarker that does not differentiate in a cohort with TCMR-suspicion does not ultimately enhance clinical decision-making. In this context, however, it can be assumed that the interobserver variation in histology assessment is also reflected. We therefore recommend that transplant centers with high rates of borderline changes investigate their individual added value of MMDx in TCMR-suspicion. While the study cohort had excellent data quality and substantial biopsy numbers, it has some limitations. The histopathological diagnosis was made by specialized pathologists from the University Hospital of Zurich without a second opinion. Also, pending patient follow-up data is crucial for retrospectively assessing the impact of MMDx interpretation, anti-rejection treatment, kidney function and alloimmunity development. Follow-up biopsies are needed to further assess the clinical relevance of our findings. After thorough exclusion of overlapping pathological findings, MMDx did not identify pure mTCMR in patients with isolated tubulitis, borderline changes or isolated arteritis, regardless of MVI, and identified mAMR and mAMR/TCMR more frequently. The rate of mTCMR was also low in the positive control cohort, consisting of hTCMR and hAMR/TCMR. Our data suggest that MMDx has a lower sensitivity for TCMR lesions compared with AMR lesions, but may be useful in identifying concomitant mAMR or ruling out molecular TCMR activity in suspicious cases with subthreshold activity. Future studies with targeted interventions and follow-up biopsies should investigate whether this differentiation affects graft outcomes. gfae147_Supplemental_File
Functional Neurophysiological Biomarkers of Early-Stage Alzheimer’s Disease: A Perspective of Network Hyperexcitability in Disease Progression
0f04c935-7cb3-4a7a-a882-bff3732752d5
9484128
Physiology[mh]
Network overexcitability or hyperexcitability (NH), a state in which neural networks exhibit an increased likelihood to be excited or activated, is a mainstay in several forms of epilepsy and seizure-related disorders. One possible cellular basis for this state of hyperexcitability originates from the excitability of excitatory neurons , and may stem from factors intrinsic to the neuron, such as the availability of synaptic neurotransmitter receptors, or extrinsic factors such as disinhibition from inhibitory interneuron activity or astrocytic clearance of neurotransmitters . While NH has been generally associated with epilepsy and the development of seizures, it also occurs in many other neurological disorders , indicating a strong relationship between hyperexcitability and brain dysfunction. While hyperexcitability across all these disorders is unlikely to share the same etiology, the presence of hyperexcitability in these disorders implies a more mechanistic role through which associated behavioral and cognitive symptoms can manifest. Alzheimer’s disease (AD) is a progressive neurodegenerative disorder without effective treatment at present. Current symptomatic treatment options for AD partially alleviate cognitive and physiological deficits, but do not significantly alter the pathology progression or prolong the lifespan of the patient . The current, repeated failures of clinical trials serve to underscore the complex, multifactorial nature of AD, indicating an incomplete understanding of the pathological mechanisms underlying this progressive disease . These failures have been ascribed to reasons such as the validity of the available animal models, the validity of the tests for translational assays, the subjectivity of (early) clinical diagnoses as well as the lack of valid and precise biomarkers, e.g., for patient selection in clinical trials . The timing of therapeutic intervention has also been highlighted as a key factor influencing ineffectiveness in clinical trials , due to the extensive neurodegeneration likely present at the point of a confirmed diagnosis. Following this line of thinking, attention has turned to finding newer and more valid biomarkers of AD that allow for therapeutic intervention at an earlier stage of the disease, potentially allowing for halting or slowing of the disease . Currently, the most accepted pathological features of AD are the presence of extracellular amyl-oid plaques and intracellular neurofibrillary tangles , chronic neuroinflammation , and neurodege-neration . In addition to the molecular characteristics of AD, neurophysiological characteristics of increased network activity , epileptic activity , slowing of neural oscillations and reductions in waveform complexity have also been reported in some patients. Several observations point to NH in patients with AD, even before incipient pathology, suggesting that hyperexcitability may be a prodromal feature of AD . Consistent with these findings, indications of NH have been noted in animal models of pathology associated with AD . Therefore, NH has been posited as a potential indication of dysfunctional neural networks thought to occur in the prodromal stages of AD, linked to soluble amyloid-β species and accumulating AD-related proteins and peptides , with links to cognitive dysfunction and progression of pathology . 
Here, we discuss the consequences of NH on network function, assess the evidence for NH in animal models and human patients, as well as address some of the gaps pertaining to the hypothesis of hyperexcitability in AD. Lastly, we provide a perspective on the development of animal models that proposes the application of NH as a measure of construct validity, as well as approaches to improve experimental design by incorporating the assessment of NH that are crucial for adequate translational validity. Implications of altered excitability for network function It has been established that the proper functioning of neurons is necessary for the development and the continued survival of the organism, as shown by the debilitating phenotypes associated with perturbances to neuronal function . Neuronal excitability is one such property that governs a broad range of neuronal functions, ranging from, but not limited to: neuronal development, functional integration, and even cell death . The factors that determine the excitability of a neuron arise from elements related to the generation of action potentials and can be broadly classified into factors intrinsic and extrinsic to the neuron. Intrinsic factors such as the density of receptors at a synapse, the properties of ion channels, or the phosphorylation state of certain proteins in the cell can all affect the excitability of a neuron . For example, a mutation in voltage-gated sodium cha-nnels that results in the reduction of the sodium channel voltage activation threshold, subsequently requiring less depolarization to open the ion channel and initiate an action potential, could increase neuronal excitability . Factors extrinsic to the neuron affecting its excitability would be associated with other determinants such as interactions with other inhibitory or excitatory neurons, the availability of extracellular neurotransmitters, the concentrations of surrounding extracellular ions, the presence of extracellular ligands which alter the membrane potential , or neuroinflammation . An example of such an extrinsic factor would be inhibitory interneurons . Functional organization and network hyperexcitability-associated perturbations The activity of a neuron does not only depend on its excitability but also on the activity of other neurons to which it is connected within the network. One of the properties that emerges from the complex interaction of neurons in networks is neural oscillations, the high amplitude oscillations detectable from local extracellular field potentials to scalp recordings . The intrinsically complex dynamics of single neurons that allow them to resonate and oscillate at multiple frequencies are a key feature believed to facilitate the emergence of large-scale oscillations that span multiple brain regions . Neuronal oscillations have been classified into various bands in the frequency range from 0.05 to 500 Hz, with different functional or behavioral correlates putatively ascribed to each of the frequency bands. However, definitions of the frequency bands may somewhat differ between researchers and between species investigated (e.g., , the frequency bands of delta (< 4 Hz), theta (4–8 Hz), alpha (8–15 Hz), beta (16–31 Hz), gamma (> 30 Hz)). At the cellular level, the interaction between excitatory and inhibitory neuronal activity is known to be responsible for the generation of synchronous rhythms . 
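As a concrete illustration of how the band definitions quoted above are typically used in the quantitative EEG measures discussed next (band power and "slowing" of oscillations), here is a minimal sketch that estimates relative power per frequency band from a single-channel recording using Welch's method. The band limits follow the example ranges given above (definitions vary between studies and species), and the synthetic signal is only a stand-in for real data.

```python
import numpy as np
from scipy.signal import welch

# Example band limits (Hz), following the ranges quoted above.
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 15),
         "beta": (16, 31), "gamma": (31, 100)}

def relative_band_power(signal: np.ndarray, fs: float) -> dict:
    """Estimate relative power in each frequency band via Welch's method."""
    freqs, psd = welch(signal, fs=fs, nperseg=int(4 * fs))  # 4 s segments
    broadband = (freqs >= 0.5) & (freqs <= 100)
    total = np.trapz(psd[broadband], freqs[broadband])
    powers = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        powers[name] = float(np.trapz(psd[mask], freqs[mask]) / total)
    return powers

# Synthetic 60 s "EEG" trace: a 10 Hz alpha rhythm plus broadband noise.
fs = 250.0
t = np.arange(0, 60, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
print(relative_band_power(eeg, fs))
```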
Alterations in oscillatory rhythms have been implicated in some of the neurological disorders that exhibit hyperexcitability, such as epilepsy , AD , and Fragile X syndrome as well as Parkinson's disease . However, the relationship between different forms of hyperexcitability and the development of abnormal neuronal rhythms remains to be fully understood. One study observed marked increases in gamma band power and decreased phase-locking properties in patients with fragile X syndrome, which appears to be related to the observation of hyperexcitability . NH has also been shown to be linked to the development of high-frequency oscillations (80–500 Hz), a potential biomarker of epilepsy , with the presence of hyperexcitable mechanisms linked to these oscillations . In the context of AD, a multitude of oscillatory changes have been documented in the literature . Indications of altered gamma activity and slow-wave activity in animal models are just some examples that have been reported. In AD patients, altered theta and delta oscillation power , altered gamma, delta, and alpha power and amplitudes in resting-state EEG , as well as a “slowing” of oscillations have been reported. However, studies exploring the potential causal link(s) between altered oscillatory activity and NH in AD are sparse . Another way in which NH may have large-scale implications is in terms of structural changes that may impact the function of the network. An important structural change associated with the development of seizures and epilepsy of the temporal lobe is the axonal sprouting of mossy fibers in the hippocampus . Mossy fibers are the excitatory outputs of granule cells in the hippocampus and the main excitatory output of the granule cell layer, projecting toward the Cornu Ammonis region 3 in the hippocampus under physiologically normal conditions . In response to injury and/or damage, as in repeated seizures, the mossy fibers of the granule cells redirect to the supragranular layer of the dentate gyrus . Depending on whether this redirection projects toward the dendrites of other granule cells or toward inhibitory interneurons, the resulting net effect on the hippocampal network could be excitatory or inhibitory, respectively . Therefore, this resulting structural change could further alter the balance of excitation and inhibition. Considering the wide range of disturbances and brain disorders associated with NH as well as the structural and functional implications briefly covered above, the therapeutic management of this phenomenon in neurological disorders could modify the course of the disease or even contribute to recovery. Recently, NH has emerged as a potential neurophysiological readout of network dysfunction in AD and a prognostic indicator of disease. However, what is the evidence that NH is a key feature of AD? In the next section, the evidence for NH as a neurophysiological indicator of AD is reviewed and evaluated.
In humans, direct evidence for NH in patients with AD is sparse, due to ethical and practical considerations that would preclude a definitive evaluation of NH. Such a definitive evaluation would theoretically consist of exact placement of invasive recording electrodes or advanced magnetoencephalography (MEG) approaches, which are presently limited. Numerous other indirect indications suggest the existence of NH in patients with AD, such as the presence of hippocampal hyperactivity , cortical hyperexcitability measured by transcranial magnetic stimulation , increased risk of seizure and epileptic-like symptoms , and alterations in default mode network (DMN) inactivation . These alterations in brain function have been noted in subjects through techniques such as functional magnetic resonance imaging (fMRI) , MEG , and transcranial magnetic stimulation (TMS) studies . In this section, we discuss the past and recent observations supporting the presence of hyperexcitability in humans. Hippocampal hyperactivity Hippocampal hyperactivity is an important feature of AD during the prodromal stages of the disease and may reflect an observable manifestation of underlying NH in the early stages of the disease. The temporal progression of hippocampal activity has been reported to be increased during amnestic mild cognitive impairment (aMCI) and the preclinical phase of early AD and progresses to reduced activity relative to baseline in the later phases of AD , when the diagnosis is confirmed.
This observation of hippocampal hyperactivity has gained interest as a potential biomarker and functional indicator of the disease state for the development of therapeutic interventions , and that may help to disentangle the multiple AD genotypes and phenotypes and their heterogeneous underlying clinicopathology. The evaluation of hippocampal hyperactivity has been carried out using various modalities such as fMRI, MEG, or positron emission tomography (PET). One such PET study has reported an increase in hippocampal glucose metabolism, where global-hippocampal connectivity decreases resulted in incr-eased intrahippocampal glucose metabolism . The increased glucose metabolism has been suggested to imply an increase in intrahippocampal activity, providing some evidence for the presence of hippocampal hyperactivity as well. Hippocampal hyperactivity was initially proposed as a compensatory mechanism to support memory function and has also subsequently been suggested to be maladaptive instead . One study supporting the idea of a compensatory function examined blood oxygenation level brain response in nondemented participants with or without the apolipoprotein E ( APOE ) ɛ4 allele, a major risk factor for the development of AD . Blood oxygen level dependent (BOLD) activity levels in APOE ɛ4 subjects were increased to a greater extent during learning of new images compared to control subjects, distributed in the precuneus, frontal, temporal, and cingulate gyri regions of the brain. This increase in activation during the learning task has been interpreted as a compensatory increase in cognitive effort to achieve comparable levels of episodic memory encoding. Opposing this compensatory activation hypothesis, a study by Bakker and colleagues showed an improvement in cognitive function by reducing hyperactivity of the hippocampus using levetiracetam . This reduced hippocampal activity was observed through fMRI readouts. The rationale proposed by Bakker and colleagues was that if hyperactivation was indeed beneficially compensatory, attenuation of hyperactivation would result in a reduction in cognitive function. However, the reduction of hyperactivity resulted in an improvement in cognitive function, suggesting that this hyperactivity was detrimental to cognitive function instead of a beneficial compensation. These studies illustrate a seemingly dichotomous relationship between compensation and maladaptive phenomena in terms of hippocampal hyperactivity. While the words “compensatory” and “maladaptive” may initially appear to be mutually exclusive and dichotomous, it is possible compensatory mechanisms may lead to the development of detrimental states, such as in the case of maladaptive plasticity . While it is not certain that the increased hippocampal activity reflects a compensatory or maladaptive state, it bears noting that these two concepts are not mutually exclusive. There is an ongoing debate regarding the cause and timing of this hippocampal hyperactivity, with indications that implicate both amyloid and tau pathology as some of the responsible factors involved. PET has been employed to detect the pathology of amyloid plaques and more recently, the pathology of tau as well . Prior to the application of PET scanning as a measure of the amount of amyloid and tau pathology, the extent of the pathology could only be quantified in postmortem tissue, precluding any definitive correlation between pathological load, hippocampal activity, and cognitive function while the patient was alive. 
Several studies have applied PET in conjunction with fMRI readouts to estimate the contribution of AD pathology to hippocampal hyperactivity . Several studies carried out investigating the effects of amyloid-β using PET imaging have reported both contrary increases and decreases in brain activation . A recent article proposed that tau accumulation was associated with hippocampal hyperactivity, as opposed to amyloid-β . In this study, it was proposed that the emergence of hyperactivity occurs in the later stages of preclinical AD and leads to discrepancies in the correlation between amyloid-β levels and hippocampal hyperactivity. Default mode network alterations Another observation that could also be related to underlying hyperexcitability is the deficiencies observed in the DMN of AD patients . The DMN is an inter-regional brain network believed to be associated with introspective thinking, planning, and remembering the past . The regions that comprise the DMN are the posterior cingulate cortex, precuneus, dorsal and ventral medial prefrontal, lateral (mainly inferior) parietal cortices, and medial temporal lobes . DMN network activity is characterized by a consistent reduction in activity while performing goal-directed tasks and is activated during states of quiet rest . Based on this characteristic of the DMN, one could conceptualize the deactivation and activation of the DMN as states of externally-directed focus and internally-directed thinking respectively . It is believed that the proper activation and deactivation of the DMN is necessary for the retrieval of stored memories as well as the encoding and acquisition of new memories . Studies have shown that lower DMN activity during stimulus-driven goal-directed cognitive tasks is associated with more successful performance . In the context of AD, the DMN has been characterized by reductions in resting state functional connectivity and activity and is associated with the progression and severity of the disease . In addition, the compromised integrity of the DMN system has been related to the progression of the disease . In the context of task-related DMN activation, decreased levels of DMN task-related deactivation in aMCI and AD patients, as well as APOE4 carriers have been reported. However, in contrast to the decreased resting state functional connectivity, other studies have shown increased levels of functional connectivity, suggested to be compensatory . One study has reported disrupted medial-parietal and medial-temporal lobe dysconnectivity has resulted in increased intrinsic medial-temporal lobe local functional connectivity and subsequent increase in intrinsic activity. While evidence points toward an overall decrease in DMN functional connectivity and activity, especially in the later stages of the disease , a possible indication whereby hyperexcitability may exist is the decreased deactivation of the DMN. The decreased DMN deactivation indicates a possible inability to properly deactivate DMN or inappropriate DMN activation during these tasks . NH could be one such mechanism that may explain the inability to appropriately deactivate the DMN which has been ascribed to the deficits associated with levels of GABA , suggesting that inhibitory deficits could be contributing to the reduced deactivation, and indicate a shift toward a more excitable network state. This decrease in deactivation has also been associated with the presence of amyloid pathology . 
It is not clear if this reduced deactivation reflects a maladaptive mechanism, functional reorganization reflecting compensatory mechanisms in order to sustain cognitive functions, or both. Lastly, a link between alterations in the DMN and depression, a common comorbidity in AD, has been suggested , including ruminations-related electroencephalography (EEG) changes , which emphasizes the potential clinical importance of hyperexcitability beyond epilepsy per se. These observations of decreased DMN deactivation might reflect underlying hyperexcitability in the earlier stages of the disease . However, it also bears noting that multiple reports have indicated overall DMN activity reduction . A better understanding of how decreased DMN deactivation might be related to NH could provide a better insight into translational indications of AD or eventually even AD patient stratification. Cortical hyperexcitability Another form of hyperexcitability suggested comes from studies involving TMS of the motor cortex in patients with AD . These studies involve the stimulation of the motor cortex using a TMS paradigm and measurement of the evoked motor potential. The minimum stimulation intensity necessary to evoke a motor potential in AD patients has been reported to be lower than that of healthy controls, suggesting an “increased” motor cortex excitability . This increased excitability of the motor cortex is suggested to be a compensatory mechanism to facilitate voluntary movements . Current discussion has attributed this alteration in excitability to dysfunction of cholinergic and glutamatergic signaling . There is evidence for the deterioration of the cholinergic signaling system throughout the course of AD, generally stemming from the neurodegeneration of cholinergic neurons in the Nucleus basalis of Meynert, in the basal forebrain . This region harbors neurons rich in the acetylcholine neurotransmitter and projects extensively into cortical regions (for a review of the cholinergic system, see Mesulam, 2013 ). The motor cortex has muscarinic and cholinergic terminals and receives a large input from the Nucleus basalis of Meynert , believed to be inhibitory . Based on this hypothesis, the neurodegeneration of these cholinergic neurons in AD would lead to a reduction in inhibition and could contribute to cortical hyperexcitability. Risk of seizures and epilepsy Epilepsy is considered to be strongly related to NH, with its myriad etiologies generally ascribed to an imbalance of excitation and inhibition underlying the development and onset of seizures . Patients with AD have been reported to have an increased risk of developing seizures over the course of the disease . The prevalence of seizures appears to increase with the duration of AD, with studies correlating onset of seizures with the later stages of AD . This observation has been hypothesized to be due to the progressive severity of neurodegeneration or the increased accuracy of the diagnosis of AD . Other studies have observed an increased rate of seizure occurrence in younger patients with AD , attributed to the higher prevalence of patients with familial AD, which has been associated with higher rates of seizures or more aggressive progression of AD . Seizures are believed to arise from the hypersynchronous state of neuronal populations, characterized by heterogeneity of neuronal firing and temporal evolution of synchronization .
Physiological evidence for abnormal neuronal synchronicity has been reported in animal models of AD pathology , highlighting a possible mechanistic explanation for epileptogenesis in patients with AD. Although the clear causative factor underlying the development of seizure-like activity in AD is not fully understood, animal studies have attributed the presence of amyloid and tau pathology, as well as interneuron dysfunction, to epileptogenesis. Due to the difficulty of detecting seizures, particularly of the nonconvulsive form, characterizing the prevalence of seizures in patients with prodromal AD is extremely challenging . It is not clear which form(s) of epilepsy is (are) engendered in AD or if it can be classified into a single type. Due to the numerous forms of epilepsies, the presence of epilepsy-like and seizure-like symptoms in AD patients may even differ from hitherto known forms of epilepsy such as temporal lobe epilepsy. Studies investigating the types of seizures in patients with AD have reported generalized convulsive seizures , complex partial seizures , as well as nonconvulsive seizures , indicating some extent of seizure heterogeneity in patients with AD. Other indications point to similarities between the seizure activity observed in patients with AD and those with focal hippocampal seizures . However, certain limitations and inaccuracies in seizure reporting could also preclude an accurate assessment of seizure prevalence in AD patients . Correlational studies have shown that the incidence of seizure activity in patients with AD is related to faster cognitive decline compared to AD patients with no reported incidence of seizures . One possibility suggests that the severity of the pathology may determine the rate of seizure prevalence. Another possibility suggests that the presence of seizures could exacerbate the rate of disease progression. Better characterization of epilepsy and seizure phenotypes in patients with AD could provide a better insight into how epilepsy-associated phenomena contribute to AD pathogenesis or even serve as a predictive indicator of disease progression for therapeutic intervention (for a recent review see Toniolo et al. ). In the first retrospective studies by Vossel et al. , examining the incidence of seizures in 233 MCI subjects and 1,024 probable AD subjects, the incidence rate of repeated seizures in MCI and probable AD patients was 5% and 3.4%, respectively , while in their prospective follow-up study, this prevalence rate reached 42% in patients with AD . This indicates that not all patients develop seizures, suggesting an incomplete penetrance of this phenotype, and that NH in the form of seizures may potentially only be present in a subpopulation of AD patients, emphasizing the heterogeneity of the AD patient population . The presence of increased network activity and the prevalence of epileptiform activity in patients with AD suggests hyperexcitability occurring throughout the disease. Although the full spectrum of molecular and cellular correlates of NH has not been elucidated, several reports have identified factors leading to the development of NH. In the next section, we review some of the putative pathological mechanisms of NH from studies primarily conducted in animals.
Evidence in AD patients points toward the possibility of NH, most likely correlating with increased activation of several regions of the brain, such as the DMN and the hippocampus. However, the determination of molecular pathological correlates as well as the localization of hyperexcitability directly in humans would require some form of invasive electrode implantation for measurement of electrophysiological changes associated with hyperexcitability. This has led to studies attempting to elucidate the molecular and cellular correlates of NH in animal models exhibiting AD-associated pathology. It should be noted that differences in the techniques used for the experimental qualification of hyperexcitability in humans as compared to animal models could preclude a direct translational comparison. In this regard, it is possible that the exact nature of hyperexcitability in animal models may differ from that in human patients. Unlike techniques used in human studies to evaluate neurophysiology, preclinical studies of hyperexcitability involve techniques such as intracranial EEG , calcium imaging , and patch clamping of ex vivo slices carried out in animal models of AD pathology. Hence, a closer look at animal studies associated with the incipience of NH, as well as a critical evaluation of the evidence for the pathological bases of NH, will be presented next. Glutamate dysfunction as a molecular mechanism underlying network hyperexcitability The glutamate hypothesis of AD, initially proposed decades ago, was based on postmortem evidence indicating reduced aspartate binding as well as a loss of other putative markers of glutamatergic activity . The observation of increased glutamate concentrations around neurons and synapses, attributed to deficits in the glutamatergic processing pathway, is at the heart of this hypothesis . Glutamate, being the main excitatory neurotransmitter in the brain, could be a main molecular effector of inducing NH. Glutamate receptors can be classified into ionotropic and metabotropic glutamate receptors, both of which have been suggested to be implicated in hyperexcitability and excitotoxicity . Of the two groups, metabotropic glutamate receptors (mGluRs) have been shown to interact with AD pathology, such as extracellular oligomeric amyloid-β species, resulting in long-term depression (LTD) induction , synaptotoxicity , and other effects . It has also been shown that the interaction of amyloid-β with a particular mGluR subtype, mGlu1, results in a dramatic and lasting depolarization of the membrane potential . This depolarization could very likely contribute to a transition to a state of hyperexcitability. However, the activation of mGlu1 has also been associated with the proteolytic processing of the amyloid-β protein precursor (AβPP), increasing the production of the neuroprotective sAβPPα fragment and decreasing amyloid-β production , potentially serving as a sensor of extracellular amyloid-β levels. In vitro application of amyloid-β fragments in slice culture has shown increased glutamate concentrations , potentially through augmenting glutamate release or by inhibiting the uptake of glutamate by astrocytes .
A recent in vivo report also appears to be in line with the in vitro findings, reporting increased neuronal activity attributed to glutamate accumulation as a result of amyloid-β . The role of glutamate has also been implicated in glutamate-mediated excitotoxicity, the process by which neurons succumb to damage or die as a result of overstimulation by glutamate . It has been suggested that the neurodegeneration seen in AD could be the result of this form of excitotoxicity . This excitotoxicity is believed to be mediated by an increased influx of calcium, primarily through NMDA receptors . The hyperexcitability seen in AD could also imply ensuing excitotoxicity. Inhibitory signaling dysfunction as a cellular basis of network hyperexcitability Based on the hyperexcitability model resulting from an excitation and inhibition imbalance, inhibitory signaling represents a major player in this balance. Inhibitory signaling is a product of both the inhibitory presynaptic neuron and receptors on the postsynaptic neuron. Interneurons are a major source of inhibitory input to neurons, generally facilitated by the action of neurotransmitters such as GABA . Several indications of interneuron dysfunction appear to be present in AD, primarily from preclinical studies, but these have yet to be well studied and characterized in human observations. These preclinical studies identified alterations suggested to be attributed to parvalbumin-expressing cells and hippocampal perisomatic GABAergic synapses . In addition, the restoration of these interneuron cell populations appears to attenuate the effects of hyperexcitability as well as restore deficits in oscillatory brain rhythms . Neurophysiological changes in interneurons associated with AD pathology appear to be evident in multiple mouse models of AD pathology, with functional implications for the network . One of these functional consequences concerns the alteration of oscillatory rhythms. Interneuron activity contributes greatly to the presence of gamma oscillations in the brain and is believed to contribute to the temporal coordination of neuronal activity at the network level, facilitating aspects of cognition and neuronal computation . Along with putative changes in interneurons, the properties of gamma oscillations have also been noted to be altered in several mouse models of AD pathology , correlating the neurophysiological changes with functional changes. Although some direct evidence for the presence of interneuron deficits in AD patients exists , indirect evidence also supports this hypothesis. Oscillatory activity is impaired in patients with AD, in particular gamma oscillations . A recent study involving the application of light stimulation in the gamma frequency range to entrain interneurons has been reported to clear amyloid plaque build-up , suggesting that external neuromodulation of interneuron function may be capable of altering pathology. In addition to changes in oscillatory activity, interneuron deficits have also been correlated with the presence of epileptic and seizure phenomena . One such study has reported reductions in network hypersynchrony facilitated by modified interneuron transplants in a mouse model of AD pathology . While network synchrony was shown to be reduced in this study, it should be noted that this does not establish whether interneuron dysfunction is the cause of this form of hypersynchrony, since the transplants may merely attenuate it.
Interneuron deficits present themselves as a potential cellular candidate for explaining the presence of NH in animal models and patients with AD. Expanding our understanding of interneuron deficits (e.g., subtypes of interneuron affected, receptor expression properties, interneuron quantity) should provide better insight into the development of animal models grounded in a cellular basis of NH. Amyloid-associated network hyperexcitability in animal studies Several animal models of amyloid pathology showcase hyperexcitability-related behavior in the form of their propensity to exhibit unprovoked seizures , increased susceptibility to pharmacologically induced and audiogenic seizures , as well as epileptiform-like activity . This effect also does not appear to be limited to a single amyloid animal model but appears to be a common trait across multiple amyloid pathology models , indicating a phenotype strongly associated with amyloid-related alterations in these mice. Studies associated with NH and amyloid are summarized in . The hyperexcitability associated with amyloid-β also appears to be dose-dependent, as suggested by the presence of tonic hyperexcitability proportionate to the amount of amyloid plaque burden . However, it has been suggested that the molecular correlate of this hyperexcitability is likely to be the oligomeric species of amyloid-β rather than the amyloid plaques, as suggested by experiments involving ex vivo application of amyloid-β species described in the following section. Ex vivo experiments involving the application and incubation of amyloid-β species have demonstrated that these oligomeric species possess the ability to depolarize the membrane potential and shift the network to a more excitable state . However, the excitability of the network already appears to be permanently altered, as evidenced by ex vivo slice experiments on these mouse models of amyloid pathology . This suggests the possibility that chronic exposure of neurons to extracellular amyloid-β could already induce permanent changes in the network, providing a possible explanation for the lack of clear effects seen in amyloid clearance-related therapies. Alternatively, the development of the organism could be altered under the influence of the transgenic expression of proteins responsible for AD pathology, resulting in an already abnormal baseline state. Accompanying the ex vivo experiments involving the application of amyloid-β to brain slices, one in vivo experiment involving the application of amyloid species appears to elicit similar results, as visualized by calcium imaging . Paradoxically, this only had an effect when amyloid-β was applied in wild-type mice as opposed to mice with amyloid phenotypes. This suggests some form of alteration of the network already present in response to the transgenic manipulations associated with amyloid. Neuronal activity has also been shown to affect AβPP processing and the release of amyloid-β species . It is suggested that the increase in neuronal activity leads to an increase in amyloid-β secretion and aggregation into oligomers, which could form part of a positive feedback loop in which activity and amyloid-β drive each other reciprocally. The functional consequences of amyloid-associated hyperexcitability are still not sufficiently understood.
Apart from the suggested role of amyloid-β-associated hyperexcitability in prompting subsequent epileptogenesis, one other study has reported, in a functional context, that progressive deterioration of neuronal tuning for visual stimuli occurs in relation to amyloid load; notably, this was observed only in neurons that were hyperactive during spontaneous activity. In the context of amyloid-β-associated hyperexcitability and its purported role in excitotoxicity, animal models bearing amyloid-associated transgenic manipulations generally do not show overt neurodegeneration or cell loss, even in the presence of high amyloid plaque load and seizures. This implies that neurodegeneration in AD may not be linked solely to the presence of amyloid-driven hyperexcitability and excitotoxicity. However, amyloid-associated hyperexcitability may indirectly facilitate the development of amyloid plaques, which may drive the propagation of tau pathology and subsequent tau-pathology-associated neurodegeneration. Moreover, specific proteins and AD-related peptides (e.g., sAβPPα or Aη) are increasingly being identified for their specific roles in circuit excitability dynamics, suggesting that amyloid as well as tau may mediate both hyper- and hypoexcitability.

Tau-associated network hyperexcitability in animal studies

In addition to the studies of NH focusing on amyloid pathology in AD, the other main hallmark of AD, the tau protein, has been shown to be related to hyperexcitability. Tau is a microtubule-associated protein that is involved in the assembly and disassembly of microtubules. Several reports have indicated that levels of tau modulate NH, with reductions of endogenous tau protein levels ameliorating hyperexcitability and overexpression exacerbating it. These findings suggest that levels of tau proportionately facilitate NH. Moreover, contrary evidence suggests that an increase in tau is capable of silencing neurons and contributing to a state of reduced excitability. A summary of reports involving tau-associated NH can be seen in . There may be several mechanisms by which tau dysfunction elicits a phenotype of hyperexcitability, or even the opposite. Activation of synaptic and extrasynaptic NMDA receptors has been shown to correlate with increased expression of tau. In addition, at least one type of NMDAR activation, Fyn-mediated NMDAR activation, has been reported to be associated with tau. As previously discussed, amyloid-β has also been indicated to interact with NMDA receptors. This relationship between amyloid-β, NMDAR interaction, and tau-mediated NMDA activation may be one mechanistic explanation for the increased neuronal activation seen in AD. Another factor that may contribute to increased levels of tau expression could be increased levels of extracellular glutamate interacting with α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA) and NMDA receptors. However, the effect of tau reduction in an animal model without amyloid-related mutations (e.g., in Kcna1+/– Tau+/– C57BL/6 mice) shows that reducing tau levels attenuates hyperexcitability in this model. This suggests that tau may have no preference for a specific cause of hyperexcitability and that tau is a general regulator of intrinsic neuronal excitability that alters the excitation/inhibition balance without any particular preference for amyloid.
However, conflicting evidence exists regarding the effects of tau pathology on neuronal activity. One study reported silencing of the hyperactive neurons observed in amyloid-bearing APP/PS1 animals when these were crossed with inducible tau-expressing rTg4510 and rTg21221 mice. These mice carry a mutated form of the human tau protein, the P301L mutation, which confers risk for developing frontotemporal dementia in humans. This suggests that, in this case, an increase in mutant tau expression could silence the increased activity associated with the APP/PS1 mouse strain, as evaluated by calcium imaging. In addition, other reports have shown that mutated, soluble tau protein species reduce the frequency of calcium transients in layer 2/3 cortical neurons of P301S mice, independent of neurofibrillary tangles. Moreover, the high-frequency ripple oscillations of local field potentials in the hippocampal CA1 area are considerably reduced in young rTg4510 mice and further deteriorated in old rTg4510 mice. In addition, diminished neuronal activity with tau pathology in aged EC-tau mice, as well as reduced raw theta power in mouse models of tauopathies, have been reported. One other study has suggested that entorhinal cortex neuronal hyperactivity is associated with the human amyloid precursor protein (hAPP) or Aβ rather than with tau in a combined tau-amyloid mouse model. These reports indicate that tau is capable of reducing neuronal activity and highlight a role in neuronal silencing. The phosphorylation of tau, a main factor thought to lead to the formation of neurofibrillary tangles, also appears to play a role in attenuating hyperexcitability. Several reports have shown that tau hyperphosphorylation is associated with reductions in hippocampal CA1 neuron excitability, as well as with decreased synaptic AMPA receptor expression due to mislocalization of hyperphosphorylated tau. At least one study has suggested that phosphorylation of tau at a specific site is protective against amyloid-β-mediated excitotoxicity, suggesting that phosphorylation of tau may attenuate hyperexcitability. These findings suggest that phosphorylation of tau could be a response to hyperexcitability, serving to silence neurons and counteract the increase in excitability. In contrast to the pathological role currently ascribed to tau, there is thus some evidence indicating a beneficial or compensatory role for this mechanism, at least in the attenuation of hyperexcitability. Even if this relationship between tau phosphorylation and NH holds true, increasing tau phosphorylation to counteract NH is unlikely to be a viable therapeutic strategy, given the exacerbation of tau pathology and its associated detrimental effects. These studies have highlighted some examples of how tau and tau-associated pathology may be related to NH. They do not paint a clear picture of whether tau contributes to or attenuates neuronal excitability, as the absence of tau has been shown to reduce excitability, while increased expression of mutant tau and phosphorylated tau also appears capable of silencing neurons and reducing activity. It should be noted that several animal models previously used for the development of treatments, and which form the basis for some NH hypotheses, have limitations that diminish their translational validity.
In the next section some of the limitations and caveats of the respective animal models that should be considered or controlled as part of the experimental design are addressed.
Limitations and caveats

The development of several mouse models with amyloid pathology generally involves the overexpression of hAPP. Current amyloid mouse models are based on introducing hAPP, and even overexpression of wild-type hAPP is already associated with neurophysiological effects. In addition, the introduced hAPP adds to the endogenous mouse AβPP already expressed in the neurons. The presence of high levels of AβPP also means that AβPP-associated fragments such as sAβPPα, sAβPPβ, the amyloid protein fragment eta, and the amyloid intracellular domain would be elevated in these mouse models. These fragments have been shown to affect synaptic activity as well as amyloidogenesis nontrivially, raising the issue that they may confound interpretations attributed solely to amyloid-β. A study addressing these confounds by incorporating an inducible switch of AβPP expression in mice demonstrated that halting AβPP expression slowed and attenuated epileptiform-like activity, suggesting that the NH phenotype is associated with AβPP and other AβPP fragments rather than with amyloid-β. In concordance with this, a study aiming to reduce amyloid-β-mediated hyperexcitability by immunotherapy in mice produced the exact opposite effect. These findings suggest that AβPP, rather than amyloid-β, might be the causative factor for NH in these animal models. Another primary argument against the application of AβPP-associated mutations in animal models is that these mutations only represent a small fraction of the population of AD patients.
Patients with AD can be divided into two main categories: those with sporadic AD, accounting for approximately 94–99% of all cases, and those with familial AD, the remaining 1–6%. The mutations used to induce amyloid pathology in animal models stem from mutations associated with familial AD and may be only partly representative, if at all, of sporadic AD cases, limiting direct translational potential for many AD patients. Besides the overexpression of proteins, animal models develop under these conditions from the embryonic stage, altering the development of the animal. If a putative neurophysiological effect attributed to the pathology is detected in such an animal, it could reflect an altered baseline caused by development under these genetic conditions rather than the pathology present at that point in time. Approaches incorporating temporal control of the expression of pathology-associated genes, such as the regulatable rTg4510 tau mice, could provide a more physiological approach to studying pathological mechanisms and subsequent neurophysiological changes. As with the generation of amyloid pathology models of AD, mouse models with tau pathology also suffer from some of these caveats and limitations. The tau mutation(s) required to elicit part of the tau pathology reminiscent of AD are more commonly associated with frontotemporal dementia (FTD) than with AD. Given the heterogeneity of tauopathies, tau pathology associated with FTD may have differential effects on hyperexcitability compared with pathological tau associated with AD. In addition, certain manipulations of the phosphorylation state of tau, such as the application of okadaic acid to increase tau phosphorylation, can lead to the phosphorylation of other substrates that could alter neuronal excitability. The limited specificity of some of these phosphorylation-related manipulations therefore constrains the conclusions that can be drawn from such experiments about the relationship between tau phosphorylation and hyperexcitability. Another unintended effect and confound related to the generation of transgenic animals involves the incorporation of the transgene construct into the genome of the animal. In a recent example, behavioral and molecular phenotypes originally thought to be associated with tau pathology in the rTg4510 mouse model were instead associated with gene disruption due to construct insertion. These caveats and their research implications provide a critical angle on some of the studies of NH in preclinical models of AD and challenge some of the underlying assumptions regarding the origin of NH in AD. Being cognizant of these caveats and potential confounding factors should prevent misleading conclusions from being drawn. The studies presented above showcase strong evidence for the presence of NH in animal models and, to a lesser extent, in human patients, as well as its implications for the disease. However, given the caveats and limitations associated with animal models, can we be certain that the hyperexcitability seen in animal models of pathology faithfully mimics that of the human condition? In the next section, we present a perspective on evaluating the presence of NH in animal models of AD, as well as insight into preclinical research tools that should address the uncertainty regarding the validity of NH in preclinical studies.
Network hyperexcitability as a marker of AD pathology model validity

The failure of clinical trials of drugs for AD has highlighted an incomplete understanding of the disease. This could be due to inaccurate animal models that may only superficially resemble the pathological traits of AD but not the underlying etiology or dysfunction. Perhaps a deeper understanding of the functional changes associated with pathology, such as NH, is necessary. The neurophysiological indication of NH in AD may be a factor that links the 'form' of pathology and the 'function' (or dysfunction) of neurons in AD. However, since many types of neurological disorders can result in NH, it is key that the aspects of hyperexcitability captured in animal models are relevant to the disease. As mentioned above, indirect evidence for NH may be present in human patients in the form of cortical hyperexcitability, hippocampal hyperactivity, and deficits in DMN deactivation. Based on these premises, experiments investigating the presence of NH in animal models of AD pathology should consider the relevant spatial localization of the NH (i.e., investigating hippocampal hyperactivity in animal models) in conjunction with the presence or absence of AD-associated pathology in order to evaluate whether the model represents AD-relevant NH. Secondly, the temporal aspects of network activity should also be a feature recapitulated by animal models. While it remains to be ascertained that the increase in brain activity in the prodromal phase reflects the phenomenon of NH, this could be an indirect indication. Working on this premise, studies of animal models of AD-associated pathology that exhibit NH should also consider the temporal onset and progression of NH in relation to overall network activity. By matching not only the pathological but also the neurophysiological timelines to the temporal progression currently understood from clinical reports, model validity can be reinforced. The presence of epileptiform and seizure-like activity in AD is a tantalizing possibility for evaluating AD-relevant NH in animal models. However, given the myriad forms of epilepsies and seizures, caution should be exercised in drawing definitive conclusions regarding seizures, epilepsy, and AD. Further electrophysiological characterization of the seizure type(s) associated with human AD patients, in both sporadic and familial forms, is suggested before any definitive claims of model validity are made on the basis of seizure-like phenomena. Nonetheless, once validated in humans, this aspect of NH is expected to be one of the most definitive and promising indications of NH in animal models. Cellular and molecular indications of hyperexcitability, such as interneuron deficits and dysfunction of glutamate metabolism, may also provide an indication of an abnormal state of excitability. Characterization of cellular subtypes, biomolecular assays of brain homogenates, and gene expression profiling from both AD patients and animal models may provide a more solid basis for establishing increased excitability from a molecular and cellular perspective. These proposed indications form an initial framework for the evaluation of NH in animal models. As more research on the electrophysiological nature of NH emerges, these proposals will become more refined and specific to the human condition.
Further preclinical opportunities for optimizing/validating NH as an early AD indicator

The caveats and limitations of animal models presented in the previous section illustrate some potential confounding factors that might preclude a direct translational comparison between current animal models and clinical studies of AD in terms of NH. By reducing these confounds, the origins of NH can be established with more certainty and compared with human observations for a better measure of model validity. In animal models of amyloid and tau pathology, overexpression of the transgenic proteins is the main driver of the pathogenesis of the respective pathologies. However, this leads to uncertainty regarding the source of dysfunction: is it due to the pathology itself or to side effects of the transgenic manipulations themselves? Recent developments in animal models of both amyloid and tau pathology may resolve and mitigate this confound to a large extent. The generation of knock-in variants of mouse models of amyloid pathology, termed APP-KI mice, replaces the mouse APP gene with a humanized form and incorporates familial Alzheimer's disease mutations associated with the development of amyloid pathology, such as the Swedish (KM670/671NL) or Iberian (I716F) mutations. These mice develop robust amyloid pathology but exhibit amounts of AβPP similar to those of wild-type mice, eliminating most of the confounding factors associated with the overproduction of AβPP-associated fragments and of AβPP itself. In addition, the generation of multiple mouse models combining individual APP mutations (e.g., Swedish or Arctic), such as the APP NL-F or APP NL-G-F mice, increases the amount of C-terminal β (CTF-β) fragments in a gene-dose-dependent manner (i.e., APP NL-G-F mice produce more amyloid-β fragments than APP NL-F mice). This allows the dose-response effects of amyloid-β to be studied, for example by comparing APP NL-G-F and APP NL-F mice with APP NL/NL mice. In a recent study by Johnson and colleagues investigating the differential effects of AβPP overexpression, mouse models exhibiting AβPP overexpression were compared with APP-KI animals. The outcomes were that, while all models exhibited NH in the form of nonconvulsive seizures, reduction of amyloid-β levels in J20 mice overexpressing AβPP did not ameliorate epileptiform activity. This suggests that the NH phenotypes in these mouse models may not be related solely to amyloid-β levels but to a confluence of factors involving AβPP overexpression and AβPP processing, which can only be investigated using newer animal models that control for these factors. However, other reports argue against changes in neuronal activity in APP-KI models in terms of network hyperactivity. Multi-tetrode recordings of the entorhinal cortex and hippocampus revealed equivalent firing frequencies in APP-KI mice relative to age-matched wild-type mice, while diminished power and phase-amplitude coupling were found in this mouse model. Moreover, the incidence of network hyperexcitability in the form of interictal spikes did not differ between APP NL-F knock-in mice and wild-type controls. Following from the discussion above regarding protein overexpression and NH, the evidence from these studies of APP-KI animals does indeed suggest that NH is driven by AβPP overexpression rather than by amyloid-β.
In a similar vein, the generation of mouse models of tau pathology requires the overexpression of a humanized form of the mutant MAPT gene encoding the tau protein. The mutations that allow the development of tau pathology are derived from mutations observed in FTD, such as the P301S or P301L mutation. Similar to the APP-KI mice, humanized tau knock-in mice have also been produced, with the murine MAPT gene replaced by the humanized wild-type MAPT gene. These modifications should reduce the confounds of protein overexpression. These mice were reported to be similar to wild-type mice containing murine tau in terms of amyloid-β levels, neuronal death, and brain atrophy, suggesting no clear detrimental effects of replacing murine tau with human tau. It should be noted that these mice do not contain any mutations that promote the development of tau pathology, but rather mimic the properties of endogenous human tau without overexpression. Alternatively, another method of inducing tau pathology exploits the prion-like nature of the pathological form of the protein, with or without transgenic approaches, for the induction of tau pathology. This method grants, to some extent, both temporal and spatial control over where and when tau pathology is induced, allowing a more controlled study of the local effects and spreading of tau pathology. This process, called seeding, involves the injection of tau fragments that promote the aggregation of the tau protein into the pathological form, which subsequently propagates across the brain. However, a prerequisite for this seeding process is the expression of transgenic humanized tau containing mutations (e.g., P301S) that confer a predisposition to develop pathology. Several recent reports have investigated the neurophysiological outcomes of this seeding method in various tau-associated mouse models. Recent developments in tau seeding have enabled the induction of tau pathology in mice that do not have a mutant human tau genotype, such as wild-type mice. This opens the possibility of more closely mimicking cases of sporadic AD, which do not have a direct genetic basis for developing tau pathology, and eliminates the confounds of altered development, transgenic insertion artefacts, and protein overexpression. Further strengthening this approach is the source from which the seeds are derived. The seeding method described in uses seeds directly derived from brain samples taken from patients with AD, thus theoretically resembling the tau pathology associated with AD rather than with FTD. These approaches seem to represent a more accurate proxy of the combined amyloid and tau pathologies of AD, allowing the field to step closer to discerning the etiology of NH in AD.

Translational limitations and opportunities for clinical detection of network hyperexcitability

As briefly described above, conclusively detecting NH in a clinical setting remains elusive, generally because invasive electrodes would need to be implanted close to the source of NH. Noninvasive neurophysiological methods such as BOLD fMRI offer insight into indications of network activity but may suffer from issues such as source localization accuracy, limited spatial and temporal resolution, and difficulty in measuring deeper brain structures. Other methods such as PET involve exposure to radiation, which limits the number of scans that can be safely performed.
Other limitations come in the form of practical methodological challenges relating to measuring hyperexcitability in certain vigilance states, such as sleep, which has been suggested to contain more epileptiform activity. Which methods, then, are best suited for detecting hyperexcitability with minimal discomfort to the patient? Both EEG and MEG are promising techniques for the detection of hyperexcitable phenomena, because they directly measure the electric and magnetic signals generated by network activity. However, depending on the recording montage and paradigm, MEG and purely scalp-based EEG may not be able to identify the source(s) of hyperexcitable activity with sufficient spatial resolution, especially in deeper brain structures. Several approaches attempting to address the issue of deep source localization in both MEG and EEG have been developed to study deeper brain regions, including MEG virtual electrodes and high-density scalp EEG recordings. While there is debate regarding which method offers higher spatial and temporal resolution, MEG-based methods have been reported to outperform EEG-based measures in detecting subclinical epileptiform activity in AD patients, as well as in predicting the conversion from MCI to AD. While both methods have inherent limitations in terms of source localization, the combination of the two yields better source localization than the use of a single modality. This combined-modality approach has also been successfully applied in the field of epilepsy evaluation and could be an underdeveloped opportunity for the detection of NH in AD patients.
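As a purely illustrative sketch of how the oscillatory markers discussed above might be quantified from such recordings, the snippet below estimates relative theta- and gamma-band power from a simulated EEG-like trace using Welch's method. The sampling rate, band edges, and signal composition are arbitrary assumptions for demonstration and are not drawn from any study cited in this review.

```python
import numpy as np
from scipy.signal import welch

fs = 250.0                            # assumed sampling rate (Hz)
t = np.arange(0, 30, 1 / fs)          # 30 s of signal
rng = np.random.default_rng(0)
# broadband noise plus a weak 40 Hz (gamma-band) component
eeg = rng.normal(0.0, 1.0, t.size) + 0.3 * np.sin(2 * np.pi * 40 * t)

freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))

def band_power(lo, hi):
    """Integrate the power spectral density between lo and hi Hz."""
    mask = (freqs >= lo) & (freqs < hi)
    return np.trapz(psd[mask], freqs[mask])

total = band_power(1, 100)
print("relative theta (4-8 Hz) power :", band_power(4, 8) / total)
print("relative gamma (30-80 Hz) power:", band_power(30, 80) / total)
```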
NH is a pathological feature shared among multiple neuropathological disorders, with implications for cognitive function and possibly for neurodegenerative disease progression. Accumulating evidence points to the presence of NH in patients with AD, hypothesized to impair cognitive, motor, and behavioral function. The links between NH and amyloid, tau, glutamatergic, and interneuron function reveal multiple pathways by which these pathologies interact, even synergistically, to result in dysfunction. Clinical studies have shown cortical hyperexcitability, alterations in hippocampal activity, an increased predisposition to seizure-like activity in AD, as well as changes in deactivation of the DMN, all of which may indicate NH. Indications of NH in animal models of AD pathology are emerging as features of model validity. However, studies using animal models of AD pathology have generally relied on protein overexpression to induce pathology, which could produce NH as a side effect, confounding the interpretation of the relationship between pathology and NH observed in humans and animal models. Recent improvements in model development and in molecular approaches to studying AD pathology alleviate some of the confounds associated with protein overexpression and provide a clearer picture of the source of NH. In addition, care should be exercised when generalizing preclinical NH phenotypes from animal models that primarily model the familial form of AD or FTD rather than sporadic AD. It is possible that NH phenotypes in AD may even differ between familial and sporadic instances of the disease. NH is a very promising indicator of network dysfunction in patients with AD and may serve as a prodromal indicator of AD pathogenesis. By understanding and aligning the source of NH in patients and in animal models, we can obtain a key biomarker of AD progression that correlates with cognitive dysfunction and pathology.
Bridging pharmacology and neural networks: A deep dive into neural ordinary differential equations
28c2a2bd-82b5-4bc2-bd27-cd09b86c063a
11330178
Pharmacology[mh]
In the domain of clinical pharmacology, the modeling of pharmacological processes and systems presents unique challenges and opportunities. To illustrate this, population modeling uses different methodologies to simulate patient clinical outcomes, with statistical models whose interpretability supports decision-making in drug discovery and development. As an example, the nonlinear mixed-effects (NLME) model is carefully calibrated by making pharmacological assumptions and incorporating a restricted but significant set of covariates that explains part of the observed variability. Additionally, combining NLME models with ordinary differential equations (ODEs) has significantly contributed to the field of drug development by including mechanistic and well-understood dynamics based on biological assumptions. Nevertheless, such models are unable to handle a high-dimensional covariate space as input, in contrast to traditional machine learning (ML) models. Moreover, ML approaches such as ensemble methods, mainly tree-based and gradient-boosting models, have proven able to deal with complex dynamics and to be extremely valuable in model-informed drug development. Within the ML field, there is a subcategory of models classified as deep learning (DL) architectures. These are more advanced methods based on the concept of neural networks (NNs). They consist of layers connected to each other by neurons and are able to process both static and longitudinal data as input over time. NN architectures can be composed of dense and hidden layers, where nonlinear transformations are applied within each neuron. Furthermore, a hidden function within NNs introduces nonlinearity into the network's decision-making process during training. More specifically, for NNs dealing with longitudinal data or time series, this function is computed sequentially between consecutive time steps, summarizing the most relevant information retained over time from the longitudinal input data. However, when it comes to handling sparse and irregularly sampled data, NNs present several limitations. Recent work explored solutions to deal with sparsity and irregular sampling of data points by combining DL models with ODEs. Neural ODEs represent a potential advanced solution. Moreover, they offer a unique interpretation of the hidden state as a function of the previous hidden state and the longitudinal input data over time, as shown in Equation (1) below:

(1) $h_t = f(W \cdot h_{t-1} + U \cdot x_t + b)$,

where $h_t$ is the hidden state, a working-memory capability that carries information from the sequential input data $x_t$ (e.g., a list of covariates) from previous events and is overwritten at every step $t$. $W$ and $U$ are weight matrices that are learned during training, $b$ is a bias vector, and $f$ is an activation function, a nonlinear function (e.g., sigmoid, tanh) that decides whether a neuron should be activated within the NN.
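As a concrete illustration of Equation (1), the short NumPy sketch below steps a recurrent hidden state through a toy sequence of patient visits. The dimensions, weights, and data are hypothetical placeholders rather than values from any of the models discussed here; the point is only to show how the hidden state is overwritten at every step while summarizing the sequence seen so far.

```python
import numpy as np

def rnn_hidden_update(h_prev, x_t, W, U, b, f=np.tanh):
    """One recurrent update of the hidden state: h_t = f(W·h_{t-1} + U·x_t + b)."""
    return f(W @ h_prev + U @ x_t + b)

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))          # hidden-to-hidden weights (4 hidden units)
U = rng.normal(size=(4, 3))          # input-to-hidden weights (3 covariates per visit)
b = np.zeros(4)                      # bias vector

h = np.zeros(4)                      # initial hidden state h_0
visits = rng.normal(size=(5, 3))     # five longitudinal observations x_1, ..., x_5
for x_t in visits:                   # the hidden state is overwritten at every step t
    h = rnn_hidden_update(h, x_t, W, U, b)
print(h)                             # final summary of the sequence retained by the network
```

In a trained network the weights would of course be learned by backpropagation; here they are random and serve only to make the update rule explicit.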
Compared with NN models, Neural ODEs incorporate the ODE concept to generate the dynamics of the hidden function. For this purpose, a fully connected layer architecture that connects every input neuron to every output neuron is integrated within Neural ODEs. This provides a surrogate function that mimics the ODE dynamics $g_{nn}$, as illustrated by the following Equation (2):

(2) $\dfrac{dh_t}{dt} = g_{nn}(h_t, t; \theta)$,

where $g_{nn}$ approximates the ODE and replicates the ODE dynamics, $\theta$ represents the parameters of the model learned during training, and $h_t$ contains the relevant patterns learned from the input data. $g_{nn}$ is mainly used to solve the different states of the hidden function over time in order to make predictions of future measurements at the individual level.
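To make Equation (2) more tangible, the sketch below approximates the right-hand side $g_{nn}$ with a tiny fully connected network and integrates it with a fixed-step Runge-Kutta solver at irregular visit times. All sizes, weights, and time points are invented for illustration; practical Neural ODE implementations usually rely on adaptive solvers and learn the weights by backpropagating through the solve.

```python
import numpy as np

rng = np.random.default_rng(1)

# A tiny fully connected network g_nn(h, t; theta) standing in for the ODE right-hand side.
W1, b1 = rng.normal(scale=0.1, size=(16, 3)), np.zeros(16)   # input: [h (2-dim), t]
W2, b2 = rng.normal(scale=0.1, size=(2, 16)), np.zeros(2)    # output: dh/dt (2-dim)

def g_nn(h, t):
    z = np.concatenate([h, [t]])
    return W2 @ np.tanh(W1 @ z + b1) + b2

def odeint_rk4(f, h0, ts):
    """Fixed-step Runge-Kutta 4 solver returning the hidden state at each requested time."""
    hs, h = [h0], h0
    for t0, t1 in zip(ts[:-1], ts[1:]):
        dt = t1 - t0
        k1 = f(h, t0)
        k2 = f(h + 0.5 * dt * k1, t0 + 0.5 * dt)
        k3 = f(h + 0.5 * dt * k2, t0 + 0.5 * dt)
        k4 = f(h + dt * k3, t1)
        h = h + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        hs.append(h)
    return np.stack(hs)

# Irregularly spaced visit times: the continuous-time formulation handles them directly.
ts = np.array([0.0, 0.3, 1.1, 1.2, 2.7])
trajectory = odeint_rk4(g_nn, h0=np.zeros(2), ts=ts)
print(trajectory.shape)  # (5, 2): the hidden state evaluated at every visit time
```

Because the dynamics are defined in continuous time, nothing special is needed to evaluate the hidden state at arbitrary, unevenly spaced time points.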
Within the Quantitative Pharmacology community, explainability and interpretability of DL models are fundamental. There is a constant need to innovate with advanced models that bring understandable and quantifiable concepts, such as combining NNs with ODEs. As a reference, Neural ODEs offer several advantages and applications compared with NN models. For example, the application of explainable DL, which provides an understanding of the factors influencing predictions, has been shown to be relevant for linking tumor dynamics and overall survival. By deriving an explainable variable, the "kinetic rate" of the tumor behavior at the patient level, it is possible to interpret how the kinetic rate influences the patient's survival outcome. Another context illustrating the interpretability of Neural ODEs is the field of PK, where complex body-related behaviors are mechanistically translated into ODEs based on multiple variables. Indeed, the use of low-dimensional Neural ODEs can be beneficial in simplifying the complexity of the ODE dynamics, offering advanced tools for application in PK modeling. On the other hand, Neural ODEs present new challenges, including computational complexity and difficulties in training because of their continuous nature. We underscore the need for further investigations, specifically in clinical pharmacology use cases. In this mini review, we will outline the primary workflow of these models and highlight some applications in clinical pharmacology. Different variations of the basic structure of Neural ODEs have been proposed in the literature (Table ). Stochastic Neural ODEs introduce stochasticity based on stochastic differential equations, supporting the modeling of data with random noise, biological variability among individuals, or uncertainties that mainly reflect approximations or assumptions made during the development of the model. Controlled Neural ODEs integrate a control function, learned by the model, which dictates how the dynamics of the ODE evolve over time. Finally, Latent Neural ODEs are a specific application of Neural ODEs for sparse and irregularly sampled time-series modeling using longitudinal data as input. They are classified as probabilistic models and are mainly based on the concept of Gaussian processes, a nonparametric supervised learning method used to solve regression and probabilistic classification problems. The main purpose of a Latent Neural ODE is to learn and approximate a surrogate model for the underlying ODE by deriving a latent representation of the longitudinal input data, called "latent variables." As shown in Figure , Latent Neural ODEs incorporate an input neural network (encoder) that processes the longitudinal input data into a dense representation within the hidden layers and converges to Gaussian processes for approximating ODEs. From this step, latent trajectories are created at a low-dimensional level for each patient. These correspond to different states of the latent variables, which are estimated by Gaussian sampling from the encoder, as described by the following sampling procedure:

(3) $z_{t_0} \sim p(z_{t_0}) = \mathcal{N}(0, 1)$

(4) $\dfrac{dz_t}{dt} = f_{nn}(z_t, t; \theta_{f_{nn}})$, where $z_{t_1}, z_{t_2}, \ldots, z_{t_N} = \mathrm{ODESolve}(z_{t_0}, f_{nn}, \theta_{f_{nn}}, t_0, \ldots, t_N)$.

Here $z_{t_0}$ is the initial condition of the latent variable $z_t$ at time step $t_0$, sampled from a normal distribution with mean $\mu = 0$ and standard deviation $\sigma = 1$, and $f_{nn}$ is a fully connected network. The latter is part of the encoder and computes $dz_t/dt$, which precisely represents the substituted ODE whose mean and standard deviation parameters are being estimated. $\theta_{f_{nn}}$ are the weights and biases learned by $f_{nn}$. Finally, an output neural network (decoder) is used to transform the evolved latent states back into the data space, where the prediction is produced. This process allows Latent Neural ODEs to interpolate (fit) or extrapolate (predict) time series. In this review, we will explore Latent Neural ODEs, renowned for their high predictive power, to make predictions at the patient level using longitudinal clinical trial datasets. RNNs AS ENCODERS Latent Neural ODEs process the longitudinal input data using encoders. These are typically based on Recurrent Neural Networks (RNNs), a DL architecture trained to process sequential data, encoding the temporal, heterogeneous, and sparse representation of input features or covariates over time for multivariate time-series prediction. RNN encoders are composed of different layers in which longitudinal data are processed sequentially over time. The encoder provides a global representation of the most relevant patterns learned across the whole patient population. Furthermore, these RNN architectures handle informative missingness and the sparsity of clinical data with fewer assumptions than traditional pharmacological models, where forward-fill, backward-fill, and mean-average imputations are mainly utilized. Indeed, RNNs manage sparsity by using a binary mask, an indicator that tells the model whether or when covariates are measured. This provides a clearer view of the clinical trial dataset structure, giving the model a more thorough understanding of the data. This opens opportunities to employ such methods in real clinical data contexts, where sparsity represents the main complexity for time-series modeling. Indeed, RNNs are extensively employed in the field of time-series modeling, with Gated Recurrent Units (GRUs) and Gated Recurrent Units with Decay (GRU-D) able to include missingness as a feature when sparsity is ubiquitous across the data. GRUs are particularly adept at time-series modeling by incorporating gating mechanisms that regulate information flow, as shown in Figure , thereby allowing the retention of relevant information over longer sequences. This ultimately enables adaptability to the irregular time intervals commonly seen in sparse data. GRU-D, on the other hand, explicitly models the decay of information over time, making it especially effective in scenarios where data are not only sparse but also missing at random, a typical situation in healthcare and other contexts. The GRU-D model incorporates the time elapsed since the last observation as a structural input feature to update the hidden state function computed at each time step. This encoder helps preserve the time-related patterns within the data, even when dealing with missing values.
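The binary-mask and decay ideas behind GRU-D-style encoders can be sketched in a few lines. The example below is an illustrative simplification with hand-set decay parameters rather than the learned, per-feature weights of the actual GRU-D architecture: when a covariate is unobserved, the imputed value relaxes from the last measurement toward the population mean as the time gap grows.

```python
import numpy as np

def decay_impute(x, mask, delta, x_mean, w=0.1, b=0.0):
    """
    GRU-D-style input decay (illustrative): when a covariate is missing (mask == 0),
    fall back from the last observed value toward the population mean, with trust in
    the last observation decaying as the time gap `delta` grows.
    """
    x_filled, x_last = np.empty_like(x), x_mean.copy()
    for t in range(len(x)):
        gamma = np.exp(-np.maximum(0.0, w * delta[t] + b))        # decay factor in (0, 1]
        imputed = gamma * x_last + (1.0 - gamma) * x_mean          # decayed estimate
        x_filled[t] = mask[t] * x[t] + (1.0 - mask[t]) * imputed   # keep observed values as-is
        x_last = np.where(mask[t] == 1, x[t], x_last)              # remember last observation
    return x_filled

# Two covariates over four visits; the mask marks which values were actually measured
# and delta holds the days elapsed since each covariate was last observed.
x     = np.array([[5.0, 1.2], [0.0, 1.0], [0.0, 0.0], [4.0, 0.8]])
mask  = np.array([[1, 1], [0, 1], [0, 0], [1, 1]], dtype=float)
delta = np.array([[0, 0], [7, 0], [21, 14], [35, 28]], dtype=float)
print(decay_impute(x, mask, delta, x_mean=np.array([4.5, 1.0])))
```

The filled-in values, together with the mask itself, are then fed to the recurrent encoder, so missingness is treated as information rather than discarded.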
ODE SOLVER TO RECONSTRUCT THE TRAJECTORY OF AN UNDERLYING DISEASE VARIABLE The challenge of solving the complex dynamics behind ODEs is also fundamental, particularly in Neural ODEs. Different solutions have been conceived, such as adaptive solvers and augmented Neural ODEs, which extend the model state with additional dimensions to learn more complex ODEs, solved from $\mathbb{R}^d$ to $\mathbb{R}^{d+p}$, where every data point is concatenated with a vector of zeros. This method is employed to simplify the resolution of ODEs by avoiding intersections of the ODE flow within $\mathbb{R}^d$. This continuous-time modeling allows Neural ODEs to adapt to the irregularity of the data, representing a more flexible approach to dealing with sparse and irregular data than traditional ML models. ODE The interpretability of the latent dynamics as explainable variables is crucial, as it can effectively underline the intricate relationship between covariates and the clinical outcome. The dynamics of the latent variables for each individual patient are identified by solving the ODE, interpolating across the first observations of the patient outcome, and extending those dynamics by extrapolating the patient outcome beyond the observable horizon. In theory, Latent Neural ODEs are complex DL architectures coupled with ODEs, where the latent variable $z_t$ is defined by sampling using Gaussian processes. The mean and standard deviation parameters are estimated from the final layer of an RNN encoder, as exhibited in Figure , by computing the posterior probability distribution $q$ to obtain the parameters of the distribution, as given by the following Equations (5) and (6):

(5) $q(z_{t_0} \mid \{x_{t_i}, t_i\}_i, \theta_{f_{nn}}) = \mathcal{N}(z_{t_0} \mid \mu_{z_{t_0}}, \sigma_{z_{t_0}})$, where $\mu_{z_{t_0}}$ and $\sigma_{z_{t_0}}$ come from the hidden state of the RNN applied to $\{x_{t_i}, t_i\}_i$ with parameters $\theta_{f_{nn}}$

(6) Sample $z_{t_0} \sim q(z_{t_0} \mid \{x_{t_i}, t_i\}_i)$

The initial condition $z_{t_0}$ is estimated from the latent variable for each patient in order to solve the ODE and fully reconstruct individual latent trajectories, with the aim of extrapolating clinical outcomes for future clinical measurements, as shown in Figure . In essence, through the strategic combination of RNN-based encoders and the inherent capabilities of ODEs, these models effectively handle the sparsity and heterogeneity of clinical data, providing a robust and flexible framework for modeling complex dynamics within these challenging contexts.
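Putting Equations (5) and (6) together with the ODE solve, a minimal end-to-end sketch of the latent workflow might look as follows: an encoder summary yields $\mu$ and $\sigma$, the initial latent state is sampled, the latent dynamics are integrated over the requested (irregular) times, and a decoder maps the latent trajectory back to the outcome space. Every matrix, dimension, and time point below is a made-up placeholder standing in for trained components.

```python
import numpy as np

rng = np.random.default_rng(2)

# Pretend an RNN encoder has already summarized one patient's visits into `h_enc`;
# two small linear heads map it to mu and log-sigma of q(z_t0 | {x_ti, t_i}).
h_enc = rng.normal(size=8)
W_mu, W_logsig = rng.normal(scale=0.3, size=(2, 8)), rng.normal(scale=0.3, size=(2, 8))
mu, sigma = W_mu @ h_enc, np.exp(W_logsig @ h_enc)

z0 = mu + sigma * rng.standard_normal(2)     # Eq. (6): sample the initial latent state

# Latent dynamics f_nn and a simple fixed-step Euler solve over the prediction times.
A = rng.normal(scale=0.2, size=(2, 2))
f_nn = lambda z, t: np.tanh(A @ z)

def ode_solve(f, z0, ts, substeps=20):
    zs, z = [z0], z0
    for t0, t1 in zip(ts[:-1], ts[1:]):
        dt = (t1 - t0) / substeps
        for k in range(substeps):
            z = z + dt * f(z, t0 + k * dt)
        zs.append(z)
    return np.stack(zs)

ts = np.array([0.0, 0.5, 1.5, 3.0])          # irregular follow-up times
z_traj = ode_solve(f_nn, z0, ts)

W_dec = rng.normal(scale=0.5, size=(1, 2))   # decoder: latent state -> observed outcome
y_pred = (z_traj @ W_dec.T).ravel()
print(y_pred)                                # one predicted outcome value per requested time
```

In a real Latent Neural ODE the encoder, dynamics, and decoder are trained jointly (typically with a variational objective), and the same machinery can extrapolate beyond the last observed visit simply by extending the time grid.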
ODEs IN CLINICAL PHARMACOLOGY The mathematical modeling of drug kinetics, PK and pharmacodynamics (PD), is inherently complex because of the many interacting physiological variables. Frequently used in pharmacometrics, NLME models are limited by the amount of covariate information incorporated into the modeling, making it harder to identify the most relevant predictors from high-dimensional input data and to predict the clinical outcome itself. Neural ODEs offer a fresh perspective by dealing with large datasets, specifically heterogeneous, sparse, and multimodal data. Case studies involving the application of Neural ODEs in these areas have demonstrated promising results, although more research is required to validate these findings in diverse clinical settings. Using an example from the clinical pharmacology field referenced by, Neural ODEs are used for predicting PK for individual patients. Drug dosing and PK measurements in patients are collected at irregular time intervals, adding complexity compared with traditional NN architectures, in which time-varying gaps between consecutive visits across different patients are difficult to assimilate and learn. To address this, the Latent Neural ODE model is potentially a DL solution for fitting irregularly sampled data points over time when additional information is provided continuously over time to make predictions beyond the observable data. For example, the main objective of PK Neural ODE-based models is to predict PK values at the individual level by adding specific dose-regimen information linearly into the ODE, as shown in Figure . This approach allows the model to enhance its flexibility when the treatment may change or be stopped, enabling more tailored predictions at the individual level. It yields promising results when compared with ML techniques such as LightGBM, a tree-boosting model, and long short-term memory (LSTM), a type of RNN. Indeed, the Latent Neural ODE can accurately reproduce and extrapolate PK trajectories for individual patients even if the dose is discontinued. Advantages of this Latent Neural ODE compared with an NLME model are its ability to approximate and learn the ODE at the patient level while including high-dimensional input covariates in the modeling. An extension of this work was to develop a pharmacology-informed Neural ODE-based model, illustrating a more complex approach in which both PK and PD data are utilized to simulate patient responses to untested dosing regimens by bridging PK dynamics with PD predictions for each individual patient. This pharmacology-guided NN encompasses all the relevant pharmacological mechanistic aspects to accurately predict PD values within an easy-to-use workflow. Furthermore, the use of Neural ODEs, compared with NLME modeling, simplifies the set of multiple mechanistic and pharmacological assumptions to a low-dimensional dynamical system in which ODEs are solved at the patient level, reducing complexity. In practice, pharmacological models can describe the dynamics of carefully chosen variables through complex ODE systems. However, these models can incorporate limited but meaningful interpretations of variable behaviors that are often not observable in clinical environments, though they are observable in a laboratory setting; these variables are called "Expert variables." This problem is addressed in , where the authors developed a pharmacology-informed Neural ODE for COVID-19 disease progression modeling, integrating the dynamics described by the pharmacological model for dexamethasone (Dex) with 5 Expert variables (e.g., innate immune response, Dex concentration in lung, Dex concentration in plasma, viral load, and adaptive immune response). This approach aims to give some guidance and prior knowledge to inform the model by incorporating both expert and latent variables in a clinical environment through ODEs. This use case highlights the opportunity to customize the Latent Neural ODE using a mechanistic approach represented by expert variables, and it illustrates the utility of employing a pharmacology-informed DL-based model within a clinical environment for improving clinical decisions.
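The idea of adding dose-regimen information linearly into the ODE, mentioned above for the PK use case, can be illustrated with a simple forcing term. In the hypothetical sketch below the dosing schedule enters the right-hand side as a piecewise-constant input d(t) scaled by a coupling vector B, so the same latent dynamics can be simulated under treatment changes or discontinuation; the dynamics matrix, coupling vector, and schedule are invented for illustration and are not taken from the cited models.

```python
import numpy as np

A = np.array([[-0.5, 0.0],
              [ 0.3, -0.4]])          # stable latent dynamics (illustrative values)
B = np.array([0.8, 0.0])              # hypothetical coupling of the dose input to the latent state

def dose_rate(t, doses):
    """Piecewise-constant infusion rate built from (start, stop, rate) dosing records."""
    return sum(rate for start, stop, rate in doses if start <= t < stop)

def latent_rhs(z, t, doses):
    # Latent dynamics plus a linear dose term: dz/dt = f_nn(z, t) + B * d(t).
    return np.tanh(A @ z) + B * dose_rate(t, doses)

def euler_solve(z0, ts, doses, dt=0.05):
    zs, z, t = [z0], z0, ts[0]
    for t_next in ts[1:]:
        while t < t_next - 1e-9:
            z = z + dt * latent_rhs(z, t, doses)
            t += dt
        zs.append(z)
    return np.stack(zs)

doses = [(0.0, 2.0, 1.0)]             # treatment for the first 2 days, then discontinued
ts = np.linspace(0.0, 6.0, 13)
traj = euler_solve(np.zeros(2), ts, doses)
print(traj[:, 0])                     # the driven component rises under dosing, then decays
```

Because the dose enters as an explicit input to the differential equation, simulating an untested regimen only requires changing the dosing records, not retraining the dynamics.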
Regarding technical limitations in the context of Neural ODEs, a frequently encountered issue is overfitting, particularly when the dataset size is limited. Overfitting happens when a model that is too intricate relative to its training data ends up learning patterns specific to those data and fails to accurately predict new, unseen data. One effective countermeasure is pretraining of the Neural ODE. This process involves training the model first on a broad and varied dataset. Such an approach equips the model with a more general understanding, laying a solid foundation before refining it for a specific task. Moreover, employing data augmentation methods, which increase the size of the training dataset through the generation of altered data versions, proves beneficial in preventing overfitting, especially when acquiring additional data is impractical. Neural ODEs hold great promise for transforming healthcare data analytics, but several unanswered questions remain. These models suffer from overfitting when the sample size is small. Searching for methods that utilize pretraining of the Neural ODE is essential to combat overfitting and build a sustainable AI framework, whether by using data augmentation or by looking for richer datasets. There are also concrete opportunities to further explore and leverage applications of these methods within the clinical pharmacology field. As an example, these methods can offer new avenues in advancing clinical oncology by enhancing current approaches to the prediction of total tumor size and, even further, of individual target lesion sizes (iTLs) over time . The use of pharmacology-informed models combining lab values, biomarkers, and treatment plans for the prediction of clinical outcomes and longitudinal endpoints plays a significant role in advancing personalized medicine and data-driven decision-making. Despite several challenges, Neural ODEs offer a promising new avenue for the value-driven application of AI in healthcare, particularly in complex domains like clinical pharmacology and model-informed drug development. No funding was received for this work. Idris Bachali Losada is an employee of Randstad and contributed as a paid contractor for the Merck Quantitative Pharmacology, Ares Trading SA (an affiliate of Merck KGaA, Darmstadt, Germany), Lausanne, Switzerland. Nadia Terranova is an employee of Merck Quantitative Pharmacology, Ares Trading SA (an affiliate of Merck KGaA, Darmstadt, Germany), Lausanne, Switzerland.
Endocervical Adenocarcinoma, Gross Examination, and Processing, Including Intraoperative Evaluation: Recommendations From the International Society of Gynecological Pathologists
d8bbd1a2-fe57-4a9e-b5d1-c89bafe31382
7969178
Gynaecology[mh]
LEEP, also known as large loop excision of the transformation zone, is designed to excise the squamocolumnar junction in patients with colposcopically suspected or cytologically suspected/confirmed high-grade squamous intraepithelial lesion . In this setting, LEEP is generally recommended over cold knife conization , . The cutting device is a wire loop that simultaneously cuts tissue and cauterizes the surgical site to achieve hemostasis. Advantages of this technique over cold knife conization are that the hemostasis control permits the procedure to be safely conducted in an office setting (whereas cold knife conization requires an intraoperative setting for hemostasis control) and that the tissue margins are cauterized, allowing easy identification microscopically. A disadvantage of LEEP over cold knife conization is that the cautery-induced thermal artifact may obscure microscopic examination of the tissue at the margin. Ideally the LEEP procedure is performed in a single circumferential pass with the electrosurgical device around the entire squamocolumnar junction, creating a single intact cylinder of tissue (Figs. A–F). In some cases, however, it may not be possible for the surgeon to perform the procedure in a single pass (Figs. G, H); instead the surgeon may need to use 2 or more passes with the electrosurgical device to remove the cylindrical target as 2 or more separate pieces of tissue, rather than an intact cylinder. On occasion, the intact cylinder may break open during the procedure; such a specimen is herein referred to as a fragmented LEEP specimen. The term fragmented LEEP does not include the scenario in which an intact cylinder produced by LEEP is accompanied by an additional separate specimen of the more proximal endocervical canal, often referred to as a top-hat (Fig. F). A challenge in evaluating a fragmented LEEP specimen relates to understanding the relationship among the edges of the different pieces and identifying the true endocervical and ectocervical margins. Thus, specimen management in this scenario is discussed separately from that of an intact LEEP/cone specimen. Specimen Orientation Fragmented LEEP specimens have a mucosal surface on one side and cervical wall connective tissue on the opposite side (so-called deep margin). The ectocervical edge may appear shiny, smooth and white compared to the endocervical edge, which may be pink and finely granular with adherent mucus. The deep connective tissue margin is usually rough and cauterized. Recognizing these landmarks is important, when possible, as the ideal tissue slices are cut parallel to the axis of the endocervical canal. The presence of thermal artifact on microscopic examination distinguishes a true surgical margin from an edge created during the process of slicing or trimming the fragments. Therefore inking the specimen is not required. Recommendations The presence of thermal artifact on microscopic examination of the tissue edges of a fragmented LEEP specimen is the best marker of a true surgical margin. Specimen and Tumor Measurements The number of fragments and the range of the dimensions (minimum size and maximum size) should be recorded. The dimensions of any macroscopic lesions should be documented, though the anatomic orientation of the dimensions may be challenging in a fragmented LEEP. Recommendations Document the number of tissue fragments and size range (minimum to maximum). Specimen Processing and Tissue Sampling The fragments should be sliced at 2 to 3 mm intervals parallel to the endocervical canal. 
This will produce a tissue section that exhibits the mucosa along one edge, with the endocervical and ectocervical mucosal margins at either end of this edge. The deep connective tissue margin will be the edge opposite of the mucosal one. All tissue should be submitted for microscopic examination. To mitigate against tangential sectioning artifacts and incomplete visualization of the full mucosa in the hematoxylin and eosin (H&E)-stained fragments, it is recommended to place no more than 1 or 2 tissue sections in each cassette since any more may make it difficult to align all the sections in the same plane, potentially resulting in incomplete representation of the mucosa of all of the sections in the block. The number of initial H&E-stained sections per block to examine varies between practices though little evidence exists to provide guidance. One study reported that a single H&E section per block was sufficient for accurate diagnosis as long as deeper sections were examined in certain settings after review of the initial H&E, such as missing mucosa, absence of squamous intraepithelial lesion, suspicion for stromal invasion, or findings that are discordant with the clinical, colposcopic, and/or cytologic findings . Some professional organizations recommend one initial H&E section per block, with additional sections to be considered depending on the findings in that initial section , while many practices evaluate 2 or more initial H&E sections per block in all cases. Further studies are needed to guide best practices. Recommendations Slice the tissue in 2 to 3 mm thick slices parallel to the endocervical canal. Limit the number of slices per cassette in order to avoid incomplete representation due to sectioning artifacts. A single H&E-stained section per block is sufficient for initial microscopic examination, with consideration for deeper sections when there is missing mucosa, absence of squamous intraepithelial lesion/endocervical adenocarcinoma, suspicion for stromal invasion, or findings that are discordant with the clinical, colposcopic, and/or cytologic findings. Alternatively it may be more efficient to routinely examine 2 or more sections per block, depending on the local practice. 
Intact LEEP and cold knife cone specimens consist of a cylinder of the endocervical canal with ectocervical mucosa at one end and endocervical mucosa at the other end. The shape and size of the cylinder may vary considerably from one patient to another, with some being long and tapered and others being broad and shallow. 
The outer surface of the cylinder corresponds to the deep connective tissue margin. Assessment for invasive tumor and for margin involvement by tumor are 2 key goals of the pathologic evaluation. As thermal artifact from the LEEP technique may obscure the microscopic evaluation of the specimen margins, the cold knife cone technique offers better margin assessment since electrocautery is not used. For histologically confirmed endocervical adenocarcinoma in situ, cold knife conization is generally advised over LEEP by some but not all guidelines , . Specimen Orientation and Inking The 3 margins are the mucosal endocervical margin, the mucosal ectocervical margin, and the deep connective tissue margin. If the surgeon has not provided orientation to the endocervical versus ectocervical margin, this can usually be determined by the different appearance of the mucosa. The ectocervical mucosa is shiny, smooth and white when compared with the endocervical mucosa which often is pink and finely granular with adherent mucus. All of the margins should be inked; a single color is sufficient though some pathologists prefer to use 2 colors to further distinguish the endocervical end versus the ectocervical end of the specimen. The surgeon may designate the anatomic orientation of the cone with a suture, for example indicating the 12 o'clock position or anterior cervical lip. In such a case, the clock face orientation positions should be preserved throughout the specimen sectioning, block designation, and final pathology report. If such orientation is not provided, there is no need for the pathologist to arbitrarily designate clock face orientation positions as there is no way to correlate them with the anatomic landmarks. Recommendations Ink the ectocervical and endocervical mucosal margins as well as the deep connective tissue margin. If anatomic orientation of the specimen is designated, preserve this orientation in the tissue block designations. Specimen Measurements The length (parallel to the endocervical canal), diameter and wall thickness of the specimen should be recorded. The dimensions of any macroscopically visible tumor should be recorded using the same strategy described for trachelectomy specimens (see below). Recommendations Document length, diameter and wall thickness of the specimen. Tumor Location If the surgeon provided orientation of the specimen, then the anatomic location of grossly visible tumor should be recorded as this may assist in correlating with radiologic and intraoperative findings, particularly if there is clinical concern for margin involvement by tumor. Options include using positions on a clock or designating anterior versus posterior lip of the cervix. Recommendations Document the anatomic location of tumor in the cervix using positions on a clock or designating anterior versus posterior lip of the cervix. Document the distance of tumor to the endocervical margin, ectocervical margin and deep connective tissue margin. Macroscopic Tumor Dimensions The 3 macroscopic dimensions of a cervical tumor are its length (parallel to the endocervical canal), width (in the plane perpendicular to the endocervical canal), and thickness (from tumor surface to the tumor's deepest invasion point). The macroscopic depth of invasion is defined as the distance from the endocervical mucosa to the tumor's deepest point within the cervical wall. Depending on the relative amount of exophytic growth versus growth into the cervical wall, tumor thickness and tumor depth of invasion may be different. 
Recommendations Document the tumor length (parallel to the endocervical canal), tumor width (perpendicular to the endocervical canal), tumor thickness, and depth of tumor invasion. Specimen Processing and Tissue Sampling Some intact LEEP/cone specimens may be submitted to the pathology laboratory in the fresh state while others are submitted already in formalin. This affects the strategy for specimen handling. Ideally, a fresh intact LEEP/cone should be opened immediately and pinned out before formalin fixation, as this strategy maximizes the opportunity for tissue sectioning to be performed in a way that preserves accurate tissue orientation, permits full visualization of the mucosa, and facilitates parallel 2 to 3 mm thick slices that do not need to be trimmed down before placement in tissue cassettes (Fig. ). The disadvantage of this strategy is that it is dependent on the availability of pathology laboratory staff to perform this step on receipt of the fresh specimen in order to prevent tissue autolysis. The procedure for opening a fresh intact LEEP/cone, as well as fixation and sectioning, is similar to that for a trachelectomy specimen, which is discussed below. If it is not feasible to open and pin a fresh intact specimen before formalin fixation, then the specimen should be placed in formalin upon receipt and processed as described below. If the LEEP/cone is left intact during formalin fixation, then there are 2 options for processing the specimen: radial slicing or parallel slicing (Fig. ). The radial strategy is best suited for specimens in which the endocervical canal is clearly visible, whereas specimens in which the endocervical canal is less obvious may be better managed by parallel slicing. For the radial strategy, the goal is that each slice should have mucosa from the ectocervical margin to the endocervical margin along one edge of the section and the deep connective tissue margin at the opposite edge. It is unavoidable that the radial strategy produces tissue sections that are thicker toward the deep connective tissue part of the slice, creating a wedge-like shape that will not lie flat in the tissue cassette. Therefore, the excess tissue at the thicker end of the section should be trimmed to produce a flat section. These trimmed pieces will not contain any mucosa but should still be examined microscopically. For the parallel slicing strategy, the specimen is serially cut in 2 to 3 mm thick slices parallel to the endocervical canal. This produces uniform thin slices that do not need to be trimmed down further. To fully evaluate the mucosal and connective tissue margins of the first and the last slices, these can be further cut perpendicularly and embedded on their sides. All tissue should be submitted for microscopic examination, including the excess trimmed tissue. The sections should be placed in consecutive cassettes and the cassette code should document the anatomic orientation of the slices in order to facilitate tumor size measurements in the event that tumor is present in >1 slice. Only 1 to 2 sections should be submitted in each cassette. On occasion, a LEEP specimen may be accompanied by an additional, usually smaller, cylinder of endocervix, the so-called top hat specimen, when the clinician determines that additional tissue is necessary to assure a negative margin. The length, diameter and wall thickness should be documented. The true endocervical margin and deep connective tissue margin should be inked. 
The specimen can be sliced either using a radial approach or parallel approach, as described for the main LEEP specimen. The entire tissue should be submitted for microscopic examination. Regarding the number of initial H&E-stained sections per block to examine, the recommendations stated for fragmented LEEP specimens apply to intact LEEP and cone specimens. Recommendations Fresh intact LEEP/cone specimens can either be opened and pinned before fixation or placed intact in formalin, depending on local practice resources. Specimens opened and pinned before fixation should be thinly sliced parallel to the endocervical canal. Specimens fixed intact can be sliced using a radial or a parallel slicing strategy. Specimens should be entirely submitted for microscopic examination, including any excess trimmed pieces. If an additional so-called top hat specimen is submitted, it should be inked, sliced using the same strategy for the main LEEP specimen, and entirely submitted. A single H&E-stained section per block is sufficient for initial microscopic examination, with consideration of deeper sections when there is missing mucosa, absence of squamous intraepithelial lesion/endocervical adenocarcinoma, suspicion for stromal invasion, or findings that are discordant with the clinical, colposcopic, and/or cytologic findings. Alternatively it may be more efficient to routinely examine 2 or more sections per block, depending on the local practice. 
Trachelectomy is a fertility sparing approach for selected stage I cervical cancers (stage IA1 with lymphovascular space invasion, IA2, or IB with clinically estimated cervical length of 2 cm or less and no radiologic evidence of tumor involvement of the upper endocervix or any extrauterine site) – . Endocervical adenocarcinoma is slightly more prevalent than squamous cell carcinoma among trachelectomy patients in some studies , . Experience remains limited in patients with tumors larger than 2 cm and/or postneoadjuvant therapy – . Radical trachelectomy consists of the cervix (ectocervix, transformation zone, and endocervical canal), upper vagina (cuff of 1–2 cm), and lower parametria. Simple trachelectomy does not include the parametria. Trachelectomy specimens are usually received intact and fresh from the operating room. They are typically submitted for intraoperative consultation, which requires immediate orientation, inking and measuring. After intraoperative sampling and processing, the specimen can be pinned for formalin fixation. Specimen Orientation and Inking Identification of the endocervical and vaginal margins of the specimen is usually straightforward based on the appearance of the vaginal cuff (Fig. ). The anatomic orientation of the specimen in the axial plane allows the parametrial tissue to be oriented as right or left. This depends on the surgeon placing an orientation suture (eg, designating the 12 o'clock position using a clock-face system; designating the anterior lip of the cervix; or designating the right vs. left parametrial tissue). The nonperitonealized connective tissue at the outer surface of the anterior and posterior cervical walls is not a true surgical margin though this is often referred to as a radial or paracervical margin . Nevertheless, as the status of this surface (involved by tumor vs. not involved by tumor) may be of relevance to the surgeon and/or radiation oncologist, it is advised to ink these surfaces and document presence or absence of tumor. The margins to ink are the endocervical, vaginal, and parametrial margins, as well as the nonperitonealized connective tissue at the outer surface of the anterior and posterior cervical walls. Recommendations Orient the laterality of the parametria and the anterior/posterior lip of the cervix based on the orientation provided by the surgeon. Ink the endocervical, vaginal, and parametrial margins, as well as the nonperitonealized connective tissue at the outer surface of the anterior and posterior cervical walls. Specimen Measurements It is recommended to document the measurements of the cervix, vaginal cuff, and parametria. As the vaginal cuff may retract around the cervix, it should be stretched out before taking measurements. Recommendations Measure the cervix length (parallel to the endocervical canal), diameter and wall thickness. 
Measure the parametrial tissue length (from superior to inferior) and lateral dimension (from uterine wall to outer edge). Measure the vaginal cuff minimal and maximal length after stretching it out if it is retracted. Tumor Location Documenting the anatomic location of grossly visible tumor assists in the pathologic correlation with radiologic and intraoperative findings, particularly if there is clinical concern for margin involvement by tumor. We recommend using positions on a clock face and then correlating with the anatomic terminology used by the surgeon to designate the tumor location. If grossly visible tumor invades the cervical wall, the greatest depth of invasion should be documented as well as the total thickness of the cervical wall at that point. Recommendations Document the anatomic location of tumor in the cervix using positions on a clock face and then correlate with the anatomic terminology used by the surgeon. Document the distance of tumor to the endocervical margin, vaginal margin, parametrial margin, and nonperitonealized connective tissue at the outer surface of the anterior and posterior cervix walls. Macroscopic Tumor Dimensions A unified approach to determine tumor size measurements in cervical cancer is critical for several reasons: There is significant variation in the method used by gynecologic oncologists, radiologists, and pathologists to estimate tumor size, and currently there is no single recommendation for standard practice. Importantly, the current FIGO staging system for cervical cancer recognizes that pathologic variables influence stage. Moreover, tumor size measurements taken by the pathologist supersede those obtained clinically or radiologically . Size measurements should be obtained in the fresh specimen. The distance to the margins should also be measured at this time. Measuring the lesion and its distance to margins after fixation and handling (opening, pinning) is discouraged for 2 reasons. First, it will likely lead to overestimation of tumor size, as the specimen will be stretched out. Second, there is conflicting evidence on the effect of fixation on specimen and tumor size, with several studies reporting shrinkage after fixation , while others report no significant differences , . The 3 macroscopic dimensions of a cervical tumor are its length (parallel to the endocervical canal), width (in the plane perpendicular to the endocervical canal), and thickness (from tumor surface to the tumor's deepest invasion point) (Fig. ). The macroscopic depth of invasion is defined as the distance from the endocervical mucosa to the tumor's deepest point within the cervical wall. Depending on the relative amount of exophytic growth versus growth into the cervical wall, tumor thickness and tumor depth of invasion may be different. Recommendations Document the tumor length (parallel to the endocervical canal), tumor width (in the plane perpendicular to the endocervical canal), tumor thickness and depth of tumor invasion. Specimen Processing and Tissue Sampling Open the trachelectomy specimen by making one cut through the wall at one anatomic location (eg, at the 12 o'clock position), parallel to the endocervical canal. If possible, the cut should be performed in an area free of gross tumor, though this may not always be possible. 
This will convert the intact cylindrical shape of the specimen into a rectangle that can be pinned out flat for formalin fixation and will permit the tissue to be sliced in a way that maximizes the preservation of the orientation of all margins for microscopic examination. This will also permit the vaginal cuff, which normally retracts around the ectocervix, to be stretched out before fixation so that the true distance between this margin and the tumor can be evaluated; otherwise, tissue retraction may lead to a significant underestimate of the length of the vaginal cuff that the surgeon resected. Removing the parametrial tissues before opening and pinning the trachelectomy is advised. Pins should be placed in a way that does not disrupt the mucosal margins or the tumor itself. Adequate formalin-fixation will facilitate optimal tissue sectioning. Overnight fixation may be needed. After fixation, the trachelectomy specimen should be serially sliced at 2 to 3 mm intervals parallel to the endocervical canal. Each slice should have mucosa along one edge (from the vaginal cuff margin to the endocervical mucosal margin) and the radial paracervical connective margin along the other edge. If the slices are too large to fit in a cassette, they can be divided into 2 or 3 sections and placed in consecutive cassettes. Large format (macro) blocks, if available, may be of value in such cases. If tumor is grossly visible, sampling should be focused to document the deepest point of invasion as well as the closest distance to all margins. Aside from these key anatomic landmarks, there is no evidence to guide whether the entire tumor or representative sections should be submitted for microscopic examination. From the practical standpoint of tumor stage assignment, it is recommended to entirely submit tumors that are 2 cm or less since microscopic examination may affect the final tumor dimensions, particularly if macroscopically evident wound-healing changes are present, which may lead to over or under-estimation of true tumor size. Tumors larger than 2 cm can be sampled with representative sections. If tumor is not grossly visible then the entire specimen should be submitted for microscopic examination. The vaginal margin should be examined by a perpendicular section at the site of the closest approach of the tumor. In such a case, there is variability across practices as to whether the remainder of the vaginal margin should be examined entirely en face or whether an additional representative perpendicular section away from the tumor is sufficient. If there is no macroscopic tumor, there is also variability among practices as to whether the entire vaginal margin should be examined en face or whether representative perpendicular sections are sufficient. The parametria should be thinly sliced and entirely submitted, preserving the right and left side orientation. If lymph nodes are present within parametria, their number, size, and appearance should be documented . A single H&E-stained section per block is sufficient for the initial microscopic examination. Recommendations Remove parametria and place in cassettes before opening specimen. Open the specimen and obtain measurements (anatomic structures and any lesion) immediately upon receipt. After intraoperative consultation (if performed), pin the specimen for overnight formalin fixation, taking care to stretch out the vaginal cuff to its full length and pin. 
Tumors 2 cm or less should be entirely submitted while tumors larger than 2 cm can be processed using representative sections.
Tissue sections should demonstrate the deepest tumor invasion and the closest approach of the tumor to the vaginal, radial, and parametrial margins.
If there is no grossly visible lesion, the entire cervix should be submitted.
Perpendicular sections of the vaginal margin closest to the tumor should be examined. Whether the remainder of the vaginal margin should be examined entirely en face or by representative perpendicular sections is left to local practice standards. Similarly, if there is no macroscopic tumor, the decision to examine the entire vaginal margin en face or by representative perpendicular sections is left to local practice standards.
The parametria should be entirely submitted.
A single H&E-stained section per block is sufficient for initial microscopic examination.

Intraoperative Evaluation
Clinical Indications
In the first descriptions of the trachelectomy procedure, an endocervical margin at least 8 to 10 mm away from the tumor was considered optimal. Conversely, a positive margin or a negative margin <5 mm from the tumor would be considered insufficient, prompting additional surgery. For this reason, intraoperative evaluation of the proximal (endocervical) margin is routinely performed in this setting. Margin status is critical in the management of these patients: (1) recurrence rates are influenced by margin status, and (2) while the pregnancy success rate is high in these patients, the pregnancy is at risk of complications such as prematurity and first-trimester miscarriage. For these reasons, preservation of as much of the proximal canal as possible is imperative.
Intraoperative Impact of Findings
The status of the endocervical (proximal) resection margin determines the need for additional excision. If the proximal margin is positive for invasive carcinoma, radical hysterectomy would be considered. Alternatively, if feasible, an additional portion of the upper endocervix will be removed. If the proximal margin is negative, but the distance of the margin to invasive tumor is <5 mm, an additional portion of the upper endocervix will be removed to guarantee an appropriate width of margin excision. Intraoperative evaluation of the deep (paracervical) and vaginal mucosal margins is usually not required.
Specimen Processing
Four different strategies for intraoperative tissue handling and sectioning have been proposed (Table ), each with excellent sensitivity and negative predictive value. In the absence of any evidence directly comparing these strategies, it is recommended that the protocol of choice be defined at the local practice level, in conjunction with the surgeon, to understand their specific needs from the intraoperative pathologic evaluation of the trachelectomy.
Reporting Terminology
The margin status should be reported as either positive or negative for invasive carcinoma and for in situ carcinoma. If the margin is negative, the distance of the closest approach of tumor should be reported. In sections taken perpendicularly, a positive margin is defined as invasive carcinoma in direct contact with the inked surface of the margin; all other instances are defined as a negative margin. In en face margin sections, a positive margin is defined as invasive carcinoma present in the section, and a negative margin as absence of invasive carcinoma in the section.
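For laboratories that maintain structured worksheets or laboratory information system templates, the margin rules above can be summarized as a short piece of decision logic. The Python sketch below is purely illustrative and is not part of the guideline: the function name, input fields, and output phrases are hypothetical, and only the <5 mm threshold and the positive/negative definitions are taken from the text.

```python
# Illustrative sketch only: restates the intraoperative endocervical (proximal)
# margin logic described above. All names are hypothetical; the 5 mm threshold
# comes from the text. Clinical decisions rest with the surgical team, not code.
from typing import Optional


def endocervical_margin_assessment(margin_positive_for_invasive_carcinoma: bool,
                                   distance_to_invasive_tumor_mm: Optional[float]) -> str:
    """Summarize the intraoperative significance of the proximal margin."""
    if margin_positive_for_invasive_carcinoma:
        # Positive margin: radical hysterectomy would be considered or, if
        # feasible, an additional portion of the upper endocervix is removed.
        return "Positive margin: consider radical hysterectomy or additional endocervical excision."
    if distance_to_invasive_tumor_mm is None:
        return "Negative margin: report distance of closest approach of tumor."
    if distance_to_invasive_tumor_mm < 5:
        # Negative but <5 mm: additional upper endocervix is removed.
        return (f"Negative margin, tumor {distance_to_invasive_tumor_mm} mm away (<5 mm): "
                "additional upper endocervix excision advised.")
    return (f"Negative margin, tumor {distance_to_invasive_tumor_mm} mm away: "
            "margin clearance adequate.")


if __name__ == "__main__":
    print(endocervical_margin_assessment(False, 3.0))
    print(endocervical_margin_assessment(False, 9.0))
    print(endocervical_margin_assessment(True, None))
```

Any such summary is, of course, subordinate to the judgment of the pathologist and the operating surgeon.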
Challenges With Interpretation, Reporting and Diagnostic Pitfalls
Margin Artifacts (Folding, Tissue Gaps, Irregularities). Intraoperative frozen section examination of the endocervical margin requires good visualization of the entire wall thickness and the inked margin edge (if the section is perpendicular). If the initial section is significantly folded or fragmented, obtaining a new level is highly advisable. Identification of tissue gaps or incision marks before inking is very important, as they may distort the endocervical margin. Ink must be applied carefully, avoiding surfaces that do not represent resection margins. Examination of the specimen with the surgical team may be required.
Benign Mimickers. Mimickers of endocervical adenocarcinoma include tubo-endometrioid metaplasia and endometriosis. In addition, the proximal margin of the trachelectomy specimen may sometimes be at the lower uterine segment or even endometrium functionalis. These scenarios feature mucin-depleted glands and variable degrees of nuclear pseudostratification as well as proliferation, thus highly resembling human papillomavirus (HPV)-related adenocarcinoma. In addition, tubo-endometrioid metaplasia can feature reactive stroma, further raising concern for malignancy. Attention to the nuclear characteristics is important. In the presence of bland nuclear features, presence of cilia as well as terminal bars, and/or periglandular endometrial-type stroma and/or hemorrhage, a benign diagnosis should be entertained. A scoring system to distinguish endocervical adenocarcinoma in situ from benign/reactive conditions has been published and subsequently applied to intraoperative evaluation of trachelectomy specimens. When narrowed to 2 final diagnoses (benign/reactive vs. adenocarcinoma), this system had 94% concordance with the index diagnoses and, when used intraoperatively on trachelectomy specimens, improved the positive predictive value of frozen section and the concordance rate between the intraoperative and final diagnoses.

Recommendations
The protocol for intraoperative evaluation of the trachelectomy specimen should be decided at the local practice level using 1 of the 4 published protocols (Table ).
The presence or absence of invasive cancer and of in situ carcinoma at the proximal margin (defined as ink on tumor) should be reported for the intraoperative evaluation.
If the margin is negative, then the distance between tumor and margin should be reported.
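Before moving to hysterectomy specimens, some laboratories may find it convenient to collect the gross data points recommended for trachelectomies into a single worksheet. The following sketch is a minimal illustration only, assuming hypothetical field names and millimeter units; it merely groups the specimen measurements, tumor dimensions, and margin distances that the preceding sections ask the prosector to record.

```python
# Minimal illustrative worksheet for the gross documentation recommended above.
# Field names are hypothetical; all measurements are in millimeters.
from dataclasses import dataclass, field


@dataclass
class TrachelectomyGrossWorksheet:
    # Specimen measurements (taken fresh, vaginal cuff stretched out)
    cervix_length_mm: float = 0.0
    cervix_diameter_mm: float = 0.0
    cervix_wall_thickness_mm: float = 0.0
    parametrium_length_mm: dict = field(default_factory=lambda: {"right": 0.0, "left": 0.0})
    parametrium_lateral_mm: dict = field(default_factory=lambda: {"right": 0.0, "left": 0.0})
    vaginal_cuff_min_mm: float = 0.0
    vaginal_cuff_max_mm: float = 0.0
    # Tumor location and macroscopic dimensions
    tumor_clock_position: str = ""           # eg, "3 to 6 o'clock"
    tumor_length_mm: float = 0.0             # parallel to the endocervical canal
    tumor_width_mm: float = 0.0              # perpendicular to the endocervical canal
    tumor_thickness_mm: float = 0.0          # tumor surface to deepest point of invasion
    tumor_depth_of_invasion_mm: float = 0.0  # endocervical mucosa to deepest point
    # Distances to margins and surfaces
    distance_to_endocervical_margin_mm: float = 0.0
    distance_to_vaginal_margin_mm: float = 0.0
    distance_to_parametrial_margin_mm: float = 0.0
    distance_to_anterior_posterior_surface_mm: float = 0.0


if __name__ == "__main__":
    print(TrachelectomyGrossWorksheet())
```

A laboratory information system template or a paper worksheet at the grossing bench would serve the same purpose.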
There are 3 types of hysterectomy performed for cervical cancer: radical, modified radical, and simple (extrafascial). Radical hysterectomy is the default approach for FIGO stage IA2, IB1, IB2 and select IB3-IIA1 cervical cancers in women who do not desire fertility preservation. Radical hysterectomy includes the uterine corpus, uterine cervix, the upper 1 to 2 cm of the vagina and bilateral parametrial connective tissue. Parametrectomy is a key surgical goal of radical hysterectomy given the overall risk for microscopic parametrial involvement by cervical cancer, which carries adverse prognostic significance. Modified radical hysterectomy may be considered in some women with stage IA1 with lymphovascular space invasion or stage IA2 cervical cancer. The modified procedure is less extensive. As the risk for parametrial involvement is significantly lower for early-stage cervical cancer, less radical surgery has been offered to selected patients (stage IA1 without lymphovascular invasion) using simple hysterectomy without parametrectomy or upper vaginectomy. In such specimens, there may be some connective tissue attached to the cervix, which should be examined microscopically, but not reported as formal parametrial tissue. The ovaries and/or fallopian tubes may be included with hysterectomy in certain clinical scenarios. Involvement of the fallopian tubes is rare, but up to a quarter of patients with endocervical adenocarcinoma may have ovarian metastasis, particularly in the setting of involvement of the uterine corpus, parametrium, and/or lymphovascular spaces in the cervix.

Specimen Orientation, Margins, and Inking
Two anatomic landmarks permit orientation of a hysterectomy specimen: (1) the peritoneal reflection is shorter on the anterior surface of the uterus because of the normal position of the urinary bladder anterior to the uterus, whereas the peritoneal reflection extends further down the posterior aspect of the uterus; and (2) the round ligaments are located anterior to the fallopian tubes (Fig. ). The nonperitonealized connective tissue at the outer surface of the anterior and posterior cervical walls is not a true surgical margin, though this is often referred to as a radial or paracervical margin (Fig. ). Nevertheless, whether tumor extends to this surface may be of relevance to the surgeon and/or radiation oncologist, and so it is advised to ink these surfaces and document the presence or absence of tumor involvement. The margins to ink are the vaginal and parametrial margins, as well as the nonperitonealized connective tissue at the outer surface of the anterior and posterior cervical walls.

Recommendations
Ink the vaginal and parametrial margins, as well as the nonperitonealized connective tissue at the outer surface of the anterior and posterior cervical walls.

Specimen Measurements
It is recommended to document the measurements of the cervix, vaginal cuff, parametria, uterine corpus, and, if present, the ovaries and tubes. As the vaginal cuff may retract around the cervix, it should be stretched out before taking measurements (Fig. ). Although there is no clinically relevant role for recording the hysterectomy specimen weight, it is routinely recorded in pathology practices in the United States since there are different billing codes for specimens that weigh up to 250 g versus those above it.
Furthermore, the certification requirements of the American Board of Obstetrics and Gynecology require candidates to document specimen weights during their training experience in performing hysterectomies. For these reasons, pathology practices in the United States typically document the weight of hysterectomies, regardless of the clinical setting. As such conditions may not necessarily apply in other countries, documentation of specimen weight is regarded as an optional recommendation defined by local practice conditions.

Recommendations
Weigh the uterus.
Measure the cervix length (parallel to the endocervical canal), diameter, and wall thickness.
Measure the parametrial tissue length (from superior to inferior) and lateral dimension (from uterine wall to outer edge).
Measure the vaginal cuff minimal and maximal length after stretching it out if it is retracted.
Measure the uterine corpus in its superior to inferior, side to side, and anterior to posterior dimensions.
If present, the size of the ovaries and fallopian tubes should be recorded.

Strategy for Opening the Hysterectomy
If there is no suspicion that the tumor is extending into the parametria, then they can be removed at the interface with the uterine wall before opening the uterus. The parametria are then sliced at 2 to 3 mm intervals and placed in cassettes immediately to ensure that these tissues are entirely submitted and that their right/left orientation is preserved. If there is suspicion of tumor extension into the parametria, then it is recommended to leave the parametria attached in order to permit slicing of the cervix and parametria in continuity to demonstrate direct tumor extension.
Hysterectomy specimens should be opened immediately upon receipt in the pathology laboratory and prepared for formalin fixation in order to mitigate tissue autolysis, which may impair microscopic examination and/or immunohistochemical and molecular testing. Even if the hysterectomy is received by the laboratory already in formalin, the endocervical and endometrial lining, as well as any tumor, are the least likely parts of the specimen to be exposed to the formalin if the uterus was not opened. Thus, such specimens should also be opened immediately upon receipt in the pathology laboratory.
If the specimen is received fresh, 2 options exist to open the uterus. The first consists of amputating the uterine cervix from the corpus and processing the cervix using the same strategy as for an intact cone or trachelectomy (Fig. ). The uterine body is then opened along the lateral walls using the conventional bivalve approach, resulting in an anterior and a posterior half. The advantage of this strategy is that it permits well-oriented, uniformly thin slices of the cervix to be cut, which facilitates optimal microscopic evaluation for invasion and margin assessment. The disadvantage is that it requires laboratory staffing to be available to perform this processing immediately upon receipt of the fresh specimen. The second option is to use the conventional bivalve approach for opening the uterus along the lateral walls, resulting in an anterior half and a posterior half; this option can also be used if the hysterectomy is received in formalin. The disadvantage is that the cervix will have to be dissected using a radial slice strategy, similar to that for a formalin-fixed intact cone, which produces slices of uneven thickness that have to be trimmed down to fit in the cassette properly.
The decision is left to each local practice as to which of these 2 strategies to use. The uterine corpus should be serially thinly sliced (3–5 mm) parallel to the axial plane, from the endometrial surface through the wall to the serosa, as is conventionally done for other hysterectomy indications. Careful evaluation for secondary involvement of the corpus is merited given its association with ovarian and/or lymph node metastasis.
Once the uterus is opened using either strategy above, immediate formalin fixation is advised before conducting any further tissue sampling. In addition to preventing the consequences of tissue autolysis, formalin fixation facilitates cutting thin, well-oriented tissue slices of the cervix and tumor, which in turn facilitates accurate microscopic assessment of tumor dimensions and distances to margins. Fixation times vary depending on the size of the specimen, but a practical approach is to permit the specimen to fix overnight and complete the dissection and tissue sampling the next day. The vaginal cuff, which may retract around the cervix, should be stretched out before pinning.

Recommendations
If there is no suspicion that the tumor is extending into the parametria, then they can be removed at the interface with the uterine wall, sliced at 2 to 3 mm intervals and placed in tissue cassettes before opening the uterus. Otherwise, the parametria should be left attached and sliced in continuity with the cervix.
Open the uterus immediately upon receipt in the lab in order to begin formalin fixation.
Fresh hysterectomy specimens can be opened either by amputating the cervix and processing it like a trachelectomy or by the conventional bivalve strategy for opening a uterus.
Slice the uterine corpus in parallel thin slices before formalin fixation.
Stretch out the vaginal cuff to its full length and pin in position before formalin fixation.
Overnight formalin fixation is advised before further tissue sampling.

Tumor Location
Documenting the anatomic location of grossly visible tumor assists in the pathologic correlation with radiologic and intraoperative findings, particularly if there is clinical concern for margin involvement by tumor. We recommend using positions on a clock face and then correlating with the anatomic terminology used by the surgeon. If grossly visible tumor invades the cervical wall, the greatest depth of invasion should be documented as well as the total thickness of the cervical wall at that point. This permits correlation with microscopic measurements in order to report tumor involvement of the inner, middle, and/or outer third of the cervical wall. This 3-tier system of assessing cervical wall involvement is part of the Sedlis criteria, along with tumor size and lymphovascular space invasion status, used to determine eligibility for external pelvic radiation in cervical cancer patients whose radical hysterectomy shows node-negative, margin-negative, and parametria-negative disease. If parametrectomy is performed, it should be documented whether tumor is confined to the cervix or involves the parametrium. Tumor involvement of the uterine corpus is important to document as it is associated with increased risk of ovarian and para-aortic lymph node metastasis. The distance of tumor to the vaginal cuff margin, parametrial margin, and nonperitonealized connective tissue at the outer surface of the anterior and posterior cervix walls should be recorded.
Recommendations
Document tumor involvement in relation to the endocervix, ectocervix, parametria, and uterine corpus.
Document the anatomic location of tumor in the cervix using positions on a clock face or by designating the anterior versus posterior lip of the cervix.
Document distance of tumor to the vaginal margin, parametrial margin, and nonperitonealized connective tissue at the outer surface of the anterior and posterior cervix walls.

Macroscopic Tumor Dimensions
The recommended measurements of the gross specimen and tumor (if grossly visible) are listed in Table . As stated in the trachelectomy section, a standardized approach to document tumor size is critical. Tumor dimensions and distance to the margins should be obtained in the fresh specimen, before pinning, stretching and fixation. The 3 macroscopic dimensions of a tumor are its length (parallel to the endocervical canal), width (in the plane perpendicular to the endocervical canal), and thickness (from the tumor surface to the deepest point of invasion). The macroscopic depth of invasion is defined as the distance from the endocervical mucosa to the deepest point of tumor within the cervical wall. Depending on the relative amount of exophytic growth versus growth into the cervical wall, tumor thickness and tumor depth of invasion may be different. Of note, documentation of macroscopic tumor dimensions in the report is not mandated by the International Collaboration on Cancer Reporting (ICCR) or by the College of American Pathologists’ (CAP) Cancer Reporting Protocol. However, for practical purposes it is recommended to record macroscopic length, width, thickness, and depth of invasion in the gross description since these data are eventually needed to determine the final dimensions after review of the microscopic findings. For the purposes of reporting and assigning pathologic tumor stage, only a single final value for each dimension should be given. This final size should be based on (a) correlating the macroscopic and the microscopic measurements in the hysterectomy specimen; (b) integrating the tumor dimensions in the preceding LEEP or cone specimen, which may be larger than those in the hysterectomy specimen; and (c) integrating pretherapy clinical and radiologic tumor dimensions if chemotherapy and/or radiation was administered before hysterectomy, as these may also be larger than those in the hysterectomy specimen. Details on determining the final tumor dimension are discussed in the separate review in this issue on tumor staging recommendations.

Recommendations
Document the tumor length (parallel to the endocervical canal), tumor width (in the plane perpendicular to the endocervical canal), tumor thickness and depth of tumor invasion.

Specimen Processing and Tissue Sampling
If the cervix was amputated and processed like an intact cone or trachelectomy, then it should be serially sliced at 2 to 3 mm intervals parallel to the endocervical canal. Each slice should have mucosa along one edge (from the vaginal cuff margin to the endocervical mucosal margin) and the paracervical connective tissue surface along the other edge. If the slices are too large to fit in a cassette, they can be divided into 2 or 3 sections and placed in consecutive cassettes. Large format (macro) blocks, if available, may be of value in such cases. Alternatively, if the hysterectomy was opened using the bivalve approach, then the radial slice approach should be used, similar to that used for a formalin-fixed cone/LEEP specimen.
Serial 2 to 3 mm slices are made parallel to the endocervical canal. Because of the half-cylindrical shape of the fixed cervix, this will create wedge-shaped slices that are thicker at the outer cervical wall. These wedge-shaped slices will have to be trimmed so they lie flat in the tissue cassette.
Sampling of any grossly visible tumor follows the recommendations stated in the trachelectomy section. In short, it is recommended to entirely submit tumors that are 2 cm or less, and representatively sample tumors larger than 2 cm. If tumor is not grossly visible, then the entire cervix should be submitted for microscopic examination. Sampling of the vaginal margin is as recommended for trachelectomy specimens.
The uterine corpus and lower uterine segment should be examined for tumor involvement. In addition to sampling any gross abnormalities, representative sections of the full thickness (ie, from endometrium to serosa) of the lower uterine segment (in relation to any visible lesion) and of the anterior and posterior walls of the corpus are recommended. If salpingectomy and/or oophorectomy were performed, these organs should be sliced using the sectioning and extensive examination of the fimbriae (SEE-Fim) protocol to evaluate for metastasis, which may occur, albeit uncommonly. There is no clear evidence to guide whether grossly normal-appearing ovaries and fallopian tubes should be microscopically examined in their entirety in this setting. At a minimum, it is recommended that the entire fimbriae of each fallopian tube be examined microscopically along with representative sections of the ampullary portion of the fallopian tubes, representative sections of the ovaries, and sampling of any abnormalities.

Recommendations
If the cervix was amputated, opened, and pinned out before fixation, then make 2 to 3 mm slices parallel to the endocervical canal. If the cervix was not amputated and pinned out before fixation, then perform radial slices at 2 to 3 mm intervals parallel to the endocervical canal.
Tumors 2 cm or less should be entirely submitted whereas tumors larger than 2 cm can be representatively sampled.
Tissue sections should particularly target tumor in relation to the closest vaginal, paracervical/radial, and parametrial margins.
If there is no grossly visible lesion, the entire cervix should be submitted.
Perpendicular sections of the vaginal margin closest to the tumor should be examined. Whether the remainder of the vaginal margin should be examined entirely en face or by representative perpendicular sections is left to local practice standards. Similarly, if there is no macroscopic tumor, the decision to examine the entire vaginal margin en face or by representative perpendicular sections is left to local practice standards.
The parametria should be entirely submitted.
The full thickness of the anterior and posterior walls of the corpus and of the lower uterine segment should be representatively sampled.
The fallopian tubes should be processed using the SEE-Fim protocol; the fimbriae should be entirely submitted while the ampullary portion can be representatively sampled.
The ovaries can be representatively sampled.
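The block-submission rule that recurs in both the trachelectomy and hysterectomy sections reduces to a small amount of decision logic. The Python sketch below is illustrative only — the function name and return strings are hypothetical — and simply encodes the 2 cm threshold and the no-gross-lesion rule described above.

```python
# Illustrative only: restates the block-submission rule described above.
# Names and strings are hypothetical; the 20 mm (2 cm) threshold is from the text.

def cervical_tumor_sampling_plan(tumor_grossly_visible: bool,
                                 greatest_tumor_dimension_mm: float = 0.0) -> str:
    """Return the recommended extent of tumor sampling for microscopic examination."""
    if not tumor_grossly_visible:
        # No gross lesion: submit the entire cervix.
        return "Submit the entire cervix."
    if greatest_tumor_dimension_mm <= 20:
        # Tumors 2 cm or less: submit entirely, since microscopy may change the final size.
        return "Submit the tumor entirely."
    # Tumors larger than 2 cm: representative sections, targeting the deepest
    # invasion and the closest vaginal, paracervical/radial, and parametrial margins.
    return "Representative sections targeting deepest invasion and closest margins."


if __name__ == "__main__":
    print(cervical_tumor_sampling_plan(True, 15))
    print(cervical_tumor_sampling_plan(True, 35))
    print(cervical_tumor_sampling_plan(False))
```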
(2) The round ligaments are located anterior to the fallopian tubes (Fig. ). The nonperitonealized connective tissue at the outer surface of the anterior and posterior cervical walls is not a true surgical margin though this is often referred to as a radial or paracervical margin (Fig. ) . Nevertheless, the status of whether tumor extends to this surface may be of relevance to the surgeon and/or radiation oncologist and so it is advised to ink these surfaces and document presence or absence of tumor involvement. The margins to ink are the vaginal and parametrial margins, as well as the nonperitonealized connective tissue at the outer surface of the anterior and posterior cervical walls. Recommendations Ink the vaginal and parametrial margins, as well as the nonperitonealized connective tissue at the outer surface of the anterior and posterior cervical walls. Ink the vaginal and parametrial margins, as well as the nonperitonealized connective tissue at the outer surface of the anterior and posterior cervical walls. It is recommended to document the measurements of the cervix, vaginal cuff, parametria, uterine corpus, and, if present, the ovaries and tubes. As the vaginal cuff may retract around the cervix, it should be stretched out before taking measurements (Fig. ). Although there is no clinically relevant role for recording the hysterectomy specimen weight, it is routinely recorded in pathology practices in the United States since there are different billing codes for specimens that weigh up 250 g versus those above it . Furthermore, the certification requirements of the American Board of Obstetrics and Gynecology mandate candidates to document specimen weights during their training experience in performing hysterectomies. For these reasons, pathology practices in the United States typically document the weight of hysterectomies, regardless of the clinical setting. As such conditions may not necessarily apply in other countries, documentation of specimen weight is regarded as an optional recommendation defined by the local practice conditions. Recommendations Weight the uterus. Measure the cervix length (parallel to the endocervical canal), diameter, and wall thickness. Measure the parametrial tissue length (from superior to inferior) and lateral dimension (from uterine wall to outer edge). Measure the vaginal cuff minimal and maximal length after stretching it out if it is retracted. Measure the uterine corpus from superior to inferior, side to side and anterior to posterior dimensions. If present, the size of the ovaries and fallopian tubes should be recorded. Weight the uterus. Measure the cervix length (parallel to the endocervical canal), diameter, and wall thickness. Measure the parametrial tissue length (from superior to inferior) and lateral dimension (from uterine wall to outer edge). Measure the vaginal cuff minimal and maximal length after stretching it out if it is retracted. Measure the uterine corpus from superior to inferior, side to side and anterior to posterior dimensions. If present, the size of the ovaries and fallopian tubes should be recorded. If there is no suspicion that the tumor is extending into the parametria, then they can be removed at the interface with the uterine wall before opening the uterus. The parametria are then sliced at 2 to 3 mm intervals and placed in casettes immediately to ensure that these tissues are entirely submitted and that their right/left orientation is preserved. 
If there is suspicion of tumor extension into the parametria then it is recommended to leave the parametria attached in order to permit slicing of the cervix and parametria in continuity to demonstrate direct tumor extention. Hysterectomy specimens should be opened immediately upon receipt in the pathology laboratory and prepared for formalin-fixation in order to mitigate tissue autolysis, which may impair microscopic examination and/or immunohistochemical and molecular testing – . Even if the hysterectomy is received by the laboratory already in formalin, the endocervical and endometrial lining, as well as any tumor, are the least likely parts of the specimen to be exposed to the formalin if the uterus was not opened. Thus, such specimens should also be opened immediately upon receipt in the pathology laboratory. If the specimen is received fresh, 2 options exist to open the uterus. The first consists of amputation of the uterine cervix from the corpus and process the cervix using the same strategy as for an intact cone or trachelectomy (Fig. ). The uterine body is then opened along the lateral walls using the conventional bivalve approach, resulting in an anterior and a posterior half. The advantage of this strategy is that it permits well-oriented, uniformly thin slices of the cervix to be cut, which facilitates optimal microscopic evaluation for invasion and margin assessment. The disadvantage is that this requires laboratory staffing to be available to perform this processing immediately upon receipt of the fresh specimen. The second option is to use the conventional bivalve approach for opening a uterus along the lateral wall resulting in an anterior half and a posterior half , and can also be used if the hysterectomy is received in formalin. The disadvantage is that the cervix will have to be dissected using a radial slice strategy, similar to that for a formalin-fixed intact cone, which produces slices of uneven thickness that have to be trimmed down to fit in the cassette properly. The decision is left to each local practice as to which of these 2 strategies to use. The uterine corpus should be serially thinly sliced (3–5 mm) parallel to the axial plane, from the endometrial surface through the wall to the serosa, as is conventionally done for other hysterectomy indications . Careful evaluation for secondary involvement of the corpus is merited given its association with ovarian and/or lymph node metastasis. Once the uterus is opened using either strategy above, immediate formalin-fixation is advised before conducting any further tissue sampling. In addition to preventing the consequences of tissue autolysis, formalin-fixation facilitates cutting thin, well-oriented tissue slices of the cervix and tumor, which in turn facilitates accurate microscopic assessment of tumor dimensions and distances to margins. Fixation times vary depending on the size of the specimen but a practical approach is to permit the specimen to fix overnight and complete the dissection and tissue sampling the next day. The vaginal cuff, which may retract around the cervix, should be stretched out before pinning. Recommendations If there is no suspicion that the tumor is extending into the parametria, then they can be removed at the interface with the uterine wall, sliced at 2 to 3 mm intervals and placed in tissue cassettes before opening the uterus. Otherwise the parametria should be left attached and sliced in continuity with the cervix. 
Open the uterus immediately upon receipt in the lab in order to begin formalin fixation. Fresh hysterectomy specimens can be opened either by amputating the cervix and processing it like a trachelectomy or by the conventional bivalve strategy for opening a uterus. Slice the uterine corpus in parallel thin slices before formalin fixation. Stretch out the vaginal cuff to its full length and pin in position before formalin fixation. Overnight formalin fixation is advised before further tissue sampling. If there is no suspicion that the tumor is extending into the parametria, then they can be removed at the interface with the uterine wall, sliced at 2 to 3 mm intervals and placed in tissue cassettes before opening the uterus. Otherwise the parametria should be left attached and sliced in continuity with the cervix. Open the uterus immediately upon receipt in the lab in order to begin formalin fixation. Fresh hysterectomy specimens can be opened either by amputating the cervix and processing it like a trachelectomy or by the conventional bivalve strategy for opening a uterus. Slice the uterine corpus in parallel thin slices before formalin fixation. Stretch out the vaginal cuff to its full length and pin in position before formalin fixation. Overnight formalin fixation is advised before further tissue sampling. Documenting the anatomic location of grossly visible tumor assists in the pathologic correlation with radiologic and intraoperative findings, particularly if there is clinical concern for margin involvement by tumor. We recommend using positions on a clock face and then correlating with the anatomic terminology used by the surgeon. If grossly visible tumor invades the cervical wall, the greatest depth of invasion should be documented as well as the total thickness of the cervical wall at that point. This permits correlation with microscopic measurements in order to report tumor involvement of the inner, middle, and/or outer third of the cervical wall. This 3-tier system of assessing cervical wall involvement is part of the Sedlis criteria, along with tumor size and lymphovascular space invasion status, used to determine eligibility for external pelvic radiation in cervical cancer patients whose radical hysterectomy shows node-negative, margin-negative, and parametria-negative disease – . If parametrectomy is performed, it should be documented whether tumor is confined to the cervix or involves the parametrium. Tumor involvement of the uterine corpus is important to document as it is associated with increased risk of ovarian and para-aortic lymph node metastasis – . The distance of tumor to the vaginal cuff margin, parametrial margin, and nonperitonealized connective tissue at the outer surface of the anterior and posterior cervix walls should be recorded. Recommendations Document tumor involvement in relation to the endocervix, ectocervix, parametria, and uterine corpus. Document the anatomic location of tumor in the cervix using positions on a clock or designating anterior versus posterior lip of the cervix. Document distance of tumor to the vaginal margin, parametrial margin, and nonperitonealized connective tissue at the outer surface of the anterior and posterior cervix walls. Document tumor involvement in relation to the endocervix, ectocervix, parametria, and uterine corpus. Document the anatomic location of tumor in the cervix using positions on a clock or designating anterior versus posterior lip of the cervix. 
Document distance of tumor to the vaginal margin, parametrial margin, and nonperitonealized connective tissue at the outer surface of the anterior and posterior cervix walls. The recommended measurements of the gross specimen and tumor (if grossly visible) are listed in Table . As stated in the trachelectomy section, a standardized approach to document tumor size is critical. Tumor dimensions and distance to the margins should be obtained in the fresh specimen, before pinning, stretching and fixation. The 3 macroscopic dimensions of a tumor are its length (parallel to the endocervical canal), width (in the plane perpendicular to the endocervical canal), and thickness (from tumor surface to the tumor deepest invasion point). The macroscopic depth of invasion is defined as the distance from the endocervical mucosa to the tumor deepest point within the cervical wall. Depending on the relative amount of exophytic growth versus growth into the cervical wall, tumor thickness and tumor depth of invasion may be different. Of note, documentation of macroscopic tumor dimensions in the report is not required as mandatory by the International Collaboration on Cancer Reporting (ICCR) or from the College of American Pathologist’s (CAP) Cancer Reporting Protocol , . However, for practical purposes it is recommended to record macroscopic length, width, thickness, and depth of invasion in the gross description since these data are eventually needed to determine the final dimensions after review of the microscopic findings. For the purposes of reporting and assigning pathologic tumor stage, only a single final value for each dimension should be given. This final size should be based on (a) correlating the macroscopic and the microscopic measurements in the hysterectomy specimen; (b) integrating the tumor dimensions in the preceding LEEP or cone specimen, which may be larger than those in the hysterectomy specimen; and (c) integrating pretherapy clinical and radiologic tumor dimensions if chemotherapy and/or radiation was administered before hysterectomy as these may also be larger than those in the hysterectomy specimen. Details on determining the final tumor dimension are discussed in the separate review in this issue on tumor staging recommendations. Recommendations Document the tumor length (parallel to the endocervical canal), tumor width (in the plance perpendicular to the endocervical canal), tumor thickness and depth of tumor invasion. Document the tumor length (parallel to the endocervical canal), tumor width (in the plance perpendicular to the endocervical canal), tumor thickness and depth of tumor invasion. If the cervix was amputated and processed like an intact cone or trachelectomy, then it should be serially sliced at 2 to 3 mm intervals parallel to the endocervical canal. Each slice should have mucosa along one edge (from the vaginal cuff margin to the endocervical mucosal margin) and the paracervical connective tissue surface along the other edge. If the slices are too large to fit in a cassette, they can be divided into 2 or 3 sections and placed in consecutive cassettes. Large format (macro) blocks, if available, may be of value in such cases. Alternatively, if the hysterectomy was opened using the bivalve approach, then the radial slice approach should be used, similar to that used for a formalin-fixed cone/LEEP specimen. Serial 2 to 3 mm slices are made parallel to the endocervical canal. 
If the cervix was amputated and processed like an intact cone or trachelectomy, then it should be serially sliced at 2 to 3 mm intervals parallel to the endocervical canal. Each slice should have mucosa along one edge (from the vaginal cuff margin to the endocervical mucosal margin) and the paracervical connective tissue surface along the other edge. If the slices are too large to fit in a cassette, they can be divided into 2 or 3 sections and placed in consecutive cassettes. Large format (macro) blocks, if available, may be of value in such cases. Alternatively, if the hysterectomy was opened using the bivalve approach, then the radial slice approach should be used, similar to that used for a formalin-fixed cone/LEEP specimen. Serial 2 to 3 mm slices are made parallel to the endocervical canal. Because of the half-cylindrical shape of the fixed cervix, this will create wedge shaped slices that are thicker at the outer cervical wall. These wedge shaped slices will have to be trimmed so they lie flat in the tissue cassette. Sampling of any grossly visible tumor follows recommendations stated in the trachelectomy section. In short, it is recommended to entirely submit tumors that are 2 cm or less, and representatively sample tumors larger than 2 cm. If tumor is not grossly visible then the entire cervix should be submitted for microscopic examination. Sampling of the vaginal margin is as recommended for trachelectomy specimens. The uterine corpus and lower uterine segment should be examined for tumor involvement. In addition to sampling any gross abnormalities, representative sections of the full thickness (ie from endometrium to serosa) of the lower uterine segment (in respect to any visible lesion) and the anterior and posterior walls of the corpus are recommended. If salpingectomy and/or oophorectomy were performed, these organs should be sliced using the sectioning and extensive examination of the fimbriae (SEE-Fim) protocol to evaluate for metastasis, which may occur, albeit uncommonly – , . There is no clear evidence to guide whether grossly normal appearing ovaries and fallopian tubes should be microscopically examined in their entirety in this setting. At a minimum it is recommended that the entire fimbriae of each fallopian tube be examined microscopically along with representative sections of the ampullary portion of the fallopian tubes, representative sections of the ovaries, and sampling of any abnormalities. Recommendations If the cervix was amputated, opened, and pinned out before fixation, then make 2 to 3 mm slices parallel to the endocervical canal. If the cervix was not amputated and pinned out before fixation, then perform radial slices at 2 to 3 mm intervals parallel to the endocervical canal. Tumors 2 cm or less should be entirely submitted whereas tumors larger than 2 cm can be representatively sampled. Tissue sections should particularly target tumor in relation to closest vaginal, paracervical/radial, and parametrial margins. If there is no grossly visible lesion, the entire cervix should be submitted. Perpendicular sections of the vaginal margin closest to the tumor should be examined. Whether the remainder of the vaginal margin should be examined entirely en face or by representative perpendicular sections is left to local practice standards. Similarly, if there is no macroscopic tumor, the decision to examine the entire vaginal margin en face or by representative perpendicular sections is left to local practice standards. The parametria should be entirely submitted. The full thickness of the anterior and posterior walls of the corpus and of the lower uterine segment should be representatively sampled. The fallopian tubes should be processed using the SEE-Fim protocol and the fimbriae should be entirely submitted while the ampullary portion can be representatively sampled. The ovaries can be representatively sampled.
Pelvic exenteration consists of en bloc resection of pelvic organs with the uterus and vagina. Anterior pelvic exenteration includes the urinary bladder, urethra and/or ureters. Posterior pelvic exenteration includes the rectum. Total pelvic exenteration includes both the anterior and posterior organs . The procedure is indicated in patients with cervical or vaginal carcinoma recurrent in the central pelvis in which conventional radiation therapy fails to control disease, and in those with advanced stage cancers that are amenable to extensive surgical resection , . Specimen Orientation and Inking An exenteration specimen is complex and may be difficult to orient, often due to the presence of extensive necrosis, hemorrhage and fibrosis (usually seen in the setting of previous surgery and/or radiation). The examination starts by identifying all anatomic structures (Fig. ); close correlation with the details in the operative note and discussion with the surgeon is advised. It is useful to employ probes to identify the luminal aspect of the cervix, urethra and/or ureters. Taking photographs of the specimen is recommended to document orientation and to permit correlation with the microscopic sections. The margins to ink are the vagina, parametria, urethra, ureters, proximal and distal rectal margins, and soft tissue margins beyond the parametria (eg, pararectal and paravesical soft tissues). Recommendations Identify all anatomic structures present (cervix, uterine corpus, vagina, urinary bladder, rectum) in conjunction with the operative note and discussion with the surgeon. Margins to ink are the vagina, parametria, urethra, ureters, proximal and distal rectal margins, and soft tissue margins. Specimen Measurements Measurements of all organs should be taken in the fresh state (before fixation). For the uterus and bladder, 3 dimensions should be obtained. For the rectum, vagina, and ureters, the dimensions to report include the length and the range of their diameter. Recommendations Measure all the organs in the fresh state. Specimen Processing, Tumor Measurements and Tissue Sampling Inflation of the urinary bladder and rectum with formalin and fixation of the specimen for at least several hours, or overnight, is advised to optimize the quality of the sections . Once fixed, the entire specimen can be hemisected to demonstrate the relationship of the tumor to the bladder, rectum and soft tissues.
The tumor should be measured in three dimensions (superior to inferior, anterior to posterior, and lateral dimensions) and its location with respect to all organs present should be reported, including whether each organ is involved or the gross distance between the lesion and the organ. Similarly, the distance of the tumor to all margins should be recorded. Recommendations for tumor sampling are the same as those made for hysterectomy specimens. Sections should show the interface between the tumor and other structures (vagina, bladder, rectum, soft tissue). Perpendicular sections are advised to show the relationship between the tumor and mucosal surfaces of the bladder, vagina, and rectum. Representative sections of all uninvolved organs should be submitted as should any incidental lesions (eg, rectal polyps). The margins of the vagina and parametrium should be processed according to the recommendations made for hysterectomy specimens. The urethral and ureteral margins, as well as proximal and distal rectal margins, should be obtained en face. Soft tissue margins beyond the parametria (eg, pararectal and paravesical soft tissues) should be sampled perpendicular to the nearest approach of tumor. Representative en face margins can be taken if the tumor is far away. Recommendations Inflate the urinary bladder and rectum with formalin for several hours or overnight and then hemisect the specimen. Measure the tumor in 3 dimensions and document its relationship to all the organs and margins. Representative sections of the tumor should demonstrate its relationship to all organs and margins. The vaginal and parametrial margins are processed as is done for a hysterectomy specimen. The urethral, ureteral, rectal, and soft tissue margins are processed en face, unless there is tumor nearby in which case perpendicular margins are advised. Intraoperative Consultation Intraoperative consultation in the setting of a pelvic exenteration procedure is rare. It is performed to assess the closest soft tissue margins to the tumor to determine the need for additional margins. Such margins can be obtained from the main specimen or be submitted separately by the surgeon. En face sampling of the margin is more practical if the tumor is far away, but perpendicular sections are preferred if the tumor or any tissue abnormality is detected at the margin or close to it. A positive or close margin will prompt excision of additional soft tissue. The distance between tumor and margin at which re-excision is recommended has not been standardized. A second scenario is when biopsies of the pelvic wall, abdomen and/or retroperitoneal area are taken before proceeding with the exenteration. If positive for malignancy, the exenteration is aborted. Potential pitfalls in the interpretation of these specimens include crushed artifact and radiotherapy induced changes, which can be misinterpreted as tumor.
Pelvic lymphadenectomy is part of primary surgical treatment of all stages of cervical carcinoma except stage IA1 without lymphovascular invasion . Sentinel lymph node (SLN) mapping and biopsy of pelvic lymph nodes for early stage cervical cancer has emerged as a strategy to mitigate the risk for lower extremity lymphedema that accompanies systematic pelvic lymphadenectomy . SLN mapping also helps identify unusual lymph drainage patterns . In both Europe and the United States, current guidelines recommend consideration of SLN biopsy as an option for early stage cervical cancer , . The use of intraoperative evaluation of SLN biopsy as a method to triage patients to proceed with radical surgery or to abort and pursue chemoradiotherapy has also been proposed, though diagnostic sensitivity has been shown to be a limitation . Prospective clinical trials evaluating the role of SLN biopsy in early stage cervical cancer management are ongoing , . Para-aortic lymph node dissection is considered for stage IB1 and higher cancers . Overall rates for lymph node metastasis range from 12% for HPV-associated usual type endocervical adenocarcinoma to 16.7% for HPV-independent gastric type adenocarcinoma and 22% for HPV-associated invasive stratified mucinous carcinoma . Among HPV-associated endocervical adenocarcinomas, the 3-tier Silva pattern of invasiveness stratifies patients into those with no risk for nodal metastasis (pattern A), ~4% risk (pattern B) and up to 25% risk (pattern C) . Most HPV-independent endocervical adenocarcinomas are pattern C. The stage assignment based on lymph node involvement depends on the size of the nodal metastasis in both the 2019 FIGO staging system and the 8th edition AJCC staging system , , . In the 2019 FIGO staging, macrometastases (>2 mm) and micrometastases (>0.2–2 mm) are classified as positive lymph nodes (stage IIIC) but isolated tumor cells (up to 0.2 mm) do not affect stage. Current AJCC staging criteria classify macrometastases and micrometastases as stage pN1 and isolated tumor cells as pN0(i+) . Consequently, the strategy for gross management of lymph nodes should be designed to reliably detect nodal metastasis of at least 0.2 mm. The clinical significance of isolated tumor cells is still being studied.
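As a minimal sketch of how these size thresholds map to nodal categories in the two systems, consider the following; the function name and output wording are illustrative conveniences, not language from either staging manual.

```python
def classify_nodal_deposit(size_mm: float) -> dict:
    """Map the largest dimension of a nodal tumor deposit to the categories
    described above for the 2019 FIGO and 8th edition AJCC systems."""
    if size_mm > 2.0:
        category = "macrometastasis"
    elif size_mm > 0.2:
        category = "micrometastasis"
    else:
        category = "isolated tumor cells"
    node_positive = category != "isolated tumor cells"
    return {
        "category": category,
        "FIGO_2019": "node-positive (stage IIIC)" if node_positive else "does not affect stage",
        "AJCC_8th": "pN1" if node_positive else "pN0(i+)",
    }

print(classify_nodal_deposit(1.5))   # micrometastasis -> stage IIIC / pN1
print(classify_nodal_deposit(0.15))  # isolated tumor cells -> stage unaffected / pN0(i+)
```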
Distinguishing Non-SLN Versus SLN for Pathologic Processing Specimen measurements, dissection, and tissue sampling strategies are the same for non-SLN and SLN; however, the processing of blocks is different. Therefore, at the time of gross evaluation, the pathologist should clearly determine whether the specimen is a non-SLN or SLN. This distinction may not always be clear based on macroscopic examination of the specimen since a variety of markers are available for the surgeon to choose from to map SLN . Whereas direct visual mapping with blue dye may impart a blue color to the lymph node specimen, fluorescence mapping by the fluorescent marker indocyanine green or gamma probe mapping by radiocolloid technetium-99 do not affect the appearance of the lymph nodes. Thus, the pathologist should use the specimen requisition form and specimen container label to determine if a lymph node specimen is a SLN in order to determine the proper specimen management strategy. Specimen Measurements The 3 dimensions of the overall lymph node specimen, including associated adipose tissue, should be documented in the gross description. If the specimen consists of multiple fragments, the dimensions of the fragments aggregated together can be reported. The number of macroscopically visible lymph nodes should be recorded as well as the dimension (long axis) of the largest node. If metastatic tumor is macroscopically visible after dissection, the largest dimension should be recorded for each involved lymph node. Recommendations Record the size and number of macroscopically detectable lymph nodes. Specimen Processing and Tissue Sampling Excess adipose tissue can be carefully trimmed from the lymph node but it should not be stripped entirely away as this may disrupt the capsule of the lymph node and may also preclude evaluation for extranodal extension of tumor if metastasis is present. To maximize detection of small volume metastasis, each lymph node should be sliced perpendicular to its long axis at intervals no more than 2 mm thick (Fig. ). This approach has a higher chance of detecting metastasis than slicing parallel to the long axis of the node as more tissue can be evaluated . If there is macroscopic metastatic tumor, a representative section can be submitted for microscopic examination. However, if there is no macroscopic evidence of metastasis, all of the sliced lymph node should be submitted. Excess adipose lacking any visible abnormality does not need to be submitted. The number of lymph nodes in each cassette should be documented in a way that permits an accurate total count of all lymph nodes examined and of the total with metastasis. If the specimen contains no definitive lymph nodes, the tissue should be submitted entirely for microscopic examination. Recommendations Remove excess adipose tissue from lymph nodes (nonsentinel and sentinel) and slice perpendicular to long axis at 2 mm intervals. Submit all slices of each lymph node for microscopic examination unless there is an obvious macroscopic metastasis, in which case a representative section is sufficient. Excess adipose tissue trimmed away does not need to be submitted for microscopic examination. Document the number of lymph nodes in each cassette to allow for an accurate total count.
If no lymph nodes are identified grossly, submit the entire tissue for microscopic examination. Tissue Block Processing For non-SLN, a single H&E-stained section per tissue block is sufficient. For SLN, the optimal strategy for tissue block processing remains controversial. The aim to detect low-volume metastases (micrometastases and isolated tumor cells) needs to be balanced with the still unresolved questions about their clinical significance and the utilization of laboratory resources. The concept of SLN ultrastaging refers to using multiple deeper level sections of the tissue block with or without keratin immunohistochemistry aiming at the detection of low-volume nodal disease. One of the largest studies to date on SLN in cervical cancer demonstrated that up to 6.4% of metastases would go undetected if ultrastaging was not performed . However, the optimal parameters that define ultrastaging remain to be resolved, specifically: the number of deeper sections, the distance between the sections, and the use of keratin immunohistochemistry if all of the H&E-stained sections are negative. Depending on the parameters of the protocol, ultrastaging can be labor and resource intensive, as well as expensive. In theory, cutting sections from the tissue block at 200 μm (0.2 mm) intervals until the block is exhausted should detect all micrometastases; this is the concept behind the strategy used in one of the ongoing prospective clinical trials that takes sections at 150 μm intervals . However, this means that a standard 2 mm thick slice of a lymph node would result in 10 H&E sections per tissue block. For a bilateral SLN procedure, this means at least 20 H&E sections in total (and that assumes there is only one block per side). Whether this conceptual approach is feasible outside of a clinical trial may depend on local practice conditions. Further questions regarding keratin staining include the number of sections that should be stained and the location in the tissue block of the sections for keratin staining relative to the location of the sections for H&E staining. Thus, the evidence is clear that some form of ultrastaging is needed for SLN but there is no clear mandate on the exact details of the ideal ultrastaging protocol. Currently, the best practice should be decided at the local practice level and applied uniformly for all patients within that practice. From a practical perspective, if SLN ultrastaging is to be performed, we recommend taking several sections at multiple intervals through the tissue block. One section should be used for the H&E stain, and the others can be used for immunohistochemistry or additional H&E stains. The total number of intervals should be decided at the local practice level. Likewise, the use of routine keratin immunohistochemistry should be decided at the local practice level. Keratin immunohistochemistry is useful not only for evaluating suspicious cells on the H&E stain, but also for confirming the classification of metastases that are at the cusp between ITC versus micrometastasis and the cusp between micrometastasis versus macrometastasis. The measurement of the size of the metastases may be clearer on a keratin stained slide than on an H&E stained slide.
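The level-count arithmetic above can be made explicit with a small sketch; the helper name and the rounding convention are assumptions, and the few microns consumed by each section are ignored.

```python
def levels_per_slice(slice_thickness_mm: float = 2.0, interval_um: float = 200.0) -> int:
    """Number of level sections obtained by stepping through one tissue slice
    of the given thickness at a fixed cutting interval."""
    return round(slice_thickness_mm * 1000 / interval_um)

per_block_200um = levels_per_slice()                 # 10 levels for a 2 mm slice at 200 um
per_block_150um = levels_per_slice(interval_um=150)  # ~13 levels at the 150 um trial interval
bilateral_minimum = 2 * per_block_200um              # at least 20 slides if only one block per side
print(per_block_200um, per_block_150um, bilateral_minimum)
```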
Recommendations For nonsentinel nodes, a single H&E-stained section per tissue block is sufficient. For sentinel nodes, ultrastaging by deeper level sections should be performed; however, the number of levels and the distance between levels should be decided by local practice conditions as there is insufficient evidence to make a specific recommendation. The role for keratin immunohistochemical evaluation of SLNs remains to be fully studied; there is insufficient evidence to make a recommendation about using keratin staining. Intraoperative Evaluation European guidelines recommend that intraoperative evaluation of SLN can be used to triage whether early stage cervical cancer patients should proceed to radical surgery (if there is no nodal metastasis) or whether radical surgery should be abandoned and definitive chemoradiation pursued instead . This strategy is tempered by the imperfect sensitivity of intraoperative evaluation. False negative rates from 25% to 76% have been reported and, while the majority of the metastases that went undetected tended to be micrometastases and isolated tumor cells, a small number of macrometastases were also missed intraoperatively. Therefore it is recommended that intraoperative SLN evaluation be performed only if the surgeon is prepared to alter the intraoperative plan based on the results and is aware of the limitations to diagnostic sensitivity. Intraoperatively, the SLN should be dissected using the same strategy as for standard SLN processing. Remove excess adipose but avoid stripping too close to the outer surface of the node. Slice the node perpendicular to the long axis at 2 mm intervals. Evaluate each slice by frozen section but take caution not to cut too deeply into the tissue and do not perform deeper level sections except to pursue suspicious findings, as that would potentially exhaust the residual tissue and impair ultrastaging of the residual tissue . For the permanent section processing, the standard ultrastaging protocol should be used on the remainder of the frozen section tissue block. Recommendations Intraoperative evaluation of SLN should be performed only if the surgeon is prepared to alter the intraoperative plan based on the results and is aware of the limitations to diagnostic sensitivity. After removing excess adipose tissue, slice the SLN perpendicular to long axis at 2 mm intervals and evaluate all slices by frozen section. Do not perform deeper levels intraoperatively except to pursue suspicious findings. Apply the standard ultrastaging protocol for permanent section processing of the remainder of the frozen tissue block.
Association of cholecystectomy with short-term and long-term risks of depression and suicide
Cholecystectomy is one of the most common types of organ removal surgery, and most cholecystectomies are performed for people with symptomatic gallstone disease or sludge, gallbladder polyps, and severe complications of gallbladder disease such as acute and chronic cholecystitis, and acute cholangitis , . It can also be performed in conjunction with liver transplantation and hepatectomy . According to medical statistics from the Korea Health Insurance Review and Assessment Service, the number of cholecystectomies performed in South Korea increased from 47,601 in 2010 to 83,479 in 2021. Patients who undergo cholecystectomy often experience postcholecystectomy syndrome (PCS), characterized by symptoms such as flatulent dyspepsia, dull abdominal pain, and diarrhea – . PCS can be linked to psychological disorders, including depression and anxiety , – . A recent study of South Koreans that followed patients for an average of 3.67 years revealed that patients in the cholecystectomy group had a greater risk of developing major depressive disorder than the group that did not have cholecystectomy . A study of a Taiwanese population with a 2-year follow-up also found that patients who had undergone cholecystectomy had an increased risk of developing a depressive disorder compared with those who had not. In particular, the risk of depressive disorders was increased in female patients who underwent cholecystectomy . Patients with suspected sphincter of Oddi dysfunction also reported heightened levels of anxiety and depression postcholecystectomy, correlating with their pain and depression . However, no studies have confirmed the long-term effects of cholecystectomy on depression risk. The number of cholecystectomies is increasing in Korea, and a multicenter cohort study revealed that 0.9% of patients reported significant levels of anxiety or depression . However, the specific risk of suicide following cholecystectomy remains poorly understood, despite South Korea’s high suicide rate. A report by Statistics Korea indicated that in 2021, 13,352 individuals died of suicide . A previous study in Korean population has linked cholecystectomy to depression, attributed in part to changes in the gut microbiome and metabolic consequences – . Given the known association between postoperative depression and suicide, further investigation into cholecystectomy’s potential role in suicide risk is warranted to further explore the possible etiologies, including the gut microbiome and PCS , . However, to our knowledge, the association between cholecystectomy and suicide has not been studied. Utilizing the National Health Insurance Service-National Health Screening (NHIS-HEALS) cohort, this study aims to explore both short-term and long-term risks of depression and suicide following cholecystectomy. Data source This study was conducted using the NHIS-HEALS cohort database of South Korea from 2002 to 2019 . This database, maintained by the National Health Insurance Service (NHIS), draws from health screening programs accessible to most insured Koreans. Since 1995, the NHIS has conducted standardized general health screenings biennially for Korean adults aged 40 and above. Data encompass key health variables including blood pressure, body mass index, cholesterol levels, smoking and drinking habits, as well as weekly exercise frequency. Study population and design Figure provides the process of selecting the study population. 
Utilizing NHIS-HEALS data from 2002 to 2019, 15,437 patients who underwent cholecystectomy between 2004 and 2019 were included. Exclusions comprised 3,819 patients without health screening data, 14 with a history of liver transplantation, 1,032 with liver cancer or choledochocholecystic tumors, and 2,542 with psychiatric disorders diagnosed pre-cholecystectomy. The psychiatric disorders exclusion group was defined as individuals diagnosed with disorders classified under ICD-10 codes F20–F29, F30–F33, F340–F341, F00–F09, and F40–F48. Ultimately, 6,688 patients were included in the cholecystectomy group for depression outcomes, and 6,694 for suicide outcomes. The individuals in the non-cholecystectomy group were matched for age and sex in a 1:10 ratio. Key variables We extracted exposure variables using the procedure registry code Q7380 for cholecystectomy. Depression and suicide were defined using codes from the 10th edition of the International Statistical Classification of Diseases (ICD-10) . Suicide deaths are collected separately by the National Statistical Office of Korea as demographic statistics and could be analyzed by linking them with the death dates in the health screening cohort database used in this study. Suicide is released only at the intermediate level of the cause-of-death classification; the detailed sub-classification is not disclosed at the national level because it is sensitive personal information. Suicide mortality was designated by codes X60-X84, while depression was diagnosed as F32-F33. Among the ICD-10 codes corresponding to the operational definition of depression, F33 denotes recurrent depressive disorder; because the accessible database does not allow verification of depression diagnoses made before 2002, F33 was nonetheless included, since a person diagnosed with depression before 2002 could appear as a recurrence during the follow-up period. Patients with an F32-F33 diagnosis, both outpatient and inpatient records, and at least one antidepressant prescription were considered to have depressive disorder. Follow-up for depression and suicide diagnoses was observed until December 31st, 2019. Variables including sex, age, household income, smoking, exercise frequency, alcohol consumption, blood pressure, fasting serum glucose, body mass index, total cholesterol, and Charlson Comorbidity Index were considered. Household income was approximated from the insurance premium grade, which is divided into 20 levels; these levels were collapsed into 4 groups for analysis. Alcohol consumption refers to the frequency of drinking within 1 week; as a limitation of the health screening database used in this study, drinking behavior was represented only by the number of drinking days per week. Exercise frequency was the weekly sum of strenuous exercise sessions lasting more than 20 min and moderate exercise sessions lasting more than 30 min. Short-term depression risk was defined as diagnosis within 3 years postcholecystectomy, while long-term risk was defined as diagnosis beyond 3 years postcholecystectomy.
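As a rough sketch of how the code-plus-prescription part of this operational definition might be applied to claims data, consider the following; the table layout and column names are hypothetical and do not reflect the actual NHIS-HEALS schema, and the check is simplified to a single row per claim.

```python
import pandas as pd

# Toy claims table; columns and values are illustrative assumptions only.
claims = pd.DataFrame({
    "patient_id":        [1, 1, 2, 3],
    "icd10":             ["F32", "F331", "F32", "K80"],
    "antidepressant_rx": [True, True, False, False],
})

has_depression_code = claims["icd10"].str[:3].isin(["F32", "F33"])
has_antidepressant = claims["antidepressant_rx"]

# A patient is flagged as a depression case only when a qualifying F32-F33 code
# and at least one antidepressant prescription are both present.
cases = claims.loc[has_depression_code & has_antidepressant, "patient_id"].unique()
print(cases)  # [1]
```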
Statistical analysis The characteristics of the participants in this study were stratified by cholecystectomy. Continuous variables were presented as means and standard deviations, while categorical and dichotomous variables were expressed as frequencies and percentages. We compared the clinical characteristics between the cholecystectomy and non-cholecystectomy groups using the chi-squared test and the independent sample t-test. The Cox proportional hazards model was utilized to estimate the association between cholecystectomy and depression, as well as between cholecystectomy and suicide, producing adjusted hazard ratios (aHRs) and 95% confidence intervals (CIs) . Hazard ratios were adjusted for age, sex, diastolic blood pressure, systolic blood pressure, body mass index, household income, smoking, drinking frequency, physical activity, fasting serum glucose, total cholesterol, and Charlson Comorbidity Index . P < 0.05 was considered to indicate statistical significance. All analyses were conducted in SAS Enterprise Guide version 8.3. Ethical approval This study was reviewed and approved by the Institutional Review Board of Seoul National University Hospital (IRB number: E-2204-038-1314) and adhered to the principles outlined in the Declaration of Helsinki and its subsequent updates. The requirement for informed consent was waived as the NHIS database used for analysis was anonymized in strict compliance with confidentiality standards prior to analysis.
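The published analysis was performed in SAS on the NHIS-HEALS data; purely as a minimal illustration of fitting a Cox proportional hazards model and reading an adjusted hazard ratio for an exposure, the following Python sketch uses simulated data. The lifelines package, the variable names, and the effect sizes below are illustrative assumptions, not the study's actual code or fields.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
cholecystectomy = rng.integers(0, 2, n)            # 1 = exposed
age = rng.normal(62, 9, n)

# Simulate exponential event times with a modestly higher hazard for the exposed.
hazard = 0.05 * np.exp(0.2 * cholecystectomy + 0.02 * (age - 62))
event_time = rng.exponential(1.0 / hazard)
censor_time = rng.exponential(8.0, n)              # administrative censoring

df = pd.DataFrame({
    "duration": np.minimum(event_time, censor_time),
    "event": (event_time <= censor_time).astype(int),
    "cholecystectomy": cholecystectomy,
    "age": age,
})

cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="event")
cph.print_summary()  # exp(coef) for cholecystectomy is the adjusted hazard ratio
```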
Both groups had an average age of 62 years (SD: 9.4). A total of 4,041 males underwent cholecystectomy, alongside 40,410 male controls; 2,647 females had cholecystectomy, compared with 26,470 controls. Consistent with previous Korean studies of cholecystectomy, in which cholelithiasis is the leading indication, men predominated among patients undergoing the procedure. The exposed and control groups were matched at a 1:10 ratio. Median body mass index was 24.5 for cholecystectomy patients and 23.9 for controls. Cholecystectomy patients exhibited higher income, lower alcohol consumption, physical activity, and total cholesterol, along with higher fasting serum glucose, body mass index, and diastolic blood pressure, as well as more comorbidities ( p < 0.05 for all). Associations of cholecystectomy with depression and suicide outcomes Table provides the results of the multivariable Cox proportional hazards analysis, indicating the association between cholecystectomy and depression risk after covariate adjustments. As shown in the graphical summary (see Fig. ), patients who underwent cholecystectomy showed an increased risk of depression compared with those who did not (aHR 1.19 [95% CI, 1.19–1.30]), particularly within 3 years after cholecystectomy (aHR 1.38 [95% CI, 1.19–1.59]). However, no significant long-term risk was observed beyond 3 years postcholecystectomy (aHR 1.09 [95% CI, 0.98–1.22]). Table shows that 229 suicides occurred during the total follow-up period, 22 cases of cholecystectomy and 7 cases of non-operative groups, but there was no association between cholecystectomy and suicide risk (aHR 1.08 [95% CI, 0.69–1.68]). Similarly, there was no significant association observed in the short-term (aHR 0.88 [95% CI, 0.35–2.23]) or long-term (aHR 1.09 [95% CI, 0.68–1.89]) after cholecystectomy. Subgroup analyses were conducted for age group, sex, comorbidity, and inclusion year to assess study heterogeneity. Among patients with a Charlson Comorbidity Index of less than 2, no significant association was found between cholecystectomy and depression risk, either short-term or long-term. However, those with an index of 2 or more had an elevated risk of short-term depression within 3 years following cholecystectomy (aHR 1.48 [95% CI, 1.24–1.77]) (see Table ). In the analysis by sex in Table , both males and females showed increased depression risk within 3 years of cholecystectomy. Notably, females had a higher risk compared to males (aHR 1.39 [95% CI, 1.13–1.70]). Patients under 65 years old exhibited the highest increase in short-term depression risk after cholecystectomy (aHR 1.48 [95% CI, 1.22–1.80]) (see Supplementary Table online). The association between cholecystectomy and the risk of depression was also analyzed based on the year of inclusion. Patients included between 2004 and 2007 (aHR 1.46 [95% CI, 1.05–2.04]) and 2008–2012 (aHR 1.13 [95% CI, 0.89–1.43]) showed a significantly increased risk of depression (see Supplementary Table S2). In contrast, no significant association was found between cholecystectomy and suicide risk when analyzed by sex, age, or comorbidity, both short-term and long-term (see Supplementary Tables S3, S4, and S5 online).
Our retrospective cohort study is the first to examine whether the previously reported association between cholecystectomy and mental health, such as postcholecystectomy depression, extends to long-term mental health outcomes and to suicide.
Our retrospective cohort study is the first to examine whether the previously reported association between cholecystectomy and mental health, such as postoperative depression, extends to long-term mental health and suicide. The focus of this study was to examine the associations of cholecystectomy with depression and with suicide in Koreans who underwent cholecystectomy. We observed an increased risk of short-term depression postcholecystectomy, particularly among those with more comorbidities and individuals under 65 years old. Additionally, both males and females had a greater risk of short-term depression, and the extent of risk elevation was greater for women. However, we found no association between cholecystectomy and long-term depression risk, nor was there a significant association with suicide risk. Previous studies have indicated a higher risk of major depressive disorder among those with a history of cholecystectomy, particularly in patients aged 40–49 years . Another study found a greater risk of depressive disorder postcholecystectomy, especially among women . To better understand the mechanisms that may explain the associations of cholecystectomy with depression and with suicide, this study analyzed the risk of depression and suicide after cholecystectomy, dividing follow-up into short-term and long-term periods. The findings revealed an increased short-term risk of depression but no long-term association. Additionally, no significant association was found between cholecystectomy and suicide. These results can be considered from three viewpoints. First, in line with previous studies reporting that depression occurred in 1.8–3.5% of patients who underwent cholecystectomy , , – , our study showed a 1.3% incidence rate of depression in patients undergoing cholecystectomy. Patients with cholelithiasis have a higher incidence of depression than the general population, and the relationship between mental health issues and gallbladder disease appears to be bidirectional , . There may therefore have been a risk of developing depression attributable to symptomatic cholelithiasis in the cholecystectomy group included in our study. To minimize this potential bias, we adjusted for potential confounding factors such as sex, age, total cholesterol, body mass index, smoking status, household income, alcohol consumption, physical activity, diastolic blood pressure, fasting serum glucose, systolic blood pressure and Charlson Comorbidity Index. We also excluded people who were diagnosed with depression before their cholecystectomy; a total of 913 people with a history of depression were excluded from the study. We applied the same exclusion criteria when establishing the age- and sex-matched non-cholecystectomy control group. This enabled us to reduce the possibility that pre-existing depression in the cholecystectomy group influenced the observed risk of depression after cholecystectomy. Second, the increased short-term risk of depression following cholecystectomy may be attributed to alterations in the gut microbiome, influenced by changes in bile acid flow – . These alterations can disrupt physiological balance, affecting immunity and causing fatigue , . Previous studies have linked gut microbiome changes to depression , , , , impacting neurotransmitter systems and leading to anxiety and stress-related behaviors , possibly exacerbated by an increase in harmful bacteria , , , . However, the gut microbiome may adapt in the long term after cholecystectomy, so the medium- to long-term effects of changes in the gut microbiome remain controversial.
Therefore, additional studies are needed to further analyze the relationship between the gut microbiome and depression over time. Finally, PCS is a heterogeneous group of symptoms consisting of persistent postoperative abdominal pain, gastrointestinal symptoms, and jaundice, and has been posited as a mechanism behind the increased risk of short-term depression following cholecystectomy – . The onset of PCS ranges from as little as 2 days to as long as 25 years after surgery, and late PCS is defined as PCS occurring several months after surgery , – . Although there are various causes of PCS, bowel movement disorders, notably irritable bowel syndrome or sphincter of Oddi dysfunction, are proposed as major causes , . PCS is linked to depression and anxiety , , , correlating with decreased quality of life , . Patients with symptoms of gallstone disease achieved higher long-term quality of life after cholecystectomy . Therefore, the increased risk of short-term depression after cholecystectomy in our study may be explained by the early PCS experienced by cholecystectomy patients. There was no association between cholecystectomy and the risk of long-term depression, probably because symptoms such as postoperative pain appear early after surgery and gradually improve over time, as does quality of life . Previous studies have reported that more than 90% of patients experience an improvement in symptoms within 1 year after cholecystectomy . Further studies on the mechanism of PCS occurrence could be beneficial for understanding the health outcomes of patients who have undergone cholecystectomy. Although PCS may explain the increased risk of short-term depression after cholecystectomy, the underlying mechanism is likely complex, as changes in the gut microbiome may also play a role. The greater increase in short-term depression risk observed among women in the subgroup analysis can be explained as follows. Previous studies have reported that PCS is common in women, who also have a high incidence of gallstones . Additionally, a negative association between female sex and health-related quality of life after cholecystectomy has been noted . Furthermore, our findings indicate a greater risk of short-term depression postcholecystectomy in patients with a higher Charlson Comorbidity Index, which aligns with previous studies showing a higher incidence of major depressive disorder in patients with hypertension, diabetes, or dyslipidemia . In general, patients with hypertension, diabetes, or dyslipidemia have been found to have a higher incidence of depression , . Changes in bile acids due to cholecystectomy may influence lipid metabolism, potentially contributing to the development of metabolic diseases, which could explain our results , . This study has several strengths. First, it analyzed data from a large population-based cohort covering the entire period from 2002 to 2019. Second, our study is the first to analyze the association between cholecystectomy and suicide risk in Korean adults. This study also has limitations. First, as a retrospective cohort study, it did not use the complete national database; because the study was not conducted on the entire population, there are limitations in generalizing the results to the entire population. Additionally, as the study population is limited to Korea, the findings may not be broadly applicable to a global context.
Second, patients who underwent cholecystectomy may have had underlying diseases such as gallbladder cancer, bile duct cancer, or liver cancer, or their gallbladder may have been removed during liver transplantation; any of these conditions or surgeries could have influenced their outcomes. Third, cholecystectomy, depression, and suicide were identified using ICD-10 codes, which have the potential for misclassification because they may not be accurately recorded. Fourth, inaccuracies in suicide coding within claims databases may have occurred . Fifth, data on complications following cholecystectomy, specifically PCS, were not available; PCS is often considered a potential mechanism linking cholecystectomy to depression. While we attempted to identify PCS using the ICD-10 code K91.5 in the cholecystectomy group, no cases were recorded in our dataset. This limitation prevents us from analyzing the potential correlation between PCS and outcome parameters, including depression. In addition, although this cannot be confirmed in the database used in our study, patients with PCS may include at least some individuals with genetic diseases such as ABCB4/LPAC syndrome; follow-up studies on PCS should therefore take genetic diseases into account. Additionally, this study demonstrates an association only, and further research is needed to clarify any causal relationship between cholecystectomy and depression or suicide. In conclusion, the results of this study suggest that cholecystectomy is associated with an increased risk of short-term depression within 3 years postoperatively, whereas its association with long-term depression and suicide risk appears to be negligible. These findings emphasize the importance of monitoring and supporting patients' mental health during the immediate postoperative period. Clinicians should carefully consider the risk of depression in patients undergoing cholecystectomy. Additionally, further research is needed to explore the underlying mechanisms and to validate these findings in diverse populations. Below is the link to the electronic supplementary material. Supplementary Material 1
Family Medicine Physician Readiness to Treat Behavioral Health Conditions: A Mixed Methods Study
423a17f4-8a9d-4485-aadd-c291cb3ae3dc
11366095
Family Medicine[mh]
About 1 in 5 American adults experiences mental illness and 1 in 20 experiences serious mental illness. In 2022, nearly 1 in 5 individuals aged 12 years or older had a substance use disorder. Mental health disorders are a leading contributor to the nation’s disproportionately high healthcare spending. Nationwide, the COVID-19 pandemic exacerbated mental health conditions, resulting in more individuals seeking behavioral and mental health resources from an already overburdened system. The country’s behavioral and mental health system faces growing demand in the setting of clinician shortages, inadequate funding, and stigma. , The state of mental health in South Carolina reflects these broader national trends, with similar incidences of anxiety or depressive symptoms. The state’s age-adjusted suicide rate, 15.2 per 100 000 in 2021, is also higher than the national level. Mental Health America ranked the state in the bottom 10 for Access to Care in 2022. The Mental Health America Access ranking includes measures of un- and under-insured populations, and South Carolina’s uninsured rate, 12.7% in 2020, is higher than the nationwide rate. This lack of access especially impacts those belonging to minority groups and those living in rural communities. Over 25% of South Carolina counties lack a licensed general psychiatrist or psychologist, with rural counties facing the lowest rates. Between 2009 and 2019, the number of psychiatrists in the state increased, but the number in rural areas declined by one-third. While an adjacent county may have a psychiatrist or psychologist, some residents lack reliable transportation or cannot afford to make the trip regularly on top of paying for healthcare. About 13.2% of the state’s adults need but are not receiving treatment for substance use. Family medicine physicians frequently see patients with behavioral and mental health conditions and demonstrate a high level of confidence in managing common behavioral and mental health conditions such as depression and anxiety. However, many family medicine physicians struggle to manage less common conditions such as bipolar disorder and attention-deficit/hyperactivity disorder, and demonstrate low levels of confidence in managing serious mental illnesses. Still, nearly 60% of patients receiving any mental health treatment and almost a third of those receiving care for serious mental illnesses do so from their primary care physician. Ideally, robust multidisciplinary referral systems would connect patients with more complex needs to appropriate and accessible resources. Such a system is necessary for a community to truly improve behavioral and mental health outcomes but is not currently in place in many parts of both South Carolina and the United States. Without adequate resources and referral options, family medicine physicians find themselves managing conditions for which they are not adequately prepared in order to not leave their patients without needed care. Behavioral health integration, a collaborative approach to delivering mental health care within primary care settings, shows promise as a strategy to improve the quality of behavioral healthcare management. In 2023, a Behavioral Health Collaborative was formed to centralize efforts aimed at improving behavioral and mental health outcomes in this region. Initial goals involved establishing region-specific strengths and barriers and developing an action plan to inform targeted efforts moving forward.
A mixed-method study was undertaken to meet these goals: a qualitative gap analysis and a survey of physicians practicing within the region. Gap Analysis A cross-sectional study through facilitator-led groups of local behavioral and mental health stakeholders was conducted by researchers and regional public health officials. Approximately 40 stakeholders participated in a Behavioral Health Collaborative event in September 2023. Participating stakeholders included representatives from peer support recovery programs, mental health clinics, public health agencies, suicide prevention organizations, and school districts. Facilitators led discussion of the following prompts: local challenges/barriers, entities making progress, and proposed action items. Responses were aggregated and categorized using an aim and driver model, or driver diagram, a common process tool used in improvement science. Survey A cross-sectional survey was distributed to family medicine physicians across the 10 counties included in the study. The survey questionnaire items were designed to address 3 constructs: preparedness (7 items), accessibility (3 items), and resources (single item). The outreach strategy involved various channels, including a department-wide email sent to all family medicine physicians within a regional academic department. Additionally, program directors from 4 family medicine residency programs within the study counties facilitated the distribution of the survey among their residents and faculty. Emails and mailers containing study information and the survey link were sent to other family medicine practices using available contact information sourced online. The South Carolina chapter of the American Academy of Family Physicians included the survey in a statewide email newsletter. The survey was developed in REDCap and took respondents approximately 5 min to complete. A copy of the survey can be found in the Supplemental Appendix . The survey included 16 questions designed to assess demographics, sense of preparedness to manage certain behavioral and mental health conditions, and experiences with the local behavioral and mental health service system. Data Analysis Qualitative gap analysis was conducted using an aim and driver model. Descriptive statistics of the survey data included frequency, percentage, mean, and standard deviation. The reliability of the preparedness and accessibility items was measured by Cronbach’s alpha. The response scores were compared using 2-sample t -tests between participants in urban and rural counties as well as between residents and attendings. The rurality of the counties was determined based on the Health Resources and Services Administration’s 2021 List of Rural Counties. To test the equality of the 7 preparedness item response scores and of the 3 accessibility item response scores, we applied repeated-measures mixed-effects models among all participants and among subgroups defined by rurality and experience level.
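For readers unfamiliar with these measures, the sketch below computes Cronbach's alpha for a set of Likert items and runs a two-sample t-test between rural and urban respondents. It is a minimal illustration assuming a hypothetical DataFrame `responses` with one row per respondent, columns `prep_1` through `prep_7` for the preparedness items, and a boolean `rural` flag; it is not the authors' analysis code.

```python
# Minimal sketch, assuming a hypothetical DataFrame `responses` with
# preparedness items prep_1..prep_7 (Likert scores) and a boolean `rural` column.
import pandas as pd
from scipy import stats

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a respondents-by-items score matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

prep_items = [f"prep_{i}" for i in range(1, 8)]
alpha = cronbach_alpha(responses[prep_items])
print(f"Cronbach's alpha (preparedness): {alpha:.2f}")

# Two-sample t-test comparing mean preparedness between rural and urban respondents.
prep_mean = responses[prep_items].mean(axis=1)
t_stat, p_value = stats.ttest_ind(prep_mean[responses["rural"]],
                                  prep_mean[~responses["rural"]])
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```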
Gap Analysis Results Primary drivers of the current state of behavioral and mental health in the Upstate were (1) stigma and lack of accessible education about behavioral and mental health, (2) fragmented resources, (3) inaccessible care, and (4) workforce shortage and burnout. Each primary driver’s secondary drivers are included in . Survey Data Analysis Results The estimated Cronbach’s alpha of the survey items was good for the preparedness items (0.89 [95% CI: 0.82-0.93]) and acceptable for the accessibility items (0.67 [0.44-0.81]). Forty-three (43) individuals completed the cross-sectional survey, representing 5 rural South Carolina counties. About 69.8% (n = 30) of those were practicing physicians and 30.2% (n = 13) were residents. About 51.2% (n = 22) practiced in an urban county and 48.8% (n = 21) practiced in a rural county. About 58.1% (n = 25) reported additional training in behavioral and mental health while 41.9% (n = 18) did not. As of 2021, 739 family medicine physicians practiced in the Upstate, so the estimated survey response rate was 5.8%. Overall, respondents felt most prepared to manage anxiety, closely followed by depression.
Schizophrenia and substance use disorders were the conditions respondents felt least prepared to manage. (See ) Rural-practicing respondents reported feeling more prepared to manage substance use disorders than urban-practicing respondents ( P = .0269); there were no significant differences in sense of preparedness for management of the other study conditions. Residents reported feeling less prepared to manage anxiety ( P = .029), depression ( P = .011), and other mood disorders ( P < .001) than practicing physicians, with no significant differences for the other study conditions. Participants were asked to rate their degree of agreement with the following statement: “There are adequate local resources and referral options for my patients with behavioral and mental health conditions.” Response options ranged from “Strongly Disagree (1)” to “Strongly Agree (5).” Respondents reported an average of 1.95, most closely reflecting “Disagree (2).” While lack of timely access was identified as the most burdensome factor in accessing appropriate care, respondents felt distance, cost/insurance status, and lack of timely access all contributed to the inaccessibility faced by their patients. Practicing physicians reported greater overall resource inaccessibility than residents ( P = .047). Once stratified by factor, cost was the only factor that differed significantly between levels of training ( P = .012). Compared to those working in urban communities, physicians in rural communities reported cost and location as greater barriers to resources ( P = .012 and .035). There was no significant difference between urban and rural respondents for overall resources.
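The item-level differences reported above (for example, higher preparedness for anxiety and depression than for schizophrenia) were tested with repeated-measures mixed-effects models. Below is a minimal sketch of how such a model could be fit with statsmodels; the long-format DataFrame `long_scores`, with columns `respondent_id`, `item`, and `score`, is an assumption for illustration rather than the authors' code.

```python
# Minimal sketch: repeated-measures mixed-effects model testing whether the
# 7 preparedness item scores differ, with a random intercept per respondent.
# `long_scores` is a hypothetical long-format DataFrame with columns
# respondent_id, item (e.g. "anxiety", "schizophrenia"), and score (1-5 Likert).
import statsmodels.formula.api as smf

model = smf.mixedlm(
    "score ~ C(item)",                    # fixed effect of questionnaire item
    data=long_scores,
    groups=long_scores["respondent_id"],  # repeated measures within respondent
)
result = model.fit()
print(result.summary())  # item coefficients show which conditions score higher or lower
```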
Stakeholders and family medicine physicians identified barriers to accessing behavioral and mental health services, highlighting lack of timely access and extended waiting periods. Family medicine physicians reported perceiving this factor as a greater obstacle than cost or distance. Prohibitive costs for services, insufficient insurance coverage, and lack of affordable, dependable transportation to services were also reported as significant barriers for those seeking care. Family medicine physicians and other primary care physicians are uniquely positioned to address many of these barriers. In particular, providing common behavioral and mental health services in primary care settings could reduce both service costs and transportation obstacles for patients. Behavioral health integration (BHI) is one model associated with higher levels of patient satisfaction, better quality of care, and more cost-effective care. BHI may also mitigate timely-access issues, as primary care physicians would have formal relationships with behavioral health clinicians. Additionally, such collaboration would allow physicians to learn from integrated behavioral and mental health specialists, boosting physicians’ confidence in managing behavioral and mental health conditions. The survey of family medicine physicians aligned with previous studies, with respondents feeling well-prepared for common mental health conditions such as anxiety and depression. By managing less severe presentations of these conditions, family medicine physicians can alleviate demand pressures, enabling increased access to psychiatric specialists for more severe cases. While many family medicine physicians are already managing these more common presentations, patients may not be aware of their physician’s ability to address such conditions. Family medicine professional organizations could enhance awareness by emphasizing the preparedness of their members and diplomates to treat these common behavioral and mental health conditions, particularly in regions where specialized care is limited. Family medicine physicians felt least prepared to manage schizophrenia, a complex condition that warrants specialized psychiatric management. However, efficient referral systems and a robust network of resources are imperative for family medicine physicians to connect patients with severe mental health conditions to appropriate care.
The strain of the current crisis-oriented system, both nationwide and at the local level, underscores the need for better-equipped upstream interventions. This gap analysis identified increased awareness about existing resources as an area for improvement, suggesting a need for collaborative efforts and educational interventions. While practicing physicians who are established in a community may have knowledge of the local resources, residents and new physicians may be unfamiliar with the resources available. Behavioral health integration could bridge this gap by developing and maintaining up-to-date directories of regional resources for both physicians and community members. Residency programs would benefit from incorporating a session on local resources into their didactic curricula. Further, wide variations exist in the consistency of family medicine residency behavioral health curricula in general, and this has been identified as a major gap in the training of resident physicians. Family medicine physicians whose residency programs placed a higher emphasis on behavioral science feel better prepared to use behavioral skills in practice. This represents another area of potential advocacy for family medicine professional organizations. Despite this study’s limitations, such as a small sample size and a confined geographic scope, these findings can help guide future inquiries and initiatives. Future research aims to explore preparedness for additional conditions (such as eating disorders and safety assessments) and extend the study’s reach statewide. This study illuminates critical gaps in the current behavioral and mental healthcare system in a predominantly rural 10-county region and emphasizes the pivotal role of family medicine physicians in bridging these gaps. Family medicine physicians are uniquely equipped for screening, intervention, treatment, and referral to specialty care when needed. Locally, community organizations can support family medicine physicians to bridge the behavioral and mental healthcare gap by establishing collaborative resources and referral systems. On a broader scale, family medicine organizations should actively encourage and empower trainees and practicing physicians to participate in the policy-making processes that shape their communities. Family medicine physicians are already bridging the gap for behavioral and mental healthcare not only in the rural Southeast, but nationwide. By collectively working on targeted initiatives, we can build a more resilient system to better support physicians and patients. Supplemental material, sj-pdf-1-jpc-10.1177_21501319241275053, for Family Medicine Physician Readiness to Treat Behavioral Health Conditions: A Mixed Methods Study by Ashlyn Chea, Moonseong Heo and Timothy Aaron Zeller in Journal of Primary Care & Community Health
Platelet Rich Plasma in Gynecology—Discovering Undiscovered—Review
0ef0b68a-e097-470c-8772-7ef0f99d5cc8
9100365
Gynaecology[mh]
Regenerative medicine combines elements of tissue engineering and molecular biology. It aims to support the regeneration and repair of damaged tissues, cells and organs using growth factors present in platelet granules. Nowadays, one of the most commonly used preparations in regenerative medicine is platelet rich plasma (PRP). The term PRP was first used in the 1970s to describe plasma with a higher concentration of platelets than peripheral blood . In 1974, Kohler and Lipton, in studying fibroblast physiology, found that platelets could have significance as growth stimulants . In the following years, further research indicated that platelets are a source of growth factors that stimulate fibroblast activity . In order to obtain a clinical effect, platelets must first be activated by external factors and/or by the exposed collagen fibers of damaged tissues. Depending on the preparation and the method of platelet activation, the following types are distinguished: PRP and platelet rich fibrin (PRF). In 2014, Dohan-Ehrenfest et al. proposed the division of PRP presented in . Due to the different methods of obtaining PRP, in 2015, Mautner et al. created a more accurate classification in terms of platelet concentration, presence/absence of leukocytes and red blood cells (RBC), and the presence of activators. The name of this classification is PLRA (Platelets, Leukocytes, Red Blood Cells, Activation) . In 2017, Lana et al. emphasized the role of mononuclear cells, such as monocytes and lymphocytes, which belong to the group of peripheral blood mononuclear cells (PBMCs). These cells stimulate the regeneration of tissues by cytokine release. Therefore, a new classification was proposed, with the acronym MARSPILL (Method, Activation, Red Blood Cells, Spin, Platelets, Image guidance, Leukocytes, Light activation) . Despite the different classification systems, several different kits are available on the market, enabling the preparation of PRP solutions with different platelet concentrations and additional ingredients . The main advantage of using PRP is the autologous nature of the preparation; therefore, there is no risk of immune reaction or transmission of microorganisms from other donors . Another significant advantage lies in the fact that its preparation is simple and fast (about 30 min from blood withdrawal to application) and the cost of preparation is low . As with any procedure, there are contraindications for PRP, and it is not recommended in patients with coagulation disorders . Other contraindications include breastfeeding, pregnancy, cancer diagnosis, active infections, and situations in which chronic nonsteroidal anti-inflammatory drugs (NSAIDs) are prescribed . PRP was first applied in patients with thrombocytopenia. Subsequent studies prompted researchers to employ it, among other fields, in surgery. Currently, thanks to extensive knowledge about PRP, this therapy is increasingly used in various fields of medicine, both in humans and animals. This article is a review of the most recently published literature on the use of PRP in gynecology and obstetrics. A review was conducted by searching the PubMed, Cochrane, Scopus, Web of Science, and MDPI databases.
Search terms included: Platelet-rich plasma, PRP, autologous platelet rich plasma, gynecology, obstetrics, ovary, pregnancy, urinary incontinence, aesthetic medicine, sexual medicine, female and wound healing, in different combinations, in order to find as many recently published articles as possible. Articles were selected according to the following criteria: recently published papers, including randomized clinical trials (RCTs), prospective controlled studies, prospective cohort studies, case series, and case reports concerning PRP application in gynecological or obstetrical conditions; other criteria were English language, human studies, and female subjects. Letters to the editor, abstracts accepted at annual national and international conferences, and articles with data similarity were excluded from the review . Data extraction and quality assessment. Two independent reviewers (DS-C and MEG) reviewed the studies, and discrepancies were resolved by consensus including the third author (KF). Data extracted from all eligible studies included the year of publication, surgical details of the procedure, clinical objective and subjective outcomes, and intra- and post-operative complications. After searching all databases, the following articles were included in this review. Initially, we found 187 published papers concerning PRP application in gynecological conditions. After screening the titles and abstracts, we chose sixty articles whose results described humans, were most recently published, and met all other inclusion and exclusion criteria. Animal results were used exceptionally, in order to describe the scientific basis that justified PRP application in specific gynecological conditions in humans. Finally, all authors agreed to include 41 papers in the final investigation. The summary of the selected cited articles and the PRP preparations used is given in . The main limitation of the presented articles is that different kits and methods were used to prepare the PRP solution, so drawing final conclusions is difficult and burdened with technical bias. The application of PRP in gynecology is still a developing process. Despite easy access to PRP, relatively simple preparation and a satisfactorily known mechanism of action, it is used to a limited extent in this field. So far its widest application is in reproductive medicine, especially in cases of thin endometrium, Asherman’s syndrome, or premature ovarian failure (POF), but also in wound healing and lower urinary tract symptoms (LUTS), such as urinary incontinence, or genitourinary fistula treatment. 4.1. Endometrium Endometrial status is one of the main factors in pregnancy implantation failure. In women with a thin endometrium, PRP was used as an intrauterine infusion in order to induce endometrial growth and increase clinical pregnancy rates . This has been described in several case series. Molina et al., for example, characterized 19 patients who had undergone in vitro fertilization, aged between 33 and 45 years, with refractory endometrium, to whom PRP was infused with a catheter into the uterine cavity. In these cases, PRP was used twice, after the 10th day of hormone replacement therapy and then 72 h after the first administration. Endometrial thickness >7.0 mm was reported after the first administration, and in all cases endometrial thickness >9.0 mm was evident after the second administration. The entire study group qualified for embryo transfer at the blastocyst stage.
There were 73.7% positive pregnancy tests; 26.3% yielded live births, 26.3% were ongoing pregnancies, and 10.5% were biochemical pregnancies, while 5.3% ended in fetal death (16 weeks) . In another publication, Zadehmodarres et al. reported that they recruited ten patients with a history of inadequate endometrial growth in frozen-thawed embryo transfer (FET) cycles. In every patient, PRP administration increased endometrial thickness and embryo transfer was performed. After treatment, five patients became pregnant, and in four cases the pregnancy progressed normally . Contrary to those promising results, Tehraninejad et al. published results of PRP infusion into the uterine cavity in 85 patients with normal endometrial thickness (>7 mm) suffering from repeated implantation failure (RIF). In 42 patients, 1 mL of PRP was infused into the uterine cavity 2 days before the embryo transfer. The outcomes, including biochemical, clinical, and ongoing pregnancy rates, were similar between the PRP and control groups and did not reach statistical significance (35.7% vs. 37.2%; 31.0% vs. 37.2%; and 26.8% vs. 25.6%, respectively) . The other indication for the administration of PRP is Asherman’s syndrome. According to Aghajanova et al. (2021) and Aghajanova et al. (2018), treatment with intrauterine PRP infusion was well tolerated, with no short-term or long-term side effects, and appeared to improve endometrial function, as demonstrated by successful conception and ongoing clinical pregnancies. In conjunction with solid in vitro data on human endometrial cells, these pilot clinical outcomes were very reassuring, but primary results after a pilot study of 30 patients were not very promising compared to standard treatment . 4.2. Ovaries In cases of difficulties in becoming pregnant due to ovarian dysfunction, attempts have been made to inject PRP into both ovaries. The effect of its application was an increase in the number of ovarian oocytes . Moreover, in women with a poor ovarian reserve and premature menopause, autologous intraovarian PRP therapy increased anti-Mullerian hormone levels and decreased follicle-stimulating hormone (FSH) concentration, with a trend toward increasing clinical and live birth rates . In a related study, Farimani et al. published research in which 19 women were enrolled. Therein, the mean numbers of oocytes before and after PRP injection were 0.64 and 2.1, respectively. Two patients experienced spontaneous conceptions, and a third patient achieved clinical pregnancy and delivered a healthy baby . A similar effect was also found in a woman with chronic endometritis and recurrent implantation failure. The case of a 35-year-old woman with premature ovarian insufficiency and a history of six failed donated embryo transfers was described. The patient was referred to the clinic for assisted reproduction and underwent ET of two donated blastocysts graded as 5 BB and 5 BC at the next menstrual cycle, which resulted in a twin pregnancy. Four weeks following a positive β-hCG pregnancy test, clinical pregnancy was confirmed by observing fetal cardiac activity on transvaginal ultrasound. The babies were delivered at the 36th week of gestation and weighed 2.28 kg and 2.18 kg . 4.3. Wound Healing and Tissue Regeneration Various studies in which patients served as their own control (“split-face” studies), investigating whether PRP injections are beneficial for tissue and skin rejuvenation, have been undertaken .
The mode of action of platelet rich plasma is mostly based on stimulating the synthesis of matrix metalloproteinases (MMPs) and increasing cutaneous fibroblast growth as well as the production of extracellular matrix (ECM) components, including type I collagen and elastin . This was an argument for applying PRP as a wound-healing-enhancing factor for various types of wounds, as well as in skin regeneration. The development of the newest type of PRP, called lyophilized enhanced PRP (ePRP), is a step toward standardizing the application of a specific, desirable quantity of growth factors by using a defined amount of PRP powder. It was found that ePRP dynamically activates several glycolytic enzymes to modulate and sustain glucose metabolism, mitochondrial biogenesis and respiratory function, to meet energy demands in different wound healing periods. Moreover, multiple antioxidant enzymes are up-regulated, resulting in a decrease in reactive oxygen species (ROS) and thus allowing proper tissue repair . Those metabolic changes, and many as yet unknown, facilitate wound healing and are the driving force for adjunctive treatment of many conditions caused by impaired tissue regenerative capacity. One of the publications presents a prospective randomized controlled trial with 200 patients who underwent elective cesarean section. The intervention group received subcutaneous PRP injection into the wound after surgery. The control group received the usual care. Outcome variables included redness, edema, ecchymosis, discharge, approximation (REEDA) scale results, Vancouver scar scale (VSS) outcomes and visual analog scale (VAS) determinations. Patients from the PRP group showed a greater reduction in the REEDA score compared to the control group on day 1 and day 7, and this was maintained for the 6 months of the study (1.51 ± 0.90 vs. 2.49 ± 1.12, p < 0.001). Compared to the control group, the PRP group had a significantly greater reduction in the VSS and VAS scores beginning on the seventh day (3.71 ± 0.99 vs. 4.67 ± 1.25, p < 0.001) and (5.06 ± 1.10 vs. 6.02 ± 1.15, p < 0.001), respectively, and this difference was observed for a 6 month period. This study demonstrated that PRP has positive effects on wound healing and pain reduction in high-risk patients undergoing cesarean section in low-resource settings . This was also confirmed in a recently published paper by Starzyńska et al. in which PRP was used in patients undergoing surgical removal of impacted mandibular third molars. As this procedure is associated with various postoperative complications, mostly concerning impaired healing, additional therapies are being developed. One of these is the addition of advanced platelet-rich fibrin (A-PRF), which consists of a three-dimensional fibrin matrix rich in platelets and leukocytes and containing cytokines, stem cells, and growth factors; it belongs to the second generation of platelet concentrates. The study was conducted in two groups: 50 patients with immediate A-PRF socket filling and a control group of 50 patients without A-PRF socket filling. Several clinical features were assessed postoperatively on the 3rd, 7th, and 14th day after the procedure: pain, analgesic intake, the presence of trismus, edema, hematomas within the surrounding tissues, the prevalence of pyrexia, dry socket, secondary bleeding, skin warmth in the post-operative area, and bleeding time observed by the patient.
There was a significant decrease in pain intensity, analgesic intake, trismus, and edema on the 3rd and the 7th day in patients with A-PRF socket filling ( p < 0.05). Additionally, the study showed that A-PRF was the most important factor in reducing the incidence of postoperative complications . In order to evaluate the possible utility and efficacy of platelet rich gel after advanced vulvar cancer surgery, Morelli et al. conducted a study on 25 women who had undergone radical surgery. Gel application in 10 out of 25 patients was related to a significant reduction in wound infection, necrosis of vaginal wounds, and wound breakdown rates ( p = 0.032; p = 0.096; p = 0.048, respectively). The authors concluded that platelet gel application before vulvar reconstruction represents an effective strategy to prevent wound breakdown after vulvar cancer surgery . A very interesting paper concerning the molecular aspects of radiation-induced wound healing and the interaction of endothelial cells and adipose-derived stem cells in conjunction with PRP in the context of radiation effects was published by Reinders et al. Impaired wound healing in irradiated tissues is associated with fibrosis, decreased vascularity and impaired tissue remodeling. The study was conducted using cell cultures of human dermal microvascular endothelial cells (HDMEC) and adipose-derived stem cells (ASC). Activated PRP was used for cell culture experiments at a final concentration of 5% in the culture medium. The cells were irradiated with doses of 2 Gy (0.7 min irradiation) and 6 Gy (2 min irradiation), respectively. One of the investigated factors was cell viability, which was determined using a colorimetric assay. Human ASC showed no altered viability upon radiation, but the treatment of ASC with 5% PRP caused a slight, although not significant, trend towards increased viability, which unfortunately was reversed by irradiation with both tested doses of 2 Gy and 6 Gy. Additionally, endothelial cells showed a trend towards decreased viability upon external radiation, both in the presence and absence of PRP. Interestingly, analysis of co-cultured ASC/HDMEC showed a significant effect of radiation with 6 Gy in both PRP-treated and untreated cells. Furthermore, the effect of PRP treatment on irradiated ASC, HDMEC and the corresponding co-culture was studied using a colorimetric BrdU assay. All cell cultures showed a trend towards decreasing proliferation after irradiation, irrespective of PRP. The proliferation of all cells was significantly diminished by radiation with 6 Gy. Remarkably, the presence of PRP in the cell medium had a pro-proliferative effect on cells after irradiation with 2 Gy. The concluding message of this study is that a combination of treatment with ASC and PRP products might be useful in the care management and adjunctive treatment of chronic radiogenic wounds . The healing effect has also been applied to genital rejuvenation. Vaginal rejuvenation involves the management of extrinsic (traumatic) and intrinsic (aging) changes in the vagina and external genitalia. Lipofilling, with an additional injection of PRP (with or without hyaluronic acid), has been used to successfully address vaginal atrophy and vaginal laxity . In that study, the unexpected resolution of lichen sclerosus in one of the women prompted the use of PRP for the treatment of this condition.
Unfortunately, the double-blind placebo-controlled trial that was subsequently performed on thirty patients did not prove the efficacy of PRP in managing lichen sclerosus . Another indication for the administration of PRP in genital rejuvenation is improvement of the quality of sexual life. Sukgen et al. investigated the effect of PRP injection into the lower one-third of the anterior vaginal wall on sexual function, orgasm, and genital perception in women with sexual dysfunction. The study revealed that, as a minimally invasive method, PRP administration to the distal part of the anterior vaginal wall may improve female sexuality, along with higher satisfaction . Another study, conducted on 68 women aged from 32 to 97 years, indicated that the O-shot injection, which is PRP administration to the vulvovaginal region, is a satisfactory solution for women with stress incontinence, overactive bladder, lack of lubrication, and sexual dysfunction such as lack of libido, impaired arousal, and dyspareunia. The results showed that 94% of these patients were satisfied; however, 6% of the patients with overactive bladder did not report improvement . In one case published to date, PRP was used as a regenerative factor for clitoral reconstruction after female genital mutilation (FGM) in a 35-year-old Guinean woman. After surgical clitoris reconstruction with the Foldès method, A-PRP was applied. Two months postoperatively, wound healing was complete and the patient reported significant improvement in quality of life . 4.4. Urogynecology PRP has been applied in the treatment of urogynecological disorders and LUTS, and there are ongoing observations of the use of PRP as a supporting therapy in addressing recurrent vesicovaginal fistulas. Patients enrolled in this study were injected with PRP around the fistulous canal and underwent the Latzko procedure 6–8 weeks later. In all cases, after a 1–2 month follow-up period, the fistula was healed and the vaginal wall at the site of the procedure healed without any signs of scarring, redness, or granulation tissue. Moreover, the patients did not complain of any urination difficulties or urinary tract disorders. In addition, post-void residuals were lower than 50 mL in all patients . There are also published papers describing PRP usage in the treatment of cystocele (the most common form of vaginal wall prolapse). In a study by Atilgan and Aydin, patients were divided into two groups: (1) cystocele repair only and (2) cystocele repair with platelet-rich plasma injection. Each group consisted of 28 patients. There were no significant differences between the groups in terms of demographic features. At the end of the 48-month follow-up period, the results were compared between the groups. The main outcome was the low recurrence rate with platelet-rich plasma administration. Furthermore, the decrease in prolapse symptoms ascertained with the Pelvic Floor Distress Inventory scale was greater in group 2. Platelet-rich plasma administration may thus be a good alternative treatment for preventing cystocele recurrence; still, further research is needed to evaluate the safety and efficacy of this treatment . On the other hand, Gorlero et al. evaluated the efficacy of PRF in patients undergoing surgery for recurrent pelvic organ prolapse. Platelet-rich fibrin was prepared with the use of the Vivostat system in 10 patients and applied to the dissected pubourethral fascia before vaginal skin closure. The authors observed an anatomical success rate of 80%, while patients reported a 100% improvement in symptoms.
Despite the aforementioned excellent outcomes, the authors did not continue the study in a larger group of women affected by vaginal prolapse . Stress urinary incontinence (SUI) is a major health problem, which deteriorates quality of life. According to the integral theory, the most important factor involved in the occurrence of female stress urinary incontinence is a pubourethral ligament (PUL) defect . This ligament anchors the anterior bladder wall and proximal urethra, descending like a fan from the lower part of the pubic bone and forming a hammock under the midurethra. Studies in animal experimental models have shown that transection of the PUL is associated with long-term SUI . Platelet rich plasma contains several growth factors that contribute to ligament reconstruction, including vascular endothelial growth factor (VEGF), insulin growth factor I (IGF-I), platelet derived growth factor (PDGF), hepatocyte growth factor (HGF), transforming growth factor beta (TGF-b) and fibroblast growth factor (FGF). Taking these data into account, a pilot study was conducted in order to investigate whether PRP induces the resolution of SUI. In 20 women, PRP was injected into the anterior vaginal mucosa around the mid-urethra, approximately 1 cm below the urethral meatus and at a depth of about 1.5 cm: 2 mL underneath the mid-urethra and 1.5 mL on each side of the urethra. The injection was repeated three times, one month apart. Study outcomes were assessed with multiple self-reported questionnaires before treatment and at 1 month and 6 months after treatment, all of which revealed significant and lasting effectiveness in 12 out of 20 patients (60%). Moreover, women 40 years of age or younger had better treatment outcomes than older women. Disadvantages of this study are the small sample size and the lack of a saline-injected control group to exclude a bulking-agent effect. Further research might shed more light on the effect of PRP on SUI; nevertheless, this innovative intervention could be an alternative treatment for SUI . In another pilot study, also based on results after injecting PRP twice in 20 consecutive women at 4- to 6-week intervals, a significant improvement in SUI symptoms was observed 3 months after treatment, with a further improvement at 6 months. A mean reduction of 50.2% in urine loss was observed in the 1-h pad test. At the 6-month follow-up, 80.0% of women reported improvement. No adverse effects were observed. In conclusion, platelet-rich plasma injections seem to be both effective and safe, at least in the short term, and could be offered as an alternative outpatient procedure for the treatment of SUI, especially in younger women . Another condition that can be diagnosed by the gynecologist is interstitial cystitis/painful bladder syndrome (IC/PBS), a chronic illness with symptoms similar to those of overactive bladder (OAB), namely increased urinary frequency, urgency, and urgency urinary incontinence, accompanied by recurring episodes of pelvic pain. Its incidence is estimated to be as high as 52–67 per 100,000 in the United States . Although OAB and IC/PBS are considered to be separate pathological conditions, there is growing scientific evidence that both are related to structural, synaptic, and complex signaling pathway changes that trigger altered bladder sensation . Recently, the efficacy of intravesical instillation with PRP and hyaluronic acid for cyclophosphamide-induced acute IC/PBS was investigated in a rat model.
The study was conducted on thirty virgin female rats, which were randomized into five groups. One group consisted of rats instilled with cyclophosphamide (CYP) plus PRP, and this group showed the most significant prolongation of voiding intervals compared with the other groups. Moreover, the expression of the cell junction-associated protein zonula occludens-2 (ZO-2) and the inflammatory cytokine interleukin 6 (IL-6) was measured by histological staining; ZO-2 expression was increased and IL-6 expression was decreased in the CYP plus PRP group compared with the CYP-induced acute IC/PBS group. These findings led to a study by Jhang and coworkers on 19 patients with IC/PBS who underwent 4 monthly intravesical PRP injections with a platelet concentration approximately five times that of peripheral blood. Seven to 10 days after the last injection, patient satisfaction was measured. Functional bladder capacity and maximum flow rate increased, and the visual analog scale (VAS) pain score, IC symptom index, IC problem index, O’Leary-Sant symptom score, and global response assessment improved in all patients. Furthermore, they investigated the histological effects of PRP instillation and found that the expression of ZO-1 and other proteins involved in bladder barrier function, such as E-cadherin and TGF-β, increased significantly after repeated PRP injections . These results show that repeated intravesical PRP injections may have the potential to improve urothelial health and lead to symptom improvement in patients with IC/PBS. Nevertheless, further studies, including in patients with OAB, must be conducted to elucidate the real potential of PRP in reducing these debilitating symptoms.
The outcomes, including biochemical, clinical and ongoing pregnancy rates, were similar between the PRP and control groups, and the differences did not reach statistical significance (35.7% vs. 37.2%; 31.0% vs. 37.2%; and 26.8% vs. 25.6%, respectively) . Another indication for the administration of PRP is Asherman’s syndrome. According to Aghajanova et al. (2021) and Aghajanova et al. (2018), treatment with intrauterine PRP infusion was well tolerated, with no short-term or long-term side effects, and appeared to improve endometrial function—as demonstrated by successful conception and ongoing clinical pregnancies. In conjunction with solid in vitro data on human endometrial cells, these pilot clinical outcomes were very reassuring, but the primary results of a pilot study of 30 patients were not very promising compared to standard treatment . In cases of difficulty in becoming pregnant due to ovarian dysfunction, attempts have been made to inject PRP into both ovaries. The effect of its application was an increase in the number of ovarian oocytes . Moreover, in women with a poor ovarian reserve and premature menopause, autologous intraovarian PRP therapy increased anti-Mullerian hormone levels and decreased follicle-stimulating hormone (FSH) concentration, with a trend toward increased clinical pregnancy and live birth rates . In a related study, Farimani et al. published research in which 19 women were enrolled. Therein, the mean numbers of oocytes before and after PRP injection were 0.64 and 2.1, respectively. Two patients experienced spontaneous conceptions, and a third patient achieved clinical pregnancy and delivered a healthy baby . A similar effect was also found in a woman with chronic endometritis and recurrent implantation failure. The case of a 35-year-old woman with premature ovarian insufficiency and a history of six failed donated embryo transfers was described. The patient was referred to the clinic for assisted reproduction and, at the next menstrual cycle, underwent embryo transfer of two donated blastocysts graded as 5BB and 5BC, which resulted in a twin pregnancy. Four weeks following a positive β-hCG pregnancy test, clinical pregnancy was confirmed by observing fetal cardiac activity on transvaginal ultrasound. The babies were delivered at the 36th week of gestation and weighed 2.28 kg and 2.18 kg . Various studies in which patients served as their own controls (“split-face” studies) have investigated whether PRP injections are beneficial for tissue and skin rejuvenation . The mode of action of platelet-rich plasma is based mostly on stimulating the synthesis of matrix metalloproteinases (MMPs) and increasing cutaneous fibroblast growth as well as the production of extracellular matrix (ECM) components, including type I collagen and elastin . This was an argument for applying PRP as a wound-healing-enhancing factor for various types of wounds, as well as in skin regeneration. The development of the newest type of PRP, lyophilized enhanced PRP (ePRP), is a step toward standardizing the application of a specific, desirable quantity of growth factors by using a defined amount of PRP powder. It was found that ePRP dynamically activates several glycolytic enzymes to modulate and sustain glucose metabolism, mitochondrial biogenesis and respiratory function to meet energy demands in different wound healing periods. Moreover, multiple antioxidant enzymes are up-regulated, decreasing reactive oxygen species (ROS) and thus allowing proper tissue repair . 
Those metabolic changes, and many yet unknown, facilitate wound healing and provide the rationale for adjunctive treatment of many conditions caused by impaired tissue regenerative capacity. One publication presents a prospective randomized controlled trial with 200 patients who underwent elective cesarean section. The intervention group received a subcutaneous PRP injection into the wound after surgery, while the control group received the usual care. Outcome variables included the redness, edema, ecchymosis, discharge, approximation (REEDA) scale, the Vancouver scar scale (VSS) and visual analog scale (VAS) determinations. Patients from the PRP group showed a greater reduction in the REEDA score than the control group on day 1 and day 7, and this was maintained over the 6 months of the study (1.51 ± 0.90 vs. 2.49 ± 1.12, p < 0.001). Compared to the control group, the PRP group had a significantly greater reduction in the VSS and VAS scores beginning on the seventh day (3.71 ± 0.99 vs. 4.67 ± 1.25, p < 0.001 and 5.06 ± 1.10 vs. 6.02 ± 1.15, p < 0.001, respectively), and this difference was observed over a 6-month period. This study demonstrated that PRP has positive effects on wound healing and pain reduction in high-risk patients undergoing cesarean section in low-resource settings . This was also confirmed in a recently published paper by Starzyńska et al., in which PRP was used in patients undergoing surgical removal of impacted mandibular third molars. As this procedure is associated with various postoperative complications, mostly related to impaired healing, additional therapies are being developed. One of these is advanced platelet-rich fibrin (A-PRF), a second-generation platelet concentrate consisting of a three-dimensional fibrin matrix, rich in platelets and leukocytes, that contains cytokines, stem cells, and growth factors. The study was conducted in two groups: 50 patients with immediate A-PRF socket filling and a control group of 50 patients without A-PRF socket filling. Several clinical features were assessed on the 3rd, 7th, and 14th day after the procedure: pain, analgesic intake, trismus, edema, hematomas within the surrounding tissues, pyrexia, dry socket, secondary bleeding, skin warmth in the postoperative area, and patient-reported bleeding time. There was a significant decrease in pain intensity, analgesic intake, trismus, and edema on the 3rd and the 7th day in patients with A-PRF socket filling ( p < 0.05). Additionally, the study showed that A-PRF was the most important factor in reducing the incidence of postoperative complications . In order to evaluate the possible utility and efficacy of platelet-rich gel after advanced vulvar cancer surgery, Morelli et al. conducted a study on 25 women who had undergone radical surgery. Gel application in 10 out of 25 patients was related to a significant reduction in wound infection, necrosis of vaginal wounds, and wound breakdown rates ( p = 0.032; p = 0.096; p = 0.048, respectively). The authors concluded that platelet gel application before vulvar reconstruction represents an effective strategy to prevent wound breakdown after vulvar cancer surgery . 
A very interesting paper concerning the molecular aspects of radiation-induced wound healing and the interaction of endothelial cells and adipose-derived stem cells with PRP in the context of radiation effects was published by Reinders et al. Impaired wound healing in irradiated tissues is associated with fibrosis, decreased vascularity and impaired tissue remodeling. The study was conducted using cultures of human dermal microvascular endothelial cells (HDMEC) and adipose-derived stem cells (ASC). Activated PRP was used in the cell culture experiments at a final concentration of 5% in the culture medium. The cells were irradiated with doses of 2 Gy (0.7 min irradiation) and 6 Gy (2 min irradiation), respectively. One of the investigated factors was cell viability, which was determined using a colorimetric assay. Human ASC showed no altered viability upon radiation, but the treatment of ASC with 5% PRP caused a slight, although not significant, trend towards increased viability, which unfortunately was reversed by irradiation with both tested doses of 2 Gy and 6 Gy. Additionally, endothelial cells showed a trend towards decreased viability upon external radiation, both in the presence and absence of PRP. Interestingly, analysis of co-cultured ASC/HDMEC showed a significant effect of radiation with 6 Gy in both PRP-treated and untreated cells. Furthermore, the effect of PRP treatment on irradiated ASC, HDMEC and the corresponding co-culture was studied using a colorimetric BrdU assay. All cell cultures showed a trend towards decreased proliferation after irradiation irrespective of PRP. The proliferation of all cells was significantly diminished by radiation with 6 Gy. Remarkably, the presence of PRP in the cell medium had a pro-proliferative effect on cells after irradiation with 2 Gy. The concluding message of this study is that a combination of treatment with ASC and PRP products might be useful in the care management and adjunctive treatment of chronic radiogenic wounds . The healing effect has also been applied to genital rejuvenation. Vaginal rejuvenation involves the management of extrinsic (traumatic) and intrinsic (aging) changes in the vagina and scrotum. Lipofilling, with an additional injection of PRP (with or without hyaluronic acid), has been used to successfully address vaginal atrophy and vaginal laxity . In that study, the unexpected resolution of lichen sclerosus in one of the women was a factor that prompted PRP application for the treatment of this condition. Unfortunately, a double-blind placebo-controlled trial performed on thirty patients did not prove the efficacy of PRP in managing lichen sclerosus . Another indication for the administration of PRP in genital rejuvenation is improvement of the quality of sexual life. Sukgen et al. investigated the effect of PRP injection into the lower one-third of the anterior vaginal wall on sexual function, orgasm and genital perception in women with sexual dysfunction. The study revealed that, as a minimally invasive method, PRP administration to the distal part of the anterior vaginal wall may improve female sexuality, along with higher satisfaction . Another study, conducted on 68 women aged 32 to 97 years, indicated that the O-Shot injection, which is PRP administration to the vulvovaginal region, is a satisfactory solution for women with stress incontinence, overactive bladder, lack of lubrication and sexual dysfunction, such as lack of libido and arousal and dyspareunia. 
The results showed that 94% of these patients were satisfied; however, 6% of the patients with overactive bladder did not report improvement . In one case published to date, PRP was used as a regenerative factor for clitoral reconstruction after female genital mutilation (FGM) in a 35-year-old Guinean woman. After surgical clitoral reconstruction with the Foldès method, A-PRP was applied. Two months postoperatively, wound healing was complete and the patient reported significant improvement in quality of life . PRP has been applied in the treatment of urogynecological disorders and lower urinary tract symptoms (LUTS), and there are ongoing observations of the use of PRP as a supporting therapy in addressing recurrent vesicovaginal fistulas. Patients enrolled in this study were injected with PRP around the fistulous canal and underwent the Latzko procedure 6–8 weeks later. In all cases, after a 1–2-month follow-up period, the fistula was healed and the vaginal wall at the site of the procedure had healed without any signs of scarring, redness, or granulation tissue. Moreover, the patients did not complain about any urination difficulties or urinary tract disorders. In addition, post-void residuals were lower than 50 mL in all patients . There are also published papers describing PRP usage in cystocele treatment (the most common form of vaginal wall prolapse). In a study by Atilgan and Aydin, patients were divided into two groups: (1) cystocele repair only and (2) cystocele repair with platelet-rich plasma injection. Each group consisted of 28 patients. There were no significant differences between the groups in terms of demographic features. At the end of the 48-month follow-up period, the results were compared between the groups. The main outcome was a low recurrence rate with platelet-rich plasma administration. Furthermore, the decrease in prolapse symptoms ascertained with the Pelvic Floor Distress Inventory scale was more significant in group 2. Platelet-rich plasma administration may thus be a good alternative treatment for preventing cystocele recurrence; still, further research is needed to evaluate the safety and efficacy of this treatment . On the other hand, Gorlero et al. evaluated the efficacy of PRF in patients undergoing surgery for recurrent pelvic organ prolapse. Platelet-rich fibrin was prepared with the use of the Vivostat system in 10 patients and applied to the dissected pubourethral fascia before vaginal skin closure. The authors observed an anatomical success rate of 80%, while patients reported a 100% improvement in symptoms. 
Currently, platelet-rich plasma is one of the most frequently used preparations in reconstructive medicine. It is obtained quickly and at a low cost. There is no doubt that the released growth factors and proteins have a beneficial effect on wound healing and regeneration processes. Particular attention should be focused on the regenerative potential of PRP in OAB and SUI, as those conditions affect more than 30% of people. If effective, this would be a desirable treatment option because of its low cost and simple, minimally invasive application, without the risks of adverse reactions associated with drugs or with the surgical implantation of foreign materials. The main limitation of the presented clinical results is that different methods and platelet concentrations were used to improve the respective medical conditions, making it difficult to draw definitive conclusions and introducing technical bias. Further research is therefore needed to confirm the effectiveness of PRP and the possibility of its application in many other disorders.
Systematic identification of cancer pathways and potential drugs for intervention through multi-omics analysis
63448e8b-a9d4-4e33-bf11-fa87fe722378
11839471
Biochemistry[mh]
Cancer is a family of highly diverse and complex diseases that can occur in almost all organs and tissues of the human body. The occurrence and development of human cancers are associated with many factors, particularly the step-wise accumulation of genetic and epigenetic changes in the genome, which are directly manifested as alterations in the transcript and protein expression profiles . High-throughput omics technologies (e.g., transcriptomics and proteomics) have been applied to identify potential biomarkers and novel therapeutic targets for the diagnosis and treatment of human cancers . In addition, an integrative analysis across multiple omics data is capable of generating valid and testable hypotheses that can be prioritized for experimental validations . Generally, the omics profiles vary with different types of cancer, and cancer research has focused primarily on various oncogenic processes associated with a specific cancer type. However, there are limited integrative multi-omics analyses across different cancer types that may reveal new pathways of cancer genesis and new therapeutic targets. Cancer cell lines have been widely used as in vitro models for the investigation of the cellular and molecular mechanisms underlying tumorigenesis, as well as anti-cancer drug screening and repurposing . The Cancer Cell Line Encyclopedia (CCLE) is a publicly available database that contains multi-level omics data of over 1000 cancer cell lines spanning more than 40 cancer types. It provides RNA sequencing (RNA-Seq) transcriptomics data that measures RNA transcript abundance in the cancer cell lines . In addition, the tandem mass tag (TMT) based quantitative proteomics approach has been used for large-scale protein quantification. Using this method, Nusinow et al. performed quantitative proteomics analysis on 375 cell lines across diverse cancer types, resulting in a rich resource of protein expression levels for the exploration of cellular behavior and cancer research . Transcriptomics and proteomics play pivotal roles in linking genomic transcript sequences and protein levels to potential biological functions. Therefore, integrating these two omics methods (i.e., transcriptomics and proteomics) can provide a more comprehensive and holistic understanding of the biological behaviors of cancer at the transcriptional and translational levels that may reveal new mechanisms of pathogenesis and drug targets for cancer. Understanding molecular targets characteristic of a cancer type is crucial for modern anti-cancer drug discovery and therapeutic development. For example, discoidin domain receptor 1 (DDR1) was identified as a molecular target specific for pancreatic cancer. This discovery enabled the development of a novel series of 2-amino-2,3-dihydro-1H-indene-5-carboxamide derivatives as highly selective DDR1 inhibitors using structure-based drug design. These DDR1 inhibitors showed promising efficacy for pancreatic cancer treatment . Omics analysis, either RNA-Seq or proteomics profiling, has provided a rapidly expanding range of information on new molecular targets for early drug discovery. For example, Swaroop et al. found that the genes differentially expressed in the most severe Hurler syndrome subgroup compared to the intermediate Hurler-Scheie or the least severe Scheie syndrome subgroups based on transcriptome profiling data were extremely valuable in guiding the in vivo animal models and clinical trials in the drug development process . 
In this study, we integrated the transcriptomics and proteomics data from 16 common human cancer types, including acute myeloid leukemia (AML), breast cancer, colorectal cancer, endometrial cancer, esophageal cancer, glioma, kidney cancer, liver cancer, non-small-cell lung carcinoma (NSCLC), small cell lung carcinoma (SCLC), melanoma, ovarian cancer, pancreatic cancer, stomach cancer, upper aerodigestive cancer, and urinary tract cancer, to identify the biological pathways characteristic of each cancer type and drugs known to target these pathways. The cancer pathways identified in this study can provide insight into the underlying molecular mechanisms for each cancer type, and the drugs targeting these pathways could potentially be repurposed as new cancer therapeutics. Overview of cancer profiling data A total of 1023 human cancer cell lines were collected, including 1019 cell lines with RNA-Seq data and 375 cell lines with proteomics data (Fig. , and Supplementary Table ). Of the cancer cell lines collected, 371 had both RNA-Seq and proteomics data (Fig. , and Supplementary Table ). The four cell lines that had only proteomics data were COLO205 (large intestine cancer), PL45 (pancreatic cancer), SKMEL2 (skin cancer), and NB19 (central nervous system cancer) (Supplementary Table ). According to the cancer cell line annotations, these cancer cell lines can be grouped into 16 cancer types, including AML, breast cancer, colorectal cancer, endometrial cancer, esophageal cancer, glioma, kidney cancer, liver cancer, NSCLC, SCLC, melanoma, ovarian cancer, pancreatic cancer, stomach cancer, upper aerodigestive cancer, and urinary tract cancer (Fig. , and Supplementary Table ). The number of cancer cell lines with proteomics data for each cancer type was significantly smaller than the number with RNA-Seq data (Fig. ). For cancer types with RNA-Seq data, the number of cancer cell lines ranged from 25 (liver cancer and urinary tract cancer) to 128 (NSCLC) with a median of 41 (Fig. , and Supplementary Table ). For cancer types with proteomics data, the number of cell lines ranged from 10 (upper aerodigestive cancer) to 64 (NSCLC) with a median of 14 (Fig. , and Supplementary Table ). Transcripts and proteins significantly expressed in each cancer type According to the optimal combination of Gini purity and FDR adjusted P value, the number of significant transcripts for each cancer type ranged from 5756 (liver cancer) to 11,143 (melanoma) with a median of 9256 (Fig. , and Supplementary Table ). Transcripts that showed statistically significant differential expression in a specific cancer type compared to all other cancer types are referred to as “significant transcripts” here. The number of significant proteins for each cancer type ranged from 409 (stomach cancer) to 2443 (AML) with a median of 1344 (Fig. ). The number of significant proteins is much smaller than that of the significant transcripts for each cancer type, and the transcript/protein ratio ranged from 2.86 (kidney cancer) to 19.8 (stomach cancer) with a median of 6.79 (Fig. ). Transcript is a collective term that includes various biotypes. For example, the 5756 significant transcripts found for liver cancer comprised 23 biotypes; the top 10 biotypes, in descending order of transcript count, were protein coding (2579), pseudogene (1107), lincRNA (890), antisense (539), misc RNA (119), miRNA (94), sense intronic (85), snRNA (74), processed transcript (48), and snoRNA (38), accounting for 96.8% of all the significant transcripts (Fig. ). 
Moreover, 234 protein coding biotypes in the significant transcript set (a total of 2579 transcripts) were also present in the significant protein set (a total of 825 proteins) for liver cancer (Fig. ), showing that the results from the transcriptomics analysis and the proteomics analysis are consistent. These significantly expressed transcripts and proteins are specific for a particular cancer type and can be used for cancer type-specific pathway analysis. Biological pathways characteristic of each cancer type The significant transcripts and proteins for each cancer type were analyzed for the enrichment of biological pathways, respectively. From the significant transcripts, the number of significant pathways ranged from 36 (ovarian cancer) to 193 (AML and stomach cancer) with a median of 92 (Fig. ). From the significant proteins, the number of significant pathways ranged from 17 (stomach cancer) to 584 (AML) with a median of 174 (Fig. ). The number of overlapping pathways derived from both transcripts and proteins for each cancer type ranged from 4 (stomach cancer) to 112 (AML) with a median of 25.5 (Fig. , and Supplementary Table ). The overlapping significant pathways were considered characteristic of each cancer type. Some pathways were present in multiple cancer types, while others were specific for a particular cancer type (Supplementary Table ). Figure showed the top two significant pathways found for each cancer type, including 12 unique biological pathways. For example, the olfactory transduction pathway was the significant pathway for AML (score = −20.52), breast cancer (score = −14.18), colorectal cancer (score = −21.92), esophageal cancer (score = −8.37), glioma (score = −22.90), kidney cancer (score = −15.21), liver cancer (score = −4.64), melanoma (score = −26.34), NSCLC (score = −22.46), ovarian cancer (score = −6.32), pancreatic cancer (score = −7.17), SCLC (score = −22.44), stomach cancer (score = −4.97), and upper aerodigestive cancer (score = −25.98). Signaling by the GPCR pathway was the significant pathway for breast cancer (score = −5.56), colorectal cancer (score = −10.42), kidney cancer (score = −16.22), melanoma (score = −14.41), NSCLC (score = −7.91), SCLC (score = −10.81), and upper aerodigestive cancer (score = −14.44). Messenger RNA processing was the significant pathway for endometrial cancer (score = −19.50) and glioma (score = −14.69). Alpha-6 beta-1 and alpha-6 beta-4 integrin signaling pathway was the significant pathway for urinary tract cancer (score = −3.84). Axon guidance pathway was the significant pathway for stomach cancer (score = −4.15). Capped intron-containing pre-mRNA processing pathway was the significant pathway for endometrial cancer (score = −18.58). Cell cycle pathway was the significant pathway for esophageal cancer (score = −4.15). Cytoplasmic ribosomal proteins pathway was the significant pathway for AML (score = −11.46). Focal adhesion pathway was the significant pathway for urinary tract cancer (score = −3.39). Metabolism pathway was the significant pathway for liver cancer (score = −3.22). Oncostatin M pathway was the significant pathway for pancreatic cancer (score = −6.08). Tight junction pathway was the significant pathway for ovarian cancer (score = −2.09). Potential anti-cancer drugs identified for each cancer type The significant cancer pathways can serve as a bridge connecting drugs and cancer type. For each cancer type, we identified the drugs that target genes involved in multiple significant cancer pathways. 
In turn, these drugs can serve as potential anti-cancer drug candidates. The number of potential anti-cancer drugs varied by cancer types, ranging from 1 (ovarian cancer) to 97 (AML and NSCLC) with a median of 66 (Fig. , Supplementary Table ). For each cancer type, the drugs linked to the maximal number of pathways are shown in Fig. and Supplementary Table , and these drugs can be divided into two categories: those involved with multiple cancer types and those involved with one specific cancer type. The former included S-isoproterenol bitartrate for AML (58 pathways), kidney cancer (11 pathways), NSCLC (24 pathways), melanoma (7 pathways), and upper aerodigestive (9 pathways); afatinib for stomach cancer (3 pathways) and upper aerodigestive cancer (9 pathways); afuresertib for breast cancer (8 pathways) and kidney cancer (11 pathways); bosutinib for endometrial cancer (5 pathways) and esophageal cancer (19 pathways); canertinib for stomach cancer (3 pathways) and upper aerodigestive cancer (9 pathways); dacomitinib for stomach cancer (3 pathways) and upper aerodigestive cancer (9 pathways); dasatinib for colorectal cancer (15 pathways), endometrial cancer (5 pathways), esophageal cancer (19 pathways), kidney cancer (11 pathways), and pancreatic cancer (36 pathways); HA-1077 for stomach cancer (3 pathways) and upper aerodigestive cancer (9 pathways); ipatasertib for breast cancer (8 pathways) and kidney cancer (11 pathways); lithium citrate for endometrial cancer (5 pathways), esophageal cancer (19 pathways), and liver cancer (6 pathways); neratinib for stomach cancer (3 pathways) and upper aerodigestive cancer (9 pathways); saracatinib for endometrial cancer (5 pathways) and esophageal cancer (19 pathways); varlitinib tosylate for stomach cancer (3 pathways) and upper aerodigestive cancer (9 pathways). The latter included flavopiridol hydrochloride (9 pathways), lapatinib (9 pathways), minocycline HCl (9 pathways), and sorafenib (9 pathways) for upper aerodigestive; cladribine (13 pathways) for SCLC; D-alpha-tocopherol (8 pathways) for breast cancer; lapatinib (3 pathways) for stomach cancer; R-lotrafiban (17 pathways) and tirofiban hydrochloride monohydrate (17 pathways) for glioma; and sotrastaurin (3 pathways) for ovarian cancer (Fig. , and Supplementary Table ). Some anti-cancer drugs identified in this study have been approved as targeted therapies for the treatment of specific cancer types (Fig. ), such as imatinib, bosutinib, and dasatinib for AML; dabrafenib, crizotinib, trametinib, dacomitinib, and gefitinib for lung cancer; regorafenib for colorectal cancer; pazopanib, cabozantinib, sunitinib malate, and sorafenib for kidney cancer; trametinib for skin cancer; and sunitinib malate for pancreatic cancer. Quantitative validation by mean normalized AUC (mnAUC) A total of 426 potential anti-cancer drugs (~44% of the total) were identified, with mnAUC values ranging from 0.23 to 1.42 and a median of 0.88 (Table ). The number of anti-cancer drugs with available mnAUC values varied across cancer types: breast (7), stomach (17), endometrium (24), liver (25), SCLC (36), colorectal (40), kidney (45), pancreas (48), glioma (49), esophagus (52), and NSCLC (62). A Wilcoxon rank-sum test revealed that the mean mnAUC value (0.87) of potential anti-cancer drugs identified in this study was significantly lower than that (0.96) for 19,759 potential anti-cancer drugs reported in the literature ( p < 2 × 10 –16 ; Fig. ). 
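For readers who want to reproduce this kind of comparison, the sketch below shows how a one-sided Wilcoxon rank-sum (Mann-Whitney U) test of two mnAUC distributions could be run with SciPy. The numbers are random placeholders standing in for the actual mnAUC tables, and the variable names are ours, not the authors'.

```python
# Illustrative only: one-sided Wilcoxon rank-sum (Mann-Whitney U) comparison of
# mnAUC distributions, mirroring the test described above. Placeholder values.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
mnauc_identified = np.clip(rng.normal(0.87, 0.15, size=426), 0.2, 1.45)    # drugs identified here (placeholder)
mnauc_literature = np.clip(rng.normal(0.96, 0.15, size=19759), 0.2, 1.45)  # literature-reported drugs (placeholder)

# One-sided alternative: identified drugs leave a smaller fraction of cells (lower mnAUC).
stat, p = mannwhitneyu(mnauc_identified, mnauc_literature, alternative="less")
print(f"U = {stat:.0f}, one-sided p = {p:.2e}")
```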
In this study, we identified the transcripts and proteins significantly expressed in each of the 16 cancer types through integrated analysis of transcriptomics and proteomics profiling data, resulting in biological pathways characteristic of each cancer type. Moreover, the drugs linked to these biological pathways were identified as potential treatments for human cancer. According to the global cancer statistics in 2020, the cancer types analyzed in our study (Fig. 
, and Supplementary Table ) included the most commonly diagnosed cancer (breast cancer, 11.7% of all sites) and the cancer with the leading death rate (lung cancer, 18% of all sites) . As proteins are the key executors of gene function, high-throughput proteomics data are important in elucidating the mechanisms of action of many critical cancer-related biological processes . Due to the constrained resolution at the proteome level, the coverage of proteomics data is much lower than that of RNA-Seq data, resulting in a smaller number of significant proteins comparing to the number of significant transcripts for each cancer type identified in our study (Fig. ). The protein levels in cells may not correlate with the expression levels of transcripts because of an underlying epigenetic mechanism . In addition to protein-encoding mRNAs, the transcripts also included non-coding RNAs (e.g., long non-coding RNA (lncRNA) and microRNA (miRNA)), some of which often act as oncogenic drivers and tumor suppressors in major cancer types through post-transcriptional regulatory mechanisms . We also identified the significant pathways characteristic of each cancer type (Fig. , and Supplementary Table ), some of which have been reported to  be associated with the corresponding human cancer type. For example, the olfactory transduction pathway has been reported to be associated with certain cancer types including breast cancer , pancreatic cancer , lung carcinoids , colorectal cancer , ovarian serous cystadenocarcinoma , stomach cancer , esophageal cancer , and brain lower grade glioma . Furthermore, the olfactory receptor (OR) family is generally considered to play an important role in the olfactory transduction pathway and a link to various cancers, such as human melanoma, stomach cancer, and AML . In our study, the olfactory transduction pathway has been identified as significant for 16 cancer types (i.e., AML, breast cancer, colorectal cancer, endometrial cancer, esophageal cancer, glioma, kidney cancer, liver cancer, NSCLC, SCLC, melanoma, ovarian cancer, pancreatic cancer, stomach cancer, upper aerodigestive cancer, and urinary tract cancer) (Fig. , and Supplementary Table ). The axon guidance pathway has reported cancer associations, e.g., the axon guidance factor Slit homolog 2 (Slit 2) is known to inhibit neural invasion and metastasis in pancreatic cancer , and affect the prognosis of AML . Silencing of the axon guidance factor semaphorin 6B gene significantly suppressed adhesion, migration, and invasion of stomach cancer cells in vitro . Consistent with these previous studies, the axon guidance pathway was also found closely related to pancreatic cancer, AML, and stomach cancer in our study (Fig. , and Supplementary Table ). Guanine nucleotide-binding protein (G protein) coupled receptors (GPCRs) are the largest family of membrane receptors that mediate transmembrane signaling via heterotrimeric G protein complexes. GPCR signaling has been implicated in various oncogenic and metastatic processes . Consistent with these previous studies, the GPCR signaling pathway was also found closely related to AML, breast cancer, colorectal cancer, glioma, kidney cancer, NSCLC, SCLC, melanoma, ovarian cancer, and upper aerodigestive cancer in our study (Fig. , and Supplementary Table ). These cancer pathways also led to the identification of existing drugs that could potentially be repurposed as new anti-cancer therapies (Fig. , and Supplementary Table ). 
Drugs that target multiple biological pathways simultaneously may produce additive or even synergistic anti-cancer effects, resulting in more effective therapies and reduced side effects . Figure shows the drugs that are linked to the maximum number of pathways for each cancer type. For example, dasatinib, a small molecule tyrosine kinase inhibitor, has been found to inhibit the growth of AML, breast cancer, liver cancer, melanoma, pancreas tumor, and pre-neoplastic Barrett’s esophagus cell lines . Although dasatinib has previously been reported to inhibit the growth of NSCLC but not SCLC , recent studies have found that dasatinib can significantly enhance the therapeutic efficacy of vorinostat in SCLC xenografts . In addition, dasatinib has been reported to induce autophagic cell death in human ovarian cancer . Consistent with these previous studies, we found dasatinib among the drug candidates for AML, breast cancer, colorectal cancer, endometrium cancer, esophageal cancer, glioma, kidney cancer, liver cancer, melanoma, pancreatic cancer, NSCLC, upper aerodigestive cancer, urinary tract cancer, and SCLC (Fig. , and Supplementary Table ). Afuresertib is a potent protein kinase B (AKT) inhibitor that exhibits favorable tumor-suppressive effects on breast cancer cells by potently inhibiting the phosphatidylinositol 3‑kinase (PI3K)/AKT signaling pathway . Consistent with this study, Afuresertib is one of the drugs we found linked to breast cancer (Fig. , and Supplementary Table ). D-alpha-tocopherol plays a pivotal role in decreasing the metastasis risk of glioma in cancer patients . We also found D-alpha-tocopherol as one of the drugs linked to glioma (Supplementary Table ). Ipatasertib is a potent small molecule AKT kinase inhibitor currently being tested in Phase III clinical trials for the treatment of triple negative metastatic breast cancer , which is also linked to breast cancer in our study (Fig. , and Supplementary Table ). Consistent with the linkage of midostaurin to glioma by our analysis (Supplementary Table ), midostaurin is a multi-targeted tyrosine kinase inhibitor for the treatment of glioma . In addition to these drugs with confirmed anti-cancer activity in the literature, the other drugs identified in our study could potentially be prioritized and repurposed as new treatments for some cancer types. For example, the Rho-kinase inhibitor, HA-1077, suppresses proliferation/migration and induces apoptosis of urothelial cancer cells and MDA-MB 231 human breast cancer cells , while our analysis additionally linked HA-1077 to colorectal cancer and stomach cancer (Fig. , and Supplementary Table ). Moreover, some potential anti-cancer drugs identified in our study have been screened for anti-cancer activities in cell-based assays. For example, dasatinib was associated with 16 significant pathways for colorectal cancer (Supplementary Table ), and inhibited the viability of colorectal cancer cells in vitro (i.e., IC 50 = 0.40 μM, efficacy = 57%) . Enzastaurin was associated with five significant colorectal cancer pathways (Supplementary Table ), and inhibited colorectal cancer cell viability in vitro (i.e., IC 50 = 11 μM, efficacy = 54%) . Finally, puromycin, a drug linked to four significant glioma pathways in our study (Supplementary Table ), was also found to reduce the viability of glioblastoma cells in vitro (i.e., IC 50 = 2.74 μM, efficacy = 90%) . In addition, some drugs identified by our approach are approved targeted therapies for their corresponding cancer type. 
These findings provide additional evidence for the utility of our method (Fig. ). The Profiling Relative Inhibition Simultaneously in Mixtures (PRISM) repurposing dataset provides information on the growth inhibitory activity of 4518 drugs tested across 578 human cancer cell lines, and the area under the dose-response curve (AUC) is a metric that represents the fraction of cells left after drug exposure averaged over all the test concentrations normalized to cells receiving no drug treatment . Given the variability in cell line testing across different drugs in the PRISM dataset, Koudijs et al. utilized a linear mixed model to separate the effect of cell lines and drugs. They then consolidated the findings into estimating the mean normalized AUC (mnAUC) that represents the average fraction of cells left after drug exposure in a group of cell lines . In this study, mnAUC values for the identified potential anti-cancer drugs were calculated using the methodology of Koudijs et al. to assess drug efficacy (Table ). A Wilcoxon rank-sum test revealed that the mnAUC values of the anti-cancer drugs identified in this study were significantly lower than those reported for potential anti-cancer drugs in the literature ( p < 2 × 10 −16 ), indicating that the identified drugs demonstrated robust anti-cancer effects against their respective cancer types (Figure ). To evaluate the efficacy of the method in identifying drugs for specific cancer types, a randomization test was conducted to compare hit rates between our method and the randomized selections. A drug-cancer type pair was defined as a hit if the drug is an approved targeted therapy for the corresponding cancer type. In the randomization test, 1000 cancer type-drug pairs were sampled from the raw data 100 times, yielding an average hit rate of 0.2%, which was significantly lower than the hit rate of 1.5% for the 974 pairs (Fisher’s exact test, p = 0.001) predicted by our method in this study. In this study, we employed an integrated multi-omics approach, which has demonstrated numerous advantages over conventional single-omics methods. For example, Deng et al. utilized an integrated approach by incorporating transcriptomic, proteomic, and metabolomic molecular profiles of tumor patients. This data integration strategy facilitated the identification of key pathways and metabolites, surpassing the accuracy achieved by individual transcriptomic analyses . Similarly, Lu et al. conducted a thorough analysis by integrating transcriptomic and proteomic data in glioblastoma. The results revealed a significant enrichment of the gonadotropin-releasing hormone (GnRH) signaling pathway, a finding not discernible through single omics datasets. This highlights the potential of multi-omics research and analyses in providing a more comprehensive understanding of complex cancers . Furthermore, Heo et al. found that the integration of multi-omics data offers a comprehensive depiction of the molecular and clinical profile of cancer patients when contrasted with single-omics approaches. This integration not only enhanced the generation of high-quality, unbiased datasets, but also contributed to a more holistic understanding of the subject . Our study is one of many that have utilized the CCLE database in different ways to achieve various goals in cancer research and drug discovery. For example, Shao et al. 
employed a recommendation system learning model with CCLE data (i.e., drug data and multi-omics data in CCLE), focusing on drug-drug functional similarities, unlike our study, which identified cancer type-specific drugs . Hsu et al. developed Scaden-CA, a deep learning model for deconvoluting tumor data into proportions of cancer type-specific cell lines, aiming to bridge the gap in pharmacogenomics knowledge between in vitro and in vivo datasets. The CCLE bulk RNA data was used for their model validation . Carvalho et al. used CCLE data (i.e., copy number and RNA-Seq expression data of colorectal cancer cell lines in CCLE) to identify cell line models and explore drug responses in rectal cancer, revealing significant findings related to the topoisomerase 2A (TOP2A) gene in separate patient cohorts . Mohammadi et al. analyzed proteomics data from 26 breast cancer cell lines in the CCLE to examine the expression patterns of specific antimicrobial and immunomodulatory peptides across various breast cancer subtypes, aiming to facilitate drug repurposing efforts . Rinaldetti et al. used transcriptome expression data from CCLE and BLA-40 cell lines to identify novel subtype-stratified therapeutic approaches for muscle-invasive bladder cancer through high-content screening, revealing distinct drug sensitivities and highlighting the role of CCLE in molecular subtype assignments . We performed an integrative analysis of large-scale RNA-Seq and proteomics profiling data, resulting in a set of characteristic pathways for 16 human cancer types. These pathways can provide a systematic understanding of the complex underlying mechanisms for each cancer type. Furthermore, through these characteristic cancer pathways, we identified drugs for each cancer type, which could serve as drug repurposing candidates for cancer treatment. Our results provide a rich set of testable hypotheses for the design of future experimental validation and clinical trials. Data collection RNA-Seq data (file: CCLE_RNAseq_genes_rpkm_20180929.gct) were retrieved from the CCLE database, and these data contain a total of 1019 cancer cell lines with 56,202 different transcripts . Quantitative proteomics data were obtained from the literature, and these data contain a total of 375 cancer cell lines with 12,755 different proteins . Cancer cell line annotations (file: Cell_lines_annotations_20181226.txt) were downloaded from the CCLE database . To quantitatively validate the results, mean normalized Area Under the Curve (mnAUC) data were utilized from the supplementary materials of a previously published study . The mnAUC values reflect the average fraction of surviving cells after drug exposure across multiple cell lines. Identification of significant transcripts and proteins for each cancer type The raw transcriptome data were pre-processed to remove outliers using the capping method (i.e., the maximum RPKM value for each cell line was calibrated to the value that occurs most frequently among the maximum RPKM values for all cell lines), followed by a log2 transformation. The raw proteomics data were not subjected to the same preprocessing steps as the transcriptome data, as they had already undergone a log2 transformation. To identify the transcripts and proteins specific for each cancer type, we first determined if there was any significant difference between their expression levels across different cancer types using one-way analysis of variance (ANOVA). 
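As a rough illustration of the preprocessing and ANOVA screening just described, the sketch below assumes a transcripts-by-cell-lines RPKM matrix and a per-cell-line cancer-type label held in pandas objects. The capping rule is our reading of the description ("the value that occurs most frequently among the maximum RPKM values"), not the authors' released code.

```python
# A minimal sketch, under stated assumptions, of the capping/log2 preprocessing
# and the one-way ANOVA screen; not the authors' pipeline.
import numpy as np
import pandas as pd
from scipy.stats import f_oneway

def cap_and_log2(rpkm: pd.DataFrame) -> pd.DataFrame:
    """Cap extreme RPKM values at the most frequent per-cell-line maximum, then log2-transform."""
    per_line_max = rpkm.max(axis=0).round()
    cap = per_line_max.mode().iloc[0]          # assumed reading of "most frequent maximum"
    return np.log2(rpkm.clip(upper=float(cap)) + 1.0)

def anova_screen(expr: pd.DataFrame, cancer_type: pd.Series, alpha: float = 0.05) -> list:
    """Keep features whose expression differs across cancer types (one-way ANOVA, P < alpha)."""
    keep = []
    for feature, values in expr.iterrows():
        groups = [values[cancer_type == t].to_numpy() for t in cancer_type.unique()]
        groups = [g for g in groups if g.size > 1]
        if len(groups) > 1 and f_oneway(*groups).pvalue < alpha:
            keep.append(feature)
    return keep
```

In the actual analysis, the ANOVA-screened features would then go through the per-cancer-type one-vs-rest t tests, FDR adjustment and Gini-purity-based cutoff selection described next.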
Transcripts or proteins that showed significant differential expression ( P value < 0.05) were further analyzed to determine whether they were significantly expressed in a specific cancer type. The expression levels for one cancer type were compared with those of the others, and statistical significance was determined by the P value from a two-tailed Student's t test. For each cancer type, the resulting P values were then corrected for multiple hypothesis testing using the false discovery rate (FDR), and the FDR-adjusted P value cutoffs were set from 10 −10 to 10 −2 with a tenfold proportional increase. Each transcript subset at a different FDR-adjusted P value cutoff was subsequently clustered hierarchically using the complete linkage method with the Euclidean distance as the distance metric. The clustering results were quantified using Gini purity, a measure of clustering specificity. The value of Gini purity ranged from 0 to 1, with higher values indicating higher specialization within the cluster. Finally, the significant transcripts for each cancer type were prioritized based on the FDR-adjusted P value and Gini purity. For protein expression data, a P value of <0.05 was used to select the significant proteins for each cancer type. Biological pathway enrichment analysis The NCATS BioPlanet pathway database was used to identify the biological pathways characteristic of each cancer type . The pathways enriched in each transcript or protein set for a particular cancer type were determined in two steps: Fisher's exact test was applied first, and the FDR was then calculated. The statistical significance of the pathways with an FDR-adjusted P value < 0.05 was further assessed via bootstrap with 1000 replications. The bootstrap P value was calculated by counting the number of times the Fisher's exact P value from the randomly permuted data was smaller than the true observed value; for example, a bootstrap P value of 0.005 means that five out of the 1000 random P values were smaller than the true observed P value. A bootstrap P value < 0.05 was considered statistically significant. To improve the reliability of the pathways identified, the enrichment P values from the transcripts and proteins were further combined into a significance score (i.e., the average of the logarithms of the FDR-adjusted P values). The significant biological pathways for each cancer type were ranked and prioritized by this combined score, with a smaller score indicating a higher level of significance. Identification of potential anti-cancer drugs Drug target annotations were acquired from the DrugBank database ( https://go.drugbank.com/ ) and the Kyoto Encyclopedia of Genes and Genomes (KEGG) drug database ( https://www.genome.jp/kegg/drug/ ). DrugBank is a bioinformatics and cheminformatics resource that combines detailed drug data with comprehensive target information . The KEGG drug database stores abundant information pertaining to drugs and their interacting molecular targets, which could be useful in the development of new potential anti-cancer drugs . Anti-cancer drug candidates were identified based on the drug-target interactions annotated by the above two databases. Molecular targets involved in multiple biological pathways significant for a cancer type were collected for drug candidate identification.
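To make the transcript- and protein-selection step described above concrete, the following is a minimal sketch in Python, assuming pandas, SciPy, and statsmodels are available and that a log2 expression matrix and per-cell-line cancer-type labels have already been loaded; the function names and the exact Gini-purity formula are illustrative assumptions rather than the authors' code.

import numpy as np
import pandas as pd
from scipy import stats
from scipy.cluster.hierarchy import fcluster, linkage
from statsmodels.stats.multitest import multipletests

def select_specific_features(expr, cancer_type, fdr_cutoff=1e-2):
    # expr: log2 expression matrix (cell lines x features); cancer_type: label per cell line.
    types = cancer_type.unique()
    groups = {t: expr.loc[cancer_type == t] for t in types}
    # Step 1: one-way ANOVA across all cancer types; keep features that differ somewhere.
    anova_p = pd.Series(
        {f: stats.f_oneway(*[groups[t][f].values for t in types]).pvalue for f in expr.columns}
    )
    candidates = anova_p.index[anova_p < 0.05]
    # Step 2: one-vs-rest two-tailed t tests per cancer type, corrected with Benjamini-Hochberg FDR.
    specific = {}
    for t in types:
        res = stats.ttest_ind(expr.loc[cancer_type == t, candidates],
                              expr.loc[cancer_type != t, candidates],
                              axis=0, equal_var=False)
        fdr = pd.Series(multipletests(res.pvalue, method="fdr_bh")[1], index=candidates)
        specific[t] = fdr.index[fdr < fdr_cutoff].tolist()
    return specific

def gini_purity(expr_subset, cancer_type, n_clusters):
    # Hierarchically cluster cell lines on the selected features (complete linkage, Euclidean
    # distance) and score how homogeneous the clusters are (one plausible definition of purity).
    labels = fcluster(linkage(expr_subset.values, method="complete", metric="euclidean"),
                      t=n_clusters, criterion="maxclust")
    purity = 0.0
    for c in np.unique(labels):
        members = pd.Series(cancer_type.values[labels == c])
        freq = members.value_counts(normalize=True).values
        purity += (len(members) / len(labels)) * np.sum(freq ** 2)
    return purity  # close to 1 when each cluster is dominated by a single cancer type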
Approved targeted cancer therapies and their corresponding cancer types were retrieved from the National Cancer Institute (NCI) at the National Institutes of Health (NIH) website ( https://www.cancer.gov/about-cancer/treatment/types/targeted-therapies/targeted-therapies-fact-sheet ).
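The two-step pathway enrichment test and the combined significance score can be sketched in the same spirit; this is again a minimal sketch under the same assumptions, and the permutation scheme for the bootstrap (drawing random gene sets of the same size as the signature) is one plausible implementation rather than the authors' exact procedure.

import numpy as np
from scipy.stats import fisher_exact
from statsmodels.stats.multitest import multipletests

def pathway_enrichment(signature, pathways, background, n_boot=1000, seed=0):
    # signature: set of cancer-type-specific genes; pathways: dict {name: set of member genes}
    # (e.g., parsed from NCATS BioPlanet); background: set of all genes considered.
    rng = np.random.default_rng(seed)
    names, pvals = list(pathways), []
    for name in names:
        members = pathways[name] & background
        a = len(signature & members); b = len(signature - members)
        c = len(members - signature); d = len(background) - a - b - c
        pvals.append(fisher_exact([[a, b], [c, d]], alternative="greater")[1])
    fdr = multipletests(pvals, method="fdr_bh")[1]
    universe = sorted(background)
    results = {}
    for name, p_obs, q in zip(names, pvals, fdr):
        if q >= 0.05:
            continue
        # Bootstrap check: how often does a random gene set of the same size give a smaller P value?
        members = pathways[name] & background
        hits = 0
        for _ in range(n_boot):
            rand_sig = set(rng.choice(universe, size=len(signature), replace=False))
            a = len(rand_sig & members); b = len(rand_sig - members)
            c = len(members - rand_sig); d = len(background) - a - b - c
            hits += fisher_exact([[a, b], [c, d]], alternative="greater")[1] < p_obs
        results[name] = {"fdr": q, "bootstrap_p": hits / n_boot}
    return results

def combined_score(fdr_rna, fdr_protein):
    # Average of the log10 FDR-adjusted P values from the RNA and protein layers; smaller = stronger.
    return (np.log10(fdr_rna) + np.log10(fdr_protein)) / 2.0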
Supplementary material: Tables S1–S4.
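Finally, the validation statistics reported in the Results (the Wilcoxon rank-sum comparison of mnAUC values and the hit-rate comparison against randomly sampled drug-cancer type pairs) follow the pattern sketched below; all numeric inputs are placeholders for illustration, not the study's data.

from scipy.stats import ranksums, fisher_exact

# Placeholder inputs -- in practice these come from the mnAUC table and the hit counts.
our_mnauc = [0.71, 0.65, 0.80, 0.58]          # mnAUC of drugs identified here (illustrative)
literature_mnauc = [0.92, 0.88, 0.95, 0.90]   # mnAUC of literature candidates (illustrative)

# Lower mnAUC = fewer surviving cells = stronger growth inhibition.
stat, p_mnauc = ranksums(our_mnauc, literature_mnauc, alternative="less")

# Hit-rate comparison: approved-therapy hits among predicted pairs vs. randomly sampled pairs.
hits_pred, n_pred = 15, 974       # roughly 1.5% of 974 predicted pairs (illustrative)
hits_rand, n_rand = 2, 1000       # roughly 0.2% of 1000 random pairs (illustrative)
odds, p_hits = fisher_exact([[hits_pred, n_pred - hits_pred],
                             [hits_rand, n_rand - hits_rand]],
                            alternative="greater")
print(f"mnAUC Wilcoxon p = {p_mnauc:.3g}; hit-rate Fisher p = {p_hits:.3g}")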
Genomics and Epigenomics in Parathyroid Neoplasia: from Bench to Surgical Pathology Practice
12b13f86-7e8a-43a2-8f6f-c2d28d4d4607
7960610
Pathology[mh]
General Background In the clinical setting, the tumor responsible for primary hyperparathyroidism (PHPT) is usually a parathyroid adenoma. The vast majority of parathyroid adenomas are functioning due to an altered set point in terms of calcium-sensing mechanisms, and the ensuing parathyroid hormone (PTH) secretion leads to hypercalcemia that may cause diverse symptoms in the afflicted patient . However, the majority of parathyroid adenomas are identified through serum calcium screening. The peak incidence is among 50–60-year-old individuals, and the female-to-male ratio increases with increased age at diagnosis, reaching 5:1 among patients > 75 years of age . The treatment is surgical, and cure rates at tertiary centers are usually high . Most cases are preoperatively localized using combinations of various imaging techniques, such as neck ultrasound, single-photon emission computed tomography (SPECT/CT), and/or technetium (99mTc) sestamibi scintigraphy, and the endocrine surgeon can thus plan for a focused parathyroidectomy . Although the bulk of PHPT cases are sporadic tumors arising through the somatic acquisition of genetic aberrancies in driver genes, approximately 5% of cases are associated with familial disease. If a familial syndrome is suspected, a four-gland exploration with subtotal parathyroidectomy or total parathyroidectomy with intramuscular reimplantation is often the preferred strategy, as these patients may develop multiglandular, metachronous disease—and some also carry a risk of developing parathyroid carcinoma . From a histopathological perspective, parathyroid adenomas are usually well-circumscribed tumors composed of chief cells arranged in micro-acinar or palisading formations (Fig. a) . Subsets of cells can exhibit hyperchromatic nuclei with nuclear atypia, and multinucleated tumor cells are sometimes observed. Mitotic figures can be seen in parathyroid adenoma and parathyroid carcinoma. The stromal fat content is reduced compared to normal gland histology, and a rim of normal-appearing (although suppressed) parathyroid tissue can usually be seen, particularly in smaller parathyroid adenomas and less commonly in larger adenomas (Fig. a). What appears to be a rim of normal-appearing parathyroid tissue cannot be used to differentiate parathyroid adenoma from multiglandular disease, as up to 10% of parathyroid “hyperplasia” (multiglandular disease) may have what appear to be rims of normal parathyroid. Moreover, as intraoperative frozen section analyses are not always effective in distinguishing single parathyroid adenomas from multiglandular involvement, intraoperative PTH assays are probably more reliable in this context . In order to diagnose a parathyroid adenoma, there must be no signs of malignant behavior, such as vascular or perineural invasion. Regarding their immunophenotype, most parathyroid adenomas are positive for chromogranin A, PTH, and GATA3. Numerous histological variants have been described, of which oncocytic parathyroid adenomas are the most common, followed by unusual variants such as parathyroid lipoadenomas and water-clear cell adenomas . From a clinical perspective, oncocytic features are associated with larger tumor size. Oncocytic parathyroid tumors are functional but may not have as elevated serum calcium or parathyroid hormone levels as comparable conventional parathyroid adenomas.
Lipoadenomas might correlate with the presence of arterial hypertension—suggesting that these subclassifications might disclose clinical considerations of importance . Subsets of parathyroid tumors exhibit atypical features usually observed in carcinomas, yet lack unequivocal criteria of malignancy such as invasive growth into periparathyroidal tissues (thyroid gland or soft tissues), lymphovascular or perineural invasion, and regional and/or distant metastases. These features may include a trabecular growth pattern, fibrotic bands, tumor cells within the capsule, increased mitotic/proliferative activity, and nuclear atypia with macronucleoli (Fig. b, c) . Parathyroid lesions with these features are termed “atypical parathyroid tumors” and could be considered tumors with unknown malignant potential—even though the majority of these cases will behave benignly in a clinical context, with few recurrences following parathyroidectomy . Because small subsets of atypical tumors will recur as metastatic parathyroid carcinomas, numerous studies have tried to highlight the malignant potential of this tumor subgroup using various combinations of histology and molecular markers, which are discussed in detail below. Parathyroid carcinoma is the malignant form of PHPT. Quite rare, accounting for < 1% of PHPT, parathyroid carcinoma is an entity which, from an endocrine pathology perspective, is always feared, often discussed, but rarely diagnosed (personal observations) . Unlike parathyroid adenomas, which are more common in women than in men, parathyroid carcinomas affect women and men equally. Clues to a parathyroid carcinoma diagnosis can be obtained preoperatively, as these tumors are often larger than adenomas, may be clinically palpable, and are usually associated with high serum calcium levels (often > 13.5 mg/dl), with patients exhibiting various symptoms related to hypercalcemia, such as nephrolithiasis and bone disease . Perioperative findings of the tumor being adherent to adjacent structures can also be information of value when assessing these rare lesions, and for this reason, surgeons usually opt for an en bloc resection of the afflicted ipsilateral thyroid lobe when suspecting parathyroid carcinoma (Fig. d) . However, one must be cautious in evaluating a parathyroid that may be adherent to adjacent structures such as the thyroid gland, as parathyroid adenomas may also be firmly adherent to this organ. The difference, of course, is that parathyroid adenomas are not invasive of the thyroid, whereas parathyroid carcinomas may be invasive of the adjacent thyroid. Parathyroid carcinomas exhibit local invasive behavior and may spread locally to adjacent structures and later on to distant sites (Fig. e, f) . Chemotherapy and/or radiotherapy are largely ineffective, and the 10-year survival rate is around 50–70% , with death often due to hypercalcemia. Morbidity due to complications following repetitive neck surgery (hypoparathyroidism, recurrent nerve palsy) is also high . The differentiation of parathyroid carcinoma from other carcinomas in the neck is usually straightforward. Parathyroid carcinomas are neuroendocrine tumors and generally positive for chromogranin A. Other neuroendocrine tumors in the neck, such as medullary thyroid carcinoma, will also show staining for chromogranin A, but medullary thyroid carcinomas are usually positive for thyroid transcription factor 1 while negative for parathyroid hormone.
Calcitonin is usually positive in medullary thyroid carcinoma and usually negative in parathyroid carcinoma; however, unusual staining patterns can be seen, such as an exceptional case of parathyroid carcinoma positive for calcitonin and calcitonin gene-related peptide . Moreover, practicing pathologists should also be aware that medullary thyroid carcinomas can express PAX8 in a clone-dependent manner, in which absence of immunoreactivity is noted when monoclonal PAX8 antibodies are applied, as opposed to positive staining using polyclonal antibodies . As parathyroid tissue might stain positive for polyclonal PAX8 antibodies, we recommend a distinguishing panel of PTH and TTF1 when assessing these differentials . Although a plethora of epigenetic and genetic aberrancies have been identified in parathyroid tumors, few markers have paved their way into clinical routine practice. In this review, we focus on two main clinical predicaments concerning parathyroid tumors, namely, (1) the identification of tumors associated with various underlying syndromes of importance for genetic counseling, and (2) the distinction between benign and malignant tumors to triage each patient to correct follow-up and treatment algorithms. In the following sections, we highlight molecular markers of importance that facilitate these diagnostic quandaries, and discuss their potential as discriminative clinical markers of value to the surgical pathologist. Familial PHPT: Underlying Causes Much of what we know today about mutational driver events in parathyroid tumorigenesis stems from earlier work in kindreds with familial PHPT, long before the appearance of next-generation sequencing techniques. By genetic linkage analyses of family members with autosomal dominant PHPT, candidate gene loci segregating with disease-afflicted individuals were identified, followed by identification of mutational events by cumbersome Sanger sequencing of a large number of candidate genes within these regions. By this methodology, germline mutations of the MEN1 ( multiple endocrine neoplasia type 1 ), RET ( rearranged during transfection ), and CDC73 ( cell division cycle protein 73 ) (originally entitled hyperparathyroidism type 2 , HRPT2 ) genes were identified as events underlying the development of multiple endocrine neoplasia type 1 (MEN1), multiple endocrine neoplasia type 2A (MEN2A), and hyperparathyroidism-jaw tumor (HPT-JT) syndromes, respectively (Table ) . These three conditions exhibit a high prevalence of PHPT, occurring in approximately 90% of MEN1 kindred, 20–30% of MEN2A kindred, and 80% of HPT-JT kindred . In MEN1 patients, PHPT is the most common disease manifestation, followed by pituitary tumors (30–40% of patients), duodenal and pancreatic neuroendocrine tumors (most often gastrinoma, insulinoma and/or glucagonoma, 40% of patients), and adrenocortical lesions (20–45% of patients) . In MEN2A, most patients (> 90%) develop medullary thyroid carcinoma and pheochromocytoma (50%), whereas PHPT is less common, occurring in 15–30% of patients . For HPT-JT kindred, 70–80% develop PHPT, whereas approximately 10% of patients also develop ossifying fibromas of the mandible or maxilla . PHPT in MEN1 usually presents as multiglandular disease that may develop in synchronous or metachronous settings and may be asymmetric. In the MEN2A syndrome, the PHPT may present as multiglandular or single-gland disease. There are exceedingly few reports of unequivocal parathyroid carcinomas arising in MEN1 or MEN2A kindred .
The hyperparathyroidism in HPT-JT syndrome is usually associated with a parathyroid adenoma; however, 15–40% of patients carrying CDC73 mutations or gene deletions will develop parathyroid carcinoma . Apart from these three syndromes, hyperparathyroidism is also seen as the sole feature in familial isolated hyperparathyroidism (FIHP). These families often, but not always, present with germline mutations in either MEN1 , CDC73 , or the calcium-sensing receptor ( CaSR ) gene, of which the latter is also associated with the development of familial hypocalciuric hypercalcemia type 1 (FHH1) . The reason why some patients with germline mutations develop full-blown MEN1, HPT-JT, and FHH1 syndromes, while others develop FIHP, is not clearly understood—as there is no apparent genotype–phenotype correlation in terms of mutation types and exon localization. Recently, germline activating glial cells missing transcription factor 2 ( GCM2 ) gene mutations were also coupled to FIHP, adding yet another candidate to the growing palette of genes underlying the development of familial hyperparathyroidism . Moreover, germline inactivating mutations in cyclin-dependent kinase inhibitor (CDKI) genes have been found in rare families with MEN1-like syndromes (with mutations in either CDKN1A , CDKN2B , or CDKN2C ) or the MEN4 syndrome, a phenotypic MEN1 syndrome characterized by mutations in CDKN1B . Somatic Genetics in Parathyroid Adenomas Given the identification of MEN1 , RET , and CDC73 gene aberrancies as the main events responsible for the development of familial PHPT, numerous studies followed in which the involvement of these genes was assessed in sporadic parathyroid tumors. Approximately 25–40% of all sporadic parathyroid adenomas harbor LOH of the MEN1 gene locus at 11q13, and half of these cases also exhibit an inactivating MEN1 mutation of the remaining allele (Table , Fig. ) . Interestingly, an association between somatic MEN1 mutations and mild PHPT symptoms has been observed, possibly arguing for early-stage events in parathyroid tumorigenesis . Moreover, while somatic CDC73 gene mutations have been reported in small subsets of sporadic parathyroid adenomas, no reports of somatic RET gene mutations in parathyroid adenoma have been noted . Furthermore, mutational and/or epigenetic silencing of other genes predisposing to FIHP and MEN1-like syndromes has also been detected in small subsets of apparently sporadic parathyroid adenomas, including CDKN1A , CDKN1B , CDKN2A , CDKN2B , CDKN2C , and GCM2 . Of particular interest, CDKN1B encodes p27, and down-regulation of p27 has been described in parathyroid adenomas at both the RNA and protein level . Moreover, CDKN1B mutations have also been functionally linked to the development of parathyroid tumors, thereby solidifying the role of aberrant cell cycle regulation in the development of parathyroid adenomas . Continuing on the cell cycle topic, a recurrent chromosomal inversion involving the peri-centromeric portion of chromosome 11 has been observed in exceedingly small subsets of sporadic parathyroid adenomas. This rearrangement juxtaposes the PTH 5′ regulatory region to the CCND1 oncogene coding region, causing constitutive expression of the corresponding CCND1 protein product, cyclin D1. Although this inversion is rare in sporadic parathyroid adenoma, overexpression of cyclin D1 is a common event, and therefore, other mechanisms apart from rare chromosomal inversions involving CCND1 or mutations in cyclin D1-regulating CDKIs are expected to play a role.
Instead, promoter hypermethylation and down-regulation of various CDKIs could probably explain the commonly observed cyclin D1 upregulation in parathyroid adenomas (Fig. ) . Apart from the discovery of somatic alterations in the established parathyroid adenoma susceptibility genes discussed above, the advent of next-generation sequencing techniques has led to the discovery of additional gene mutations of possible importance to parathyroid adenoma development. By whole-exome sequencing of parathyroid adenomas, an activating missense mutation in the methyltransferase gene enhancer of zeste homolog 2 ( EZH2 ) was detected in one out of eight adenomas interrogated, and additional targeted sequencing of 185 adenomas revealed one additional case with the same mutation . The EZH2 gene is an epigenetic regulator of chromatin accessibility with an association with tumorigenesis in general, lending additional support to this gene as a possible contributor to parathyroid adenoma development—which was also verified using functional experiments in a parathyroid cell line . Since then, additional whole-exome sequencing studies have corroborated low frequencies of EZH2 mutations in sporadic adenomas . Additional mutational events occurring at low frequencies in parathyroid adenomas include activating CTNNB1 mutations, some of which have been reported as homozygous in small series , although this has not been reproduced by others . CTNNB1 encodes beta-catenin, a central onco-protein regulating the Wingless-type (Wnt) pathway, and data suggest that nuclear accumulation of beta-catenin could be an important player in the development of parathyroid adenomas—either through activating mutations or through aberrantly expressed Wnt co-receptors (Fig. ) . On the epigenetic level, apart from the hypermethylation of CDKIs mentioned above, aberrant methylation has also been reported for numerous tumor-related genes, including WT1 , SFRP1 , SFRP2 , SFRP4 , RIZ1 , APC , and RASSF1A . Most notably, RASSF1A hypermethylation was strongly associated with down-regulation on the mRNA level in virtually all parathyroid adenomas, thereby constituting one of the most common known genetic aberrancies in this disease (Table ) . Moreover, on a global level, adenomas seem to exhibit levels of methylation similar to those of non-tumorous parathyroid tissues, suggesting that epigenetic dysregulations are driven by gene-specific events and not by a general hypo- or hypermethylation pattern . Regarding specific parathyroid adenoma subtypes, there is also an established correlation between oncocytic parathyroid adenomas and somatic mutations in genes encoded by mitochondrial DNA (mtDNA), especially the mitochondrial respiratory chain complex genes NADH dehydrogenase 1 , 4 , and 5 ( ND1 , ND4 , and ND5 ) (Table , Fig. ). As oncocytic tumors in general exhibit prominent numbers of mitochondria, the association is intriguing . However, as no recurrent mutations were observed, these findings mandate functional verification before a true driver status can be assigned to any of these alterations. Somatic Genetics in Parathyroid Carcinomas Somatic CDC73 gene mutations are the most frequent somatic alteration in parathyroid carcinoma (Table , Fig. ) .
These mutations are in general disruptive, due to premature truncations or frameshift alterations; alternatively, they are missense mutations in conserved regions encoding the nuclear localization signals (NLSs) or the human polymerase–associated factor 1 (hPAF1) complex–interacting portions of the corresponding protein product, termed parafibromin. Apart from mutations, LOH encompassing the CDC73 gene locus and aberrant CDC73 promoter methylation have also been reported as somatic events in parathyroid carcinoma . Parafibromin is a member of the hPAF regulatory complex, a key transcriptional unit that interacts with RNA polymerase II and facilitates transcriptional activity through histone-modifying and chromatin remodeling processes . Parafibromin is associated with tumor-suppressive properties, as (a) the majority of CDC73 germline mutations in HPT-JT and FIHP kindred as well as the bulk of somatic mutations in sporadic parathyroid carcinomas are disruptive , (b) the majority of tumors with CDC73 mutations exhibit loss of parafibromin expression , and (c) functional experiments with CDC73 plasmids support an anti-proliferative effect of the wild-type protein . Indeed, parafibromin has been found to regulate cyclin D1 levels, exhibit pro-apoptotic effects, regulate the Wnt pathway through interactions with beta-catenin, and regulate the c-Myc oncogene through direct binding to the promoter region of this gene . Intriguingly, parafibromin also seems to exhibit oncogenic features in the presence of certain molecular partners, suggesting a yin-yang modus operandi of this protein . In contrast to parathyroid adenomas, somatic MEN1 mutations are very infrequent in parathyroid carcinomas, but have nonetheless been reported . Given the exceedingly low rate of malignant PHPT in MEN1 kindred, other genetic events apart from this aberrancy are expected to drive the invasive behavior in parathyroid carcinoma. TP53 gene mutations are among the most common genetic aberrations in malignant epithelial tumors, although these genetic events seem to be unusual in parathyroid carcinomas . Even so, LOH of one TP53 allele seems to be more common . Similarly, loss of the retinoblastoma 1 ( RB1 ) gene, encoding pRB, is observed in the majority of parathyroid carcinomas; however, inactivating mutations have not been reported . The P53 and pRB proteins are regulators of cell cycle progression and two bona fide tumor suppressors usually required to be silenced on both alleles in order to promote neoplasia, and given that parathyroid carcinomas recurrently exhibit absent pRB expression, additional inactivating mechanisms apart from mutations are most likely operational . Next-generation sequencing studies on parathyroid carcinomas are rare, which is not surprising given the low prevalence of this disease in general. In a recently published whole-genome sequencing study of 23 parathyroid carcinomas, the authors concluded that CDC73 gene mutations were the most common sequence aberration, occurring in almost 40% of cases. An increased number of copy number variants was seen in parathyroid carcinomas with CDC73 mutations, and these cases also carried an increased tumor mutational burden and a poorer patient outcome. In unrelated, exome-based studies, recurrent mutations in AarF domain containing kinase 1 ( ADCK1 ) and prune homolog 2 with BCH domain ( PRUNE2 ) have been reported in parathyroid carcinomas; however, their functional roles have not been elucidated .
Moreover, a general overrepresentation of mutations in genes associated with DNA repair and cell cycle regulation seems evident , and rare mutations in established cancer-associated genes have also been identified, for example succinate dehydrogenase complex flavoprotein subunit A ( SDHA ) and DICER1 . From a therapeutic perspective, the majority of parathyroid carcinomas might carry alterations suitable for molecular targeted therapies, thus highlighting the potential role for next-generation sequencing as a tool to identify cases with potential for targeted therapeutic interventions . Promoter mutations of the telomerase reverse transcriptase ( TERT ) gene are heavily implicated in human cancers, as they convey increased TERT expression, which in turn promotes immortalization. However, these mutations seem fairly rare in parathyroid carcinomas, although these tumors in general express TERT protein . Because of this, alternative mechanisms leading to increased TERT expression in parathyroid carcinomas are suspected. Aberrant epigenetic mechanisms are also at play in parathyroid carcinomas. Via global methylome analyses, parathyroid carcinomas and adenomas seem to exhibit hypermethylation of CDKN2B , CDKN2A , WT1 , SFRP1 , SFRP2 , and SFRP4 . Moreover, beside the recurrent CDC73 promoter hypermethylation discussed above, altered methylation levels of the adenomatous polyposis coli (APC) promoter 1A region are recurrently seen in parathyroid tumors, although APC mRNA expression seems to be retained through an unmethylated 1B promoter region . In parathyroid carcinomas specifically, loss of APC protein expression is a frequent event, and this could in part be due to aberrant methylation patterns rather than gene mutations . The APC protein is a tumor suppressor that regulates the Wnt pathway, and loss of APC expression is therefore thought to stimulate proliferation in parathyroid cells. In more recent years, the regulation of epigenetic de-methylation has been highlighted in cancer, especially following the discovery of the TET1/TET2 enzymatic activity catalyzing oxidation of 5-methylcytosine (5mC) to generate 5-hydroxymethylcytosine (5hmC). In parathyroid carcinomas, 5hmC levels and TET1 expression have been found to be extensively reduced, suggesting a general reduction in de-methylation events across the genome . As this phenomenon is tightly linked to the presence of TERT promoter mutations in unrelated cancer types, the general absence of these mutations in parathyroid carcinoma would suggest that other molecular mechanisms influence the observed lack of global de-methylation—a subject worthy of follow-up studies .
MEN1-Related PHPT: Could the Pathologist be of any Help? The MEN1 syndrome is by far the most common among the hereditary conditions detailed in this review, with one case per 40,000 as opposed to MEN2A (1 case per 2 million) and the HPT-JT syndrome (unknown prevalence, but expected to be exceedingly low) . Therefore, surgical pathologists will most likely diagnose MEN1-related parathyroid adenomas to a much larger extent than MEN2A-, HPT-JT-, FIHP-, or MEN1-like-related cases. Although the MEN1 syndrome is often diagnosed long before the patient is subjected to parathyroidectomy, the incomplete penetrance at younger ages, as well as the occurrence of de novo MEN1 mutations in subsets of patients with healthy parents, allows subsets of patients to be misclassified as sporadic PHPT patients. As up to 10% of individuals with primary parathyroid “hyperplasia” (multiglandular disease, adenomatosis, multiple adenomas) have MEN1, genetic screening for familial disease has been considered for all patients with primary parathyroid “hyperplasia.” From a diagnostic pathology standpoint, is there a way for the attentive pathologist to aid in the detection of syndromic PHPT in cases where the syndromic association was not evident preoperatively? MEN1 was early on assigned a tumor-suppressor gene status, as the bulk of mutational events reported in MEN1 kindred represent inactivating events leading to premature truncations of the menin protein .
Moreover, most parathyroid adenomas arising in MEN1 patients harbor loss of heterozygosity (LOH) of the MEN1 wild-type allele on the somatic level, thereby arguing in favor of Knudson's “two-hit” theory, in which bi-allelic inactivation of a tumor suppressor gene is needed in order for a tumor to develop . Therefore, the question arises whether clinical screening using expressional analyses targeting menin could be a cheap and efficient way for the surgical pathologist to triage menin-negative PHPT cases for genetic screening—as loss of menin expression would be indicative of MEN1 gene aberrancies. Menin is a predominantly nuclear protein with a highly conserved sequence among species, and the protein is universally expressed in most human tissues . The protein harbors nuclear localization signals and leucine zipper motifs, both of which are features needed to regulate gene expression through direct interaction with DNA elements. For example, menin has been shown to interact with JunD , a proto-oncogene encoding a transcription factor regulating apoptosis and TP53 gene activation (Fig. ) . Moreover, menin has been proposed as an important player in the regulation of various signaling networks associated with tumor development, such as the TGF-beta, RAS, and Wingless-type (Wnt) pathways—as well as influencing the expression of the telomerase reverse transcriptase ( TERT ) gene . From a histological perspective, MEN1-related parathyroid disease is often multiglandular, although not always synchronous in presentation . Unlike sporadic parathyroid adenomas, MEN1-related tumors are often devoid of a rim of normal parathyroid tissue, and they may be composed of chief cells arranged in compact, sheet-like formations . The tumors may exhibit fibrosis and have a similar appearance to parathyroid hyperplastic lesions arising in the setting of chronic renal failure, and almost always lack invasive features. Overall, findings of multiglandular disease and absence of a normal parathyroid rim in a PHPT patient should raise the suspicion of the MEN1 syndrome; however, a “rim” can also be seen in up to 10% of multiglandular disease. In all, there is no histological feature that sets MEN1-related adenomas apart from sporadic ones. Expression studies regarding menin and associations with underlying MEN1 mutational status in parathyroid tumors have been promising, as menin immunohistochemistry seems to exhibit high sensitivity for detecting underlying MEN1 mutations and/or gene deletions . Even so, the interpretation of menin immunohistochemistry is not straightforward, and the lack of comprehensive studies on the subject makes menin immunohistochemistry inappropriate for clinical routine screening purposes. Moreover, as the MEN1 gene is frequently deleted and mutated also on the somatic level in sporadic parathyroid adenomas, additional factors need to be brought into consideration before suspecting MEN1 syndrome–related PHPT from expressional analyses in the pathology laboratory. For example, the penetrance of PHPT in MEN1 patients reaches 95% at 50 years of age, and therefore, the diagnosis of MEN1 in older patients with multiglandular disease should not be completely overlooked—although the bar for considering an MEN1 diagnosis should be even lower in adolescent patients . In all, concerted teamwork between endocrine surgeons and pathologists, covering the patient's medical history, disease presentation, radiology, and histopathology, is probably needed to properly identify all MEN1-related PHPT in the clinical setting.
HPT-JT-Associated PHPT: Clues from the Histopathological Workup?
As HPT-JT patients carry germline CDC73 gene mutations or deletions, it comes as no surprise that the prevalence of parathyroid carcinoma in this patient category is much higher (15–30%) than in unselected PHPT patient cohorts (< 1%). As parathyroid carcinoma is so uncommon, it has been suggested that individuals diagnosed with parathyroid carcinoma should be promptly evaluated for HPT-JT. Additionally, if the pathologist suspects that the patient indeed belongs to an HPT-JT kindred, heightened awareness of the increased parathyroid carcinoma risk is warranted when diagnosing the tumor. Easily recognizable features, such as a cystic tumor, may in themselves raise the possibility of HPT-JT syndrome. In the most comprehensive study yet regarding the morphological features of CDC73-mutated parathyroid tumors, the authors conclude that these lesions are characterized by sheet-like, compact growth rather than the usual acinar patterns visualized in the bulk of parathyroid tumors, as well as a typical eosinophilic cytoplasm distinct from the granular oxyphilic cell type often observed in areas of parathyroid adenomas. Many tumors also exhibited enlarged nuclei and perinuclear cytoplasmic clearing. In all, the presence of these histological features might indicate an underlying CDC73 gene mutation, and such cases should preferably be investigated more thoroughly with parafibromin immunohistochemistry and/or CDC73 gene sequencing.
Pinpointing Parathyroid Carcinoma in the Diagnostic Setting
Parathyroid carcinoma is the malignant form of PHPT and constitutes only 0.5–3% of PHPT cases. As many parathyroid tumors present with various degrees of atypia without fulfilling the current WHO criteria for parathyroid carcinoma, there is a great need to identify malignant potential in these lesions by means other than histology alone. Moreover, any clinical marker of value in this context would need to exhibit very high specificity in order to avoid falsely diagnosing benign tumors as carcinoma. The bulk of studies on this subject have attempted to address this dilemma via immunohistochemistry, a gold-standard, inexpensive methodology used in most pathology laboratories and therefore suitable for rapid implementation in routine clinical practice. One of the earliest immunohistochemical markers proposed to identify parathyroid carcinoma was Ki-67, an extensively used proliferation marker that identifies cells in active (non-G0) phases of the cell cycle. Although there seems to be an overrepresentation of carcinomas among highly proliferative parathyroid tumors, the overlap between parathyroid adenomas and carcinomas is considerable. In a similar fashion, investigation of the cell cycle protein cyclin D1 initially proved of value, as it was found upregulated in most parathyroid carcinomas compared to adenomas. However, subsequent studies have identified overlap in expression between benign and malignant groups. Detection of widespread LOH of the RB1 locus in parathyroid carcinoma, with loss of its corresponding protein pRB, prompted investigation of pRB expression in parathyroid tumors, but with divergent results concerning both staining outcomes and LOH events.
In short, while LOH of the RB1 locus seems to confer high sensitivity for the detection of parathyroid carcinoma, a significant proportion of adenomas harbor the same genetic aberrancy, and the reduced specificity renders this marker of less value when screening for rare carcinomas in PHPT. Moreover, positive pRb staining is variable in terms of the proportion of tumor cells stained, making interpretation somewhat challenging. Following the advent of tissue microarrays, a study concluded that combined positivity of p27, B-cell lymphoma 2 (bcl-2), and mouse double minute 2 homolog (mdm2), together with a low Ki-67 proliferation index, indicated benign clinical behavior of any given parathyroid tumor, as this profile was not evident in any parathyroid carcinoma but was found in the vast majority of adenomas. Both bcl-2 and mdm2 are members of the P53 pathway, further solidifying the relationship between aberrant P53 signaling and parathyroid carcinoma. However, follow-up studies have demonstrated overlaps between parathyroid adenomas and carcinomas, not least the finding of down-regulation of p27 in large subsets of parathyroid adenomas. P53 immunoreactivity shows variable staining in benign and malignant parathyroid tumors and does not appear to be useful as a single marker in separating these tumors. Other diagnostic nomograms have been evaluated to differentiate malignant from benign parathyroid neoplasms, including, among others, protein gene product 9.5 (PGP9.5), Ki-67, galectin-3, E-cadherin, and pRb. Newer combinations of immunohistochemical and in situ hybridization approaches have also been evaluated, such as long noncoding RNA expression.
The most well-studied and reproduced marker for distinguishing parathyroid carcinoma from adenoma is parafibromin, the 531-amino-acid protein product of the CDC73 main transcript. Given the increased parathyroid carcinoma risk in HPT-JT kindred, as well as the association between somatic CDC73 mutations and sporadic parathyroid carcinoma, immunohistochemical analysis of parafibromin early on proved of value in detecting the majority of CDC73-mutated cases, including most parathyroid carcinomas. Moreover, the vast majority of parathyroid adenomas analyzed have been shown to retain parafibromin immunoreactivity, apart from subsets of sporadic cases with a predominantly cystic growth pattern, HPT-JT-associated parathyroid adenomas, and atypical parathyroid tumors. Thus, apart from these exceptions, parafibromin is useful for screening purposes in the PHPT population and can be a highly informative marker of malignant potential in many cases. The value of this marker in the clinical setting has been assessed both by comprehensive meta-analyses and by recent reports from high-volume centers. As parafibromin is a predominantly nuclear protein, loss of nuclear immunoreactivity is considered pathognomonic for an underlying inactivating CDC73 mutation. As most CDC73 mutations, be they germline or somatic, are either deleterious, located in early exons (causing premature termination), or missense alterations in important regulatory regions (such as the nuclear localization regions), mutated parafibromin is usually unable to reach its nuclear destination. Most studies therefore seem to agree that almost all parathyroid tumors with a wild-type CDC73 sequence display a diffuse positive parafibromin stain (Fig. a), while reduced or absent nuclear parafibromin expression is an aberrant staining pattern that might signal the presence of a mutation (Fig. b, d, e). While diffuse loss of parafibromin expression is usually seen, partial (rather than complete) loss of nuclear parafibromin expression can also be noted in CDC73-mutated cases, occurring in subsets of tumor nuclei in a chessboard-type pattern (Fig. b). Additionally, rare cases with parafibromin-negative nucleolar compartments have been reported, which could be of importance given the nucleolar roles of wild-type parafibromin. Specifically, subsets of parathyroid tumors with CDC73 mutations thought to disrupt the nucleolar localization signals might exhibit retained nuclear parafibromin immunoreactivity while displaying evidently negative nucleolar staining, and practicing pathologists should therefore be aware of this staining pattern as well (Fig. c). As most studies use the 2H1 monoclonal parafibromin antibody targeting the N-terminus, the usage of different antibodies should not explain these different staining patterns. Indeed, the "partial loss" pattern described above has been reported in cases assessed with four different parafibromin antibodies, suggesting that these observations are not biased by the selection of a specific target epitope. The interpretation of parafibromin immunohistochemistry is therefore not completely straightforward; additionally, laboratory processing (including different antigen retrieval techniques and primary antibody incubation times) might affect the overall staining outcome. Moreover, parafibromin expression is often stronger in the tumor periphery than in central aspects of the lesion, and internal controls (such as endothelial cells) should therefore be assessed and noted as positive before a negative parafibromin stain is called (Fig. d, e). Most parathyroid tumors will be assessed for parafibromin immunoreactivity if they are atypical, i.e., lacking undisputable evidence of malignancy but displaying worrisome features (clinical or histological) often seen in parathyroid carcinomas. The preponderance of these tumors stain positive, although large subsets can be parafibromin deficient. How should these cases be interpreted, and what should be reported to the surgeon? Consulting the literature, it seems evident that the vast majority of parafibromin-negative atypical tumors will behave in a benign fashion, even after long-term follow-up. Even so, the recurrence rate is not entirely negligible, and the finding of an aberrant parafibromin stain should also highlight the need to exclude an underlying germline CDC73 gene mutation, especially if there is a positive family history or if the proband exhibits multiglandular disease (Fig. ). Given the imperfect sensitivity and specificity of parafibromin, researchers have been looking for additional markers to complement parafibromin immunohistochemistry. Shortly after parafibromin was coupled to the Wnt pathway, investigations of Wnt regulators detected widespread loss of APC immunoreactivity in most parathyroid carcinomas, which has later been coupled to promoter hypermethylation in parathyroid carcinomas (Fig. f). As parathyroid adenomas are generally APC positive, the marker has gained ground as a clinical adjunct to parafibromin. However, as panels with two negative markers have their limitations in terms of interpretation associated with tissue fixation, researchers have also looked for markers upregulated in parathyroid carcinomas.
In this respect, galectin-3 and PGP9.5 have both been shown to stain positive in the majority of parathyroid carcinomas while being expressed in only a few adenomas, and different panel combinations using galectin-3 and/or PGP9.5 with parafibromin have proven more reliable than using parafibromin alone. Overall, the parathyroid carcinoma diagnosis is reserved for cases exhibiting unequivocal histological signs of invasive behavior, and no single molecular marker has yet been able to safely predict the malignant potential in atypical cases, which is most likely influenced by the general abundance of atypia in benign tumors and the rarity of truly metastatic cases. Moreover, there is also a potential sampling bias: malignant tumors are not primarily resected if advanced disease is present at the time of diagnosis, and tumors with a molecular potential for spread might be resected well before their dissemination.
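The requirement for very high marker specificity noted above can be made concrete with a rough, illustrative calculation. The sensitivity and specificity figures below are assumptions chosen purely for demonstration; only the approximate prevalence (about 1%, within the 0.5–3% range quoted for unselected PHPT cohorts) is taken from the text, and no cited study is being reproduced.

```python
# Illustrative calculation only: positive predictive value (PPV) of a hypothetical
# carcinoma marker at the low carcinoma prevalence seen in unselected PHPT cohorts.
# The sensitivity and specificity values are assumptions for the sake of the example.

def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Bayes' rule: P(carcinoma | marker positive)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

prevalence = 0.01  # roughly 1% of PHPT cases, within the 0.5-3% range quoted above
for spec in (0.90, 0.99, 0.999):
    print(f"sensitivity 0.95, specificity {spec}: PPV = {ppv(0.95, spec, prevalence):.2f}")
```

Under these assumed numbers, a marker with 90% specificity would yield a PPV below 10%, whereas 99.9% specificity is needed before a positive result makes carcinoma more likely than not, which illustrates why modest specificity is so problematic when screening for a rare malignancy.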
The molecular background of parathyroid neoplasia is well characterized from a driver gene perspective, not least due to the identification of mutations in genes responsible for the development of hereditary PHPT. These genes were subsequently found to be aberrant also in sporadic cases and, some 20 years after their original identification, still rank as the most recurrently mutated genes in parathyroid tumors. Modern molecular analyses have since expanded our knowledge of common dysregulations in parathyroid tumors and highlighted both genetic and epigenetic changes involved in the development of PHPT. Next-generation sequencing analyses of both parathyroid adenoma and carcinoma cohorts have helped identify additional genes of potential impact for both groups, although it is clear that the "low-hanging fruit" in terms of highly recurrent events has already been picked. From a clinical perspective, CDC73 mutations and loss of parafibromin expression have been firmly established as events coupled to parathyroid malignancy. Additionally, comprehensive sequencing of parathyroid carcinoma has identified mutations amenable to molecularly targeted therapy, as well as overall mutational burden as a prognostic tool of potential value, thereby highlighting the usefulness of modern genetic analyses where histology and immunohistochemistry might be insufficient. Given the rapid evolution of molecular techniques used for complementary analyses in the routine clinical setting, the combination of histology, immunohistochemistry and next-generation sequencing might in the near future constitute the standard work-up for parathyroid tumors displaying significant atypia. To summarize the field for the surgical pathologist: parathyroid adenoma is usually a straightforward diagnosis, but there are histological patterns coupled to underlying syndromes that are worth remembering, as they could help identify hereditary disease of importance for clinical follow-up. When present, histologically atypical features in a parathyroid lesion could signify malignant potential, but invasive growth is still the only accepted criterion for diagnosing parathyroid carcinoma. A plethora of immunohistochemical markers could potentially aid in the identification of parathyroid carcinoma, but many display subpar specificity and should be interpreted with caution. Loss of nuclear parafibromin expression correlates with the presence of an underlying CDC73 gene mutation, which in turn is coupled to an increased risk of developing parathyroid carcinoma. The mutation might also be present in the germline and thus predispose to familial disease. Even though much of the genetic landscape of parathyroid tumors has been studied, two key questions remain to be deciphered, namely (1) the identification of additional driver genes responsible for the development of parathyroid adenomas, and (2) the quest for additional molecular events, beyond parafibromin, that drive the malignant potential of parathyroid tumors. As approximately half of parathyroid adenomas lack mutations in the most common driver genes, additional genetic events are expected to be found. Moreover, as the majority of HPT-JT kindred develop adenomas, other genetic aberrations apart from the inactivation of CDC73 are likely required to propel the invasive behavior of parathyroid carcinomas. Hopefully, these questions can be resolved in the future by comprehensive next-generation sequencing studies using multi-center tumor cohorts.
Central venous access device terminologies, complications, and reason for removal in oncology: a scoping review
Central venous access devices (CVADs) are critical for the effective and efficient management of patients with malignancies because they facilitate urgent, acute or prolonged access to the bloodstream for the administration of prescribed and supportive therapies and for repeated blood sampling. However, they also present a considerable risk of complications, and many are removed prematurely, before the end of prescribed therapy. Premature removal rates of up to 50% are reported in this patient cohort. Complications can be related to the coagulopathic and inflammatory processes of the underlying disease, adverse effects of prescribed therapies including prolonged and profound immunosuppression, and adverse effects of supportive therapies such as blood products. CVAD complications and premature removal may lead to delays in treatment, reduced treatment efficacy and subsequent survival due to interruptions in schedules, and increased morbidity from CVAD complications (e.g., infection), as well as mortality and healthcare expenditure. A lack of standardised nomenclature in healthcare has been shown to negatively impact patient safety, patient experience and health system efficiency. The lack of a common language impairs communication and interoperability between individuals and organisations. The potential for complex systems such as electronic health records (EHR) to accurately capture the clinical management of patients' care and health outcomes, and to inform and support research, relies on agreed nomenclature. This enables data sharing and robust data analysis, and meets the requirements of a learning health system. An example of a common global language used in healthcare is the Systematised Nomenclature of Medicine Clinical Terms (SNOMED CT). SNOMED CT is a comprehensive and precise medical terminology system that is coded and linked, facilitating homogeneous data entry, encoding of existing data, mapping of free text, analysis of clinical data, and interoperability between systems and organisations. To date, there is no consensus on CVAD terminology and no standardised definitions for CVAD-associated complications and reasons for premature removal. Such standardisation is imperative to advance the quality and safety of clinical assessment and management, and to drive robust, impactful research for patients undergoing cancer treatment. A scoping review is well suited to mapping and synthesising the available evidence on a given topic and identifying gaps and similarities in the published literature. The aim of this review was to understand the terminologies used to describe CVADs, associated complications and reasons for premature removal in people undergoing cancer treatment. It also sought to identify the definitional sources for complications and premature removal reasons. The objective was to map the language and descriptions used and to explore opportunities for standardisation.
Protocol
An a priori protocol for this scoping review was developed, aligning with the five stages of Arksey and O'Malley's scoping review framework: identification of the research question, identification of relevant studies, selection of studies, documentation of the data, and collating and summarising the results. Reporting was guided by the PRISMA Extension for Scoping Reviews (PRISMA-ScR).
Eligibility criteria
Adult patients (over 18 years of age) with cancer and any type of CVAD in situ, for example short-term centrally inserted central catheters (CICCs) or longer-term CVADs such as peripherally inserted central catheters (PICCs) or totally implantable venous access devices (TIVADs), were eligible for inclusion. In keeping with the broad aims of a scoping review, study designs included experimental, quasi-experimental, observational, systematic reviews, meta-analyses, quality improvement and surveys. Studies were limited to English and to publications after the 2016 edition of the Infusion Therapy Standards of Practice.
Information sources
The search was executed in the MEDLINE, PubMed, Cochrane, CINAHL Complete and Embase databases for a comprehensive approach to the topic.
Search
Population, concept, and context
The search strategy was developed in collaboration with a medical librarian to address the question: how are reasons for premature removal and CVAD-related complications defined in the published literature? A second question was established in response to the diversity of CVAD terminologies noted during development of the search strategy: what CVAD terminology is evident in the published literature? The broader approach of a scoping review aligns with a less restrictive search strategy based on the population, concept and context (PCC) format, compared with the precise research questions and inclusion and exclusion criteria required for a systematic review. The population for this review was broad, including all patients with haematological and solid tumours, as this cohort requires insertion of a CVAD for the administration of prescribed therapies for treatment of their disease. The concept included the various CVAD-related complications and reasons for premature removal. This was not restricted to the more commonly reported issues of infection and thrombosis, and included subject headings and key terms for clinically relevant problems such as occlusion, catheter migration, skin impairment, CVAD damage or rupture, and accidental dislodgement. Categorical descriptors (e.g., equipment failure, device removal, accidental injuries, and death) were also included. The context was patients with any type of CVAD in situ, as the different CVAD types serve different functions according to the goals of treatment and the type and length of prescribed therapies. CVADs included CICCs, PICCs, tunnelled cuffed centrally inserted central catheters, totally implantable venous access ports, and apheresis and haemodialysis catheters. Subject headings (e.g., central venous catheters or catheterization, central venous), descriptors (e.g., cuff, tunnelled, implanted) and trade names commonly used in the literature (e.g., Hickman™ or Infusaport™) were included. The search was established for the MEDLINE database (Table ), then adapted for PubMed (National Institutes of Health, NIH), EMBASE, CINAHL and the Cochrane Library. Subject headings and key words were combined using the Boolean operators AND/OR. The search limiters excluded publications dated before 2017, non-English-language publications, and studies in animals (including mice, mouse, rat(s), porcine, pig(s), sheep, murine, canine or rabbit) or in vitro. Excluded study designs were qualitative studies, study protocols and study reports with limited information, including conference abstracts, letters to the editor, educational papers, posters and case studies.
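The way subject headings and keywords were combined with Boolean operators can be illustrated with a small sketch. This is not the review's actual MEDLINE strategy (which is given in the cited Table); the term lists below are abbreviated, assumed examples used for demonstration only.

```python
# Illustrative sketch only: combining synonyms for each concept with OR and the
# concept blocks with AND, as described above. The term lists are abbreviated
# examples, not the review's actual search strategy.

device_terms = [
    "central venous catheters",
    "catheterization, central venous",
    "peripherally inserted central catheter",
    "totally implantable venous access device",
]
outcome_terms = [
    "catheter-related infections",
    "thrombosis",
    "occlusion",
    "device removal",
]

def or_block(terms):
    """Join the synonyms for one concept into a single OR block."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

# Concept blocks are then combined with AND to form the overall query.
query = " AND ".join(or_block(block) for block in (device_terms, outcome_terms))
print(query)
```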
Selection of sources of evidence
The search was executed in May 2022. Studies were collated and screened for duplicates in EndNote X9 by one reviewer (KC). Eligible studies were imported into Covidence, a web-based platform that streamlines the process of systematic and other literature reviews, during which a further 125 duplicate records were excluded (total of 5230 duplicate studies). Paired independent review of 100% of studies was undertaken at title and abstract level (KC, ET), as well as at full-text level (KC, ET); reasons for exclusion were noted, and the eligible studies moved forward to data extraction.
Data charting process
Data were extracted in Covidence using an a priori template established for this review by one author (KC). Data included key study details (i.e., year, title, authors, country where the study took place, study design, aims and objectives, and participant details including number and diagnoses) and device details (i.e., CVAD terminologies and abbreviations, terminologies used to describe CVAD complications and definitional sources, and terminologies used to describe CVAD removal reasons and definitional sources). Form fields were primarily free text to accurately capture the nuances in terminologies and definitional sources for premature removals and complications. The data charting process was undertaken independently by two authors for 20% of the studies (KC, ET). Any conflicts were discussed and resolved between the two reviewers. The level of agreement was high, so individual data extraction was completed for the remainder of the studies (KC).
Synthesis of results
Study data were stratified according to whether only one or multiple reasons for premature removal, or only one or multiple complications, were reported. Data from studies reporting complications that did not indicate whether the complication resulted in premature removal were reported separately. Definitional sources for complications and removal reasons were categorised as follows: national resources or guidelines (e.g., Centers for Disease Control and Prevention-National Healthcare Safety Network (CDC-NHSN), Infectious Diseases Society of America (IDSA) guidelines), other published studies, author-derived, or a combination of the first three categories. Descriptive statistics, primarily counts and percentages, tables and bar graphs were used to summarise charted data.
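As a minimal illustration of the tallying step just described, the sketch below computes counts and percentages for definitional-source categories. The handful of records shown are hypothetical placeholders, not data charted in the review.

```python
# Minimal sketch of summarising categorised definitional sources as counts and
# percentages. The records below are hypothetical placeholders only.
import pandas as pd

charted = pd.DataFrame({
    "study_id": [1, 2, 3, 4, 5],
    "definitional_source": [
        "national guideline", "author-derived", "other published study",
        "author-derived", "combination",
    ],
})

summary = (
    charted["definitional_source"]
    .value_counts()
    .to_frame("n")
    .assign(percent=lambda d: (100 * d["n"] / d["n"].sum()).round(1))
)
print(summary)
```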
Selection of sources of evidence
The search identified 31,877 records. After removal of duplicates (n = 5230) and irrelevant studies (n = 24,390) in EndNote X9, 2363 study titles and abstracts, and then 341 full texts, were screened for eligibility in Covidence. A total of 292 eligible studies were identified (Fig. : Central venous access device nomenclature, and taxonomy of complications and reasons for premature removal in patients with cancer: a scoping review).
Characteristics of sources of evidence
Characteristics of the included studies are detailed in the Supplementary Information (Additional files) due to the volume of studies summarised. Of the 292 studies in this review, 193 (66%) reported on premature removal related to complications. The remainder (n = 99, 34%) reported on complications only. Characteristics are summarised using counts and percentages.
Synthesis of results
Samples included patients with solid tumours only (n = 93), haematological malignancies and solid tumours (n = 92), and haematological malignancies only (n = 56). The remainder were described as cancer patients (n = 51). Studies were conducted in China (n = 61), the United States of America (USA) (n = 41), Italy (n = 25), Japan and Korea (both n = 15), and Australia, Germany and Turkey (all n = 13). Twelve were multinational. According to the Joanna Briggs Institute's levels of evidence, most studies were level 4 observational, descriptive studies (n = 174). The remainder were level 3 observational, analytical designs (n = 61), level 2 quasi-experimental designs (n = 31), level 1 experimental designs (n = 24) and level 5 expert opinion or bench research (n = 2).
CVAD terminologies
A total of 213 unique descriptors were extracted from the included studies: 14 unique terms for CVADs, 104 for totally implantable venous access ports, 25 for peripherally inserted central catheters, 41 for tunnelled cuffed centrally inserted central catheters, 27 for centrally inserted central catheters, and two for femorally inserted central catheters. This count did not include spelling variations, hyphenation or capitalisation, or the use of multiple different terms for the same device within a single study. The greatest variation was related to the descriptive nature of the names.
For example, for totally implantable venous access ports the descriptors included combinations of totally or fully, subcutaneously or tunnelled, implanted or implantable; chest, arm, subclavian, internal jugular, brachial, groin or centrally inserted; devices, catheters, ports or systems; central venous, vascular or venous access; single or dual chamber; chemotherapy or infusion; traditional or power-injectable; PICC, peripherally inserted or peripheral central ports; variations on port and portacath; and the various trade names.
Premature CVAD removal related to complications
Of the 193 studies that reported on premature removals, 128 (66%) identified multiple types of complications, including catheter occlusion, malposition, dislodgement, fracture, local bleeding, infection, or skin necrosis. The remainder (n = 65, 34%) identified one complication only, most commonly infection (n = 18) or thrombosis (n = 14). In studies reporting on multiple reasons for premature removal, definitional sources were not provided in 45 (35%) studies, provided for one reason only in 37 (29%) studies, and provided for all reasons in 46 (36%) studies. In studies that reported only one premature removal reason, the definition was provided in 47 (72%) studies and not provided in 18 (28%) studies. The definitional sources in these studies included national resources or guidelines in 21 (45%) studies, author-derived definitions in 19 (40%), definitions from other published studies in six (13%) and a combination of these sources in one (2%) study. The definitional sources in studies with multiple reasons for removal included a combination of national guidelines or resources, definitions from other published studies and author-derived definitions (Fig. ).
CVAD complications
Of the 99 studies that reported CVAD-related complications, 49 (49%) reported one complication and 50 (51%) reported on multiple complications. Complication definitions were provided in 36 (73%) of the studies reporting one complication and not provided in 13 (27%). For studies that reported on multiple complications, all complications were defined in 20 (40%) studies, only one (and not all) of the complications in 14 (28%) studies, and no complication definitions were provided in 16 (32%) studies. Definitional sources in studies that reported one type of complication were national resources or guidelines in 16 (44%) studies (e.g., CDC-NHSN or IDSA), author-derived in 14 (39%), and other published studies in six (17%) studies (Fig. ). Comparatively, of the studies that reported on multiple complications, fewer referenced national resources (n = 2, 10%); more were author-derived (n = 10, 50%) or used a combination of sources (n = 8, 40%) when all complications were defined. In the studies that defined only one of multiple complications, definitional sources were national resources in three studies, author-derived in eight (57%), other published studies in one (7%) and a combination of sources in two (14%).
Premature CVAD removal related to complications

Of the 193 studies that reported on premature removals, 128 (66%) identified multiple types of complications, including catheter occlusion, malposition, dislodgement, fracture, local bleeding, infection, or skin necrosis. The remainder (n = 65, 34%) identified one complication only, most commonly infection (n = 18) or thrombosis (n = 14). In studies reporting on multiple reasons for premature removal, definitional sources were not provided in 45 (35%) studies, were provided for one reason only in 37 (29%) studies, and were provided for all reasons in 46 (36%) studies. In studies that reported only one reason for premature removal, the definition was provided in 47 (72%) studies and not provided in 18 (28%) studies. The definitional sources in these studies included local or national resources or guidelines in 21 (45%) studies, author-derived definitions in 19 (40%), definitions from other published studies in six (13%), and a combination of these sources in one (2%) study. The definitional sources in studies with multiple reasons for removal included a combination of national guidelines or resources, definitions from other published studies, or author-derived definitions (Fig. ).

CVAD complications

Of the 99 studies that reported CVAD-related complications, 49 (49%) reported one complication and 50 (51%) reported on multiple complications. Complication definitions were provided in 36 (73%) of the studies reporting one complication, and no definitions were provided in 13 (27%). For studies that reported on multiple complications, all complications were defined in 20 (40%) studies, only one and not all complications were defined in 14 (28%) studies, and no complication definitions were provided in 16 (32%) studies. Definitional sources in studies that reported one type of complication were from national resources or guidelines in 16 (44%) studies (e.g., CDC-NHSN or IDSA), author-derived in 14 (39%), and from other published studies in six (17%) studies (Fig. ). Comparatively, of the studies that reported on multiple complications, fewer referenced national resources (n = 2, 10%); more were author-derived (n = 10, 50%) or used a combination of sources (n = 8, 40%) when all complications were defined. In the studies that defined only one of the multiple complications, the definitional sources were national resources in three studies, author-derived in eight (57%) studies, other published studies in one (7%), and a combination of sources in two (14%) studies.

This review identified considerable variation in CVAD terminology, related both to the reasons for removal and to the devices themselves. This included over 200 unique names for the different types of CVADs, with the greatest variation evident for totally implantable venous access devices or ports, which had over 100 unique names. In addition to inconsistency in definitions and device terminology between studies, inconsistencies were also observed within the same study, underscoring the complexity and confusion surrounding this clinical issue. Terminologies such as central venous catheter (CVC) and central venous access device were used interchangeably. CVC was also used to describe the multi-lumen catheter most commonly used in critical care units. Although central venous catheter is the term used most frequently to describe all types of devices, it does not accurately describe or reflect the wide variety of implanted, cuffed or tunnelled catheters and devices, or contemporary innovations in insertion techniques, for example tunnelled PICCs. The term central venous access device is more inclusive, intuitive, and reflective of the diversity in contemporary clinical practice. Similar findings have previously been reported in other research. In a Delphi consensus study about a minimum dataset for vascular access, no standardised CVAD terms were identified. The authors advocated for development of a vascular access minimum dataset to overcome the lack of clarity in the literature that hampers robust data collection, analysis and interoperability within and across countries, ultimately adversely affecting patient outcomes. In response to these findings, Schults et al. (2020) subsequently developed a common set of descriptors (nomenclature) for commonly used vascular access devices. However, these descriptors did not include CVADs commonly used in cancer care (e.g., tunnelled cuffed centrally inserted central catheters, apheresis catheters) or contemporary insertion techniques (e.g., tunnelled peripherally inserted central catheters). A more comprehensive set of descriptors needs to be developed to represent the CVADs used in cancer care. The considerable variation in CVAD nomenclature evident in this review is problematic.
A lack of standardised nomenclature impairs communication and interoperability between healthcare professionals and organisations locally and globally, and fractures data sharing, linkage, analysis and the evidence base derived from clinical practice. The World Health Organization states that standardised nomenclature is essential for the recording and surveillance of all types of medical devices, including CVADs, and in a systematic review of 20 papers by Gildow and Lazar (2022), standardised nomenclature was shown to be associated with reduced clinical errors and patient injury, improved communication, and opportunity for standardisation of clinical care. Most studies reported multiple reasons for premature device removal rather than a single reason. Research investigating multiple reasons for removal reflects the increasing complexity of care and treatment for people with cancer, the majority of whom require CVAD support. The multiplicity of treatments and supporting therapies that commonly characterise care for a person with cancer, combined with patient, clinician, therapy and workplace-related factors, compounds the risk of premature CVAD removal, increasing morbidity, mortality and the cost of care. The only consistently defined reason for premature removal was infection. Nearly all studies cited national sources for catheter-related bloodstream infection (CRBSI) or the surveillance definition for central line-associated bloodstream infection (CLABSI), with the majority citing the CDC or IDSA from the USA. There was no consistency in definitions for any other reason for premature removal. This is an important finding with overt implications for the quality and safety of care. Heterogeneity of terminology and definitions impairs standardised clinical management by causing confusion and permitting inconsistent approaches across members of the healthcare team and clinical specialties, and consequently negatively impacts the quality and safety of patient care. Standardised nomenclature, clinical procedures and standardisation of care have been shown to reduce errors and patient injury by improving communication and the dissemination of evidence to inform clinical practice. The potential for utilising routinely collected patient management data and outcomes captured in EHR systems for clinical research into improving patient care and outcomes cannot be realised while such variation exists. Consistency in EHR data is key to the efficient and effective collation and linkage of data required for the development of a reliable big data set. Clinical data, expertise and knowledge integrated with current evidence are the cornerstones of a learning health system, which aims to provide informed, safer, higher quality clinical care. Consistent data and definitions are also required for meta-analyses in quantitative research. Standardising nomenclature in healthcare is complex and requires a multifaceted response. Strategies require collaboration, consensus, communication, and implementation by multidisciplinary professionals including clinicians, health economists, health service researchers, strategists, and implementation science professionals. This includes commitment by journals, national peak bodies and associations to use the standardised nomenclature, as consistency at a system level is required to provide guidance for end users.
Furthermore, regular review of nomenclature is required so that it accurately reflects contemporary evidence in the literature, clinical practice, and emerging technology and products. As EHRs become increasingly prevalent across health services, they offer an opportunity for standardisation of clinical nomenclature. For example, standardised global clinical languages such as SNOMED CT or the International Classification of Diseases 10th Revision are translatable and already have equivalent codes for use in EHRs. Leveraging the opportunity of EHRs will require close collaboration between EHR development teams and all end users of the EHR systems.

Limitations

There are a number of limitations to this scoping review. Limiting the patient cohort to patients with cancer may restrict the applicability to other patient cohorts; however, this was considered to have minimal impact as CVADs are used across multiple patient cohorts. The date range was five years after the 2016 edition of the Infusion Therapy Standards of Practice, so all descriptors and definitions may not have been captured; however, it reflects contemporary practice, policy and research. The volume of studies did not allow for analysis beyond the absolute numbers of the different types of CVADs and the categories of resources used for definitions of CVAD complications and reasons for removal. Establishing consistent definitions for each type of premature removal or complication was not possible. The exclusion of non-English studies is important to acknowledge as a limitation when considering the results and findings of this review.

Standardised CVAD nomenclature and definitions for premature CVAD removal and complications do not exist. This impacts effective and accurate communication and has been shown to hamper safe, effective cancer care. It also prevents interoperability between individuals and organisations globally, limiting the research needed to reduce the incidence and impact of CVAD complications and premature removal on cancer patients' experience of care, health outcomes and health system costs. Collaboration, consensus and standardisation are required to deliver quality CVAD care.

Additional file 1. Scoping review protocol. Additional file 2. Search strategy. Additional file 3. Included studies. Additional file 4. Summary of CVAD terminology.
Clinical and economic outcomes of a pharmacogenomics-enriched comprehensive medication management program in a self-insured employee population
720697bd-9b95-4384-8137-a4602cc5b743
11446811
Pharmacology[mh]
A compelling case has been made that pharmacogenomic-enriched comprehensive medication management (PGx + CMM) is ready to become a clinical standard of care and has the potential to provide all stakeholders with an approach to addressing medication safety, poor health, and rising healthcare costs. The scalable, broad utilization of genetic testing in personalized medicine requires five factors working together—clinical utility, laboratory technology, user acceptance, implementation models, and economic value—to achieve value for patients, providers, and payors, and to avoid disruption of existing clinical workflows. Specifically, the tipping point has been reached in favor of population-level, large-scale pharmacogenomic testing. This study adds information about clinical and economic impacts in a self-insured employee population.

Pharmacogenomics, as a tool in the clinical practice of medicine, helps healthcare providers optimize drug selection and dosing, avoid adverse events, and identify responders and non-responders to medications. The US Food and Drug Administration (FDA) details a list of over 500 drug-gene (biomarker) pairs with pharmacogenomic information included in drug labels for a variety of therapeutic applications including mental health, cardiology, pain management, diabetes, gastroenterology, neurology, chemotherapy, and infectious diseases. The genes appearing in drug labels and professional guidelines—and recommended for laboratory testing—are well-described components of metabolic enzyme pathways, cell membrane transport mechanisms, and chemical receptors and their downstream cell signaling pathways. With the delivery of CMM, the promise of personalized medicine is realized by identifying the most effective and safe therapeutic regimen through the assessment of genetic and other medication therapy risk factors. These factors include concurrent medications and medical conditions, age, diet, smoking status, and adherence. A recent real-world implementation of a PGx + CMM program showed economic and clinical outcome improvements in a Medicare-eligible population. We previously demonstrated the feasibility of PGx + CMM in a self-insured employer setting and described existing population risks and the opportunity to improve medication management. That research demonstrated that 86% of employees who completed the program received actionable recommendations, averaging 5 recommendations per person. The present study compared economic and clinical outcomes between participants in a PGx + CMM program and matched controls. The primary hypothesis of the study was that there is an association between the PGx + CMM intervention and changes in healthcare resource utilization (HRU), yielding a reduction in medical costs as measured by claims data at 13 months post-program inception.

Study design and setting

The impact of a self-insured employer-sponsored PGx + CMM program on HRU and medical costs was evaluated using a retrospective cohort pre-post design. Medical and pharmacy claims data from February 2020 to February 2021 were used to assess risk in individuals and to invite those identified as high-risk, based on potential for drug-drug interactions, anticholinergic burden, contraindications, and medications impacted by genetics. Following enrollment, which began in March 2021, Quest Diagnostics provided genotyping services and the results were transferred to Coriell Life Sciences for clinical annotation and interpretation.
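To make the risk-based invitation step concrete, the sketch below shows one way a claims-derived risk score could be built as a weighted aggregation of the individual risk components named above (drug-drug interactions, anticholinergic burden, contraindications, and PGx-impacted medications), in line with the weighted-aggregation approach described under Study population below. The component scores, weights, and threshold are hypothetical placeholders for illustration, not the program's actual scoring rules.

```python
from dataclasses import dataclass

# Hypothetical weights for each risk component (not the program's actual values).
WEIGHTS = {
    "ddi": 0.35,               # potential drug-drug interactions
    "anticholinergic": 0.20,   # anticholinergic burden
    "contraindication": 0.20,  # contraindicated drug-condition pairs
    "pgx_medications": 0.25,   # medications with pharmacogenomic guidance
}

@dataclass
class MemberRisk:
    member_id: str
    ddi: float                 # each component assumed pre-scaled to 0-1 from claims
    anticholinergic: float
    contraindication: float
    pgx_medications: float

    def score(self) -> float:
        """Weighted aggregation of the component risks."""
        return sum(WEIGHTS[name] * getattr(self, name) for name in WEIGHTS)

def rank_high_risk(members: list[MemberRisk], threshold: float = 0.5) -> list[MemberRisk]:
    """Return members above the (hypothetical) invitation threshold, highest risk first."""
    eligible = [m for m in members if m.score() >= threshold]
    return sorted(eligible, key=lambda m: m.score(), reverse=True)

if __name__ == "__main__":
    cohort = [
        MemberRisk("A01", ddi=0.9, anticholinergic=0.4, contraindication=0.2, pgx_medications=0.8),
        MemberRisk("A02", ddi=0.1, anticholinergic=0.0, contraindication=0.0, pgx_medications=0.3),
    ]
    for m in rank_high_risk(cohort):
        print(m.member_id, round(m.score(), 2))
```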
The retrospective data analysis was conducted using medical and pharmacy claims data from February 2020 to March 2022 for consented participants. All components of the program and study were performed under an approved IRB protocol (Biomedical Research Alliance of New York Institutional Review Board; BRANY).

Study population and participant engagement

Eligible employees ≥18 years old were invited to participate in the program. Employees were deemed eligible if they were enrolled in the employer's sponsored medical plan and were ranked as high-risk, based on a weighted aggregation of the calculated risks for potential drug-drug interactions, anticholinergic burden, contraindications, and medications impacted by genetics from pharmacy and medical claims records in the 12 months immediately prior to the start of the program. Medications impacted by genetics were derived from FDA package labeling, the FDA Table of Pharmacogenetic Associations, Clinical Pharmacogenetics Implementation Consortium (CPIC) guidelines, and literature reviews. This includes medications impacted by genetic polymorphisms or variants known to affect drug-metabolizing enzymes, drug transporters, or drug targets that have met specific inclusion criteria. Enrollment consisted of a web-based survey or phone call to collect contact, medication, diet, and lifestyle (e.g., smoking) information. Additional education and outreach were also part of the ongoing recruitment process.

Genetic testing

The enrollee's self-collected saliva sample was shipped to and analyzed by a CLIA- and CAP-certified high-complexity laboratory (Quest Diagnostics, San Juan Capistrano, CA) that ran a pharmacogenomic test panel for the purpose of identifying genotype and copy number variations. The PGx test encompasses genes and variants (Supplemental Table ) that influence the pharmacokinetic or pharmacodynamic properties of medications. These genes were selected based on their documented clinical utility, their impact on medication use outcomes, and their inclusion in the Association for Molecular Pathology (AMP) PGx Working Group lists of recommended alleles for PGx testing. Results were converted into diplotypes based on standard nomenclatures and made available to the clinical decision support tool, GeneDose LIVE™ (Coriell Life Sciences, Philadelphia, PA). GeneDose LIVE™ interprets genotypes using known drug-gene interactions from guidelines, drug labels, and curated data from evidence-based literature.

Medication action plan (MAP)

Coriell Life Sciences pharmacists utilized the comprehensive clinical decision support tool, GeneDose LIVE™, first to evaluate genetic and non-genetic sources of patient-specific risk associated with the current medication regimen, and then to model alternative choices that presented lower risks for inefficacy and safety concerns. The pharmacist created a summary medication action plan with proposed changes and notes containing the clinical rationale, which were subsequently communicated via secure email or fax to the patient's preferred prescribing physician(s).

Evaluation and outcomes

The analysis was divided into pre- and post-program timeframes based on the individual's cohort. The intervention month was the medication action plan delivery date for the intervention cohort; the program launch date of March 2021 was used for the control cohort.
For the intervention group, the pre-program period was defined as the months before the intervention, and the post-program period as the months after the intervention. For the control group, the pre-program period was defined as the 13 months from February 2020 to February 2021, and the post-program period as the next 13 months. Participating members with fewer than six months of post-program data were excluded. For both groups, individuals not continuously insured—defined by health plan coverage for the entire 26 months—were excluded. Pre-program statistics for age, sex, geographic region, medical cost, Charlson Comorbidity Index, number of medications, and number of PGx-impacted medications were calculated from de-identified medical and pharmacy insurance claims data in the health plan's administrative claims database. Medical claim costs were aggregated by month to evaluate direct medical costs per member per month (PMPM) for each individual. As previously described, medical claims not impacted by PGx + CMM, with the potential to bias results in small sample sizes, were excluded (i.e., pregnancy, oncology, and non-medication-related trauma and injuries) from both the intervention and control groups. Pharmacy claims for high-cost medications determined to be outliers (≥3σ) were excluded. HRU metrics for inpatient, emergency, and outpatient service usage were also calculated from insurance claims. For the measurement of outcomes, the medical and pharmacy costs (PMPM) were averaged for the pre- and post-program periods, and the HRU metrics were summed to give a total count for each period.

Statistical analysis

Propensity score matching was used to create a suitable control group, reducing bias and confounding resulting from differences between the two cohorts: a multivariable logistic regression modeled the probability of enrolling in the program, and controls were selected by 4:1 nearest-neighbor matching with a caliper of 0.1. Covariates related to the outcomes of interest (i.e., age, sex, geographic region, number of medications, number of PGx-impacted medications, baseline medical cost (PMPM), and Charlson Comorbidity Index) were included in the model. A doubly robust modeling approach, which included the program participation indicator and the covariates used in the propensity score model, was used to estimate the outcomes of interest and further reduce bias and confounding. For the continuous cost-per-month outcome metric, an adjusted linear regression model was fit to estimate the program effect on overall medical and pharmacy average member cost per month, and on the specific medical, pharmacy, inpatient, emergency, and outpatient post-program average member costs per month. For the HRU outcome metrics, adjusted negative binomial regression models were fit to estimate the program effect on post-program HRU counts; an offset term accounted for the differing number of post-program months across individuals. For all models, the program participation indicator coefficient describes the estimated effect of participating in the program; when all other covariates are the same, this coefficient is the isolated program effect at the individual employee level. P values ≤ 0.05 were considered statistically significant, and 95% confidence intervals are reported to characterize the variation in the program effect estimates.
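The following sketch illustrates, in Python, the general shape of the analysis described above: a propensity-score model with 4:1 nearest-neighbor matching and a caliper, followed by covariate-adjusted outcome models (linear regression for PMPM costs; negative binomial regression with a follow-up offset for visit counts). It is a simplified illustration rather than the study's code: column names such as `treated`, `post_total_pmpm`, `post_inpatient_visits`, and `post_months` are hypothetical, covariates are assumed to be already numerically encoded, the caliper is applied directly to the propensity score, and the negative binomial dispersion is fixed rather than estimated.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from sklearn.linear_model import LogisticRegression

COVARIATES = ["age", "sex", "region", "n_meds", "n_pgx_meds", "baseline_pmpm", "cci"]

def match_controls(df: pd.DataFrame, ratio: int = 4, caliper: float = 0.1) -> pd.DataFrame:
    """Greedy 4:1 nearest-neighbor propensity-score matching with a caliper.
    Assumes df has a binary 'treated' column and numeric covariate columns."""
    ps_model = LogisticRegression(max_iter=1000)
    ps_model.fit(df[COVARIATES], df["treated"])
    df = df.assign(ps=ps_model.predict_proba(df[COVARIATES])[:, 1])

    controls = df[df["treated"] == 0].copy()
    matched_ids = []
    for _, treated_row in df[df["treated"] == 1].iterrows():
        dist = (controls["ps"] - treated_row["ps"]).abs()
        picks = dist[dist <= caliper].nsmallest(ratio).index
        matched_ids.extend(picks)
        controls = controls.drop(picks)          # match without replacement
    return pd.concat([df[df["treated"] == 1], df.loc[matched_ids]])

def estimate_effects(matched: pd.DataFrame):
    """Covariate-adjusted outcome models on the matched sample (doubly robust style):
    OLS for post-program PMPM costs, negative binomial GLM with an offset for visit counts."""
    rhs = "treated + " + " + ".join(COVARIATES)
    cost_fit = smf.ols(f"post_total_pmpm ~ {rhs}", data=matched).fit()
    hru_fit = smf.glm(
        f"post_inpatient_visits ~ {rhs}",
        data=matched,
        family=sm.families.NegativeBinomial(alpha=1.0),  # dispersion fixed for simplicity
        offset=np.log(matched["post_months"]),           # exposure: months of follow-up
    ).fit()
    # Program effect: cost difference in dollars PMPM, and visit rate ratio.
    return cost_fit.params["treated"], np.exp(hru_fit.params["treated"])
```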
Retrospective study: intervention and control assignments

De-identified administrative medical and pharmacy claims were used to evaluate outcomes using the available 26 months of data. Claims data were available for 3252 members, including 1084 employees who enrolled in the program and 2168 members who did not participate in the program between March 2021 and December 2021. In the ongoing program, an enrolled individual was defined as having completed the program once the genetic sample kit was returned and a medication action plan was generated. Claims were filtered to the observation period only. Following this, in the participant group, 1084 were enrolled, 631 completed the program, 530 had six or more months of follow-up time, and 455 were continuously enrolled (Fig. ). In the non-participant group, 1625 of the 2168 individuals were continuously enrolled. Before propensity score matching, the participant group used more PGx-impacted medications and had lower baseline cost than those who were invited but did not enroll in the program (Table ). Both groups had similar numbers of overall prescriptions: 12.7 in the participant group and 12.3 in those who did not enroll (p = 0.28). Age, gender, number of medications, geographic region, and Charlson Comorbidity Index (CCI) scores were similar between the two groups. Propensity score matching resulted in 452 individuals assigned to the intervention group and 1500 individuals assigned to the control group for the evaluation. After matching, the intervention and control groups exhibited no differences in age, sex, geographic region, baseline medical cost, number of medications, number of PGx-impacted medications, or CCI score.

Cost outcomes

The results of the adjusted linear regression models showed that participating in the program was associated with a decrease in total costs, including pharmacy and medical costs, of $128.31 PMPM (95% CI, −$646.44 to $389.81; p = 0.63) (Table ). Similarly, program participation was associated with a decrease in medical-specific costs of $172.24 PMPM (95% CI, −$688.62 to $344.13; p = 0.51). Program participation was associated with an increase in pharmacy-specific costs of $26.30 PMPM (95% CI, $9.03–$43.56; p < 0.003). Program participation was associated with a decrease in costs specific to inpatient and emergency events of $1726.10 PMPM (95% CI, −$3383.71 to −$68.50; p = 0.04) and $33.36 PMPM (95% CI, −$70.28 to $3.56; p = 0.08), respectively. Program participation was associated with an increase in outpatient-specific costs of $114.51 PMPM (95% CI, −$296.56 to $525.57; p = 0.58).

Healthcare resource utilization outcomes

In the 13-month follow-up period, the results of the adjusted negative binomial regression models showed that program participation was associated with a decrease in inpatient (−39%, 95% CI, −63% to −1%; p = 0.05) and emergency (−39%, 95% CI, −56% to −16%; p = 0.002) visits and an increase in outpatient visits (21%, 95% CI, 13%–34%; p < 0.001) (Table ).
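As a reading aid, the percentage changes reported for the count outcomes correspond to rate ratios from the negative binomial models, i.e. exp(coefficient) − 1. The snippet below back-calculates the model-scale coefficients implied by the reported percentage changes; these coefficient values are derived here purely for exposition and are not taken from the study's model output.

```python
import math

# Reported percentage changes in visits for program participants (from the text above).
reported_pct_change = {"inpatient": -0.39, "emergency": -0.39, "outpatient": 0.21}

for outcome, pct in reported_pct_change.items():
    rate_ratio = 1.0 + pct        # e.g. -39% change corresponds to a rate ratio of 0.61
    beta = math.log(rate_ratio)   # the coefficient on the model (log) scale implied by that ratio
    print(f"{outcome:10s} rate ratio {rate_ratio:.2f}  implied coefficient {beta:+.3f}")
```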
In a self-insured employee population, a medication safety program, consisting of PGx-enriched comprehensive medication management (PGx + CMM), resulted in favorable health outcomes in the year following the intervention. In the 13-month follow-up period, program participation was associated with significantly fewer inpatient and emergency department visits compared to the control group. In addition, the program showed potential economic benefits as measured by healthcare resource utilization (HRU) and costs in medical claims.
These findings extend prior real-world implementations of PGx + CMM to a broader population and offer cost-saving potential for self-insured employers opting to provide a similar medication safety program to employees. Additionally, the results further support the attainment of a tipping point for population-level, large-scale pharmacogenomic testing. The findings show a positive shift in HRU away from acute, expensive services and towards less costly outpatient care settings. Reducing hospital inpatient and emergency department admissions represents a favorable impact on HRU, since emergency department visits and inpatient hospital admissions drain healthcare resources and often indicate missed proactive and preventive care opportunities. Costs of inpatient care are rising and account for ~27% of privately paid healthcare expenditures. Similarly, health spending attributable to emergency department visits is increasing in the U.S. and currently represents approximately 5% of total healthcare spending. Outpatient visits reflect engagement in primary and preventive care services, higher utilization of evidence-based preventive health measures, and an accepted strategy to prevent avoidable hospitalizations. In fact, the observed increase in outpatient visits in the intervention group may be attributable to recommendations from program pharmacists for participants to follow up with their healthcare providers. The findings suggest that recommendations were successfully communicated by the pharmacist to the patients' prescribers, resulting in more optimized medication management and significantly attenuated HRU. Combined, the evidence supports that the PGx + CMM program favorably impacts HRU and offers the potential for cost savings at the population level.

The total pharmacy and medical costs were estimated to decrease by $128.31 PMPM in the intervention group when compared with the control group. Medical-specific costs decreased by $172.24 PMPM for the intervention group. These savings estimates do not incorporate the cost of the program. While reductions in total medical costs were not statistically significant due to high variability in costs, the observed savings track with the shift away from inpatient and emergency department services and are consistent with previous studies. Additionally, pharmacy costs increased by $26.30 PMPM for the intervention group—not an unexpected result, as healthcare providers may habitually prescribe less expensive medications before moving on to more costly alternatives. Prior to the intervention, the participants had lower total medical costs, likely due to lower utilization of costly inpatient and emergency services compared with the controls. However, even after matching for baseline differences in total medical costs between the groups, the PGx + CMM program was associated with a decrease in total medical costs for the intervention group. These results build on the compelling evidence of the clinical and economic value of introducing PGx + CMM as a standard of care by expanding it to a younger, employed population. While participants were invited into the program based on risk and utilization of medications with PGx implications, PGx interventions may have wider population impact, as almost 65% of US adults may be exposed to at least one medication with an established pharmacogenomic association within a 5-year window. Further, it is estimated that 99% of individuals harbor a DNA variant known to impact medication safety and efficacy.
This paper suggests that provisioning a PGx + CMM program across similar populations would yield positive clinical and economic impacts. Together, these outcomes provide evidence of a successful PGx + CMM implementation model. At present, coverage for pharmacogenomic testing across most commercial plans is limited, and reimbursement at the individual level continues to lag. This may be due, at least in part, to the uncertain regulatory environment regarding pharmacogenomic testing in the US. However, distinct from complicated insurance-based provisions is the opportunity for employers to implement a PGx + CMM program, no longer relying on third-party payers to reimburse for PGx testing and medication management services. Employers can thus remove third-party reimbursement constraints while saving costs and improving the health of their employees. Furthermore, PGx + CMM implementations are being initiated by single-payer healthcare systems and countries, lending additional credence to population-level PGx + CMM programs. These all suggest that, given no mechanism to fund this population-level activity within the established process of medical-necessity decision-making at the patient level, self-insured employers, with more control over healthcare spending, might be interested in offering this program to employees. Given the additional opportunities to increase PGx + CMM program accessibility through an employer channel, it would be beneficial to further explore this implementation method.

Despite the positive results of the current report, the findings should be interpreted in context and with an understanding of their limitations. First, this paper reports findings of a real-world implementation as opposed to a randomized controlled trial. Although randomized controlled trials may provide the highest level of evidence, in clinical evaluations of pharmacogenomics ethical considerations arise regarding the assignment of individuals with pharmacogenetic risk to a control group; specifically, it may not be ethical to deny a person with known risk access to an intervention known to be beneficial. It is also noted that real-life clinical evaluations carry the benefit of higher external validity, with evidence that is more transferable to everyday clinical practice, despite any perceived lower level of evidence. Future studies with larger sample sizes and extended program durations could provide more evidence to support the findings of this paper. Specifically, these studies could explore how changes in healthcare resource utilization relate to disease severity, care needs, and clinical outcomes. Although our analysis suggests a strong link between the PGx + CMM intervention and the observed outcomes, we cannot conclusively establish a causal connection between specific medication changes and healthcare resource utilization in this study. Yet we may reasonably assume that pharmacist-recommended changes were implemented, given our previous report showing that 86% of employees who completed the program received actionable recommendations, averaging 5 recommendations per person. Nevertheless, a deeper understanding of the relationship between the intervention and outcomes could enhance future research. Additionally, this implementation was a voluntary employer-sponsored program, so selection bias may have contributed to the observed findings. However, this bias was addressed through statistical methodology and propensity score matching.
Further research regarding why some employees, and not others, voluntarily participated in the intervention could enable more targeted engagement in future implementations. Moreover, several additional factors that have been associated with HRU, such as demographic, socioeconomic, health services-related, health status-related, and health insurance coverage factors, were not evaluated in the present study. In addition, individuals were invited to the program based on risk stratification and evidence of using ≥1 prescription medication. Implementation in populations with lower medication utilization at baseline may therefore yield different results; however, potential differences can be mitigated by using a risk stratification process to identify the individuals within a population most likely to benefit from the program. Finally, although the value of PGx-guided treatment has not been easily compared across different genetic assays and implementation strategies, our study aligns with a growing body of research demonstrating that pharmacogenetic testing, when integrated with clinical decision support, can lead to improved healthcare utilization and potential cost savings in the management of polypharmacy.

Pharmacogenomics-enriched comprehensive medication management can favorably impact healthcare utilization in a self-insured employer population by reducing emergency department and inpatient visits, and can offer the potential for cost savings. Self-insured employers may consider implementing pharmacogenomics-enriched comprehensive medication management to improve the healthcare of their employees.

Supplementary Table 1: Genes Evaluated for the Pharmacogenomics-enriched Comprehensive Medication Management Program
Spinocerebellar ataxia type 17-digenic
73ea740e-be39-44d2-ac62-a231be1c3485
9727856
Pathology[mh]
To date, diseases with true digenic inheritance (DI) have rarely been reported, but they should be considered when encountering patients showing non-Mendelian inheritance and broad-spectrum phenotypes such as spinocerebellar degeneration. Spinocerebellar ataxia type 17 (SCA17) is an autosomal dominant disorder characterized by cerebellar ataxia and dementia, sometimes with extensive phenotypic variability including Huntington's disease-like symptoms (HDL), and is caused by abnormal expansion of a CAG/CAA repeat encoding a polyglutamine (polyQ) tract in the TATA-box binding protein (TBP) gene. It has long been unexplained why penetrance differs depending on the number of polyQ repeats: ≥49 repeats are fully penetrant, whereas 41–48 repeats, termed intermediate alleles, are associated with reduced penetrance, and half of heterozygous individuals in SCA17 families are healthy. A recent genetic study revealed that the SCA17/HDL phenotype with intermediate alleles can arise through digenic inheritance of two gene mutations – TBP polyQ and a heterozygous STUB1 variant (SCA17-DI) – the latter gene also being associated with SCA48 and with autosomal recessive spinocerebellar ataxia type 16 (SCAR16). Another group identified heterozygous mutations in STUB1 together with intermediate alleles in TBP in patients exhibiting a progressive dementia syndrome similar to frontotemporal dementia, with only mild cerebellar atrophy on MRI. However, reports of the neuropathologic features are limited, and the role of STUB1 mutations in SCA17-DI remains unknown. Here, we describe in detail the clinicopathologic features of an autopsied patient with SCA17-DI and demonstrate the possible pathogenicity of STUB1.

Patient 1

A 62-year-old Japanese woman from a non-consanguineous family, whose identical twin sister had shown similar symptoms (patient 2), presented with gait disturbance. No other family members showed similar disorders. Their mother had died of a malignant tumor at the age of 72, but no significant ataxia or cognitive impairment had been observed until her death. Their father had been healthy until his 90s. The patient had exhibited normal physical and neurological development. At the age of 68 years, she was admitted to a hospital due to dancing-like involuntary movements in the hands and feet. Neurological examination revealed choreic movement, saccadic eye movement, slurred speech, limb and trunk ataxia, and increased deep tendon reflexes in the upper and lower limbs. Babinski sign was negative. No superficial sensory disturbance or Romberg sign was detected. There was no evidence of bladder or rectal disturbance. Brain MRI revealed severe atrophy of the cerebellum. The cerebrum also showed diffuse atrophy, with bilateral hyperintense lesions on T2WI in the basal ganglia and periventricular deep white matter (data not shown). Thus, the patient was diagnosed as having hereditary cerebellar ataxia with leukoencephalopathy, but genetic analysis excluded diseases such as spinocerebellar ataxia type 1 (SCA1) and dentatorubral-pallidoluysian atrophy (DRPLA). Thereafter, her condition slowly deteriorated and she demonstrated cognitive decline. At the age of 73 years, her unsteadiness worsened and she became bedridden. At the age of 76 years, she died of gallbladder cancer. A general autopsy was performed, at which time the brain weighed 890 g. Genetic analysis revealed an intermediate allele (41 and 38 CAG/CAA repeats) in TBP and a heterozygous missense mutation in STUB1 (p.P243L) (Fig. a–c), establishing a diagnosis of SCA17-DI.
Patient 2

The patient presented with symptoms similar to those of patient 1. She was a 57-year-old Japanese woman with gait disturbance, and five years later she became unable to walk. She had no previous medical history except for surgery for an acoustic tumor at the age of 52 years. At the age of 66 years, she suddenly developed choreic movements similar to those of patient 1. Thus, the patient was thought to have the same hereditary disease, but genetic analysis excluded SCA1 and DRPLA. She became bedridden due to severe trunk ataxia at the age of 69 years, followed by worsening cognitive decline. Brain MRI revealed severe atrophy of the cerebellum and diffuse atrophy of the cerebrum and basal ganglia, with bilateral hyperintense lesions on T2WI in the basal ganglia, thalamus, and deep white matter (Fig. d–f). She then suffered repeated bouts of aspiration pneumonia and died at the age of 88 years. No autopsy or genetic testing for STUB1 or TBP was performed.

Neuropathologic features (patient 1)

Macroscopically, atrophy of the basal ganglia was prominent in the caudate nucleus, which showed moderate neuronal loss with gliosis. Neuronal reduction was also observed in the deep layers of the frontal and motor cortices, where the white matter showed diffuse myelin pallor (Fig. a–e). Severe loss of Purkinje cells and granule cells, with Bergmann gliosis, was evident (Fig. f, arrows). Immunoreactivity for calbindin-D28k in the remaining Purkinje cells was depleted (Fig. g). The brain showed no pathological features suggestive of comorbid Alzheimer's disease (ABC score: A3B1C1) or Parkinson's disease (Lewy body disease: none). No neuronal loss or focal gliosis was evident in the spinal cord, except for mild loss of neurons in the anterior horns and of myelinated fibers in the corticospinal tract. Immunohistochemistry for expanded polyglutamine stretches using the 1C2 antibody demonstrated accumulation in neuronal nuclei in a diffuse pattern (neuronal intranuclear inclusions: NIIs). NIIs were restricted to the central nervous system and were most frequently detectable in sector CA1 of Ammon's horn, where 67% of neurons possessed 1C2-positive nuclear inclusions (Fig. h). Table summarizes the neuronal loss and the distribution of the inclusions. To investigate alteration of the STUB1 protein in the affected brain, we performed immunohistochemistry using an antibody against STUB1. In a previous pathologic study, aberrant STUB1 localization was demonstrated in the distal Purkinje cell dendrites of patients with SCA48, whereas STUB1 was immunoreactive in somatodendrites in the control. However, our analysis revealed no such difference in localization between them (Supplementary Fig. 1 in Additional file ).

E3 activity of the CHIP-p.P243L mutant

We then investigated whether the STUB1 mutation caused a functional change in the encoded protein, the chaperone-associated E3 ubiquitin ligase (CHIP), which is involved in the ubiquitin-mediated proteasomal control of protein homeostasis and is known to facilitate degradation of misfolded proteins in neurodegenerative diseases. We therefore assessed the effect of the STUB1 p.P243L mutation on E3 ubiquitin ligase activity by transiently expressing the wild type (WT) or p.P243L mutant of STUB1 in HEK293T cells, followed by immunoprecipitation and an in vitro ubiquitination assay (Fig. a).
In the presence of E1, E2 (UbcH5a), and ubiquitin, STUB1-WT efficiently generated the polyubiquitin chain, whereas the p.P243L mutant failed. These results clearly indicated that the p.P243L polymorphism in the U box domain affects the E3 activity of CHIP. Details of the methods are in Additional file .
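For readers unfamiliar with how repeat lengths such as the 41 and 38 CAG/CAA repeats in TBP are expressed, the sketch below counts consecutive glutamine codons (CAG or CAA) in an in-frame coding sequence. It is purely illustrative: the example sequence is synthetic, and clinical repeat sizing is performed by fragment analysis or sequencing of the patient's TBP alleles, not by this kind of string scan.

```python
def longest_glutamine_run(coding_seq: str) -> int:
    """Count the longest run of consecutive CAG/CAA codons in an in-frame
    coding sequence (each such codon encodes glutamine, so the run length
    approximates the polyQ tract length)."""
    codons = [coding_seq[i:i + 3].upper() for i in range(0, len(coding_seq) - 2, 3)]
    longest = current = 0
    for codon in codons:
        current = current + 1 if codon in ("CAG", "CAA") else 0
        longest = max(longest, current)
    return longest

if __name__ == "__main__":
    # Synthetic example: 38 glutamine codons (interspersed CAA), flanked by other codons.
    polyq = "CAGCAGCAA" * 12 + "CAGCAG"        # 12*3 + 2 = 38 glutamine codons
    example = "ATGGCT" + polyq + "GGTTAA"
    print(longest_glutamine_run(example))      # -> 38
```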
We have described the clinicopathologic features of patients harboring an intermediate allele (41 and 38 CAG/CAA repeats) in TBP and a heterozygous missense mutation in STUB1 (p.P243L), and have demonstrated reduced E3 ubiquitin ligase activity of the STUB1-p.P243L mutant. The clinical features of the present identical twins were quite similar, with onset of ataxic gait at around 60 years of age, followed by chorea and cognitive decline, and a period of approximately 10 years from onset to becoming bedridden, although patient 1 had half the disease duration of patient 2 owing to her death from cancer. Reflecting their clinical course, brain MRI also showed features in common. Similarly, the clinical presentation in both patients resembled that of two previously reported cases harboring an intermediate allele (41 and 37, and 43 and 41 CAG/CAA repeats, respectively) in TBP and the same heterozygous missense mutation in STUB1 (p.P243L): all of the patients were female and demonstrated cerebellar ataxia and cognitive decline, and three of the four developed chorea with a Huntington's disease-like (HDL) phenotype.
On the other hand, these three families showed different inheritance patterns: the present family showed autosomal recessive inheritance, whereas the other two families showed autosomal dominant inheritance and a sporadic pattern, respectively, consistent with a previous report of complex forms of inheritance in SCA17-DI . The difference in age at disease onset between the present patients and the two other reported patients (around 60 years for the former and the 30s for the latter) is also within the wide onset age range observed for SCA17-DI as a whole . The pathological findings in the present patient were similar to those reported previously for SCA17-DI and SCA17 in that most patients exhibited degeneration of the cerebellar cortex and striatum with the presence of 1C2-positive neurons showing diffuse nuclear staining (Table ). In only one of these cases was the striatum, including the caudate nucleus, which is known to be associated with choreic movement in Huntington's disease, unaffected, and indeed no involuntary movements were noted in that patient (Supplementary table 2 in Additional file ). Overall, the histopathological alterations may be slightly milder in SCA17-DI than in SCA17, and more cases will need to be studied to clarify the difference. Given the similarities of pathology between SCA17-DI and SCA17, despite the fact that the former had a mutation in STUB1 whereas the latter did not, a major question naturally arises as to whether a heterozygous STUB1 mutation alone could affect the phenotype. There have been a few reports on the neuropathology of SCA48, which has both commonalities and differences relative to SCA17-DI and SCA17. The reports on SCA48 have highlighted severe degeneration of the cerebellar cortex . On the other hand, no alterations were observed within the striatum in two of those reports (Table ), despite the fact that one of the two patients presented with an HDL phenotype showing chorea and dystonia . In terms of 1C2-immunoreactive structures, one had scattered 1C2-positive neuronal intranuclear inclusions , while the other did not . The TBP repeat size in those patients was not stated. Even considering that the specificity of the 1C2 antibody can sometimes be unstable, it would be important to determine the TBP repeat size in patients with heterozygous STUB1 mutations in order to better understand their role. It has been postulated that the pathogenicity of mutations in STUB1 centers on the E3 activity of CHIP , as we demonstrated, but details of the pathomechanism have remained unclear. The E3 activity of six SCAR16-associated STUB1 variants (p.E28K, p.N65S, p.K145Q, p.M211I, p.S236T, and p.T246M) has been evaluated, and it has been reported that the p.T246M mutation in the U box affects the structure and E3 activity of CHIP . In contrast, another U box mutant, p.S236T, exhibited E3 activity equivalent to that of the wild type, suggesting that mutations within the U box domain, a Zn-free E3 active site similar to the RING finger domain, may or may not significantly affect E3 activity depending on the mutated residue . As we identified reduced E3 activity of CHIP resulting from the p.P243L mutation, we further analyzed the effect of STUB1 p.P243L on the conformation of the U box domain using data from the deposited crystal structure (Fig. b, c) . A previous report has indicated that the proline residue corresponding to P243 in human CHIP is highly conserved in all U box proteins in mammals .
P243 in human CHIP (P244 in mouse CHIP) is not directly involved in CHIP dimerization or binding to E2 (Fig. b). The CHIP U box domain contains three β strands (β1–3), and P243 is located at the end of β1, which seems to promote β-sheet termination and folding of the U box domain (Fig. c). However, in the p.P243L mutant, the NH group of L243 may form a hydrogen bond with the N-terminal main chain of an α-helix (α2), which extends the C-terminal end of β3. Because β3 is immediately followed by an α2, the extension of β3 would inhibit α2 helix formation and disrupt overall folding of the U box (Fig. c). Moreover, P243 in human CHIP forms a hydrophobic core with M240, M286, and I290 (M241, L287, I291 in mouse CHIP) to stabilize the structure of the U box , and p.P243L mutation disrupts these interactions (Fig. c). Together, these results suggest that the p.P243L mutation disrupts the folding of the entire U box domain, and impairs ubiquitin ligase activity, leading to insufficient degradation of TATA box-binding protein with moderately expanded poly-Q tracts and disease onset. In conclusion, we have presented the second genetically confirmed autopsy case of SCA17-DI presenting with a Huntington’s disease-like phenotype, and have demonstrated the functional and conformational changes resulting from STUB1 mutation associated with ubiquitin ligase activity. Further clinicopathologic and molecular studies are needed to clarify how TBP polyQ and STUB1 mutations interact and affect the phenotypic variability of SCA17-DI. Additional file 1: Figure S1. STUB1 immunohistochemistry in the patient. Table S1. Primary antibodies. Table S2. Summary of the clinical features in the autopsied patients with SCA17-DI, SCA17, and SCA48.
Immune and gene-expression profiling in estrogen receptor low and negative early breast cancer
2cbb1073-a5c6-4598-84bc-430a7dfdc3a8
11630536
Anatomy[mh]
Population This study includes 921 patients with early-stage (I-III), HER2-negative BC from 4 institutions: Istituto Oncologico Veneto (IOV) Padova, Italy ( n = 451); Montpellier Cancer Institute (MCI), Montpellier, France ( n = 223); Istituto Nazionale Tumori (INT), Milano, Italy ( n = 178); and Istituto Europeo di Oncologia (IEO), Milano, Italy ( n = 69). Patients were selected based on an expression of ER between 0% and 50% of cancer cells by IHC, according to local review. Tumors were classified as ER-neg (ER 0%, n = 712), ER-low (ER 1%-9%, n = 128), or ER-intermediate (ER-int) (ER 10%-50%, n = 81, included as a control cohort). Allowed progesterone (PgR) levels were up to 10% for ER-neg and ER-low cases. ER-neg and ER-low cases from IOV, MCI, and INT were consecutively treated (March 2000 to December 2021, June 2002 to November 2012, and December 2005 to May 2022, respectively). (available online) shows patient disposition. Patients with ER-int and all patients from IEO were derived from nonconsecutive cohorts enriched in patients who experienced disease relapse; these patients were excluded from survival analyses. Clinicopathological, treatment, and follow-up data were collected. Pathology Treatment-naïve formalin-fixed paraffin-embedded (FFPE) tumor samples were collected: surgery specimens for patients treated with primary surgery and pretreatment core-biopsies for patients treated with neoadjuvant treatment. All IHC protocols relevant to this study are reported as (available online). ER status was locally reviewed on previously stained IHC slides by dedicated breast pathologists. HER2 status was scored according to ASCO/CAP recommendations in place at the time of diagnosis. Blinded histopathological assessment of stromal TILs density on hematoxylin-eosin stained whole-slides (WS) was conducted locally by dedicated pathologists, following standardized guidelines . TILs were evaluated both as continuous and as categorical variables at the ≥30% cutoff validated in triple-negative BC . To investigate the existence of more granular differences in TILs' composition across the two cohorts of ER-low and ER-neg tumors, we evaluated the density of CD8+ cells, the primary mediators of tumor killing; FOXP3+ T regulatory cells, which dampen antitumor immune responses by exerting strong immunosuppressive functions; and the immune-checkpoint PD-L1. Since an enhanced FOXP3+ cell infiltrate may counteract the antitumor activity of CD8+ cells , we used the ratio of CD8/FOXP3 positive cells to infer the polarization of the TME toward an immune-active or an immune-suppressive state . CD8/FOXP3 and PD-L1 IHC staining was evaluated only in ER-neg and ER-low samples (n = 477), sourced from IOV and MCI. At IOV, samples were handled as WS, whereas MCI employed tissue-microarray (TMA). For each case, consecutive slides were locally stained for CD8, FOXP3, and PD-L1 and then scanned using a NanoZoomer C12740 digital scanner. All digital slides were centrally evaluated at IOV for CD8, FOXP3, and PD-L1 metrics using Visiopharm software applications, following a previously described digital pathology workflow . Scanned slides from IOV were aligned with a MNF116 stained slide from the same sample to define the stromal compartment of the tumor. The densities of CD8+ and FOXP3+ cells were measured as the number of positive cells/mm 2 . At IOV, this measurement was performed in the stromal area of the tumor. For MCI cases, the intratumoral area of TMA foci was considered.
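As a rough illustration of the derived immune variables described above (cell densities per mm2 of the evaluated compartment and TILs dichotomized at the ≥30% cutoff), a minimal sketch is given below. The table structure, column names, and numbers are hypothetical and are not the study's data dictionary or the Visiopharm workflow used by the authors.

```python
import pandas as pd

# Hypothetical per-case table; values are illustrative only.
cases = pd.DataFrame({
    "case_id": ["A01", "A02", "A03"],
    "cd8_positive_cells": [1200, 350, 80],     # counts from digital image analysis
    "foxp3_positive_cells": [400, 300, 20],
    "evaluated_area_mm2": [2.5, 1.8, 0.9],     # stromal area (whole slide) or TMA core area
    "stromal_tils_percent": [45, 10, 5],       # pathologist-scored stromal TILs
})

# Densities as positive cells per mm2 of the evaluated compartment.
cases["cd8_density"] = cases["cd8_positive_cells"] / cases["evaluated_area_mm2"]
cases["foxp3_density"] = cases["foxp3_positive_cells"] / cases["evaluated_area_mm2"]

# TILs dichotomized at the >=30% cutoff validated in triple-negative BC.
cases["tils_high"] = cases["stromal_tils_percent"] >= 30

print(cases[["case_id", "cd8_density", "foxp3_density", "tils_high"]])
```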
To account for outliers, the CD8/FOXP3 density ratio was log-transformed. PD-L1 expression was evaluated on tumor-infiltrating immune cells (IC score) with the SP142 clone (Ventana), and cases with immunoreactive immune cells covering ≥1% of the tumor area were considered positive. Gene expression Gene-expression analyses were performed locally at IOV and INT. Pathologists reviewed FFPE samples for tumor tissue quality and quantity. From samples with adequate material (>40% of tumor cells), a cohort of ER-low and ER-neg cases matched for age (<50, 50-65, or >65 years old), histotype (ductal, lobular, or other), and stage (I, II, or III) were identified. A control cohort of unmatched ER-int cases was included. RNA extracted from FFPE was used to measure gene expression using the Breast Cancer 360 Panel on the nCounter platform (NanoString Technologies, Inc, Seattle, WA, USA) covering 776 genes from different independent signatures, including the PAM50 signature ( , available online). Gene-expression data were normalized using a ratio of the expression value to the geometric mean of the housekeeper genes of the PAM50 signature. Data were then log2 transformed. Intrinsic molecular subtyping was determined using the previously reported PAM50 subtype predictor . An unpaired 2-class SAM analysis with a 5% false discovery rate (FDR) was used to identify genes differentially expressed in different subgroups. Statistical analysis Statistical analyses were performed using IBM software SPSS v.29.0 and R (version 4.2.1); all tests were 2-sided, and an alpha < 0.05 significance level was used. The association between variables was evaluated using the Mann-Whitney or Kruskal-Wallis nonparametric tests for continuous variables, and the χ 2 test or Fisher exact test for categorical variables, as appropriate. Relapse-free survival (RFS) was defined as the time from diagnosis to relapse or death from any cause, and overall survival (OS) as the time from diagnosis to death from any cause. Patients without events were censored at the time of the last follow-up. The Kaplan-Meier method was used to estimate survival curves, the log-rank test to compare survival curves, and the Cox regression model to calculate hazard ratios (HR) and 95% confidence intervals (95% CI). Ethical considerations Tumor samples were collected after approval from the Institutional Review Board of each participating center and in accordance with the Declaration of Helsinki. Written consent was obtained from each participant who was alive at the time of study entry.
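The gene-expression normalization described above (each sample divided by the geometric mean of the PAM50 housekeeper genes, then log2-transformed) can be sketched as follows. This is a minimal illustration, not the authors' NanoString/nSolver pipeline; the gene names and counts in the toy example are made up.

```python
import numpy as np
import pandas as pd

def normalize_counts(counts: pd.DataFrame, housekeeper_genes: list) -> pd.DataFrame:
    """Normalize nCounter-style counts (genes x samples): divide each sample by the
    geometric mean of its housekeeper genes, then log2-transform."""
    hk = counts.loc[housekeeper_genes]
    # Geometric mean per sample, computed in log space for numerical stability.
    geo_mean = np.exp(np.log(hk).mean(axis=0))
    ratio = counts.divide(geo_mean, axis=1)
    return np.log2(ratio)

# Toy example with made-up gene names and counts.
counts = pd.DataFrame(
    {"sample1": [500, 80, 1200, 950], "sample2": [300, 40, 900, 1100]},
    index=["GENE_A", "GENE_B", "HK_1", "HK_2"],
)
norm = normalize_counts(counts, housekeeper_genes=["HK_1", "HK_2"])
print(norm.round(2))
```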
Patients' characteristics We included a total of 921 patients: 712 patients with ER-neg, 128 with ER-low, and 81 with ER-int BC ( , available online). presents the clinicopathological data of the two primary patient groups: ER-low and ER-neg. Compared to patients with ER-neg BC, those with ER-low tumors more commonly had lobular histology and were less likely to have HER2-0 status, possibly due to a positive association between HER2-signaling and ER-expression. No differences in key clinicopathological features such as stage, nodal status, grade, or proliferation rate were noted. ER-low patients were less frequently treated with chemotherapy, including NACT, but received ET more frequently. The non-consecutively treated cohort of patients with ER-int tumors, compared with ER-neg and ER-low, showed differences in several clinicopathological characteristics ( , available online), which may be related partly to different inherent biology of ER-int tumors and partly to the selection procedure (cohort enriched in patients with disease relapse). Survival analyses revealed no significant differences between ER-low and ER-neg patients both in terms of RFS (5-year RFS 70.9% vs 74.9%, log-rank P = .181; HR 1.26 [95% CI = 0.90 to 1.78]) and OS (79.3% vs 82.2%, log-rank P = .223; HR 1.27 [95% CI = 0.86 to 1.87]) ( , available online). This observation was consistent at a 60-month landmark analysis, where no difference was noted for either RFS (log-rank P = .105; HR 1.84 [95% CI = 0.87 to 3.90]) or OS (log-rank P = .202; HR 1.57 [95% CI = 0.78 to 3.15]) ( , available online), despite numerically higher rates of late distant relapses in the ER-low subgroup ( , available online).
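The survival comparisons reported here follow the Kaplan-Meier / log-rank / Cox workflow named in the statistical methods. As a rough, self-contained illustration only (not the authors' SPSS/R analysis, and with entirely made-up durations and events), such a comparison could be run with the lifelines package:

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical per-patient table; 'time' in years from diagnosis, 'event' = relapse or death (RFS).
df = pd.DataFrame({
    "time": [1.2, 3.5, 5.0, 0.8, 6.1, 2.4, 4.9, 7.0],
    "event": [1, 0, 0, 1, 0, 1, 1, 0],
    "er_low": [1, 1, 1, 1, 0, 0, 0, 0],   # 1 = ER-low, 0 = ER-neg
})

# Kaplan-Meier estimates per group.
kmf = KaplanMeierFitter()
for label, grp in df.groupby("er_low"):
    kmf.fit(grp["time"], grp["event"], label=f"er_low={label}")
    print(kmf.survival_function_.tail(1))

# Log-rank comparison of the two groups.
res = logrank_test(
    df.loc[df.er_low == 1, "time"], df.loc[df.er_low == 0, "time"],
    event_observed_A=df.loc[df.er_low == 1, "event"],
    event_observed_B=df.loc[df.er_low == 0, "event"],
)
print("log-rank p =", res.p_value)

# Cox model for the hazard ratio of ER-low vs ER-neg with its 95% CI.
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
cph.print_summary()
```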
Similar results were obtained when directly comparing the outcome of ER-low and ER-neg among the selected group of patients exposed to systemic chemotherapy (5-year RFS 72.0% vs 76.7%, log-rank P = .182; HR 1.29 [95% CI = 0.89 to 1.87]; 5-year OS 80.2% vs 83.9%, log-rank P = .308; HR 1.25 [95% CI = 0.81 to 1.92]) ( , available online). TILs density according to ER status We assessed TILs in 846 samples, 647 ER-neg, 119 ER-low, and 80 ER-int ( , available online). TILs were similar in ER-neg and ER-low BC (median 10%, interquartile range [IQR] [5-30] vs 15%, [5-30]; P > .999) . In contrast, TILs were statistically significantly lower in ER-int (median 5%, IQR [2-11]) compared with both ER-low ( P < .001) and ER-neg ( P < .001) BC specimens . To address the potential influence of tumor-intrinsic features on our analysis, we evaluated the distribution of TILs within ER status according to stage, grade, and Ki67, showing similar influence of grade and Ki67 on TIL density in both ER-neg and ER-low tumors ( , available online). Similar proportions of patients with high TILs (≥30%) were observed in ER-neg and ER-low groups (28.4% vs 26.1%, P = .594). In contrast, ER-int samples showed a lower proportion of patients with high TILs (11.2%) compared with both ER-neg ( P = .001) and ER-low groups ( P = .011) . These findings remained consistent when we separately analyzed samples from each participating institution ( , available online). To further explore TILs density within ER-int tumors, we divided them into two subcategories: ER 10%-30% and ER 31%-50%. Our analysis indicated that tumors with ER 10%-30% showed no significant difference in TILs density (median 9%, IQR [3-23]), compared with ER-neg ( P > .999) and ER-low tumors ( P = .678). Instead, tumors with the highest end of the ER-expression spectrum (31%-50%) had lower TILs (median 4% [IQR 2-8]) compared with both ER-neg ( P < .001) and ER-low tumors ( P < .001), but not statistically different from tumors with ER 10%-30% ( P = .116) ( , available online). Immune cell densities and PD-L1 expression ER-low tumors showed higher densities of both CD8+ and FOXP3+ cells/mm 2 compared with ER-neg BCs, and this difference reached statistical significance in the IOV cohort ( P = .040 and P = .011, respectively) but not in the smaller MCI cohort ( P = .081 and P = .057, respectively) . On the other hand, the log-transformed CD8/FOXP3 ratio was similar in ER-low vs ER-neg tumors (IOV: median 1.45, IQR [0.86-2.11] vs 1.42 [0.86-1.92], P = .504; MCI: 4.04 IQR [1.97-7.30] vs 3.24 IQR [2.42-5.67], P = .400, , and ), and the two groups were also characterized by a similar rate of PD-L1 positive expression (IOV: 69.2% vs 64.9%, P > .999; MCI: 94.1% vs 74.6%, P = .080, ). Prognostic impact of TILs in ER-low and ER-neg BC We examined the prognostic relevance of TILs according to ER status in 647 ER-neg and 105 ER-low cases. The median follow-up time was 8.2 years (95% CI = 7.8 to 8.7 years). At univariate analysis, each 1% increase in TILs corresponded to a 2% reduction in the risk of RFS-event in both ER-neg (HR 0.98 [95% CI = 0.98 to 0.99], P < .001) and ER-low (HR 0.98 [95% CI = 0.96 to 1.00], P = .033) cohorts . We also found a 2% reduction in the risk of death for each 1% TILs increase in both patient cohorts (ER-neg: HR 0.98, 95% CI [0.97 to 0.99], P < .001; ER-low: HR 0.98, 95% CI [0.96 to 1.00], P = .062).
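To make the per-1% hazard ratio concrete, the sketch below fits a Cox model with TILs as a continuous covariate on simulated data and shows how an HR of about 0.98 per percentage point compounds over a 10-point difference. The data generation and variable names are purely illustrative and do not reproduce the study's analysis.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical RFS data with stromal TILs (%) as a continuous covariate.
rng = np.random.default_rng(0)
n = 200
tils = rng.uniform(0, 80, n)
# Simulate somewhat longer event times for higher TILs, purely for illustration.
time = rng.exponential(scale=5 + 0.05 * tils)
event = rng.binomial(1, 0.5, n)
df = pd.DataFrame({"time": time, "event": event, "tils": tils})

cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
hr_per_1pct = np.exp(cph.params_["tils"])
print(f"HR per 1% TILs increase: {hr_per_1pct:.3f}")
# An HR of 0.98 per 1% corresponds to roughly a 2% lower hazard per percentage point,
# i.e., about 0.98**10 ~ 0.82 per 10-percentage-point increase.
print(f"HR per 10% TILs increase: {hr_per_1pct ** 10:.2f}")
```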
When TILs were dichotomized based on a ≥30% cutoff , we found that high TILs were associated with statistically significantly improved RFS in both ER-neg (5-year RFS 85.2% vs 69.8%, log-rank P < .001, HR 0.41 [95% CI = 0.27 to 0.60]) and ER-low (5-year RFS 78.6% vs 66.2%, log-rank P = .033, HR 0.37 [95% CI = 0.15 to 0.96]) cohorts. Findings were similar when OS was used as a clinical outcome, with results reaching statistical significance for ER-neg (5-year OS 89.6% vs 78.0%, log-rank P < .001; HR 0.40 [95% CI = 0.25 to 0.62]) and pointing in the same direction for ER-low (5-year OS 87.1% vs 74.5%, log-rank P = .061; HR 0.38 [95% CI = 0.13 to 1.09]) . Results of univariate analyses were confirmed by multivariate analyses adjusting for age, stage, chemotherapy exposure , and when factoring ER expression (ER-neg vs ER-low) as a covariate ( , available online). Gene-expression analysis Gene-expression analyses were performed on 65 ER-low cases, matched to 39 ER-neg tumors. Twelve ER-int samples served as unmatched controls. Both ER-neg and ER-low tumors exhibited a similar distribution in PAM50-intrinsic subtypes ( P = .396), primarily featuring basal-like tumors (79%, n = 31, and 71%, n = 46, respectively) . Conversely, the ER-int group differed statistically significantly from both ER-low ( P = .002) and ER-neg patients ( P < .001), with basal-like tumors making up only 25% of the cases. The basal-like subtype showed statistically significantly higher TILs compared with other subtypes in both ER-low (median 20%, range [0-80%] vs 6% [1-40%], P < .001) and ER-int samples (53% [25-80%] vs 5% [0-10%], P = .036), whereas no significant difference in TILs was observed in ER-neg tumors ( P = .503). SAM analysis of 776 genes revealed that only three were differentially expressed in ER-low compared with ER-neg tumors ( GATA3 , upregulated; EDN1 and PROM1 , downregulated) ( , available online). When focusing on basal-like tumors ( n = 77), only EDN1 and PROM1 genes remained differentially downregulated in ER-low ( , available online). In contrast, ER-low samples showed a distinct expression pattern compared with ER-int, with a statistically significantly higher expression of 53 genes and a lower expression of 398 genes ( , available online). Comparing the expression of 164 immune-related genes in ER-low and ER-neg tumors, we found no significant differences in the expression of genes related to antigen presentation, cytokine and chemokine signaling, immune infiltration, TGF-beta signaling , or the characterization of immune cells (functionally annotated in , available online). However, 86 genes, including 4 mast-cell-related genes, showed statistically significantly different expression levels between ER-low and ER-int tumors ( , available online).
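For readers unfamiliar with FDR-controlled differential expression, the sketch below shows the general idea on simulated log2 expression values. Note that this uses a simple per-gene Welch t-test with Benjamini-Hochberg correction as a stand-in; the study itself used SAM, which relies on a moderated statistic and permutation-based FDR estimation, so this is an assumption-laden simplification rather than the authors' method.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
n_genes, n_a, n_b = 500, 40, 60
# Simulated log2 expression: group A vs group B; a handful of genes truly shifted.
expr_a = rng.normal(0, 1, size=(n_genes, n_a))
expr_b = rng.normal(0, 1, size=(n_genes, n_b))
expr_a[:5] += 1.0  # pretend 5 genes are truly up in group A

# Per-gene Welch t-test (simplified stand-in for SAM).
pvals = np.array([
    stats.ttest_ind(expr_a[g], expr_b[g], equal_var=False).pvalue
    for g in range(n_genes)
])

# Benjamini-Hochberg control at a 5% FDR.
rejected, qvals, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print("genes called differentially expressed:", int(rejected.sum()))
```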
Our multicentric study reveals that ER-low and ER-neg BCs share similar immune and gene expression characteristics, differing significantly from ER-int tumors. We uniquely demonstrated that high TILs in ER-low BC independently indicate a positive prognosis. Our clinical outcome analyses showed no significant differences in RFS and OS between the ER-low and ER-negative cohorts, with even a numerically higher rate of relapses in ER-low tumors. Importantly, both groups exhibited comparable pCR rates when treated with NACT, aligning with previous studies and contrasting sharply with the limited response rates generally seen in hormone-receptor-positive/HER2-negative BC . Our observation that ER-low and ER-neg BCs have similar TILs density, whereas it is lower in ER-int BC specimens, is remarkable. Indeed, ER-neg BC specimens typically exhibit higher levels of TILs when compared to hormone-receptor-positive/HER2-negative BCs , owing to the generally higher immunogenic background of ER-neg tumors, which contrasts with the "cold" immune-suppressive TME often observed in hormone-receptor-positive/HER2-negative BC . Notably, in this study, we found that high levels of TILs were comparably associated with a more favorable prognosis in both ER-neg and ER-low BC patients. Consistently, we observed a similar ratio of CD8/FOXP3 positive cells in ER-low and ER-neg tumor specimens, suggesting a similar polarization of the TME . Again in contrast with the acknowledged low expression of PD-L1 in hormone-receptor-positive BCs , we also identified a high positivity rate in ER-low tumors, akin to ER-neg. Together, these data support the existence of similar immune dynamics across ER-expression levels up to 9%. In our gene-expression analysis, ER-low and ER-neg BC samples showed no major transcriptional differences, including an enrichment in basal-like subtypes, consistent with findings in previous studies . Notably, no immune-related gene was differentially expressed between these groups. In contrast, ER-int tumors displayed a distinct immune profile, characterized by increased expression of several mast cell-related genes. This aligns with previous findings that higher ER levels correlated with mast cell presence , a trait potentially contributing to the promotion of a luminal phenotype . Our data provide strong evidence that ER-low and ER-neg are immunologically and biologically similar entities. Although ER IHC-staining was conceived as a predictive biomarker for ET benefit, the relationship between ER nuclear expression and specific immune-suppressive features typical of ER-positive tumors , which may dampen responses to ICIs , appears to be nonlinear. Our study shows that tumors with ER levels up to 9% exhibit a similar CD8/FOXP3 ratio, PD-L1 expression, and GEP, indicating a marked immune and molecular divergence beginning at ER-int expression levels. This partially aligns with a recent report confirming similar immune features in ER-neg and ER-low BC .
However, that study, despite reporting a higher prevalence of basal-like subtypes in ER-neg and ER-low compared with ER-int tumors, did not observe significant differences in TME across a broader range of ER expression levels (0% to 50%). This aligns with our exploratory observation of similar TIL density in patients with ER up to 30%, corroborating the potential of identifying a group of immune-active tumors within the broader ER-positive spectrum. The biologic heterogeneity within ER-positive/HER2-negative BCs plays a critical role in determining the efficacy of CT, ET , and ICIs . Luminal tumors are sensitive to ET , whereas basal-like tumors resist ET and cyclin-dependent kinase 4/6 inhibitors but are more responsive to chemotherapy . Molecular subtyping combined with immune features may help identify ER-expressing tumors sensitive to immunotherapy across ER levels . For instance, in the I-SPY2 trial, among ER-positive/HER2-negative BC classified as high-risk on MammaPrint, a basal-like intrinsic subtype was associated with a 67% pCR rate with pembrolizumab added to NACT . Furthermore, the GIADA trial reported that the co-occurrence of a basal-like intrinsic subtype and high TILs in premenopausal patients with ER ≥10%/HER2-negative BC and a luminal B-like IHC profile could accurately predict pCR after ICI-based neoadjuvant treatment and ET. Exploring the presence of this immune-responsive basal-like/high-TILs phenotype in our cohort, we observed higher TILs in ER-low and ER-int BC with basal-like tumors compared with non-basal-like tumors. Recent trials have underscored a distinct activity of ICIs in the ER-low subgroups , mirroring that of ER-neg patients and supported by the similar immune dynamics seen in our study. The NeoPACT phase II trial demonstrated comparable pCR rates in ER-low (56%) and ER-neg patients (58%) with pembrolizumab-NACT . In the Keynote-756 trial, ER-low patients experienced a 25.6% increase in pCR rates from the addition of pembrolizumab to NACT, much higher than the mere 8% seen in patients with ER 10%-100% . Strikingly, this delta is even larger than the 13.6% increase shown in the Keynote-522 trial, which led to pembrolizumab's approval for ER-neg breast cancer . Similarly, the addition of nivolumab to NACT in the Checkmate 7FL trial resulted in a 27.0% increase in pCR rate in ER-low patients and 29.3% in those with ER ≤50%, compared to just a 7.4% increase in patients with ER >50% . A correlation between pCR rates and the expression of PD-L1 and TILs was seen in those trials across the spectrum of ER-positive tumors, which suggests the potential of a biologically informed, response-oriented subtyping of BC . Our study has several strengths. It represents the largest study to provide immune-transcriptomic profiling of patients with ER-low BC, offering significant insights into this understudied population. The multicenter design of our study and the available long-term follow-up data enhance the generalizability and robustness of our findings. Because the two institutions involved in our digital-pathology workflow used different tissue-handling protocols, results of those analyses have been presented separately, a distinction that provides a robust and nuanced overview of immunological features. This study also has some limitations, including its retrospective nature and the relatively small sample size of ER-low tumors.
Treatment imbalances between the ER-low and ER-neg cohorts might have influenced our clinical outcome analyses and should be considered when interpreting our findings. First, patients with ER-low BC tumors were less frequently exposed to chemotherapy and more frequently managed with surgery upfront compared with ER-neg patients, although post-neoadjuvant tailoring of adjuvant treatment based on the response rate to NACT was not broadly employed in our cohort. Moreover, ET was not frequently administered, reflecting current clinical practice, as oncologists are generally less prone to prescribe ET in ER-low tumors due to the limited survival benefit reported in earlier studies and the notable side effects associated with ET . Our study’s limited sample size precludes a definitive evaluation of the impact of these therapeutic decisions on the prognosis of patients with ER-low tumors. In this regard, the numerically worse prognosis we observed in ER-low compared with ER-neg tumors, with an even higher incidence of distant relapses, may support further discussion on the role of ET for selected patients with ER-low tumors . Nonetheless, the comparable survival between ER-low and ER-neg tumors seen in our study, consistent with larger cohorts , underscores the urgent need to generate robust evidence to guide the clinical trajectory of patients with ER-low tumors. The comparison of TILs in the non-consecutively treated ER-int cohorts warrants caution, due to limited sample size and the potential selection bias. Potential analytical challenges stemming from the absence of a centralized review of both ER status and TIL density cannot be excluded; however, we believe that these issues were mitigated. Tumor samples were evaluated by experienced and dedicated BC pathologists at single pathology units within high-volume comprehensive cancer centers. ER status was locally reviewed, and TILs were quantified on whole-slides following standardized recommended guidelines and using reference images . The consistency in TILs distribution of ER-low and ER-neg tumors across our participating institutions further supports our findings and TILs’ established reproducibility . The use of SP142 antibody to define PD-L1 positivity in our cohort warrants caution, because this assay has only partial overlap with PD-L1 expression levels defined using 22C3 antibody , the antibody used to define pembrolizumab eligibility in the metastatic setting. Still, a cutoff of ≥1% using SP142 has been shown to be predictive of nivolumab benefit in ER-positive patients treated in the Checkmate 7FL trial , reinforcing the biological role of evaluating PD-L1 status using SP142 in our cohort. Moving forward, efforts to personalize cancer treatment in ER-low tumors should focus on examining TME’s functional status and spatial distribution. The use of IHC staining for CD8, FOXP3, and PD-L1 in our cohort allowed us to evaluate key components of the immune compartment using established IHC markers. However, this TME profiling is only partial and may overlook varying immune-states , which could affect the efficacy of distinct immunomodulatory combinations across ER statuses. Techniques such as multiplexed single-cell spatially resolved tissue analyses could be instrumental in exploring subtle variations in the immune contexture related to various ER levels, potentially overlooked in our quantitative analysis. 
Such an approach could pave the way for truly tailored immunotherapy strategies beyond traditional IHC-based classifications, across varying ER levels . In conclusion, our results demonstrate that ER-low and ER-neg BC are immunologically and molecularly akin, clarifying their similar clinical outcomes and responses to therapeutics, particularly to ICIs. In this regard, we believe our data contribute notably to the growing body of clinical and translational evidence calling for a reevaluation of ER-based BC classification and management. As such, we advocate for a treatment approach that aligns ER-low tumors with ER-neg, as a few guidelines are starting to acknowledge , to avoid perpetuating the current disparities in regulatory access to effective treatments for this subgroup of patients. Crucially, this endeavor should encompass at least the inclusion of patients with ER-low and triple-negative tumors in the same clinical trials, a practice already adopted in academic trials , ensuring that the high-risk ER-low patient population is not deprived of access to potentially transformative therapies, such as immunotherapy. Evidence of benefit from ICIs that stems only from the small subgroups of ER-low patients enrolled in trials dedicated to ER-positive BC could at best result in a considerable delay in access to this treatment option, should long-term survival endpoints support the approval of ICIs in this population. djae178_Supplementary_Data
Electrophysiological responses to conspecific odorants in
33dbf39a-97af-4f5f-a279-e8a3376ccdba
9451071
Physiology[mh]
Xenopus and other pipid frogs are fully aquatic species that spend their adult lives in ponds rather than becoming terrestrial. Although their anuran ancestors lived their adult lives out of the water, these species have adapted to aquatic life with numerous specializations over the last 140 million years or more . These include specializations to the olfactory system to allow adult animals to separately sample both airborne and waterborne stimuli . Adult X. laevis have two chambers within the nose with a valve at the external naris that allows either the air nose to be open when above water, or the water nose to be open below the surface . The air nose, or principal cavity, connects to the respiratory tract and contains an olfactory epithelium similar to that seen in all adult anurans; it may be used to find new ponds during overland migration . Also similar to other anurans, X. laevis have a vomeronasal organ at the base of the principal cavity and adjacent to the choana (the opening that connects the oral cavity with the principal nasal cavity) which likely samples waterborne chemicals originating from the nasolacrimal duct or the choana . The water nose is a dead-end chamber, often referred to as the medial cavity (despite it being lateral and ventral to the principal cavity), through which water is actively circulated when the animal is submerged due to the pulsation of the lateral nasal wall . The water nose contains a separate olfactory epithelium that resembles the larval epithelium of anurans, showing specializations for waterborne odorants, including a mix of ciliated and microvillous receptor neurons expressing OR1 (class I), OR2, and V1R receptors . The water nose may be important for finding food, alerting to predators, and acquiring information about conspecifics via chemical cues in the water . X. laevis social and reproductive interactions have been well studied but significant gaps remain in our understanding of how sensory or hormonal cues lead to particular behaviors. There is no evidence that these animals are territorial; instead, a population will share space within a pond where they have a prolonged breeding period, with females entering sexual receptivity asynchronously during the rainy season . These animals use their extensive vocal repertoire for social and reproductive communication, with males and females calling to each other, as well as male-male vocal interactions . Males produce more advertisement calls when a female is present, but it is unclear what sensory cues drive the increased calling. When males are housed together, they do not chorus; in fact, certain males tend to do most of the calling, suggesting a social hierarchy . This may involve an assessment of self (endocrine state, for example) relative to others (perhaps using body size or condition, calling, or chemical cues). Males also select among different reproductive tactics , which may depend on a similar assessment of conspecifics. Chemosensory signaling may be an important missing piece of this puzzle. We do not know what sensory cues prompt different vocalizations, particularly for males; nor do we know what causes males to be dominant or subordinate in vocal or clasping interactions. Looking at other species, it seems plausible that chemical communication could play an important role in these social interactions by allowing animals to learn about nearby conspecifics.
Chemical signals are frequently used for social and reproductive signaling across all taxa, including amphibians and other aquatic vertebrates, such as fish . While vocal communication has traditionally received more attention in anurans, cases of chemical communication have been documented . Previous behavioral or physiological studies of X. laevis chemosensation have yet to address these questions, largely focusing on responses to food stimuli and on larval olfactory physiology . To assess the role of waterborne odorants in adult X. laevis social interactions, we developed the first in situ electroolfactogram (EOG) preparation for this species, allowing us to record receptor potentials in the water nose. We then used our EOG preparation to test whether male X. laevis could detect cloacal fluids or skin secretions from male and female conspecifics and determined the sensitivity of the nose to these potential social stimuli. Cloacal fluids consist primarily of urine but may also contain chemicals from reproductive or gastrointestinal tracts, given the confluence of these systems in the cloaca. Urine, which contains hormones, hormone metabolites, bile acids, and other species-specific, sex-specific, and condition-dependent molecules, is a common source of social chemical signals in other species . Amphibian skin secretions also contain a variable mix of chemicals including a variety of peptides, proteins, antimicrobial substances, and toxins . Several pheromones have been identified in the skin secretions of other amphibians . Animal handling and in situ olfactory preparation All animal handling and experiments were conducted with the oversight and approval of the Institutional Animal Care and Use Committee at the Marine Biological Laboratory (MBL), Woods Hole, Massachusetts (protocol number 17-07H-Final) as well as the approval of the Denison University Institutional Animal Care and Use Committee. All animals were sexually mature adults procured from and housed in the National Xenopus Resource (NXR) at the MBL. Frogs were housed in same-sex tanks at 18–20 °C with a 12:12 light cycle. Frogs were fed a 1:1 mix of adult frog brittle (Nasco) and Bio-Trout pellets (Bio-Oregon). A total of 27 male wild type Xenopus laevis (7.9 ± 0.5 cm snout-vent length; 64.0 ± 9.9 g) were utilized for this study, with the first 17 animals used to understand nasal anatomy, establish recording procedures, and pilot a range of potential stimuli. The final 10 animals were used to collect the data presented here. Additionally, several adult male and female Xenopus laevis belonging to the NXR were handled briefly and with permission from the NXR to collect stimuli as described below. No physiological recordings were made in female animals for this study. Experiments were conducted during the summer of 2017 (June–August) at the MBL. The male animals used for physiological recordings were anesthetized with MS222 dissolved in phosphate buffered saline; dosing was 0.15 mg/g body weight, injected subcutaneously into the dorsal lymph sac. Once the frog was deeply anesthetized, we placed it on ice for 5 to 10 minutes before euthanizing by double pithing the frog . By destroying the central nervous system, pithing achieved euthanasia, including terminating all motor activity, while leaving the olfactory epithelium intact and functional. This created an in situ preparation for testing olfactory responses. The frog's body was placed in a custom chamber that elevated its naris above the rest of its body.
Ice was placed over the frog’s body to help maintain healthy tissues for as long as possible by lowering the metabolic rate. We opened the frog naris with small surgical scissors, removing superficial tissue and underlying cartilage until we exposed the medial olfactory cavity (the water nose; ). The water nose cavity was continuously perfused with room temperature saline at 2–3 ml/min using a gravity fed system to keep the tissue moist and provide a path for stimulus delivery and wash out. Excess fluid flowed freely out of the cavity and ultimately passed through a drain at the bottom of the frog chamber and into a waste collection. Saline was selected to mimic ionic concentrations in olfactory mucosa and consisted of 55 mM NaCl, 10 mM KCl, and 4 mM CaCl 2 in Millipore purified deionized water, brought to pH of 7.5 using NaOH. EOG recording Electroolfactogram (EOG) recordings were made using a silver/silver-chloride electrode placed in a glass pipette, tip diameter ~100 μm, tip-filled with 1% agar (Sigma A1296, dissolved in saline) and backfilled with saline. The EOG electrode and reference electrode were connected to a head stage and amplifier (AM Systems 3000) for differential recording. The amplifier was set to DC (no high pass filtering) with notch filtering on, low pass filter at 1 KHz, and gain at 1000x. Data was digitized with a Digidata Micro 1401 (CED, Cambridge, UK), and continuously recorded at a rate of 10 kHz with Spike2 software (CED). The EOG electrode was held by a micromanipulator and placed such that the tip was submerged in perfused saline, just above the olfactory epithelium along the medial wall of the water nose based on visual landmarks . A reference electrode was placed in the mouth. Electrode position and prep viability were assessed by delivering positive and negative control stimuli (methionine and saline, see stimuli below). If we did not record a normal EOG signal for methionine (a characteristic negative deflection lasting 2–3 seconds with expected latency based on the perfusion and stimulus deliver system described below), we would wash out the stimulus, reposition the recording electrode or change the glass pipette, then try again. Once a recording site was established, a range of stimuli were tested. Stimulus acquisition and delivery Stimuli consisted of male and female cloacal fluids, male and female skin secretions, and several positive and negative controls. Amino acids (1mM L-methionine (Sigma M5308) dissolved in saline and 1 mM L-alanine (Sigma A7469) dissolved in saline) were used as positive controls because they are reliably detected chemical signals in this and other species . Negative controls included saline controls (saline identical to that being perfused was injected into the perfusion line to control for mechanosensory response to changes in flow rate or pressure), and cloacal- and skin-specific controls (described below; these controlled for contaminating odorants). To collect cloacal fluids, we gently held an adult frog and placed a small piece of new, clean polyethylene tubing inside the cloaca of the frog and waited for fluid to move down the tube by capillary action. Not all frogs yielded fluid, those that did typically yielded 10–100 μl. We performed this process with both male and female frogs, until we had successfully collected fluids from several frogs of each sex. All animals were sexually mature adults, but specific hormonal states or reproductive histories were not known. 
Samples were pooled by sex, creating cocktails of male or female cloacal fluids. Skin secretions were collected using the hassle bag technique . We rinsed tank water from each frog by gently spraying it with deionized water, then placed each frog into a new plastic sandwich bag (made of low-density polyethylene) and massaged the frog gently for approximately 1 minute to stimulate mucosal secretion. Frogs were then returned to their home tanks and the contents of each bag was collected into a microcentrifuge tube. Again, we sampled several male and female frogs; samples were pooled by sex, creating cocktails of male or female skin secretions. Because our collection techniques could introduce non-biological odorants into our samples (from plastics, for instance), we created cloacal- and skin-specific control stimuli. To do so, we passed saline through all the steps of collection (including either polyethylene tubing or sandwich bags, and microcentrifuge tubes) for each process. Samples of all stimuli were aliquoted and frozen to ensure consistency across experiments. Freshly-collected and frozen stimuli were compared during pilot experiments, and no difference in response was observed. Saline used for perfusion was made fresh for each experiment. Because cloacal fluids were difficult to collect and were collected in small volumes, they were diluted 1:100 in saline before being aliquoted and frozen. Skin secretions were diluted 1:10 before freezing. At the start of each experiment, a set of stimuli was taken from the freezer, defrosted, and serial dilutions were performed using the freshly made saline to achieve a range of dilutions. To deliver a stimulus during an experiment, 50 μl of the stimulus was injected into a port in the perfusion line carrying saline to the olfactory epithelium. After injection, the stimulus reached the olfactory epithelium after several seconds and somewhat diluted. To better understand the time delay and dilution of stimuli, we calibrated the system without an animal present. Specifically, we ran de-ionized water through the perfusion system and injected 50 μl of 1M NaCl in place of a stimulus. Samples were collected from the perfusion system output and tested on an osmometer. Most of the salt was detected between 5 and 10 seconds after injection, with approximately 1:5 dilution of the peak salt concentration. Actual, instantaneous concentrations experienced by the olfactory epithelium could vary from this estimate due to the way liquid flowed across the epithelium (subtle pooling or mixing could alter the concentration over time) or differences in the temporal resolution of our sampling vs. the temporal resolution of the olfactory epithelium (where long, slow receptor potentials suggest integration over time). Stimulus wash out was also characterized: the measured salt concentration was reduced to less than 1% 20 seconds after injection and undetectable after 30 seconds. Once a good EOG recording site was established for an animal, stimuli were run in blocks, with each block testing one type of stimulus (e.g., female cloacal fluid) at different dilutions. 
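Because each stimulus passes through several dilution steps before reaching the epithelium, a small helper can make the combined factor explicit. This is only an illustrative calculation based on the dilution figures given above (1:100 or 1:10 stock dilution, day-of-experiment serial dilutions, and the approximately 1:5 dilution of the injected bolus estimated from the NaCl calibration); the function name and default are assumptions, and, as noted in the text, actual instantaneous concentrations at the epithelium may differ.

```python
def effective_dilution(stock_dilution: float, serial_dilution: float,
                       delivery_factor: float = 5.0) -> float:
    """Overall fold-dilution of a raw secretion by the time it reaches the epithelium.

    stock_dilution  : fold-dilution applied before freezing (e.g., 100 for cloacal fluid, 10 for skin).
    serial_dilution : additional fold-dilution applied on the day of the experiment.
    delivery_factor : approximate further dilution of the injected bolus in the perfusion
                      line (about 5, based on the NaCl calibration described above).
    """
    return stock_dilution * serial_dilution * delivery_factor

# Example: cloacal fluid frozen at 1:100, run at a 1:10 serial dilution.
total = effective_dilution(stock_dilution=100, serial_dilution=10)
print(f"peak stimulus reaches the epithelium at roughly 1:{total:.0f} of the raw secretion")
```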
All blocks began with a positive control (typically 1 mM methionine, but when methionine was the test stimulus, alanine was used as the positive control), followed by a wash (for the wash, ~0.5 ml saline was slowly injected into the perfusion line over several seconds to ensure the injection port was clear of stimuli), then a saline stimulus (50 μl) was run to ensure the wash was complete and no EOG signal was detected . Next, we began the test stimulus at its lowest concentration, followed by a wash and a saline control as before . Then we would deliver the second most dilute test stimulus, followed by washes and controls, and so on, until we reached the least dilute stimulus. If the test stimulus was either skin secretions or cloacal fluids, we also ran the appropriate stimulus-specific control in between each test stimulus. After completing the stimulus sequence from most to least dilute, we would repeat it again in the opposite order–least to most dilute–with appropriate washes and controls in between. This was done to ensure there was no order effect, and indeed we saw no systematic difference in EOG amplitude when we compared low-to-high vs. high-to-low sequences. The stimulus block was ended by repeating the positive control stimulus, followed by a wash. The time between any two stimuli was not automated and thus varied slightly but was almost always >30 seconds, and often closer to 60 seconds. We saw no signs of adaptation in our data; response amplitudes were similar regardless of stimulus order and even when the same stimulus was repeated several times with washes and controls in between. Additional blocks were then run to examine responses to other kinds of stimuli, as long as good quality recordings could be collected. Not all types of stimuli were run successfully in all preparations. Data analyses To quantify EOG responses for dose response data, we measured the amplitude of the EOG signal and calculated z-scores for the amplitudes of signals in response to test stimuli at each dilution relative to control stimuli. EOG amplitudes were determined by taking the minimum value from the EOG trough and subtracting the average baseline value taken from a 1 second range, starting 2 seconds prior to EOG onset. Because we recorded using a DC amplifier and without low frequency filtering, the baseline showed significant drift at times . Thus, we performed the following baseline corrections: An average post-stimulus baseline was measured from a 1 second range shortly after the end of EOG signal (the time period was set based on the timing of the positive control stimuli at the start and end of each block; this was typically 10 or more seconds after the pre-stimulus baseline). If the pre and post stimulus baselines differed by 0.1 mV or more, the slope of the baseline was calculated. An adjusted baseline value at the time of the EOG trough was then calculated using the slope and the time of the EOG minimum; the trough value was subtracted from the adjusted baseline to determine EOG amplitude. For each block of stimuli, the average EOG amplitude was calculated for each dilution of the test stimulus (typically there were two trials of each dilution per block). The average and standard deviation of the EOG amplitude for the stimulus-specific negative control were also calculated (typically there were 12 trials of the control per block). 
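A minimal sketch of this amplitude measurement, together with the z-score normalization described next, is given below. It is an illustrative reconstruction rather than the authors' analysis code: the function names, the fixed 10 kHz sampling constant, the exact placement and centring of the baseline windows, and the use of NumPy are all assumptions.

```python
import numpy as np

FS = 10_000  # Hz; the data were digitized at 10 kHz

def eog_amplitude(trace, onset_s, end_s, post_base_s, fs=FS):
    """EOG trough amplitude with the linear baseline-drift correction described above.

    trace       : 1-D DC-coupled recording in mV
    onset_s     : EOG onset time (s); end_s : end of the EOG deflection (s)
    post_base_s : start of the 1 s post-stimulus baseline window (s)
    """
    # Pre-stimulus baseline: 1 s window starting 2 s before EOG onset.
    pre = trace[int((onset_s - 2) * fs): int((onset_s - 1) * fs)].mean()
    # Post-stimulus baseline: 1 s window shortly after the EOG signal ends.
    post = trace[int(post_base_s * fs): int((post_base_s + 1) * fs)].mean()

    # Trough (minimum) of the negative-going EOG deflection.
    seg = trace[int(onset_s * fs): int(end_s * fs)]
    trough_idx = int(onset_s * fs) + int(np.argmin(seg))
    trough = trace[trough_idx]

    # If the two baselines differ by 0.1 mV or more, interpolate a drifting
    # baseline and evaluate it at the time of the trough; the use of the
    # window centres for the slope is a judgment call, not stated in the text.
    if abs(post - pre) >= 0.1:
        t_pre, t_post = onset_s - 1.5, post_base_s + 0.5
        slope = (post - pre) / (t_post - t_pre)
        baseline = pre + slope * (trough_idx / fs - t_pre)
    else:
        baseline = pre

    return baseline - trough  # magnitude of the negative deflection

def z_score(test_amplitudes, control_amplitudes):
    """z-score of the mean test-stimulus amplitude relative to the control trials of the same block."""
    return (np.mean(test_amplitudes) - np.mean(control_amplitudes)) / np.std(control_amplitudes)
```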
Z-scores were then calculated according to the following formula, where x_i is the average EOG amplitude to a test stimulus, μ is the average EOG amplitude to control, and σ is the standard deviation of the EOG amplitude to control: z = (x_i − μ) / σ. Without behavioral data, we cannot know the detection threshold for stimuli. Thus, a threshold of z ≥ 2 was set to represent responses that are likely detectable as different from control, since such responses would be greater in amplitude than 97.7% of control responses. in situ olfactory preparation All animal handling and experiments were conducted with the oversight and approval of the Institutional Animal Care and Use Committee at the Marine Biological Laboratory (MBL), Woods Hole, Massachusetts (protocol number 17-07H-Final) as well as the approval of the Denison University Institutional Animal Care and Use Committee. All animals were sexually mature adults procured from and housed in the National Xenopus Resource (NXR) at the MBL. Frogs were housed in same-sex tanks at 18–20 °C with a 12:12 light cycle. Frogs were fed a 1:1 mix of adult frog brittle (Nasco) and Bio-Trout pellets (Bio-Oregon). A total of 27 male wild type Xenopus laevis (7.9 ± 0.5 cm snout-vent length; 64.0 ± 9.9 g) were utilized for this study, with the first 17 animals used to understand nasal anatomy, establish recording procedures, and pilot a range of potential stimuli. The final 10 animals were used to collect the data presented here. Additionally, several adult male and female Xenopus laevis belonging to the NXR were handled briefly, with permission from the NXR, to collect stimuli as described above. No physiological recordings were made in female animals for this study. Experiments were conducted during the summer of 2017 (June–August) at the MBL. The male animals used for physiological recordings were anesthetized with MS222 dissolved in phosphate buffered saline; dosing was 0.15 mg/g body weight, injected subcutaneously into the dorsal lymph sac. Once the frog was deeply anesthetized, we placed it on ice for 5 to 10 minutes before euthanizing by double pithing the frog . By destroying the central nervous system, pithing achieved euthanasia, including terminating all motor activity while leaving the olfactory epithelium intact and functional. This created an in situ preparation for testing olfactory responses. The frog's body was placed in a custom chamber that elevated its naris above the rest of its body.
We successfully recorded EOG responses to biologically relevant stimuli in adult male Xenopus laevis . We saw large and reliable EOG responses to the amino acids methionine and alanine as well as to conspecific cloacal fluids and skin secretions. We tested the dose dependence of the responses to methionine and found responses declined in amplitude with each 10-fold dilution . For each animal, the average EOG amplitude for a given stimulus type was converted to a z-score using the average and standard deviation of EOG amplitude for control stimuli (for amino acids, saline was used as a control; for cloacal fluids and skin secretions, saline was run through all equipment used for stimulus collection to create cloacal and skin-specific controls). A stimulus was considered "detected" if the z-score was 2 or greater. The detection threshold for our preparation was between 1 and 10 μM methionine, with 5 of 6 individuals showing detection at 10 μM, and only 1 of 6 showing detection at 1 μM.
Note that these and other concentrations were the original concentrations of the stimuli, and we estimate there was an additional 5-fold dilution of stimuli when the stimuli reached the olfactory epithelium. We found reliable EOG responses to conspecific cloacal fluids which varied in magnitude and detection threshold depending on whether the cloacal fluids were taken from male or female animals (Figs and ). EOG responses to female cloacal fluids were strong, showing detection in all 7 animals tested for a 1:100 dilution. Detection threshold was between 1:1000 and 1:100,000, with 5 of 7 animals detecting the stimulus at 1:1000 and no animals detecting the stimulus at 1:100,000. The response to male cloacal fluids was less robust. While all animals detected the stimulus at 1:100 dilution, EOG amplitudes were smaller (resulting in smaller z-scores). Only 2 of 6 animals detected male cloacal fluids at 1:1000 dilution and no animals detected it at 1:100,000, suggesting the detection threshold may be close to 1:1000. Skin secretions from male and female animals also produced robust responses. At 1:10 dilution, all animals showed strong EOG signal, well above control, to skin secretions taken from both male and female animals . At 1:100 dilution, responses decreased but were still detected by 6 out of 7 animals tested with female skin secretions and 4 out of 4 animals tested with male skin secretions. Female skin secretions produced detectable responses in 2 animals at 1:1000 and 1:100,000 dilutions. Male skin secretions evoked just detectable responses in 1 animal at 1:1000 and in a different animal at 1:100,000. Control stimuli for cloacal fluids and skin secretions consisted of saline passed through the same type of plastics used to collect and store either the cloacal fluids or skin secretions; they were then aliquoted and frozen, just like the test stimuli. These controls often evoked small EOG responses themselves , unlike the saline controls used for amino acid stimuli . This demonstrates the sensitivity of this preparation and the need for careful controls, as even “clean” laboratory equipment may shed odorants that can contaminate stimuli. We successfully recorded olfactory responses to conspecific odorants in X . laevis , showing that male X . laevis likely detect chemicals in female cloacal fluids and in male and female skin secretions. Our in situ EOG preparation worked well, generating responses to amino acid stimuli comparable to other EOG and calcium imaging studies in aquatic amphibians . Using our arbitrary detection threshold of z ≥ 2, we found reliable responses to the amino acid methionine at concentrations of 10 μM (actual concentration estimated to be closer to 2 μM based on dilution in the stimulus delivery system; see ). These results are similar to the findings of Breunig and colleagues, which found individual olfactory receptor neurons in larval X. laevis had detection thresholds for methionine ranging from 0.2 to 200 μM . Male X . laevis showed robust olfactory responses to conspecific cloacal fluids. Responses to female cloacal fluids showed particular strength and sensitivity, with most animals likely to detect the stimulus at dilutions of 1:1000 or more. Responses to male cloacal fluids were much weaker and just detectable at a dilution of 1:100. This result suggests the presence of a female-specific odorant in cloacal fluids that male X . laevis could use to help locate a mate or to determine when advertisement calling would be most advantageous. 
Female cloacal fluids contain a wealth of potential signal molecules, including hormones and hormone metabolites , at least some of which can be detected by olfactory receptors in X . laevis tadpoles . In teleost fish, it has been well documented that male fish can detect hormones and hormone metabolites in the urine of female fish, as well as bile acids, and use that information to alter behavior in appropriate ways, such as initiating courtship behaviors in the presence of a reproductively active female . There is evidence that other amphibians may also gain information about the reproductive status of conspecifics or change behavior patterns because of hormones released into the water by conspecifics . Additional testing could elucidate if similar chemical signaling occurs in Xenopus . Olfactory responses to male and female skin secretions were more similar in magnitude and sensitivity, with most animals detecting the stimuli at 1:10 and 1:100 dilutions from either sex. However, two animals appeared to show detectable responses to female skin secretions down to the 1:10,000 dilution, indicating the possibility for greater sensitivity. Skin secretions have been shown to contain pheromones in other species of anuran amphibians; these include signals involved in mate attraction, mate choice, and reproductive competition or aggression . Skin secretions may also contain a variety of antimicrobial peptides and toxins that could be used to identify conspecifics . Given the close proximity of the X . laevis male nose to the skin of a conspecific held in amplexus (the reproductive position), there is certainly opportunity to sample odorants released by the skin . Chemosensory information about conspecifics of both sexes could influence male X . laevis behavior in important ways. Male X . laevis produce different vocalizations and adopt different clasping behaviors depending on the animals they are housed with. Assessing the chemicals released in conspecific cloacal fluids and skin secretions may help males choose the most appropriate and adaptive behavior for its social circumstance. Males may use a combination of chemical and auditory signals to decide when to call and what vocalization to produce. Such multimodal signaling would not be unusual and would provide potentially important additional information about the animal’s social circumstance in an environment where vision cannot be employed ( Xenopus reproduce at night in muddy ponds) . The EOG technique we describe here may be useful to identify candidate social signaling molecules so that their behavioral effects can be evaluated. In other species, EOG has been a key tool in screening and identifying pheromones . The identification of such signals in X . laevis would be an important addition to the growing body of genetic, behavioral, physiological, and evolutionary knowledge about this species .
A multi-modal dental dataset for semi-supervised deep learning image segmentation
e13dd5f4-75cb-4043-8115-c1f1939ce14b
11747459
Dentistry[mh]
Oral diseases are significant global public health challenges that demand immediate action . According to the World Health Organization, 3.58 billion people suffer from severe periodontal disease, while dental caries affect two-thirds of the global population . As of May 2015, up to 57 systemic diseases have been assessed as potentially linked to periodontitis, with the most related being cardiovascular diseases, diabetes, and respiratory illnesses . These conditions severely impact individual quality of life and exert considerable strain on global healthcare systems. Dentists face increasing workloads due to the rising number of patients and affected teeth, challenging current work practices. The traditional method of assessing oral health relies on clinical examinations that focus on visible signs of current oral diseases and treatment outcomes . However, relying solely on clinical examinations can be time-consuming and may lead to diagnostic inaccuracies due to observational limitations. Fortunately, the widespread use of Panoramic X-ray Images (PXIs) and Cone Beam Computed Tomography (CBCT) in dentistry has mitigated these issues to some extent. Dental radiological computer-aided detection and diagnosis serve several purposes, such as detecting radiopaque lesions in the maxillary sinus, calcification in the carotid artery, and morphological changes in the mandibular cortical bone indicative of possible osteoporosis . PXI offers a comprehensive view of the oral cavity, assisting in detecting issues like impacted teeth, skeletal anomalies, and cysts (see Fig. ). However, PXI, while providing a complete view of all teeth, also includes irrelevant information, such as the upper and lower jawbones, the temporomandibular joint, and parts of the nasal cavity and sinuses. Moreover, as a 2D image, PXI lacks the detail and resolution of 3D CBCT and cannot provide complete three-dimensional information. On the other hand, CBCT provides clear, undistorted 3D images, is highly accurate for complex case treatment planning, and is applicable across various dental specialties (see Figs. , , showcasing CBCT datasets). Despite CBCT’s superior imaging capabilities, it has higher time costs, radiation exposure, and operational complexity compared to PXI. Therefore, combining both can provide doctors with more detailed and comprehensive patient information from different perspectives. In addition to the wide application of dental radiological CAD, the importance of deep learning methods in the medical field is undeniable. However, current public dental datasets, both PXI and CBCT, are limited in number and annotation information. This is due to constraints in dataset acquisition: (1) The extensive workload in annotation, both in precision and quantity, for example, a CBCT containing 200–500 scans, each requiring an average of 3–8 minutes; (2) Difficulties in accessing medical data; (3) The high cost of hiring dental experts for annotation. Thus, in dental image analysis tasks, semi-supervised learning plays a crucial role, especially when dealing with large amounts of unlabeled data. By leveraging semi-supervised learning approaches, researchers can significantly reduce the dependency on fully annotated datasets, thereby alleviating the burden of manual annotation. This is particularly effective when handling large-scale datasets, such as typical CBCT scans, where the labor-intensive nature of annotation poses a major challenge. 
Additionally, we note the absence of a benchmark in the dental field to provide baseline tests for various tooth segmentation tasks. Considering all these factors, establishing a multimodal tooth segmentation dataset with pixel-level annotations is meaningful. In this paper, we introduce a Semi-supervised Tooth Segmentation (STS-Tooth) dataset composed of 4,000 PXIs and 148,400 CBCT scans. For 9,700 of these images, we provide pixel-level segmentation annotations. Our main contributions can be summarized as follows: Image: The dataset includes 4,000 two-dimensional dental PXIs (STS-2D-Tooth) and 148,400 CBCT scans (STS-3D-Tooth), with 3,500 adult and 500 child dental PXIs in STS-2D-Tooth. Annotation: We have pixel-level annotated 900 PXIs and 8,800 CBCT scans. The initial work and annotations were completed by 20 trained dental practitioners and verified by 6 experienced dentists, taking a year to complete. We believe the release of our private STS-Tooth dataset will serve as an important benchmark for tooth image segmentation in the deep learning field, significantly promoting tooth-related research in deep learning and facilitating the transition from technology to clinical application. The development of STS-Tooth primarily consisted of five stages: data collection, data preprocessing, data filtering, data annotation, and dataset organization. The main stages in the workflow are depicted in Fig. . The detailed descriptions of each stage in the workflow are as follows: Data collection For the STS-2D-Tooth dataset, a portion of the dataset was sourced from our previous work . The 2D dataset was divided into adult teeth PXI (A-PXI) and children's teeth PXI (C-PXI). For the previous work, we included a total of 2,892 2D PXIs of children (193 cases) and adults (2692 cases). In this work, we further expanded the number of 2D PXIs to 4315 cases (including 569 children and 3746 adults). The additional images were gathered from Hangzhou Dental Hospital, Hangzhou Qiantang Dental Hospital, and Sichuan Provincial People's Hospital between January 2020 and December 2022. These images were from 4,192 patients, ranging in age from 4 to 92 years. We classified the data into two categories, adults and children, and then performed further processing. In Fig. , we showcase A-PXI images with different imaging effects. The grayscale value differences between the background and the teeth in these images vary, which helps to accommodate various imaging effects, thereby enhancing the algorithm's universality. For the STS-3D-Tooth dataset, we initially collected a total of 31,380 CBCT scans in our previous work . In this study, we significantly expanded the dataset to 168,800 CBCT cases. These scans were obtained from 422 patients, aged 10 to 82 years, at Hangzhou Dental Hospital, Hangzhou Qiantang Dental Hospital, and Sichuan Provincial People's Hospital. The collection spanned from January 2020 to December 2022, with each scan having an axial resolution of 640 × 640 pixels and a slice thickness ranging from 0.25 mm to 0.3 mm. The PXI and CBCT were obtained using a HiRes3D-Plus device produced by Changzhou Boneng Zhongding Medical Technology, and the data were collected in DICOM format. All PXI and CBCT scans were performed prior to dental surgeries. The data collected from the hospitals were acquired after each participant consented and signed an informed consent form for non-commercial academic communication usage.
Participants were informed prior to their treatment and data collection that their de-identified medical data might be used for non-commercial academic communication, which includes the potential open sharing of anonymized data during such academic exchanges. The ethics committee has thoroughly reviewed the dataset, and approval for its publication under the CC-BY license has been acquired. The study adhered to the principles of the Declaration of Helsinki and the data were carefully reviewed and approved by the Medical Ethics Committees of Sichuan Provincial People's Hospital (Approval ID: 20220484) and Lishui College School of Medicine (Approval ID: 2022YR014). Furthermore, all procedures were conducted strictly in accordance with ethical guidelines to ensure the protection of participants' privacy and confidentiality throughout the research process. Data preprocessing Converting DICOM to PNG format can enhance the compatibility and accessibility of medical images, facilitating viewing and sharing in non-specialized environments while protecting patient privacy. Therefore, we converted the collected PXI data from DICOM to PNG format to streamline their utilization and broaden their applicability across different domains, such as presentations, educational materials, and collaborative research projects. To preserve the associated three-dimensional information and metadata, CBCT data in DICOM format are converted into .nii.gz files. Specifically, a DICOM sequence is first read, with each patient corresponding to one sequence containing 300 to 400 .dcm files. Prior to processing, the .dcm files in the folder are carefully checked to ensure they are correctly ordered, as this directly impacts the accuracy of the reconstruction. Once verified, the 2D DICOM slices are combined to create a 3D NIfTI image, which is then compressed and saved as a .nii.gz file for efficient storage and analysis. It is noteworthy that a DICOM file generally consists of a DICOM file header and a DICOM dataset. The DICOM file header contains various items of private patient information recorded at the time of imaging, such as the patient's name, date of birth, phone number, gender, and age. Therefore, we removed the file header, retaining only the image data, to ensure that the patient's privacy information is not disclosed. Additionally, for CBCT, we changed the original axial resolution of 400 × 640 × 640 pixels to 400 × 512 × 512 pixels, and for PXI, we changed the original resolution of 2800 × 1536 pixels to 640 × 320 pixels. For 2D PXI, the key dental structures remain visible even with a lower resolution, while for 3D CBCT, reducing the resolution simplifies processing without losing essential diagnostic information. By modifying the resolution, the efficiency of expert data annotation work has been greatly improved. Data filtering For the collected data, we conducted a filtering process to ensure the quality and applicability of the data. The filtering mainly considered the following situations: (1) invalid or incomplete data; (2) duplicate data, which might be due to system errors or entry mistakes causing the same information to be recorded multiple times; (3) outliers, potentially caused by equipment failure, entry errors, or unusual patient conditions; (4) incomplete elimination of privacy information; and (5) poor quality data. Data in categories (1) to (4) were directly excluded from our use.
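Returning to the Data preprocessing step above: a minimal sketch of the DICOM-series-to-NIfTI conversion is shown below. The paper does not name its conversion tooling, so the use of SimpleITK, the function name, and the example paths are assumptions; the additional resampling of CBCT volumes to 512 × 512 in-plane is omitted here.

```python
import SimpleITK as sitk

def dicom_series_to_nifti(dicom_dir, out_path):
    """Read one patient's DICOM series in correct slice order and save a compressed NIfTI volume."""
    reader = sitk.ImageSeriesReader()
    # GetGDCMSeriesFileNames returns the .dcm files sorted into anatomical slice order,
    # which is what the manual ordering check described in the text is guarding.
    file_names = reader.GetGDCMSeriesFileNames(dicom_dir)
    reader.SetFileNames(file_names)
    volume = reader.Execute()
    # A .nii.gz file keeps voxel spacing and orientation but has no fields for the
    # DICOM patient tags (name, birth date, phone number), so identifying header
    # information is not carried over into the shared files.
    sitk.WriteImage(volume, out_path)

# Hypothetical usage:
# dicom_series_to_nifti("raw_dicom/patient_001/", "nifti/patient_001.nii.gz")
```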
Meanwhile, to minimize the bias caused by subjective interpretation in filtering the fifth type of data, we introduced the image quality scoring criteria (IQSC) . A total of 8 experts used IQSC to subjectively assess data quality. Each image was given a score within the range of 0–4. Here, 0 indicates the lack of required dental structures or unobservable features, 3 indicates that the image quality is acceptable, and 4 indicates that the image quality is above the required level. After screening, 8.53% of the images scored below 3, and this portion of the data was also excluded. Ultimately, we discarded 131 PXI and 11,600 scans for reasons (1) to (4), and 184 PXIs and 8,400 scans for reason (5). The specific changes in the number of data are shown in Table . Data annotation The dataset labeling work was carried out by 20 dental experts. To improve the efficiency of annotation work, we adopted a mixed annotation method of manual annotation followed by semi-automatic annotation. STS-2D-Tooth Annotation: We carried out the annotation in two stages, the first stage being completely manual annotation, and the second stage being iterative semi-automatic annotation. When annotating PXI, we first manually annotated 300 images using two professional image annotation software, EISeg ( https://github.com/PaddleCV-SIG/EISeg ) and LabelMe ( https://github.com/wkentaro/labelme ). Then, we used this data to train a network suitable for dental segmentation tasks on R2 U-Net ( https://github.com/LeeJunHyun/Image_Segmentation ). Afterward, we input the obtained parameters into the network and made predictions on the data collected this time. The prediction results were then manually modified by experts. Specifically, we first limited the maximum pixel value of the original image to 250, then converted the predicted mask into an RGB format, and superimposed it on the original image with RGB [255, 0, 0]. Next, we used CLIP STUDIO PAINT ( https://www.clipstudio.net ) to modify the images with the overlaid mask. For areas belonging to the dental region but not predicted, we used tools like brushes and paint buckets to assign the area with [255, 0, 0]; for areas not belonging to the dental region, we assigned [0, 0, 0]. After manual modification, we used code to extract the areas where the R channel of RGB is 255 to obtain the corresponding mask and then normalized it. STS-3D-Tooth Annotation: We also divided the annotation into two stages. In the first stage, CTs were manually annotated in the professional 3D annotation software: ITK-SNAP ( https://github.com/pyushkevich/itksnap ). We first used the software to delineate the dental area layer by layer in the axial view, then manually fine-tuned the annotations in the coronal and sagittal views . These data were then used as input for the second stage. In the second stage, the 3D data from the previous annotations were split axially, creating two-dimensional axial slices, which were then input into R2 U-Net for semi-automatic operations similar to STS-2D-Tooth annotation. The difference is that we did not complete training and annotation at once but iteratively. After each round of training, the results were manually supplemented and corrected, and the revised data were continuously added as training material for further training. In the first stage, it took about 60 hours to complete the annotation and review of the dental area of one CT. In the second stage, after the first round of training, it took about 40 hours to complete the annotation of one CT. 
Thus, this iterative annotation method not only ensures that each CT is manually modified and annotated but also greatly improves the efficiency of the annotation work (the red-overlay and mask-extraction step of this workflow is sketched below). Dataset organization The quantity and detailed feature information of the dataset are summarized in Table . Following the completion of data annotation, accurate and comprehensive masks were obtained for each image. For two-dimensional data, the masks were directly converted into corresponding binary images. For three-dimensional data, multiple two-dimensional axial slices were combined to create three-dimensional data, which were saved in the .nii.gz format. Furthermore, all datasets were organized to ensure they are readily usable by readers without requiring additional conversions or modifications, as illustrated in Fig. . The specific actions were as follows: For the STS-2D-Tooth dataset, all images and masks were categorized into adult and child groups, resulting in two sections: A-PXI and C-PXI. Each section included both annotated and unannotated data. Renaming, resolution unification, and channel count standardization were performed for each section to maintain consistency. For the STS-3D-Tooth dataset, Regions of Interest (ROIs) with annotated data were extracted, prioritizing the tooth area while retaining the original complete images for the unannotated parts. These processes are detailed in Figs. , .
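The expert-correction step of the semi-automatic STS-2D-Tooth annotation workflow, capping grayscale values at 250, painting the predicted mask in pure red, and later recovering the mask from the R channel, might look like the following sketch. It is an illustrative reconstruction, not the authors' code; the function names and the use of NumPy and Pillow are assumptions.

```python
import numpy as np
from PIL import Image

def make_red_overlay(image_path, pred_mask_path, out_path):
    """Overlay a predicted tooth mask in pure red [255, 0, 0] on a grayscale PXI for expert editing."""
    img = np.asarray(Image.open(image_path).convert("L"), dtype=np.uint8)
    img = np.minimum(img, 250)                 # cap pixel values at 250 so pure red stays unambiguous
    mask = np.asarray(Image.open(pred_mask_path).convert("L")) > 0
    rgb = np.stack([img, img, img], axis=-1)
    rgb[mask] = (255, 0, 0)                    # paint predicted tooth pixels red
    Image.fromarray(rgb).save(out_path)

def extract_mask(edited_overlay_path, out_path):
    """Recover the binary mask from an expert-edited overlay: tooth pixels are those with R == 255."""
    rgb = np.asarray(Image.open(edited_overlay_path).convert("RGB"))
    mask = (rgb[..., 0] == 255).astype(np.uint8) * 255   # 255 = tooth, 0 = background
    Image.fromarray(mask).save(out_path)
```

Checking R == 255 is unambiguous only because the grayscale values were first capped at 250, so no unpainted pixel can reach pure red.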
The STS-Tooth dataset has been uploaded to Zenodo in a compressed file format . Once decompressed, the folder reveals dental images along with their corresponding masks. The organization of the folder is depicted in Fig. . The decompressed files are organized into two main folders, named "STS-2D-Tooth" and "STS-3D-Tooth." The "STS-2D-Tooth" folder includes two subfolders named "A-PXI" and "C-PXI," while the "STS-3D-Tooth" folder contains two subfolders named "ROI" and "Integrity," as shown in Fig. . These four subfolders adhere to the same naming and arrangement scheme, with the detailed organization viewable in Fig. . For instance, within the "A-PXI" folder, there are sections titled "Labeled" and "Unlabeled." The "Labeled" section includes 850 images and 850 corresponding masks, and the "Unlabeled" section comprises 2650 images. The number of images contained within each subfolder can be found in Table . Validation criteria At present, there is no universally accepted standard for judging the quality of PXI and CBCT images. However, quality is often assessed based on several aspects, including clarity, contrast, exposure level, noise level, artifacts, and dynamic range . Therefore, considering the characteristics of PXI and CBCT, we measure image quality from the following three aspects: Contrast: In PXI and CBCT, the grayscale values of the dental area and the background typically differ significantly. Therefore, we believe that images with higher contrast can better differentiate teeth, bone, and soft tissue, which is more conducive to analysis and application. Sharpness: Clarity is crucial for diagnosis. The sharper the image, the easier it is to identify details, especially the subtle differences in tooth structure and bone quality. Artifacts: In PXI and CBCT, the presence of other skeletal parts with grayscale values similar to the teeth can create artifacts, potentially interfering with judgment and affecting diagnostic accuracy. Therefore, assessing the presence and severity of artifacts is important. In addition to the images, we have also provided annotations for 900 PXIs and 8800 scans.
We have evaluated the quality of these annotations, primarily using Pixel Accuracy and Boundary Accuracy to measure the accuracy of the annotated results . Pixel Accuracy broadly examines the accuracy of annotations, focusing on whether the dental areas in the images are completely segmented and whether there are segmentation errors. Boundary Accuracy looks at the accuracy of annotations from the perspective of edge details, checking whether the edges of the teeth are segmented with pixel-level precision. We refer to the above standards for assessing image and annotation quality as the STS evaluation. This comprehensive evaluation system for the dataset's overall quality comprises five categories , each worth 20 points, totaling a score out of 100, calculated on a percentage basis, as shown in Table . Validation results We conducted evaluations on both the STS-2D-Tooth and STS-3D-Tooth datasets using the STS-Evaluation system. We categorized all the datasets into three groups based on type and age: A-PXI and C-PXI under STS-2D-Tooth, and STS-3D-Tooth. To comprehensively assess the quality of the datasets, we randomly sampled data from each category for scoring; specifically, 30% of the data was randomly selected from each category. These data were then scored by five dental experts, knowledgeable in image processing and specially trained for this task. The average of the total scores from all five experts constituted the score for that dataset category, as shown in Table . These scores demonstrate that the quality of the images we collected has been recognized by experts. Furthermore, they validate that our dataset is of high quality and suitable for further research applications.
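For concreteness, the two quantities above could be computed as in the following sketch. Pixel accuracy has a standard definition; the paper does not give an explicit formula for Boundary Accuracy, so it is omitted here, and the STS-Evaluation aggregation helper (five category scores of 0–20 per expert, averaged over the five experts) is an illustrative reading of the description above, not the authors' scoring code.

```python
import numpy as np

def pixel_accuracy(pred_mask, ref_mask):
    """Fraction of pixels on which a candidate mask agrees with the reference annotation."""
    pred = np.asarray(pred_mask, dtype=bool)
    ref = np.asarray(ref_mask, dtype=bool)
    return float(np.mean(pred == ref))

def sts_evaluation_score(expert_scores):
    """STS-Evaluation: each expert gives five category scores of 0-20 (total out of 100);
    the score for a dataset category is the mean of the experts' totals."""
    totals = [sum(scores) for scores in expert_scores]
    return float(np.mean(totals))

# Hypothetical example with five experts:
# sts_evaluation_score([(18, 17, 19, 16, 18), (19, 18, 18, 17, 19),
#                       (17, 18, 18, 18, 17), (18, 19, 17, 18, 18), (18, 18, 18, 17, 19)])
```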
Maximizing Access to Cell Biology for PEERS: Retracting the term minority in favor of a more inclusive lexicon
137ebc39-67a2-4af8-afbb-8eb51fd4ead7
11321047
Anatomy[mh]
Recently, numerous groups in diverse scientific fields have intensified efforts to unpack the problematic connotation of using the word "minority" to describe groups that have been excluded from STEM. The Merriam-Webster definition of minority is "the smaller in number of two groups constituting a whole; or the smaller quantity or share" . The minority label, or the "M word," carries political implications; it negates ethnicity and cultural distinctiveness and does not denote the contrasts that are characteristic of ethnic and racial groups . Race and ethnicity are core components of our identity and often provide a label for others to decide who belongs or not . Whether the "M word" is used in general terms to denote groups who are smaller in number than a theoretically larger group or to describe a lack of diversity, strong consensus exists that its usage in these contexts causes more harm than good. Consciously or not, the word minority suggests the existence of a putative "power elite" that contrasts with a marginalized group of diminished status, involving intrinsic social complexities that extend beyond pure linguistic associations. For example, early-career scientists who are beginning to establish their scientific identities may internalize the label of "minority" as they move ahead in their careers. Using different language that identifies exclusion as a key component of disproportionate representation within the scientific landscape can help shift the mindset of rising scientists to recognize that their potential is not limited by their identity. Therefore, a top priority of the Diversity, Equity, and Inclusion (DEI) Strategic Plan outlined in 2022 by the American Society for Cell Biology (ASCB) was to change the name of the "Minorities Affairs Committee (MAC)." The MAC was founded in the mid-1980s with the long-term goal to create a more inclusive culture within the cell biology community . The MAC ensured the integration of diversity and inclusion goals into the ASCB mission, established working relationships with underrepresented group committees at other societies, and secured funding from different agencies to provide educational and career development opportunities for historically excluded trainees (Segarra et al., 2020). Some of the work and opportunities that the MAC has provided include the MAC travel awards to the ASCB annual meeting, the faculty research and education development program (FRED) grant writing workshop, the judged poster competition for undergraduate and graduate students at the ASCB annual meeting, and the Mentoring and the Ernest Everett Just Lectureship Awards to recognize the outstanding mentors and scientific achievements of U.S. researchers belonging to a historically excluded racial or ethnic group. For several months, a subcommittee collected renaming suggestions from ASCB members. An initial column was posted on the ASCB website to emphasize the urgency of changing the MAC name and request suggestions . This group led constructive discussions about the name change, which resulted in introducing a new name: Maximizing Access in Cell Biology for PEERS (Persons Excluded because of their Ethnicity or Race) . The acronym MAC was maintained to honor its long history and impact within the scientific community and emphasize that the committee's core values and DEI commitment remain unchanged. The intrinsic nature of the word "minority" is contradictory when describing all the groups in the United States (U.S.)
who are not white, straight, and without disabilities. There are numerous cities and states where white, straight, and able-bodied people do not make up the majority of the population . In fact, the U.S. Census Bureau reported that by 2060, the non-Hispanic white population will make up less than 45% of the U.S. population , not taking into account those who identify as LGBTQIA+ and with disabilities. In addition to the changing demographics in the U.S., the world has changed, and our social consciousness has evolved since the origin of the term “minority” to describe PEERs. When used improperly, the term minority itself is condescending. It implies that people designated as part of the “minority” are lesser, smaller, or minor compared with those considered the majority, thereby discounting their contributions to science and society. Marshall Shepherd pointed out that “… the use of the word minority is, in fact, a microaggression” . To continue to use this term to describe PEER scientists is socially irresponsible. We also support a shift to using the term PEER instead of minority to remove attention from the individual and instead emphasize the impact of institutionalized exclusion, as exclusive practices have maintained the homogeneity of the dominant culture. Furthermore, PEER researchers in STEM make major contributions to scientific progress, and referring to them as minorities diminishes the value of their work. Research has shown that “teams composed of people from a variety of backgrounds and experiences produce better and more innovative products and ideas than a homogeneous team” (National Institutes of Health). Therefore, the word “minority” is no longer accurate, is toxic, and constitutes another microaggression that can be used to try to diminish those who have been historically excluded from the STEM enterprise. We propose the following recommendations regarding the use of the word minority moving forward:
Language shift: While minority is not a bad word, a significant portion of the population associates negativity and inferiority with the word.
Dialogue: Engage with communities affected by the terminology to understand perspectives and experiences.
Support: Create and maintain avenues of support for PEER networks and highlight their contributions to science and society.
By implementing these recommendations, we can work towards a more inclusive, respectful, and empowered society that is ready to tackle the pressing big questions in biology. In conclusion, we need to extinguish the use of the term minority from STEM when referring to individuals who have been excluded due to their background or circumstances. The improper use of the term minority can have lasting and compounding effects on trainees who navigate through systems fraught with systemic and systematic oppression. The overuse of this pervasive term has minimized the struggles of PEERs and inaccurately represented the active role that exclusion has played in creating disparities in STEM. We expect that eliminating the use of the word minority will reduce the stigmatization of scientists who may already feel like they are perceived as “other.” The current MAC name uses language that better describes the committee's purpose and inclusive language for all scientists and their circumstances, such as the term PEER. We also seek to ascribe tangible meaning to the underlying missions of societies, initiatives, and opportunities.
We hope you, too, will recognize the importance of replacing the term “minority” from our vocabulary in favor of more inclusive language. Please consider sharing this article and the cited literature with individuals who require additional education about why we seek to make this standard practice in the field. Fernando Vonhoff: I was born in Mexico in a time when no research laboratories existed near me, and grew up in a culture that excluded scientists as career options. I completed my undergraduate studies at the Free University of Berlin, my PhD in Neuroscience at Arizona State University, and my postdoctoral training at Yale University. I study molecular mechanisms underlying neuronal connectivity during development and aging-dependent degeneration using Drosophila melanogaster. Currently I serve as a member of the MAC. Dana-Lynn Ko’omoa-Lange: I am a Native Hawaiian biomedical researcher. My research focuses on elucidating novel ion channel and calcium signaling pathways that may be manipulated in high-risk neuroblastoma to induce cell death and inhibit metastasis. I have mentored over 100 students in my biomedical research laboratory. Over 95% of my students are from groups that are underrepresented in biomedical research and/or are from disadvantaged backgrounds, including Native Hawaiians and Pacific Islanders. Jamaine Davis: I completed my undergraduate studies at Drexel University, my PhD in Biochemistry and Molecular Biophysics from the University of Pennsylvania School of Medicine, and my postdoctoral training at the National Cancer Institute. I stand as a trailblazing scientist, navigating the intricate realms of biophysics, health disparities, and community engagement. My pioneering research delves into the fundamental mechanisms of aging and age-related ailments across various levels, effectively bridging critical gaps in medical knowledge. Christina Termini: I am an Assistant Professor at the Fred Hutchinson Cancer Center. My laboratory studies the fundamental mechanisms that control stem cell self-renewal and how these pathways are hijacked during stress and disease. I am a current member of the ASCB Women in Cell Biology committee and an incoming member of the ASCB Council. I am dedicated to building inclusive and supportive communities that foster the scientific and career development of researchers from all walks of life. Michelle M. Martínez Montemayor: I am a Native Puerto Rican biomedical researcher, and a first generation graduate student. I completed my undergraduate studies and master's degree at the University of Puerto Rico (UPR Bayamón, Cayey and Mayaguez Campus, respectively). My PhD in animal science was done at Michigan State University, and then I completed two postdoctoral experiences in molecular and cellular cognition, and cancer biology in Puerto Rico. I study and develop natural product derived therapies for aggressive breast cancers, and recently I cofounded Dynamiko Pharmaceutics, LLC in an effort to commercialize affordable and selective anticancer therapies. I have devoted my life to train scientists from underrepresented groups in STEM. I currently serve as a member of the MAC.
Comparative analysis of fixation techniques for signal detection in avian embryos
5d962de8-c71b-499a-9b0d-1e9f7c64f133
11631674
Anatomy[mh]
Introduction
In situ hybridization chain reaction (HCR) and immunohistochemistry (IHC) are cornerstone methods to visualize cell and tissue-level phenomena, revealing potential molecular interactions, gene expression, and protein localization within biological specimens. Central to these techniques is the process of fixation, which is crucial for preserving targets, tissue morphology, and antigenicity. Here, we compare two prevalent fixatives—paraformaldehyde (PFA) and trichloroacetic acid (TCA)—prior to HCR and IHC analyses. The study delves into the respective impacts of each fixative method and fixation time on tissue-specific signal detection, including cellular morphology and intensity. Our goal is to unravel the nuanced effects on the quality and reliability of HCR and IHC outcomes and potential differences between the two methods. While HCR detects specific expressed genes via probes complementary to the mRNA sequence, IHC results can be variable depending on the tissue sample used, antibody efficacy, and antigen type and localization . Prior studies identified that specific fixation methods are necessary to visualize proteins that are localized to different sub-cellular regions or cellular structures . Through a systematic investigation, we provide comprehensive insights into how the choice of fixative can alter results, which may empower researchers in optimizing signal detection protocols for enhanced accuracy and reproducibility in biological analyses. Specifically, here we analyze the outcomes of fixing wholemount Gallus gallus (chicken) embryos with PFA and TCA and use HCR and IHC without antigen retrieval to identify how those methods alter the signal visibility, tissue specificity, and fluorescence intensity of transcripts and proteins that are normally found in the nucleus, cytoplasm, and cell membrane. In developmental biology, investigating gene expression and protein localization changes in vertebrate embryos using HCR and IHC offers a profound understanding of the intricate molecular processes governing embryogenesis. At minimum, both techniques can provide basic details of the cell and tissue types in which a gene or protein is expressed, but IHC can also offer insight into dynamic cellular and subcellular localization changes of specific proteins across developmental stages. The selection, specificity, and efficacy of fixation methods and detection tools significantly influence the accuracy and fidelity of developmental studies. Given the delicate nature of embryonic tissues, a multitude of IHC studies of embryos use aldehyde fixation in the form of formaldehyde, formalin, or PFA . PFA is often favored for embryonic specimens due to its ability to cross-link proteins and amines in DNA and RNA, thus preserving tissue architecture and maintaining structural epitopes . Upon contact with tissue, PFA undergoes hydrolysis to form formaldehyde, its active component, and this reactive aldehyde efficiently crosslinks proteins via amino acid bridges . The ability of PFA to create stable crosslinks makes it the fixative agent of choice to preserve structural epitopes for subsequent microscopic analysis and downstream experimentation. Conversely, TCA fixation, known for its permeabilization and dehydration, presents an alternative with potential benefits for accessing hidden epitopes in embryos but is used less frequently in developmental studies .
Upon application, TCA penetrates tissues and promptly precipitates proteins by causing their denaturation and aggregation through acid-induced coagulation, which may enhance or deter the ability of antibodies to bind to specific antigens depending on their target . The acidic nature of TCA and high precipitation capacity result in rapid and robust fixation, preserving tissue architecture by solidifying cellular constituents and preventing enzymatic degradation . While TCA fixation may alter some protein structures due to its denaturing effects, this effect can be beneficial when used against bulky or hidden epitopes in subsequent histochemical and immunohistochemical analyses. While mRNA visualization methods have been honed over multiple decades in various species , visualizing various types of proteins within cells demands a careful approach, considering the diverse subcellular localizations, divergent amino acid sequences, and unique tertiary structures they may possess. For proteins localized to distinct subcellular regions such as the nucleus, cytoplasm, or plasma membrane, fixation methods must cater to the preservation of these specific environments. Fixatives like PFA are adept at maintaining the intricate membranous structures and spatial organization within the cytoplasm or plasma membrane. In addition, for proteins residing in the nucleus, fixation methods that effectively permeate nuclear membranes and preserve nuclear morphology become imperative. Moreover, proteins with intricate tertiary structures, such as those forming multimeric complexes or undergoing post-translational modifications, often necessitate fixation techniques that maintain these delicate interactions. Thus, tailoring fixation methods according to subcellular localization and protein tertiary structure becomes pivotal in accurately visualizing diverse protein populations within cells. The selection of fixation methods for IHC poses a delicate balance between tissue preservation and antibody penetration. While certain fixatives excel in preserving tissue architecture and antigenicity, their robustness might hinder the penetration of certain antibodies into the tissue, limiting the accessibility to targeted antigens. Conversely, fixation methods optimized for better antibody penetration might compromise tissue integrity and antigen preservation, which can alter the ability to use these tissues for downstream processing. Achieving an optimal equilibrium between these two facets is crucial to ensure comprehensive visualization of antigens within tissues, balancing the preservation of structural integrity with the facilitation of antibody access for accurate and reliable analyses. With this study, we show the outcomes of PFA versus TCA fixation methods specifically in the context of visualizing avian embryonic development using HCR and IHC, shedding light on their distinct impacts on tissue preservation and signal detection to aid researchers in selecting the most suitable approach for developmental investigations. Here, we identify that TCA fixation methods may be optimal to visualize the signal from cytosolic microtubule subunits and membrane-bound cadherin proteins after IHC, but that TCA is subpar to visualize mRNA signals with fluorescence microscopy after HCR or nuclear-localized transcription factors with IHC. 
In contrast, PFA fixation provides adequate signal strength for proteins localized to all three cellular regions but is optimal for maximal signal strength of nuclear-localized proteins and for visualization of mRNA signals after HCR.
Materials and methods
2.1. Collection and staging of chicken embryos
Fertilized chicken eggs were obtained from the UC Davis Hopkins Avian Facility and incubated at 37 °C to the desired stages according to the Hamburger and Hamilton (HH) staging guide. After incubation, embryos were dissected out of eggs onto Whatman filter paper and placed into room temperature Ringer’s Solution. Embryos were then fixed using one of the methods listed below prior to IHC.
2.2. Fixation methods
Tissue fixation is described below and the workflow that was used is detailed in .
2.2.1. Paraformaldehyde
Paraformaldehyde (PFA) was dissolved in 0.2M phosphate buffer to make a 4% weight per volume (w/v) stock solution, which was stored at −20 °C and thawed fresh before use. Embryos were fixed at room temperature with 4% PFA for 20 min (20m). After fixation, embryos were washed in 1X Tris-Buffered Saline (TBS; 1M Tris-HCl, pH 7.4, 5M NaCl, and CaCl2) containing 0.5% Triton X-100 (TBST + Ca2+) or 1X Phosphate Buffered Saline (PBS) containing 0.1–0.5% Triton X-100 (PBST). Following IHC, 20m PFA-fixed embryos were incubated with and without a 1h postfix in 4% PFA at room temperature to test for differences in tissue structure. Following HCR, all samples were post-fixed for 1h with 4% PFA at room temperature to maintain signal.
2.2.2. Trichloroacetic acid
Trichloroacetic acid (TCA) was dissolved in 1X PBS to make a 20% (w/v) stock solution and stored at −20 °C prior to use. It was then thawed and diluted to a 2% concentration with 1X PBS fresh before use. Embryos were fixed at room temperature with 2% TCA in 1X PBS for 1h or 3h. After fixation, embryos were washed in TBST + Ca2+ or PBST. Following IHC, 1h TCA and 3h TCA-fixed samples were not post-fixed. Following HCR, all samples were post-fixed for 1h with 4% PFA at room temperature to maintain signal.
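For readers who want to sanity-check the working solutions above, the snippet below is a minimal sketch (not taken from the published protocol) of the underlying arithmetic: grams of solid needed for a weight/volume stock and the C1V1 = C2V2 dilution used to bring the 20% TCA stock to 2%. The function names and batch volumes are illustrative assumptions.

```python
# Hypothetical helpers for the fixative working solutions described in 2.2.
# Names and example volumes are illustrative, not part of the published protocol.

def grams_for_wv_stock(percent_wv: float, final_volume_ml: float) -> float:
    """Grams of solid for a weight/volume stock, e.g. 4% (w/v) PFA = 4 g per 100 mL."""
    return percent_wv / 100.0 * final_volume_ml

def stock_volume_for_dilution(stock_pct: float, final_pct: float, final_volume_ml: float) -> float:
    """C1 * V1 = C2 * V2: volume of stock needed to reach the working concentration."""
    return final_pct * final_volume_ml / stock_pct

if __name__ == "__main__":
    # 4% (w/v) PFA stock in 0.2 M phosphate buffer, 50 mL batch
    print(f"PFA: dissolve {grams_for_wv_stock(4, 50):.1f} g in 50 mL buffer")
    # 2% TCA working solution made from a 20% (w/v) stock, 10 mL batch
    v = stock_volume_for_dilution(20, 2, 10)
    print(f"TCA: {v:.1f} mL of 20% stock + {10 - v:.1f} mL 1X PBS")
```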
2.3. Fluorescent in situ hybridization chain reaction (HCR)
Fluorescent in situ hybridization chain reaction (HCR) was performed using the protocol suggested by Molecular Technologies with minor modifications as described in . All probes and kits were acquired from Molecular Technologies. Described briefly, chicken embryos were fixed in 4% PFA for 1h at room temperature or 2% TCA for 1 or 3h at room temperature. Embryos were then washed in PBST and dehydrated in a series of 25%, 50%, 75%, and 100% methanol. Embryos were stored at −20 °C prior to beginning the HCR protocol. Embryos were rehydrated in a series of 25%, 50%, 75%, and 100% PBST but were not incubated with proteinase K as suggested by the protocol. Embryos were incubated with 2.5–10 μL of probes dissolved in hybridization buffer overnight (12–24h) at 37 °C. After washes on the second day, embryos were incubated with 10 μL each of hairpins diluted in amplification buffer at room temperature overnight (12–24h). Embryos were subsequently incubated with 1:500 DAPI in PBST for 1h at room temperature and washed with PBST. All embryos were post-fixed in 4% PFA for 1h at room temperature or at 4 °C overnight (12–24h) prior to cryosectioning. Following the postfix, embryos were washed in 1X PBS with 0.1% Tween-20 (P-Tween) and imaged in both whole mount and transverse section using a Zeiss Imager M2 with Apotome capability and Zen optical processing software.
2.4. Immunohistochemistry (IHC)
After fixation, embryos were washed with PBST or TBST + Ca2+ and wholemount IHC was performed. To block against non-specific antibody binding, embryos were incubated in PBST or TBST + Ca2+ containing 10% donkey serum (blocking solution) for 1h at room temperature or overnight (12–24h) at 4 °C. Primary antibodies were diluted in blocking solution at the indicated dilutions and embryos were incubated in primary antibodies for 72–96h at 4 °C. Multiple antibodies from the study have previously been validated in cell lines or chicken embryos. After incubation with primary antibodies, whole embryos were washed in PBST or TBST + Ca2+, then incubated with AlexaFluor secondary antibodies diluted in blocking solution (1:500) overnight (12–24h) at 4 °C. TCA-fixed embryos were then washed in PBST or TBST + Ca2+ as the final step before imaging. PFA-fixed embryos had the same final wash with PBST or TBST + Ca2+ after secondary incubation and were either immediately imaged or post-fixed with 4% PFA for 1h at room temperature and washed again with PBST or TBST + Ca2+ before imaging .
2.5. Cryosectioning
Following whole embryo imaging, embryos were prepared for cryosectioning by incubation with 5% sucrose in PBS (30m to 1h at room temperature or overnight at 4 °C), followed by 15% sucrose in PBS (3h at room temperature to overnight at 4 °C), and then 10% gelatin with sucrose in PBS for 3h to overnight at 38–42 °C. Embryos were then flash frozen in liquid nitrogen and sectioned at 16 μm on an HM 525 NX cryostat (Epredia, Richard-Allan Scientific).
2.6. Microscopy
Fluorescence images were taken using a Zeiss Imager M2 with Apotome.2 and Zen software (Carl Zeiss). Whole embryos were imaged at 10X (Plan-NEOFLUAR 10X/0,3 420340–9901) and transverse sections were imaged at 20X (Plan-APOCHROMAT 20X/0,8 420650–9901) with Apotome optical sectioning. Exposure times varied between samples. All images were captured at maximum light intensity and the exposure time was adjusted for the strength of each sample signal. For IHC samples, DAPI exposure times ranged from 80 ms (strongest signal) to 3.2 s (weakest); for HCR samples, which had significantly lower signal, exposure times ranged from 50 ms to 8.3 s. The DAPI signal was always the strongest signal and required the shortest exposure time. Images were adjusted for brightness and contrast uniformly across the entire image in Adobe Photoshop in accordance with journal standards.
2.7. Intranuclear fluorescence standard deviation
To find the pixel intensity from one side of the nuclear membrane to the other, 10 nuclei from 5 different embryos per marker (50 total nuclei per fixative for each marker) were analyzed using NIH ImageJ/Fiji with the Dynamic ROI Profiler plugin. These measurements were performed on images converted to grayscale with manual brightness and contrast adjustments through Photoshop. The standard deviation of each set of intranuclear fluorescence measurements was calculated and these data points were plotted as a violin plot with the color representing the chicken embryo the measurement came from . Mann-Whitney tests were performed to compare the standard deviation of intranuclear fluorescence across the treatments.
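As a rough illustration of this analysis, the sketch below computes the standard deviation of intensity along line profiles drawn across nuclei and compares two fixation groups with a Mann-Whitney test. It is an assumed workflow, not the published analysis script; the profile data are placeholders standing in for values exported from the Fiji Dynamic ROI Profiler.

```python
# Hypothetical sketch of the intranuclear-variability analysis in 2.7.
# Each "profile" is a list of pixel intensities sampled across one nucleus;
# the arrays below are random placeholders, not measured data.
import numpy as np
from scipy.stats import mannwhitneyu

def profile_sd(profiles):
    """Standard deviation of intensity within each nuclear line profile."""
    return np.array([np.std(p, ddof=1) for p in profiles])

rng = np.random.default_rng(0)
pfa_profiles = [rng.normal(120, 5, 40) for _ in range(50)]   # fairly uniform nuclei
tca_profiles = [rng.normal(120, 15, 40) for _ in range(50)]  # more variable (punctate) nuclei

pfa_sd = profile_sd(pfa_profiles)
tca_sd = profile_sd(tca_profiles)

# Non-parametric comparison of intranuclear variability between fixatives
stat, p = mannwhitneyu(pfa_sd, tca_sd, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p:.2e}")
```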
2.8. Nuclei and neural tube measurements
2.8.1. Nucleus area and circularity
To quantify the differences in cell area and circularity, nuclei from cells in the neural tube (NT), neural crest (NC), non-neural ectoderm (NNE), and cranial mesenchyme (CM) regions were outlined using Adobe Photoshop and assessed for both area and circularity. From each transverse section, four nuclei were outlined, two from the right side and two from the left side of the embryo. These measurements were done with 2–3 sections per individual, and at least 5 embryos per treatment were measured for each tissue type. The Mann-Whitney U test was used to compare the anatomical differences between the TCA- and PFA-fixed samples, which identified that the nuclei of all cell types analyzed had significantly larger areas and were more circular after TCA fixation. The formula for circularity is 4π × (area/perimeter²). A value of 1.0 indicates a perfect circle.
2.8.2. Neural tube height, width, and area
Transverse cryosections were imaged at 20X with the Zeiss Imager M2 with Apotome and the scale bar was added using the Zeiss Zen software. The neural tube size, height, and width were obtained using ImageJ/Fiji. Using the scale obtained from each sectioned image, a global scale was set to measure the height and width of the neural tube (220 pixels/50 μm) in individual sections from multiple embryos at the same midbrain axial level (n = 14, 17, and 15 for PFA, 1h TCA, and 3h TCA, respectively). The height was obtained using the ImageJ Straight tool by measuring the basal-to-basal distance from the dorsal region of the neural tube to the ventral side. The width was obtained by measuring the basal-to-basal distance of the left and right lateral sides of the neural tube. The overall area of the neural tube was calculated using the formula for the area of an oval, A = π × (height/2) × (width/2), and these measurements were used to compare the neural tube between the three fixative conditions.
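A minimal sketch of the two shape formulas above, assuming area and perimeter values exported from the outlined nuclei and height/width measurements taken in ImageJ; the example numbers are made up.

```python
# Illustrative helpers (not the authors' code) for the shape metrics in 2.8.
import math

def circularity(area: float, perimeter: float) -> float:
    """4*pi*(area/perimeter^2); 1.0 corresponds to a perfect circle."""
    return 4 * math.pi * area / perimeter ** 2

def oval_area(height: float, width: float) -> float:
    """Neural tube area approximated as an oval: pi*(height/2)*(width/2)."""
    return math.pi * (height / 2) * (width / 2)

# Examples with made-up numbers
print(f"circle of radius 10: circularity = {circularity(math.pi * 100, 2 * math.pi * 10):.2f}")  # ~1.00
print(f"50 x 30 um neural tube: area = {oval_area(50, 30):.1f} um^2")
```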
2.9. Fluorescence intensity analysis
Fluorescence intensity was quantified using NIH ImageJ/Fiji by averaging the relative intensity of tissue-specific regions in section images of chicken embryos. Sections were converted to grayscale, and contrast was adjusted uniformly for each section using Adobe Photoshop. The grayscale images were analyzed using the rectangle tool to quantify the differences in fluorescence between the most visually different tissues for a given marker. The cell type regions analyzed included neural crest (NC) cells, neural tube (NT) cells, non-neural ectoderm (NNE) cells, and cranial mesenchyme (CM) cells. For intensity values, at least 4 regions were sampled from 2 to 6 different cryosection images from each embryo and the average relative fluorescence intensity from each embryo is reported on the graphs as points. Each graph shows relative fluorescence measurements from 5 to 14 individual embryos. The rectangle tool was used to draw a box that was dragged within the image to measure the fluorescence of a tissue region of interest on the right side, then the left side of the image, resulting in two measurements for a given tissue type. This box was then used to measure the fluorescence of the compared tissue type. Between each of these four measurements, the background fluorescence was measured for normalization. The area of the box was between 0.133 and 1.0 pixels², but always the same size within the same image. The measurements obtained through ImageJ/Fiji included the “area,” “integrated density,” and “mean gray value.” The corrected total cell fluorescence (CTCF) was calculated by subtracting the product of the “area” of a selected region of interest and the “mean gray value” of the background from the integrated density of that region; the values obtained from each region were then averaged and graphed. Each dot on the graphs represents the average of measurements from 1 to 3 cryosection images from 5 to 10 embryos. The number of embryos is indicated in each figure legend.
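The CTCF calculation can be expressed as a one-liner; the sketch below uses the quantities ImageJ/Fiji reports for a rectangular ROI. Variable names and numbers are illustrative assumptions, not measured values.

```python
# Hedged sketch of the CTCF computation described in 2.9 (not the authors' script).

def ctcf(integrated_density: float, roi_area: float, mean_background: float) -> float:
    """Corrected total cell fluorescence = IntDen - (ROI area * mean background)."""
    return integrated_density - roi_area * mean_background

# e.g. one tissue ROI and its neighbouring background measurement (made-up values)
print(ctcf(integrated_density=185_000.0, roi_area=1_000.0, mean_background=35.0))  # 150000.0
```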
Results
3.1. TCA fixation alters tissue and nuclear morphology compared to PFA
To identify if different fixation methods affected the general tissue structure, we tested the various methods (4% PFA for 20m with and without a 1h post-fixation after IHC, 2% TCA for 1h and 3h) in Hamburger Hamilton stage 8–10 (HH8–10) chicken embryos. Embryos were collected as described in the Materials and methods and fixed in their respective fixatives for 20m, 1h, or 3h. After fixation, HCR or IHC was performed, and embryos were stained using the nuclear DNA stain 4′,6-diamidino-2-phenylindole (DAPI). To quantify the differences in nuclear area and circularity, nuclei from cells in the NT, NC, NNE, and CM regions were assessed. The Mann-Whitney U test comparing the anatomical differences between the TCA- and PFA-fixed samples identified that the nuclei of all cell types analyzed had significantly larger areas and were more circular after TCA fixation compared to PFA with or without post-IHC fixation . Compared to 4% PFA fixation without post-fix, 1h 2% TCA fixation in HH8-HH9 embryos resulted in nuclei with a larger average area in the NT (197% larger, p ≤ 0.01), NC (201% larger, p ≤ 0.01), CM (284% larger, p ≤ 0.01), and NNE (243% larger, p ≤ 0.01) . Nuclei circularity was measured and nuclei from embryos fixed in 2% TCA were significantly rounder than those fixed with 4% PFA. Compared to 4% PFA fixation without post-fix, 1h 2% TCA fixation in HH8-HH9 embryos resulted, on average, in more circular nuclei in the NT (125%, p ≤ 0.01), NC (104%, p ≤ 0.01), CM (115%, p ≤ 0.01), and NNE (108%, p ≤ 0.01) . PFA with post-fix averages were more like TCA-fixed nuclei than PFA without post-fix for some tissues, but the post-fixation did not fully rescue the significant differences in nuclei area or circularity . In NT cells, the PFA-fixed nuclei with and without post-fixation had an average circularity score of 0.68, while the TCA-fixed cells had scores of 0.83, with 1.0 indicating a perfect circle . In NC cells, the PFA-fixed nuclei had an average circularity score of 0.80, while the TCA-fixed cells had an average score of 0.855 . These morphological changes supported our observation that nuclear staining in the NT and NC regions appeared more diffuse in 2% TCA-fixed samples compared to 4% PFA fixation ( and ). To determine if the expanded cell nuclei were indicative of generalized changes in tissue structure or morphology, we measured the height, width, and total area of the NT from dorsal to ventral and basolateral to basolateral . We identified that indeed, on average, fixation with 2% TCA expanded the height, width, and total area of the NTs. Specifically, 1h 2% TCA fixed NTs were 176% taller (p ≤ 0.001) and had a 210% larger area (p ≤ 0.0001) while 3h 2% TCA fixed NTs were 189% taller (p ≤ 0.001), 117% wider (p ≤ 0.05), and had a 291% larger area (p ≤ 0.0001) than PFA fixed NTs.
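A sketch of the kind of group comparison behind statements like “197% larger, p ≤ 0.01”: per-nucleus areas from PFA- and TCA-fixed embryos are compared with a Mann-Whitney U test and summarized relative to the PFA mean. This is an assumed workflow with placeholder numbers, not the authors' analysis script.

```python
# Assumed per-nucleus area comparison between fixation groups (placeholder values).
import numpy as np
from scipy.stats import mannwhitneyu

pfa_areas = np.array([22.1, 25.4, 19.8, 23.0, 21.5, 24.2])  # um^2, made up
tca_areas = np.array([44.0, 47.9, 41.2, 50.3, 43.8, 46.1])  # um^2, made up

stat, p = mannwhitneyu(tca_areas, pfa_areas, alternative="two-sided")
relative_to_pfa = 100 * tca_areas.mean() / pfa_areas.mean()

print(f"TCA mean area is {relative_to_pfa:.0f}% of the PFA mean (U = {stat:.1f}, p = {p:.4f})")
```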
3.2. PFA fixation alters nuclear protein signal detection
PFA is the primary mode of fixation in avian embryos prior to performing HCR and IHC and it works effectively with short fixation times . To determine the effectiveness of TCA fixation for mRNA detection using HCR or for use of antibodies targeted to antigens in the nucleus, we used previously characterized antibodies against the transcription factors paired box protein 7 (PAX7), SRY-Box 9 (SOX9), and Snail Family Transcriptional Repressor 2 (SNAI2) . At HH10, the TCA-fixed wholemount embryos appeared larger than those fixed in PFA , which is supported by our analyses of NT area . TCA fixation prior to HCR to visualize gene expression did not work effectively to preserve the mRNA. Compared to the robust and specific expression of SOX9 and PAX7 that is visible after PFA fixation, the signals were virtually undetectable using our imaging methods after TCA fixation despite post-fixation after probe amplification . At the protein level, SOX9, PAX7, and SNAI2 fluorescence was robust and appeared pan-nuclear after PFA fixation ( , , and ). However, although the appearance of these markers in wholemount did not appear markedly different in embryos that were PFA or TCA-fixed, in section, SOX9 and PAX7 expression appeared diffuse, the signal was weaker, and exposure times were longer to capture the signal after TCA fixation . In contrast, the SNAI2 signal became more punctate and had variable intensity within each nucleus in TCA fixation compared to a more uniform fluorescence in PFA fixation ( , compare P to Q and R). In higher magnification images of sections from TCA-fixed embryos, the DAPI stain overlaps with diffuse PAX7 protein signal, but the SNAI2 protein signal appears limited within the nucleus in all NC cells in which it is expressed . We quantified the fluorescent signal across the nuclei and identified that, in fact, there are significant differences in the standard deviation of the intranuclear fluorescence in PFA versus TCA-fixed samples, indicating diffuse versus punctate signal fluorescence depending on the type of fixative and the time of fixation . In chicken embryos, PFA is our preferred fixation method prior to IHC for robust fluorescence using antibodies against the transcription factors that were evaluated.
3.3. Different fixation methods alter the signal intensity of microtubule subunit proteins
To determine how fixation methods affect cytoplasmic and cytoskeletal protein signal, we assessed the various fixation treatments in HH9 chicken embryos and performed HCR and IHC for tubulins . To identify the effectiveness of PFA and TCA fixation for signal detection of these factors, we used probes against Beta III Tubulin (TUBB3) and Tubulin Beta 2A (TUBB2A), and antibodies against TUBB3, TUBB2A, and Tubulin Alpha 4a (TUBA4A). Similar to our assessment of mRNAs encoding nuclear proteins, we were unable to detect robust gene expression for either TUBB3 or TUBB2A after TCA fixation although the signal was detectable after PFA fixation . We concluded similarly that TCA fixation is not effective prior to HCR. In contrast, signal for all three proteins was visible in all three fixative treatments. TCA fixation appeared to alter the tissue-specific proportional signal brightness compared to PFA fixation for tubulin proteins with IHC. We identified that TUBB3 protein showed stronger fluorescence intensity in NC cells compared to the NT signal after 2% TCA (1h or 3h) versus 4% PFA fixation .
The strongest NC-specific TUBB3 signal appeared at 1h 2% TCA fixation ( , , p ≤ 0.0001). For TUBB2A, with PFA fixation, the protein signal was strongest in the NNE and CM with weaker expression in the NC, and NT . With TCA fixation, the TUBB2A fluorescence in the NNE and CM increased compared to the signal in the NC and NT to the point that signal is almost imperceptible in the NT ( and ). However, the NNE signal was significantly stronger than the CM signal after TCA fixation at 1h and 3h (p ≤ 0.0001 and 0.05, respectively). After PFA fixation, TUBA4A signal appears to be solely in the NNE, but after TCA fixation, the protein is visible in the CM as indicated by the increased relative signal intensity, but the NNE signal remains significantly stronger in the NNE than the CM across all fixatives ( – , , p ≤ 0.001 for both). To quantify differences in tissue-specific fluorescence intensity after different fixations, we measured fluorescence intensity in specific tissues and fold changes from the “brightest” signal to the weaker signal. These analyses showed that TUBB3 fluorescence was significantly higher in NC cells compared to NT cells after 1hr and 3h TCA fixation than it was after PFA fixation ( , p ≤ 0.0001, n = 9, and p ≤ 0.05, n = 10). In addition, TCA fixation significantly increased the differences between TUBB2A intensity in the CM compared to the NNE in both 1h and 3h treatments compared to PFA fixation ( , p ≤ 0.0001, n = 7, and p ≤ 0.05, respectively, n = 5). The signal for TUBA4A appeared to be most visible in the NNE after PFA fixation (p ≤ 0.001, n = 7). In the 1h and 3h TCA fixation, the NNE and CM fluorescence signal intensities both increased, but the NNE signal was still significantly stronger than that of the CM ( , p ≤ 0.0001, n = 12, and p ≤ 0.0001, n = 14). These data show that fixation methods can alter the apparent signal intensities in specific tissues. 3.4. Different fixation methods affect cadherin protein tissue-specific signal intensity The localization of N-cadherin (NCAD) and E-cadherin (ECAD) have previously been characterized in chicken embryos across stages using PFA fixation . To determine if TCA fixation is also an efficient method to use prior to HCR or IHC to visualize these genes and proteins, we evaluated the various fixation treatments in HH9 chicken embryos prior to HCR or IHC with antibodies against the two type-I cadherins. Similar to the prior analyses, TCA fixation is not effective to visualize ECAD gene expression compared to PFA fixation . Both TCA fixations prevented the detection of any signal ( and ). NCAD is robustly expressed in the NT and can be visualized after both PFA and TCA fixations . In contrast to all other probes that we tested, we were still able to detect NCAD expression in the NT after TCA fixation although the signal was weaker ( and ). After PFA fixation, both ECAD and NCAD protein signals are visible in the NT at HH9, but while ECAD signal also appears in delaminating NC cells and NNE, the NCAD signal is not detectable in these tissues and instead is visible in the CM confirming previously published results . In 2% TCA at both 1h and 3h fixations, the ECAD signal remains in the same tissues ( and ). We measured the relative fluorescence intensity in the NNE compared to the NT to determine if TCA fixation alters tissue-specific signal intensity as it does in microtubule proteins, and we identified that in all fixations, ECAD signal intensity was higher in the NNE than the NT. 
However, the difference between the two was more apparent after PFA fixation (p ≤ 0.0001, n = 13) than after 1h or 3h TCA fixation (p ≤ 0.01, n = 13 and p ≤ 0.01, n = 12). In contrast to the subtle changes in tissue-specific ECAD signal intensity after PFA versus TCA fixation , the NCAD signal intensity appeared to increase in the CM after TCA fixation . Specifically, after PFA fixation, the NCAD NT signal was significantly higher than the CM (p ≤ 0.05, n = 8), but in 1h and 3h 2% TCA fixation, the relative fluorescence intensity of NCAD increased in the CM compared to the NT, thereby reducing the difference in fluorescence intensity after 1h (p = ns, n = 5) and 3h (p = ns, n = 5) of fixation.
We measured the relative fluorescence intensity in the NNE compared to the NT to determine if TCA fixation alters tissue-specific signal intensity as it does for microtubule proteins, and we identified that in all fixations, ECAD signal intensity was higher in the NNE than the NT. However, the difference between the two was more apparent after PFA fixation (p ≤ 0.0001, n = 13) than in 1h or 3h TCA fixation (p ≤ 0.01, n = 13 and p ≤ 0.01, n = 12). In contrast to the subtle changes in tissue-specific ECAD signal intensity after PFA versus TCA fixation , the NCAD signal intensity appeared to increase in the CM after TCA fixation . Specifically, after PFA fixation, the NCAD NT signal was significantly higher than the CM signal (p ≤ 0.05, n = 8), but in 1h and 3h 2% TCA fixation, the relative fluorescence intensity of NCAD increased in the CM compared to the NT, thereby reducing the difference in fluorescence intensity after 1h (p = ns, n = 5) and 3h (p = ns, n = 5). Discussion Despite their widespread use, studies have shown that over 50% of antibodies fail in one or more applications . Thus, it is vital to validate that antibodies work properly before trusting them for characterization studies or functional applications. When using a new commercial antibody, researchers will often experiment with various concentrations of the antibody but may not alter the fixation method used to process the tissue beforehand. Here, we compared the effectiveness of PFA fixation to that of TCA fixation prior to HCR and IHC in chicken embryos using multiple previously validated antibodies. We identified that the type of fixation applied affected cellular and tissue morphology, with TCA fixation resulting in larger, more circular nuclei. We also found that differences in the type and length of fixation had effects on the visualization of protein signal at the tissue-specific and sometimes subcellular level. The morphological changes that we identified are likely due to the different mechanisms by which PFA and TCA fix tissues rather than artifacts from cryosectioning. Since all samples are fixed, imaged in wholemount, and then cryosectioned using the same methods , we expect that the morphological differences are due to fixation techniques. PFA covalently cross-links molecules, stabilizing tertiary and quaternary structures of proteins and hardening the cell surface . We observed that in PFA-fixed chicken embryos, tissue appeared more tightly packed with denser and less circular nuclei and smaller NTs . In contrast, we found that TCA fixation resulted in larger and more circular nuclei and larger NTs . Rather than cross-linking proteins, TCA precipitates proteins by disrupting their encircling hydration sphere . Unlike PFA, which maintains tertiary and quaternary structure, TCA denatures proteins to the point where their secondary and tertiary structures are lost . The nuclei and tissue shape changes we observed may be due to this precipitation of proteins within a cell, filling up space and rounding out the nuclear and cellular membranes. However, it would be helpful to perform a similar analysis using high-resolution 3D imaging with light sheet fluorescence microscopy or other methods in intact embryos to determine if the tissue-specific intensities change with different fixative methods. These differences make each fixative type more or less suitable depending on the target epitope. Since TCA precipitates and denatures proteins, it makes hidden epitopes more accessible.
In contrast, PFA is ideal for targeting structural epitopes as it maintains tertiary and quaternary structures. Here, we sought to understand how these various fixative methods affect immunohistochemical staining using antibodies for markers in multiple tissue types in chicken embryos. In contrast to PFA, which works well with short fixation times preceding antibody use, such as the 20 min used for this study, TCA results in low signal at an equivalent fixation duration (data not shown). Thus, we employed 1 h and 3 h of TCA fixation, which led to similar outcomes for cellular morphology and signal intensity when compared to each other. By using chicken embryos as our model, we were able to use commercially available antibodies we and others have previously validated . We identified that both PFA and TCA fixation allowed us to visualize proteins in their expected locations, but we saw that some treatments altered signal intensities across tissues. We identified a marked difference in how TCA and PFA affected the visualization of nuclear markers. Nuclear markers had weaker fluorescence signal and appeared more compartmentalized within the nucleus in TCA-fixed embryos compared to PFA-fixed embryos . This result may be caused by actual subnuclear protein localization, or it may be due to the TCA precipitation of the target proteins within the nuclear compartment . In measuring the fluorescence intensity of nuclear markers across nuclei, we saw that for some markers (DAPI, PAX7, SOX9) there appeared to be consistent variability of the signal across the nucleus, but that for others (SNAI2) there was increased variability in signal intensity across the nuclei after TCA fixation . If this compartmentalization of the signal is biologically accurate, it is a method that could be used to visualize condensates within nuclei. Additionally, TCA fixation may allow us to compare the localization of multiple nuclear markers at once to see if their subnuclear localization differs at different phases of the cell cycle, for example. However, nuclear markers used following TCA fixation also tended to have a weaker signal compared to background, possibly due to precipitation. PFA and TCA fixation also caused noticeable differences in the fluorescence intensity within specific tissues when used before IHC with microtubule subunits . Past work showed that TUBB3 is expressed in the NT at HH8 in chicken embryos, with a stronger signal intensity at the dorsal side where the NC cells are present at HH9 . Here, we see similar localization in HH9 chicken embryos, but 1h TCA fixation was optimal for showcasing the increase in the NC TUBB3 signal compared to the NT signal . Interestingly, TUBB2A and TUBA4A both displayed significant differences in the NNE and CM signals in the TCA fixation versus PFA fixation treatments, but in opposite directions. For TUBB2A, the NNE signal increased in relation to the CM in TCA-fixed embryos compared to PFA-fixed embryos, increasing this difference . Meanwhile, for TUBA4A, the CM signal increased in TCA-fixed embryos compared to PFA-fixed embryos, decreasing this difference . Thus, it is critical to test multiple fixatives for markers of interest even across similar protein types, as they may enhance signal in different tissue regions.
Microtubule proteins are well known for their post-translational modifications, which directly affect microtubule stability , and it is possible that these differently modified proteins are better targeted in one fixative versus the other, but this was not explicitly tested here. We saw similar differences in cadherin protein signals using IHC between TCA and PFA fixatives. In stage HH9 chickens, equivalent to our samples, ECAD localized to the NNE, NT, migratory NC cells, and developing gut . In our samples across various fixatives, we saw that the fluorescence intensity of the NNE was consistently higher than in the NT, although the NT intensity increased in TCA . Similarly, NCAD displayed the expected localization for HH9 chickens to the NT, CM, notochord, developing gut, and absence from the dorsal NT regardless of fixative type . However, the fluorescence intensity of NCAD in the NT was far higher than that in the CM for PFA-fixed embryos compared to TCA-fixed embryos . This result suggests that the type of fixative applied can affect the primary tissue in which a protein signal appears, and that issue may have far-reaching effects for individuals studying cell and developmental biology as those fields strongly rely on knowing spatiotemporal protein localization prior to studying protein function. Of note, although TCA fixation proved ineffective for visualizing most genes, the NCAD gene expression was maintained strongly in the NT and weaker in the CM, which suggests that the PFA-fixed NCAD protein localization is more representative of the gene expression (compare – ). However, cadherin proteins are post-translationally modified and trafficked , and therefore gene expression is not always consistent with protein expression and localization. Our results have implications for characterizing new antibodies that do not have published and validated expression models. To truly define where and when a protein is expressed and localized at subcellular- and tissue-level resolution, use of live imaging methods would be ideal. However, there are limitations to these types of analyses currently due to potential issues with protein tertiary structure changes after fused tagging with large fluorescent proteins or overexpression artifacts that can occur if proteins are introduced into an organism. Although our study provides a starting place for analyses of protein signal detection and studies of different fixation methods, we need additional technological advances like those that have been created to visualize mRNA in vivo . Future work may consider using methods like protein tagging paired with the electroporation of nuclear reporters or live imaging dyes to further resolve the question of protein localization in tissues and within cells. Future studies using IHC with commercial antibodies would benefit from fixation validation in addition to traditional antibody specificity validations (e.g., knockdown, overexpression, western blot), as some fixatives may improve visualization of proteins of interest. Comparing the 3h and 1h TCA fixations revealed that the 1h TCA fixation is sufficient to alter tissue morphology and to reveal additional protein signal in the tissue samples. However, fixed tissues are not living tissues and, as technologies become available, it would be important to visualize these cellular events in vivo .
It may also be beneficial to compare additional fixation techniques, such as alcohol-based fixation or antigen retrieval, to see if these methods replicate or improve the outcomes from PFA or TCA fixation. As shown in this paper, the method of fixation can affect the strength of protein signals after IHC in different tissues. While PFA revealed epitopes in most tissues, TCA-mediated protein denaturation may provide access to hidden epitopes in regions of the protein of interest that are inaccessible due to PFA cross-linking . Here, we only evaluated these techniques in a single organism, and we demonstrate that fixatives affect the visualization of numerous proteins in several cellular compartments. However, the type of fixative used has been found to affect cellular and tissue morphology in other systems and animals, including human cell culture, goats, rats, and mice (Rahman et al., among others). The fixative type used should be optimized depending on the model system, type of protein, and expected localization. Our results demonstrate that fixation methods can, and should, be tested for improved biological analyses and accurate demonstration of results in wholemount or in section.
Knowledge, attitudes and practices of critical care unit personnel regarding pediatric palliative care: a cross-sectional study
a3666c27-c68f-4f42-9ba6-ad796be7f781
11106871
Pediatrics[mh]
Many children worldwide suffer from life-limiting conditions (from which they will die because there is no reasonable hope of cure) or life-threatening conditions (from which they will die if potentially curative treatment fails) . Life-limiting and life-threatening conditions encompass a wide range of illnesses including cancers, nervous system diseases, congenital anomalies, infectious diseases, and conditions affecting the circulatory and respiratory systems . The prevalence of life-limiting conditions in the pediatric population has increased in recent years and is predicted to continue to increase during the next decade . Estimates of the prevalence of life-limiting conditions in children and young people range from 43.2 per 10,000 to 95.5 per 10,000 . Pediatric palliative care involves the provision of physical, psychological and spiritual care to children with life-threatening illnesses and support to their family members so as to optimize the quality of life of the children and their families . It has been estimated that more than 21 million children worldwide require palliative care each year, with requirements varying between different countries . In China, it has been estimated that more than three million children require palliative care . Pediatric palliative care is provided by a multidisciplinary team that includes doctors, nurses, social workers, psychologists and therapists, and a high level of care for the dying patient requires excellent communication and shared decision making between medical personnel, patients and their families . International standards for pediatric palliative care have been published , and the provision of specialized pediatric palliative care has been reported to enhance patient quality of life and reduce healthcare system costs . However, there are numerous barriers to the implementation of effective pediatric palliative care including family-related factors , misperceptions and lack of knowledge among healthcare providers , insufficient funding, and inadequate organization and integration with other pediatric services . Although pediatric palliative care services are improving in China, it is widely recognized that advances need to be made more rapidly. The Chinese Government published practice guidelines for hospice care in 2017 and selected pilot regions for the implementation of hospice services. In 2019, 71 areas were selected for the promotion of pediatric palliative care . Nevertheless, specialized pediatric palliative care services remain limited in many cities in China due to a shortage of resources, and in many children’s hospitals, palliative care beds are provided by hematology or oncology departments rather than specialized centers. The Butterfly House at Changsha First Social Welfare Institute was the first hospice center for children to be built in China, and Daisy House in Beijing Songtang Hospital is the only family-type pediatric palliative care center in China. In addition to the limited number of specialized centers, it is recognized that there are various barriers to palliative care among medical personnel in China . However, few studies have evaluated the perceptions of healthcare providers in China regarding pediatric palliative care, particularly in critical care units (PICUs), where many children receive palliative care . 
Knowledge, attitude and practice (KAP) surveys provide important information concerning the knowledge, attitudes, beliefs, misconceptions and behaviors of medical professionals towards a health-related topic, both at baseline and as a tool to assess the efficacy of educational interventions . A few previous studies utilized the KAP method to successfully guide the development and implementation of training programs for healthcare personnel in palliative care, reporting promising results . Therefore, the aim of this study was to evaluate the knowledge, attitudes and practices of PICU personnel in China with regard to pediatric palliative care. Study design and participants This cross-sectional, questionnaire-based study enrolled medical personnel working in PICUs in five cities in China between November 2022 and December 2022. The inclusion criteria were: (1) physicians or nurses working in a PICU in Shanghai, Suzhou, Chongqing, Chengdu or Yunnan; (2) at least one year of work experience; and (3) voluntary participation in this study. Medical personnel engaged in other studies, training programs or internships were excluded. This study was approved by the ethics committee of Shanghai Children’s Medical Center affiliated to Shanghai Jiao Tong University School of Medicine (SCMCIRB-K2022169-1), and all participants provided informed written consent. Questionnaire design and distribution The first draft of the questionnaire was designed with reference to the “2022 Society of Critical Care Medicine Clinical Practice Guidelines on Prevention and Management of Pain, Agitation, Neuromuscular Blockade, and Delirium in Critically Ill Pediatric Patients With Consideration of the ICU Environment and Early Mobility”, regarding analgesia, sedation, neuromuscular blockade, delirium, iatrogenic withdrawal, end-of-life care and PICU environment optimization . The questionnaire was then modified according to the comments of five medical professionals with expertise in pediatric palliative care (four clinicians and one social worker). According to their input, all sections of the questionnaire were revised, adding items on the type of training received, knowledge of the relationship between palliative therapy and primary active treatment, and views on the integration of critical care medicine/nursing into palliative therapy and on the withdrawal of support devices. A pilot study was performed by distributing the questionnaire to 57 medical personnel, and the Cronbach’s α coefficient of the questionnaire was determined to be 0.935, which suggested excellent internal consistency (i.e., excellent reliability). The final version of the questionnaire was in Chinese and consisted of 43 questions across four dimensions: demographic information, knowledge, attitude and practice. The demographic information dimension consisted of 9 items that collected the following data: age, gender, occupation (physician or nurse), education level, years of professional work experience, location of the hospital at which employed, previous training in pediatric palliative care, type of training received, and whether pediatric palliative care was available in their department. The knowledge dimension contained 13 items (K1–K13), each of which was scored 1 point for a correct answer and 0 points for an incorrect or unclear answer. The total score of the knowledge dimension ranged from 0 to 13 points. The attitude dimension comprised 12 items (A1–A12).
The possible responses for items A1–A7 were “strongly agree”, “agree”, “neutral”, “disagree” and “strongly disagree”, while those for items A8–A12 were “very greatly”, “greatly”, “moderately”, “slightly” and “very slightly”. Each item in the attitude dimension was scored using a 5-point Likert scale (1–5 points) according to the positivity/negativity of the response selected. Each of items A8–A12 was divided into three parts, and the average of the scores for the three parts was used as the item score. The total score of the attitude dimension ranged from 12 to 60 points. The practice dimension comprised 9 items (P1–P9), with item P9 divided into two parts (the score for item P9 was calculated as the average of the scores for the two parts). The items were scored using a 5-point Likert scale (“always” = 5 points, “often” = 4 points, “sometimes” = 3 points, “rarely” = 2 points, and “never” = 1 point). The total score for the practice dimension ranged from 9 to 45 points. An online questionnaire was constructed using an online survey tool (SoJump), and a QR code that linked to the online questionnaire was distributed to the participants via WeChat. To ensure the quality and completeness of the questionnaire results, each IP address could only be used once for submission, and all items in the questionnaire had to be completed before submission was permitted. All questionnaires were checked for completeness, consistency and validity by members of the research team. Statistical analysis Based on international principles of questionnaire design and previous research experience, it is generally recommended that the sample size should be 5 to 10 times the number of questionnaire items . With 34 KAP items in the questionnaire, the required sample size was at least 170. SPSS 26.0 (IBM Corp., Armonk, NY, USA) was used for the analyses. Quantitative data were tested for normality by the Shapiro-Wilk test. To ensure consistency in data presentation, all variables were expressed as mean ± standard deviation (SD). Normally distributed variables were compared between groups using the t-test or one-way analysis of variance (ANOVA; three or more groups), while non-normally distributed variables were compared using the Mann-Whitney U test or Kruskal-Wallis test. Categorical data are expressed as frequency (percentage) and were analyzed using the chi-squared test. Structural equation modelling (SEM) was used to test the initial hypotheses that knowledge regarding pediatric palliative care has an effect on attitude, and that attitude has an effect on practice. A two-sided P < 0.05 was considered statistically significant.
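As an illustration of the scoring scheme described above, the sketch below computes the three dimension scores for a single hypothetical respondent. It follows the description in the text (knowledge items scored 0/1; attitude and practice items on a 1–5 Likert scale; items A8–A12 averaged over three sub-parts and P9 over two), but the response values are invented and any reverse-scoring of negatively worded items is omitted.

```python
# Hypothetical sketch of the KAP scoring described above, for one respondent.
# Knowledge: 1 point per correct answer (range 0-13). Attitude: 12 items on a 1-5
# Likert scale, A8-A12 averaged over three sub-parts (range 12-60). Practice: 9 items
# on a 1-5 scale, P9 averaged over two sub-parts (range 9-45).
from statistics import mean

knowledge_answers = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0]      # K1-K13, 1 = correct
knowledge_score = sum(knowledge_answers)

attitude_single = [4, 4, 3, 2, 5, 4, 4]                           # A1-A7, coded 1-5
attitude_multi = {                                                 # A8-A12, three parts each
    "A8": [4, 3, 4], "A9": [4, 4, 3], "A10": [3, 3, 4],
    "A11": [3, 4, 3], "A12": [4, 3, 3],
}
attitude_score = sum(attitude_single) + sum(mean(p) for p in attitude_multi.values())

practice_single = [5, 4, 3, 4, 4, 3, 4, 5]                         # P1-P8, coded 1-5
practice_p9 = mean([4, 3])                                         # P9, averaged over two parts
practice_score = sum(practice_single) + practice_p9

print(f"knowledge = {knowledge_score}/13, attitude = {attitude_score:.1f}/60, "
      f"practice = {practice_score:.1f}/45")
```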
Demographic characteristics A total of 204 medical professionals (175 females, 85.78%), including 158 nurses (77.45%) and 46 physicians (22.55%), participated in the survey. The demographic characteristics of the study participants are shown in Table . Just over half the respondents (111/204, 54.41%) were aged ≥ 30 years, and only a minority of the participants (38/204, 18.63%) had a master’s degree or higher. Around one-third of the respondents (70/204, 34.31%) had ≤ 5 years of professional experience. Most participants (147/204, 72.06%) had received training in pediatric palliative care, which was mainly theory-based (53/204, 25.98%). Approximately half of the PICU personnel (97/204, 47.55%) indicated that pediatric palliative care was available in their department. Knowledge scores The average knowledge score was 9.75 ± 2.90 points (possible range, 0–13 points), indicating that the respondents had a moderate level of knowledge about pediatric palliative care. The distribution of the responses to each of the 13 questions in the knowledge dimension is shown in Table . More than 85% of the respondents correctly answered questions regarding the aims of pediatric palliative care (86.27%; item K1), the importance of controlling pain/other symptoms, ensuring patient comfort and supporting parents/caregivers (90.20%; item K4), the use of pain scales to assess pain (94.61%; item K6), the importance of parents/caregivers being present during routine care (87.25%; item K9), sleep deprivation being a major stressor in patients with life-limiting conditions (91.67%; item K10), the importance of incorporating pediatric palliative care from the outset following diagnosis (86.76%; item K11), and the role of pediatric palliative care in quality of life improvement (85.29%; item K12). However, nearly half of the respondents incorrectly believed that opioid substitution therapy should be considered to reduce iatrogenic withdrawal syndrome regardless of the previous dose used, duration of therapy or drug utilized (46.57%; item K7). As shown in Table , subgroup analyses revealed that the knowledge score was higher for physicians than for nurses ( P < 0.001) and for personnel who had received previous training in pediatric palliative care ( P = 0.005).
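The subgroup comparison just reported (knowledge scores of physicians versus nurses) follows the analysis plan described in the methods: check normality first, then apply a parametric or non-parametric two-group test. The sketch below illustrates that decision logic with made-up scores; it is not the study's analysis code or data.

```python
# Hypothetical sketch of the two-group comparison described in the methods:
# Shapiro-Wilk normality check, then an independent t-test or Mann-Whitney U test.
import numpy as np
from scipy import stats

physicians = np.array([11, 12, 10, 13, 11, 9, 12, 10], dtype=float)   # made-up knowledge scores
nurses = np.array([9, 8, 10, 7, 9, 11, 8, 9, 10, 7], dtype=float)

# Use the parametric test only if both groups pass the normality check.
normal = all(stats.shapiro(g).pvalue > 0.05 for g in (physicians, nurses))

if normal:
    stat, p = stats.ttest_ind(physicians, nurses)
    test = "independent t-test"
else:
    stat, p = stats.mannwhitneyu(physicians, nurses, alternative="two-sided")
    test = "Mann-Whitney U"

print(f"{test}: statistic = {stat:.2f}, p = {p:.4f}")
print(f"physicians: {physicians.mean():.2f} ± {physicians.std(ddof=1):.2f}; "
      f"nurses: {nurses.mean():.2f} ± {nurses.std(ddof=1):.2f}")
```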
Attitude scores The average attitude score was 38.30 ± 3.80 points (possible range, 12–60 points), implying that the surveyed PICU personnel did not have a strongly positive attitude to pediatric palliative care. The distributions of the responses to the 12 questions in the attitude dimension are summarized in Fig. . The vast majority of respondents gave very positive or positive responses to questions regarding the selection of palliative care to maintain the patient’s quality of life when invasive therapy might cause discomfort and have little effect on the underlying disease (79.42%; item A1), the role of palliative care in improving outcomes for patients and families (76.97%; item A2), the influence of palliative care on the hopes of the family (69.12%; item A3), the multidisciplinary nature of a palliative care team (90.20%; item A5), the importance of integrating pediatric critical care into the PICU (78.92%; item A6), and the need for education, training and evaluation to ensure high-quality pediatric palliative care (86.28%; item A7). However, a mixed distribution of responses was obtained for the question asking whether palliative care would lead to the patient’s family being put under pressure in a variety of ways (item A4). More than 80% of the respondents considered that pediatric palliative care was influenced by insufficient human resources/inadequate organization (80.89%; item A8.1), time pressures at work (80.40%; item A9.1) and a lack of education/training/knowledge (80.40%; item A9.2). The vast majority of demographic characteristics had no significant influence on the attitude score (Table ). The only exception was technology-based training, which was associated with a slightly higher attitude score ( P = 0.004). A more detailed analysis of the attitude dimension scores revealed that physicians had significantly higher scores than nurses for items A1–A4 (considerations for the selection of pediatric palliative care; P = 0.026) and items A5–A7 (composition of the palliative care team; P = 0.023), whereas the scores were similar for items A8–A12 (Table S3). Practice scores The practice score for the respondents averaged 35.48 ± 5.72 points (possible range, 9–45 points), suggesting that there was room for improvement in the practices of the PICU personnel. More than 80% of participants indicated that they and their team always or often implemented analgesia (83.34%; item P1), screening and interventions to prevent delirium (83.34%; item P4), and high-quality communication (80.89%; item P8) when needed (Fig. ). The practice scores were comparable between subgroups stratified according to the various demographic characteristics (Table ). Further analysis indicated that nurses had a significantly higher score than physicians for item P3 (application of neuromuscular blockers; P < 0.001), whereas physicians had a higher score for item P9 (end-of-life care; P < 0.001; Table ). However, there were no significant differences in the other item scores between physicians and nurses, and training was without significant effect on the individual practice item scores (Table ). SEM Structural equation modelling was used to explore the factors that might influence KAP scores, with fit indices demonstrating acceptable model fit (Fig. and Table ). It was found that knowledge had a direct positive effect on attitude (β = 0.69 [0.28–1.10], p = 0.001), as well as a significant indirect effect on practice (β = 0.82 [0.36–1.28], p < 0.001).
The effect of attitude on practice was significant as well (β = 1.18 [0.81–1.56], p < 0.001) (Table ).
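The knowledge → attitude → practice model can be approximated outside a dedicated SEM package with two regressions, where the indirect effect of knowledge on practice is the product of the two path coefficients. The sketch below uses simulated data purely to illustrate the structure of such a path analysis; it does not reproduce the study's model, estimates, or fit indices.

```python
# Illustrative path analysis: knowledge -> attitude -> practice.
# Simulated data only; the coefficients here are not the study's estimates.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 204
knowledge = rng.normal(9.75, 2.9, n)
attitude = 0.7 * knowledge + rng.normal(0, 3, n) + 31       # hypothetical data-generating model
practice = 1.1 * attitude + rng.normal(0, 4, n) - 7

# Path a: knowledge -> attitude.
a_fit = sm.OLS(attitude, sm.add_constant(knowledge)).fit()
a = a_fit.params[1]

# Path b: attitude -> practice, with knowledge included to estimate the direct path c'.
X = sm.add_constant(np.column_stack([attitude, knowledge]))
b_fit = sm.OLS(practice, X).fit()
b, c_prime = b_fit.params[1], b_fit.params[2]

indirect = a * b   # indirect effect of knowledge on practice, via attitude
print(f"a (knowledge->attitude) = {a:.2f}, b (attitude->practice) = {b:.2f}")
print(f"indirect effect = {indirect:.2f}, direct effect c' = {c_prime:.2f}")
```

Confidence intervals for the indirect effect would normally come from bootstrapping or from the SEM software itself.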
It was found that the knowledge score was higher for physicians than for nurses and for personnel with previous training or greater professional experience. Demographic characteristics had only limited effects on the attitude and practice scores, while knowledge had a direct positive effect on attitude and an indirect effect on practice. To our knowledge, this is the first survey evaluating the knowledge, attitudes and practices of PICU personnel regarding pediatric palliative care. This study provides new insights into the knowledge, attitudes and practices of physicians and nurses in Chinese PICUs with regard to pediatric palliative care. The results of this study may be used to design and develop targeted education and training interventions to support PICU personnel providing pediatric palliative care.
Knowledge gaps identified after analysis of study results In the present study, 7 of the 13 questions in the knowledge dimension were answered correctly by more than 85% of the surveyed PICU personnel, including items relating to the aims of pediatric palliative care (item K1), the importance of controlling pain/symptoms to make the patient comfortable and supporting parents/caregivers (item K4), the use of pain scales to assess pain (item K6), the importance of parents/caregivers being present during routine care (item K9), sleep deprivation as a major stressor in patients receiving palliative care (item K10), the importance of the early initiation of pediatric palliative care after diagnosis (item K11), and the benefits of pediatric palliative care on patient quality of life (item K12). In addition, more than two-thirds of the participants were aware that pediatric palliative care is provided for a wide range of non-malignant as well as malignant conditions (item K2), that patient comfort is an important aim, with unnecessary examinations and treatments avoided (item K8), and that active treatment to prolong survival should not be promoted if it compromises quality of life (item K13). However, nearly 40% of the medical personnel incorrectly believed that pediatric palliative care is only given to patients with life-threatening diseases after treatment has failed (item K3), and nearly half of the respondents incorrectly believed that opioid substitution therapy should be considered to reduce iatrogenic withdrawal syndrome regardless of the dose used, the duration of therapy or the drug utilized (item K7). Notably, the average knowledge score was 9.75 ± 2.90 points out of a possible maximum of 13 points. The above findings highlight knowledge gaps among the surveyed physicians and nurses, particularly with regard to the indications for palliative care and opioid use in children receiving palliative care. Occupational, but not demographic, factors influenced knowledge scores The knowledge levels of the participants in the present study are broadly comparable to those reported previously by surveys of clinicians and nurses. For example, Zuniga-Villanueva et al. described a mean score of 6.8 out of 10 for pediatricians in Mexico . Abuhammad et al. found that nurses in Jordan had a low score in knowledge of pediatric palliative care . Zeru et al. determined that only 62.8% of surveyed nurses in Ethiopia had a good level of knowledge . Similarly, Ghoshal et al. reported that less than half of the surveyed doctors and nurses in India had a good level of knowledge (defined as a knowledge score ≥ 70%) . Detsyk et al. found that 25.3% of healthcare workers providing medical services to children (including nurses, general practitioners and pediatricians) did not know the meaning of pediatric palliative care, with 71.5% of the respondents believing it was mainly provided to patients with cancer, and only 54.8% of the respondents being aware that it was provided to children with incurable chronic diseases . Additionally, only 59.7% of the respondents in the study of Detsyk et al. knew that palliative care should be initiated at the time of diagnosis of an incurable disease, while only 52.6% of the participants were aware that palliative care should also offer support to the relatives of seriously ill children . In agreement with our data suggesting a deficiency in knowledge regarding opioid use, Stenekes et al.
concluded that healthcare providers in Canada had knowledge gaps related to opioid use and the development of tolerance to opioids and sedatives . Madden et al. also found variation in the level of comfort with different opioids among physicians in the USA . In this study, the subgroup analyses demonstrated that the knowledge score was higher for physicians than for nurses. Furthermore, a higher knowledge score was associated with previous training and working in a department where pediatric palliative care was available. Our findings are consistent with prior research concluding that a higher level of knowledge in pediatric palliative care was associated with training and greater experience in palliative care . Occupation type has also been reported to influence knowledge level . We suggest that the implementation of education and training programs, such as those described previously , may help to improve the knowledge of PICU personnel in China regarding pediatric palliative care. Specific beliefs shaped the mostly positive attitude The mean attitude score of 38.30 ± 3.80 points out of a maximum of 60 points indicates that, overall, the participants in this study had a moderately positive attitude toward pediatric palliative care. The majority of respondents (> 69%) gave “very positive” or “positive” answers to 6 of the first 7 questions in the attitude dimension (A1–A3, A5–A7), whereas the responses for item A4 varied. More than 60% of the respondents believed that pediatric palliative care was influenced by economic (item A8), personnel-related (item A9), family-related (item A10), social (item A11) and implementation-related (item A12) factors. In addition, structural equation modelling confirmed that knowledge had a direct positive effect on attitude, whereas its influence on practice was indirect, mediated through attitude. Previous studies have also reported a variety of factors believed to affect the implementation of effective pediatric palliative care, including family preference for life-sustaining treatment, family not ready to acknowledge an incurable condition, parent discomfort with the possibility of hastening death , inadequate training , clinician misperceptions and emotional burden, prognostic uncertainty about treatment options , socio-cultural factors, nature of the patient and disease, insufficient training, regulatory/political issues , lack of adequate funding, lack of palliative care programs, difficulty integrating palliative care into existing pediatric care at the organizational level, and lack of knowledge . New education and training programs are needed to further improve practice The average practice score was 35.48 ± 5.72 points (possible range, 9–45 points), which suggests that there was room for improvement in the practices of the PICU personnel enrolled in this study. Most respondents (> 71%) indicated that, when required, they and their team always/often took care of analgesia, sedation, environment optimization, symptom management and communication, as well as screening and interventions to prevent delirium or to mitigate iatrogenic withdrawal syndrome. However, only around half of the participants reported having experience using neuromuscular blockers when indicated. A previous study in the Netherlands reported that neuromuscular blockers were administered in 16% of cases at the time of withholding/withdrawing life-sustaining treatment .
Less than half of the respondents in the present study stated that they or their team withdrew life-sustaining treatment when necessary, and less than two-thirds provided support to the parents/caregivers of patients receiving end-of-life care. Overall, our findings concur with those of other studies reporting suboptimal practices . Somewhat unexpectedly, despite being associated with higher knowledge scores, previous training appeared to have little or no effect on the attitudes and practices of PICU personnel regarding pediatric palliative care; practice was influenced by knowledge mostly indirectly, via attitude. This emphasizes the need for improved education and training programs that better target suboptimal attitudes and practices. In line with previous studies in various settings, some crucial aspects of the implementation of a pediatric palliative care program could be supported by the KAP approach, in particular collecting data to characterize the need for pediatric palliative care . Additionally, KAP assessment would help to raise awareness among hospital administration and clinical personnel about pediatric palliative care, and potentially aid in forming a balanced multidisciplinary palliative care team. Thus, the above results may be used to provide education and training to clinical personnel and to support the continuous provision of pediatric palliative care in the PICU. Strengths and limitations of the study This multicenter study used KAP methodology and a validated questionnaire to address the availability of, and barriers to, the implementation of pediatric palliative care; the results were further strengthened by the application of a structural equation model, which allowed the potential influences on practice scores to be explored. The study population included a wide variety of ICU professionals of different ages and experience levels; however, the sample size may have been too small to detect some specific differences between groups. Participants from five large cities in China answered the questionnaire, which increases the generalizability of the results, although local particularities should still be taken into account. Although the KAP questionnaire was developed based on published recommendations and demonstrated good reliability, it may have limitations with regard to its ability to assess perceptions of pediatric palliative care. Finally, this study did not evaluate whether education/training programs would enhance the questionnaire scores, which is of interest for future research.
The findings of this study provide new insights into the knowledge, attitudes and practices of PICU personnel in China regarding pediatric palliative care. We anticipate that the results may help guide the development and implementation of education and training programs to improve the delivery of pediatric palliative care by PICU personnel in China.
Applications to augment patient care for Internal Medicine specialists: a position paper from the EFIM working group on telemedicine, innovative technologies & digital health
7850049f-1aec-47f0-a861-55d862de597a
11239350
Internal Medicine[mh]
Main objective In various medical settings in Europe, numerous innovative technological solutions have been applied. This creates a specific need to comprehensively evaluate the applications of these solutions in the field of Internal Medicine. The European Federation of Internal Medicine (EFIM) deeply encourages Internal Medicine societies and internists in Europe to actively engage in Innovative Technologies by means of a written statement. The aim of this position paper is to provide Internal Medicine specialists as well as health professionals, managers, and decision makers, with a framework that highlights the best practices implemented in different European countries. This document serves as a resource, delineating issues and terminology and suggesting recommendations. However, it is not intended to override regulatory or credentialing recommendations and guidelines. Instead, it aims to align with and support the professional and ethical standards of the profession. The paper suggests potential future developments of patients’ and clinicians’ behavior and their interactions, illustrating four possible scenarios. Introduction Telemedicine, as defined by the World Health Organization (WHO), entails the delivery of healthcare services by healthcare professionals employing Information and Communication Technology (ICT) where distance is a critical factor . Telemedicine facilitates the exchange of pertinent information related to diagnosis, treatment, prevention, research, and disease assessment . In addition, significant advancements in information technology, the advent of high-speed internet, and the proliferation of smartphones over the past decade have greatly enhanced the accessibility of telemedicine services. The terms “telemedicine” and “telehealth” are often used interchangeably, although they can have distinct meanings. Telemedicine refers to “the provision of healthcare services, including remote care and online pharmacies, through the use of information and communication technologies, in situations where the health professional and the patient (or several health professionals) are not in the same location” , while telehealth includes a wide range of health promotion and education toward a healthy lifestyle, which also includes providing remote care , such as telemedicine, telenursing, teletherapy, and telepsychology. The goals are similar for each: to improve access to healthcare services. Telemedicine has a positive impact on patient health behavior, medication adherence, and quality of life due to its efficiency and cost-effectiveness. Moreover, the utilization of telemedicine in the field of Internal Medicine can improve the management of various chronic conditions and clinical outcomes. The adoption and implementation of evidence-based telemedicine systems should be based on Internal Medicine cases and tailored to the specific local context . Despite its evident advantages, the widespread adoption of telemedicine has been hindered by technical limitations at the point of care, regulatory policies, and limited reimbursement structures . However, the emergence of the COVID-19 pandemic has accelerated change in many areas and led to the rapid adoption of diverse telemedicine services. During this period, telemedicine has shown its potential to improve access to healthcare for patients with or without SARS-CoV-2 infection, while also ensuring the safety of patients and healthcare workers by maintaining physical distance . 
Nevertheless, there is substantial evidence of non-use and discontinued use of telemedicine. User-related factors, such as attitudes and technical literacy, are identified as key barriers to adoption, along with technical aspects such as poor usability . Therefore, the systematic implementation of telemedicine should not only be based on technical feasibility, but also validated by evidence of real-world results and, ideally, robust evaluation. In light of these considerations, the European Federation of Internal Medicine (EFIM) strongly recommends that Internal Medicine societies and specialist physicians throughout Europe take a proactive role in leading and influencing the development and application of telemedicine and digital technologies in healthcare. The purpose of this position paper is to outline the roles that telemedicine applications play in the practice of Internal Medicine in Europe. Methods The development of this position paper involved the participation and contribution of all the authors through a comparison process conducted remotely from July 2021 to December 2023. The primary methodology employed for this endeavor was a SWOT exercise with a Delphi panel. The Delphi method is a forecasting framework that involves multiple rounds of questionnaires sent to a panel of experts. Its application is considered efficient and simple, and it often results in a consensus among a group of experts. In this particular instance, the authors of the paper qualified as experts in the field, as they had applied telemedicine techniques, directly or indirectly, within their respective national contexts . The experts of the Telemedicine Working Group collaboratively responded to the overarching question "What role should Telemedicine play in the care of Internal Medicine patients?" For each identified topic, a panel of experts was selected to encompass expertise in the clinical, public health, health economics, and statistics domains. The panel assigned scores reflecting the perceived value, considering the balance between strengths and weaknesses, and the potential risks, considering the balance between threats and opportunities. These scores were evaluated on a Likert scale from −10, indicating minimum added value or risk, to +10, representing maximum added value or risk. All panel discussions were carried out remotely . Results 4.1 Telemedicine SWOT analysis A comprehensive checklist was developed by closely aligning the findings from the review of relevant literature on telemedicine practice with the outcomes derived from the Delphi analysis. Following the structure of the Delphi questionnaire, the checklist included factors that facilitate and hinder the implementation of telemedicine. The SWOT analysis is presented in . Substantial evidence now supports the strengths and opportunities associated with telemedicine. Telemedicine has been shown to reduce consultation time , eliminate unnecessary travel for both patients and healthcare professionals , facilitate healthcare delivery in remote areas , and contribute to cost savings . Integrating telemedicine into a well-coordinated care process has been demonstrated to improve health outcomes . In fact, patient-provider collaboration ("co-care") and patient self-management ("self-care") are not only an expression of patient-centeredness but will also increase the cost-effectiveness of healthcare due to improved clinical outcomes and increased patient responsibilities and inputs.
Conversely, there is evidence highlighting the weaknesses and risks associated with telemedicine. Although the process of digitalization impacts approximately 90% of the healthcare sector, digital health extends beyond technological implementation and involves profound cultural and social implications. It fundamentally alters the role of physicians and patients and the dynamics of their relationship. Patients now play an active role in the treatment process, fostering a patient-centric model where technology serves as a key tool for encouraging patient engagement and responsibility . It is crucial to meticulously examine ethical issues in the delivery of telemedicine to ensure the confidentiality and security of patient information, address inefficiencies among physicians, and improve the overall quality of healthcare services. Ethical concerns related to telemedicine can be viewed from various perspectives, including technology, physician-patient relationships, data confidentiality and security, informed consent, and satisfaction of patients and their families with telemedicine services. Prioritizing ethical considerations in telemedicine is an essential aspect of ensuring the delivery of high-quality healthcare services . The physical examination performed by healthcare professionals, including medical doctors, has been a fundamental aspect of medical practice for centuries. This examination, involving sensory engagement, has been instrumental in enabling healthcare practitioners to assess the health status of their patients . Notably, research indicates that patients place high value on the physical examination not only for its perceived higher accuracy but also for its emotional attributes . Moreover, the diagnostic process is not solely based on a single episode of rational decision-making; instead, it involves continuous monitoring of the patient's condition and subsequent adjustment of care . Telemedicine should be recognized as an alternative form of healthcare delivery that is distinct from traditional medical care. In this context, the interaction between technology and the local context holds significant importance . In addition to the information presented in , there remains unexplored territory regarding the ethical dimensions of telemedicine. These include aspects related to the physician-patient relationship, data confidentiality and security, informed consent, and the satisfaction of both patients and caregivers. Following the completion of the process, four scenarios were generated, considering various potential future developments.
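To make the scoring scheme described in the Methods concrete, the short sketch below aggregates hypothetical panel ratings into per-topic value and risk summaries on the −10 to +10 scale described above. The topics and scores are invented for illustration and do not reproduce the working group's actual data.

# Illustrative only: each panellist rates every topic on two -10..+10 scales
# (added value and risk); topic-level results are summarised as means.
import statistics

ratings = {
    "remote monitoring of chronic heart failure": {
        "value": [7, 8, 6, 9, 7],   # one score per panellist
        "risk":  [-2, 0, -3, -1, -2],
    },
    "first visits without any physical examination": {
        "value": [1, -2, 0, 2, -1],
        "risk":  [5, 6, 4, 7, 5],
    },
}

for topic, scores in ratings.items():
    value = statistics.mean(scores["value"])
    risk = statistics.mean(scores["risk"])
    print(f"{topic}: mean value {value:+.1f}, mean risk {risk:+.1f}")

Topics combining a high mean value with a low mean risk would be natural candidates for the best-practice recommendations discussed in this paper.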
Strengths and opportunities 5.1 Accelerated digitalization in Internal Medicine During the SARS-CoV-2 coronavirus pandemic, telemedicine emerged as a natural and necessary solution to address global emergency healthcare needs . Telemedicine consultations, or teleconsultations , are valuable in diverse clinical scenarios, allowing for accurate differential diagnoses and appropriate treatment recommendations . Importantly, after such consultations, patients not only received medical advice but also benefited from e-prescriptions, e-referrals for further examinations (such as laboratory tests), and e-sick notes.
This procedure significantly reduces the risk of infection for patients who would otherwise have to physically visit a healthcare center and wait in traditional waiting rooms for their appointments with doctors. Examples of use-cases of evidence-based telemedicine applications in Internal Medicine include: Teletriage and remote consultations between patients and physicians in rural and remote areas or where mobility is an issue or care for the older adult in their home environment, especially when living independently at home . In time-sensitive emergency care scenarios, where access to a specialist cannot be provided on-site within a safe timeframe, such as in the context of stroke care . Telemonitoring of chronic conditions, such as chronic heart failure and arrhythmias . Video consultations as part of long-term patient care . Remote consultations as a protection strategy during the COVID-19 pandemic . Additional self-directed care mechanisms described in a later section. 5.2 Digital literacy: a core skill for patients & clinicians The process of digitalization requires digital health literacy, which is an extension of health literacy and uses an equivalent operational definition in the context of technology. Digital health literacy, or electronic health (e-health) literacy, focuses on an individual’s ability to access, understand, and engage with digital healthcare materials and technologies to contribute to quality of life . Technology solutions have the potential to promote health literacy. However, to be effective, health technology solutions must focus on functional and critical skills rather than building literacy and numeracy skills. Effective examples of functional and critical skills include operating the healthcare system, communicating with healthcare professionals, and sharing decision-making . Stakeholders involved in telemedicine should also have adequate digital literacy and e-health literacy. Specifically, healthcare professionals need to develop specific competencies to effectively apply telemedicine to their routine practices. As digital health resources become more prevalent, the individual ability to interact with technology is to be assessed to ensure that the technology is appropriate for the intended audience . Training of health care professionals should include : discussion of the individual stages of teleconsultations. patient interviews via telemedicine. examples of correct recommendations. attention to the alarm symptoms. sick notes issued after teleconsultations. Differences in internet access can also affect the quality and content of medical education . At a society level, educational campaigns should promote and support increased access to digital literacy and infrastructure necessary for successful eHealth solutions . Interprofessional medical care or network medicine across healthcare settings can benefit from the development of eHealth competencies in physicians , advanced practice nurses, specialty nurses, physician assistants, and additional affiliated health professionals. However, healthcare professionals also need to evaluate additional factors in telemedicine application, such as deployment costs at the point of care and high-speed Internet access for patients. Digital health inequity is defined as a systemic inequality that results from infrastructure disparities between countries and regions . 
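Because the section above notes that individuals' ability to interact with digital health technology should be assessed, a small scoring sketch may help illustrate what such an assessment can look like in practice. It assumes an eHEALS-style instrument of eight items answered on a 1-5 Likert scale, giving a total between 8 and 40; the item wording and any interpretation of the total are outside the scope of this illustration.

# Illustrative scoring of an eHEALS-style digital health literacy screen.
def ehealth_literacy_score(item_responses: list[int]) -> int:
    if len(item_responses) != 8 or not all(1 <= r <= 5 for r in item_responses):
        raise ValueError("expected eight responses, each between 1 and 5")
    return sum(item_responses)

score = ehealth_literacy_score([4, 3, 4, 2, 3, 4, 3, 2])
print(score)  # 25; higher totals indicate greater self-reported e-health literacy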
5.3 Telemedicine classification and modalities To establish common definitions for the different typologies of telemedicine, Internal Medicine specialist physicians may distinguish them according to the methods of interaction employed, as shown below: According to purpose: teleconsultation, telediagnosis, telemonitoring, telecare, teletraining, telerehabilitation. According to the technology employed: mobile health app, telephone, mail, videoconference, chat, messaging within the Electronic Health Record (EHR). According to the interlocutor: physician-patient, physician – physician, tele-training. According to the timing of execution: synchronous (interlocutors interact simultaneously), asynchronous (interlocutors interact at different times). The methodologies in patient-physician interactions in telemedicine can be categorized into two main modalities: synchronous live and asynchronous interactions . However, academic studies comparing outcomes of asynchronous and synchronous care are still limited . 5.3.1 Synchronous live interactions Synchronous live interactions involve real-time, instant exchanges between participants within a telemedicine environment. This mode of interaction is widely accepted and facilitates simultaneous transmission of information in both directions. This mode also allows healthcare professionals to evaluate patients face-to-face and gain crucial information about their care and disease status. Examples of synchronous live interactions: Teleconsultations: between healthcare professionals or between a healthcare professional and a patient using synchronous information and communication technology platforms such as video, chat, and phone. Teleconsultations can be employed as an alternative to face-to-face consultations. Teletherapy: remote therapy sessions, such as physiotherapy, occupational therapy, psychology, and speech therapy, accomplished between a therapist and a patient through synchronous ICT communication. Remote monitoring: digital solutions, such as smartphone apps or web portals, that enable healthcare professionals to remotely monitor patient health data, such as blood pressure, electrocardiogram (ECG), and glucose levels. This technology makes it possible to intervene at the right time and contributes to the prevention of hospitalization or urgent hospital admission. Remote monitoring has great potential in the continuous monitoring and prevention of exacerbations in chronic diseases. Remote monitoring is primarily asynchronous, but it can sometimes be combined with synchronous teleconsultations. 5.3.2 Asynchronous interactions Asynchronous interactions, or "store-and-forward" technology, facilitate the interaction of participants at separate time intervals in telemedicine. Asynchronous telemedicine services include various forms of communication, such as emails, secure text messaging, or services that allow both parties to engage at different times. This approach benefits healthcare professionals as they have the flexibility to review patient materials or communications on their own schedule. Asynchronous interactions enable patients to access healthcare services at their convenience in their preferred settings. Asynchronous approaches are particularly relevant in fields such as dermatology, radiology, orthopedics, ophthalmology, and cosmetic surgery where image and video sharing are often required.
However, there are also advantages in Internal Medicine consultations, where an asynchronous approach can be utilized within a holistic, patient-centered approach. Examples of asynchronous approaches include: Remote patient monitoring (telemonitoring): includes the registration, transmission and processing of body parameters such as vital signs, and medical management through electronic systems. Wireless devices, wearable or implantable sensors, and medical apps can be integrated. Chronic diseases can be managed according to the patient's needs. Most aspects are asynchronous, but synchronous elements, such as video consultations, can be integrated. Current innovations include the integration of Artificial Intelligence and Machine Learning algorithms for monitoring and early detection, e.g., in cardiac arrhythmias and heart failure . Remote interpretation: telemedicine that includes authorized access to healthcare data by healthcare professionals for interpretation at any time and location. 5.3.3 E-messaging E-messaging, or chat-based interactions, involves exchanging messages via electronic devices such as tablets and mobile phones with the use of mobile networks and the Internet. Technologies employed for e-messaging include Short Message Services (SMSs) and applications such as FaceTime, Line, Messenger, WeChat, WhatsApp, and Viber. Approved and General Data Protection Regulation (GDPR)-compliant services should always be used to secure the transmission of patient personal health data, vital signs, physiologic data, diagnostic images, and self-reports to healthcare professionals. These technologies allow healthcare professionals to review and deliver consultations, diagnoses, and treatment plans at a later time, as well as support patient compliance, monitoring, prevention, treatment, and appointment reminders. Privacy and data security are essential in e-messaging technologies. National Health Services (NHS) provided comprehensive guidelines for e-messaging services in Europe. Compliance with Europe's General Data Protection Regulation protects patient information and maintains data privacy . 5.3.4 Self-directed care mechanisms Self-directed care mechanisms, which can be synchronous or asynchronous, include self-management that allows individuals to obtain healthcare information and schedule patient appointments at any time and location. In addition, self-management includes diagnostic tools, video tutorials, educational resources, and the ability to self-assess health indicators . Personal alarm systems, such as an alarm button or a wristband, enable patients to promptly contact response call centers in the event of a fall, personal injury, accident, or other critical emergencies . The following list provides examples of various telemedicine applications in the field of Internal Medicine . Complex chronic patient care during episodes of exacerbation. Hospitalization at home. Telemonitoring of vital signs in exacerbation. Video consultations with different specialists. Addressing uncertainties in treatment modalities for individuals with chronic conditions, such as health education and health literacy. New patients referred through teleconsultation, e.g., consultations with the primary care doctor related to analytical alterations, the treatment of chronic diseases. New patients evaluated with no physical examination. Periodic medical checks of stable chronic pathologies. Older adult patients with access restrictions. Intensive follow-up following hospital discharge.
Individual or group training consultations via video call. A standardized protocol is essential for conducting teleconsultations, as it facilitates the acquisition of all the relevant information required . The next list summarizes the teleconsultation steps: 5.3.5 Pre-consultation Inform the patients about the necessary technical requirements for the consultation. Recommend that patients take notes and have questions ready for the consultation. Specify the estimated duration and type of the consultation. Prepare for the consultation by reviewing the clinical history and complementary tests. 5.3.6 During the consultation Identify the patients. This is accomplished either through familiarity with the patients or by having them present their electronic health card or identity card to the camera. Request consent for the consultation. Communicate messages in an orderly manner. Allow patients to express their doubts. Verify that the information has been fully comprehended. Review the agreements and the alerts on possible warning signs and how to act on them. Do not record excessively long video consultations. Prefer software with end-to-end encryption for video. 5.3.7 Post consultation Document that the consultation was conducted by video. Document the relevant aspects of the consultation, including the recommendations for further treatment, re-consultation, and/or referral to another health care provider. In addition to the standardized conduct of teleconsultations, specific warning signs should be carefully evaluated to protect patient safety and prevent the potential for reduced accuracy of remote visits compared to in-person visits. A summary of warning signs is shown in the following list . Issues in understanding relevant medical information. Sudden worsening of clinical symptoms. Appearance of new symptoms that require physical examination. Signs of clinical instability or unexpected evolution. Need for hospital admission or emergency care. Need to communicate a poor prognosis or negative news. Situations that generate anxiety for the patient or the family. New patients with complex diagnoses. Uncooperative patients. 5.4 Enhancing the benefits of telemedicine applications In the European region, harmonized guidance on the usage of telemedicine among specialist physicians is lacking. A telemedicine sharing protocol for European specialists does not exist at the time of this writing. Numerous countries, concentrated in particular in Southern Europe, had insufficient operational and legislative tools to rapidly introduce telemedicine services in outpatient specialist care . Telemedicine can significantly reduce readmissions when monitoring patients with chronic diseases . However, the inability to conduct a complete physical examination during a teleconsultation is potentially a major barrier to the development of remote consultation services . The application of telemedicine devices, such as e-stethoscopes or video cameras, and artificial intelligence algorithms will increase the possibilities of telemedicine in the future. Such development leverages existing experience from fields such as teledermatology, which has successfully integrated digitally enabled clinical examination of the skin . Additionally, expressions of empathy can support trust during a patient-physician encounter, and the frontier of digital empathy may be paramount in sustaining such constructs in telemedicine visits .
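As a purely illustrative example of how the warning signs listed in the teleconsultation protocol above could be operationalized, the sketch below flags encounters that should be escalated from remote follow-up to an in-person visit. The flag names are hypothetical and the rule is not a validated triage instrument; a rule of this kind would, however, fit naturally into the hybrid model of alternating in-person and remote appointments discussed next.

# Illustrative only: warning signs expressed as a simple escalation rule.
WARNING_SIGNS = {
    "information_not_understood",
    "sudden_symptom_worsening",
    "new_symptoms_requiring_physical_exam",
    "clinical_instability",
    "needs_admission_or_emergency_care",
    "poor_prognosis_to_communicate",
    "high_patient_or_family_anxiety",
    "new_patient_with_complex_diagnosis",
    "uncooperative_patient",
}

def requires_in_person_visit(recorded_flags: set[str]) -> bool:
    """Return True if any recorded flag matches a listed warning sign."""
    return bool(recorded_flags & WARNING_SIGNS)

print(requires_in_person_visit({"sudden_symptom_worsening"}))  # True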
A hybrid model could be considered for long-term care in both primary care and specialist care and would also need evaluation over the long term. This model allows alternating in-person appointments at the health facilities and teleconsultation appointments . A similar model employing telephone follow-up visits has been used in many clinical trial protocols, significantly reducing the number of health center visits and hospitalizations . 5.4.1 Noncommunicable chronic disease and multimorbidity care Various academic studies have demonstrated that telemedicine is not inferior to in-person consultations in the management of patients with heart failure, hypertension, and diabetes. Telemedicine can effectively prevent exacerbations, hospitalizations, and disease progression . However, the efficacy of telemedicine compared to in-person visits depends on the specific medical field and the patient characteristics. In addition, real-time interactive consultation may be more beneficial than delayed consultation . Monitoring therapy adherence via telemedicine tools is essential. Telemedicine tools include a range of devices, such as continuous vital sign monitors, digital reminders, ingestible sensors, video observation, and smartphone applications. Trials evaluating the effectiveness of telemedicine tools have been conducted in China, India, Italy, Belarus, and the United States . 5.4.2 Aging in place with telemedicine The identification of older adult patients with mild cognitive impairment or dementia, who may be at a high risk of acute conditions, can be eased by mobile technologies and telemedicine. Telemedicine solutions should be customized for the older adult to be user-friendly and potentially automated . The introduction of telemedicine can reduce the financial burden on public expenditures related to the older adult segment . Telemedicine improves the reach and efficiency of public healthcare resources and encourages collaboration among healthcare professionals and patients/caregivers. In addition, this approach contributes to reduced hospitalization rates and the associated risks, such as falls, healthcare-associated infections and compensation claims, and improves treatment adherence . Appropriate utilization of emergency services and optimal ward utilization can also benefit from such technological enhancement, because various preventive and real-time monitoring actions can be performed remotely, eliminating the necessity for patients to physically visit a healthcare facility . This is especially useful for the older adult and frail population, who account for much of the inappropriate utilization of healthcare services . This could also apply to territorial integration between acute hospital wards and intermediate care facilities, such as rehabilitation or palliative care structures. Their timely coordination is paramount in easing the burden of the discharge process in hospital wards . Nevertheless, the efficacy of telemedicine depends on individual digital literacy levels and the development of reliable digital infrastructures . Older adults would especially benefit from telemedicine, as the continuous monitoring of vital parameters can slow the progression or exacerbation of chronic conditions . Telemedicine can also build a sense of community, especially for isolated patients. In conclusion, the integration of human intelligence and telemedicine can produce increasingly personalized medicine, the identification of risk factors and the extrapolation of patient risk curves.
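The continuous monitoring of vital parameters described above can, in its simplest form, be reduced to a threshold rule over recent home readings. The sketch below is illustrative only; the 140/90 mmHg cut-offs are placeholders rather than clinical guidance, and the function and variable names are hypothetical.

# Illustrative threshold-based telemonitoring flag for home blood pressure.
from statistics import mean

def flag_blood_pressure(readings_mmhg: list[tuple[int, int]]) -> bool:
    """Flag a patient for clinician review if the mean of recent home
    systolic/diastolic readings exceeds the placeholder thresholds."""
    systolic = mean(r[0] for r in readings_mmhg)
    diastolic = mean(r[1] for r in readings_mmhg)
    return systolic >= 140 or diastolic >= 90

last_week = [(152, 94), (147, 88), (150, 91)]
print(flag_blood_pressure(last_week))  # True -> notify the care team

In practice such rules would sit alongside trend analysis and clinician review rather than triggering interventions on their own.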
Telemedicine has also proved to be effective in counteracting geriatric depression . 5.5 Roles and responsibilities of other healthcare members Health Information Technologies (HIT) have the potential to improve the quality of interprofessional and team care coordination, benefiting patients as well as healthcare professionals. Specifically, HIT can support shared decision-making, access to care information (such as open notes) and care services (such as synchronous remote telehealth services), and health education. HIT can enhance team care much like an additional member of the healthcare team, automating routine or tedious tasks so that human agents can focus on providing humanized healthcare. Beyond routine tasks such as scheduling or administrative aspects of care, HIT can further evolve to enable previously unfeasible models of care, such as hospital-at-home care or intensive remote monitoring in selected conditions. Augmented intelligence provides humans with actionable data and information, enhancing human intelligence and decision-making . When planning for novel care models, it is essential to engage HIT developers and clinical informaticians with healthcare training, such as physicians, pharmacists, nurses, and other relevant professionals. In addition, involving patients and their advocates in the design process can also be beneficial. This inclusive approach guarantees the ethical and equitable design of healthcare systems . 5.6 Methods to enhance clinical decision-making in telemedicine Despite growing political support for telemedicine systems, their standardization within clinical practice has been hampered by concerns about their effectiveness, cost-effectiveness, and user acceptance . Telemedicine makes it possible to provide healthcare services regardless of geographical constraints. Telemedicine and its associated technologies enable us to switch from the movement of individuals to the flow of information . Telemedicine possesses several positive attributes, such as reduced entry barriers, established health services, integration of primary and specialty care, delivery of care through smart devices in patient homes, patient preference, and convenience. These factors are particularly significant for fragile and vulnerable populations . In addition, telemedicine favors the integration of local health systems and hospitals by facilitating communication between internal medicine specialists and general practitioners. 5.7 Challenges and benefits of health technology assessment application to telemedicine technologies Telemedicine offers benefits in various cases by easing the load on healthcare infrastructure and personnel and ensuring timely and adequate care to patients who face mobility issues and are geographically distant from appropriate medical facilities . However, additional telemedicine dimensions requiring evaluation concern the ethical and social aspects of telemedicine, such as the patient-physician relationship, data confidentiality and security, informed consent, and patient and caregiver satisfaction. The most suitable telemedicine devices should be carefully selected, procured, and connected with medical professionals for evaluation. While technology has the potential to improve patient access and health outcomes, not all technological innovations can achieve their intended purposes . Thus, the investigation of different telemedicine technologies is necessary to prioritize the ones that are efficient and impactful.
The Health Technology Assessment (HTA) process plays a crucial role in evaluating the adequacy of telemedicine technologies. The HTA carries out a systematic assessment to determine the suitability and effectiveness of various telemedicine approaches . 5.8 Ethical and legal considerations European law touches upon various types of regulation, including primary and secondary legislation, as well as soft law in the form of guidelines and communications issued by the European Commission. With reference to primary law, the Treaty on the Functioning of the European Union (TFEU) plays a central role . Article 56 of the TFEU prohibits any restrictions on the freedom to provide services, while Article 57 of the TFEU defines the very notion of a service. Medical care falls within the scope of the Treaty as it regulates the free movement of services. As for secondary law, Regulation (EU) 2016/679, known as the General Data Protection Regulation (GDPR) and replacing Directive 95/46/EC, and Directive 2011/24/EU on patients' rights in cross-border healthcare are the main reference acts. Regulation (EU) 2016/679 concerns the protection of personal data, including health and genetic data, and their free movement, while Directive 2011/24/EU emphasizes the rights of patients in cross-border healthcare . Furthermore, this Directive aims to provide clear regulation for the phenomenon known as "medical tourism." Recitals 19 and 20 of its preamble already impose an obligation to inform patients receiving cross-border healthcare about the applicable rules. Upon request, healthcare professionals are also required to provide specific information about the healthcare benefits they offer and the treatment options available. Directive 2011/24/EU further clarifies the information obligations of healthcare professionals under Article 4 . According to this directive, healthcare professionals should offer relevant information to support individual patients in making informed decisions, including details on treatment options, availability, quality, and safety of healthcare services, as well as prices for specific benefits. At the same time, Article 4 of Directive 2011/24/EU requires Member States to ensure that healthcare professionals on their territory apply the same fee structure for patients from other Member States as for domestic patients in comparable medical situations. If no comparable prices exist for domestic patients, healthcare professionals should charge a price based on objective and non-discriminatory criteria. This approach is explained by the need to establish standards for telemedicine services to preserve patients' and medical personnel's safety and protection. In summary, this approach is consistent with solutions planned at the EU level. In 2018, the European Commission announced ongoing efforts to provide citizens with secure access to high-quality digital health and welfare services. A communication on the digital transformation of health and social care has been published, outlining three key areas for further action. The first area focuses on actions to ensure secure access and sharing of health data for citizens. The European Commission plans to establish an e-health digital service infrastructure that allows for the exchange of e-prescriptions and patient data between healthcare professionals in order to facilitate access to cross-border healthcare. Development is underway to establish a European electronic health record exchange format accessible to all EU citizens.
The second area stresses the importance of better data for research, disease prevention, and personalized healthcare. The third area highlights the use of digital tools to empower citizens and provide person-centered care. Digital services should be scaled up to enable individuals to manage their health effectively. Consequently, the proposed telemedicine standards align perfectly with these adopted assumptions .
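As a small, purely illustrative complement to the data-protection considerations above, the sketch below pseudonymises a patient identifier before a telemonitoring payload leaves the patient's device. Pseudonymisation is only one of many technical and organisational measures that can support, but by no means guarantee, GDPR-compliant processing; the key handling and field names shown here are simplified and hypothetical.

# Illustrative pseudonymisation of a patient identifier in a telemonitoring payload.
import hashlib
import hmac
import json

SECRET_KEY = b"replace-with-a-key-from-a-secure-key-store"  # placeholder only

def pseudonymise(patient_id: str) -> str:
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()

payload = {
    "subject": pseudonymise("patient-12345"),  # no direct identifier is transmitted
    "systolic_mmhg": 148,
    "diastolic_mmhg": 92,
}
print(json.dumps(payload))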
Differences in internet access can also affect the quality and content of medical education . At a society level, educational campaigns should promote and support increased access to digital literacy and infrastructure necessary for successful eHealth solutions . Interprofessional medical care or network medicine across healthcare settings can benefit from the development of eHealth competencies in physicians , advanced practice nurses, specialty nurses, physician assistants, and additional affiliated health professionals. However, healthcare professionals also need to evaluate additional factors in telemedicine application, such as deployment costs at the point of care and high-speed Internet access for patients. Digital health inequity is defined as a systemic inequality that results from infrastructure disparities between countries and regions . Telemedicine classification and modalities To establish common definitions for the different typologies of telemedicine, Internal Medicine specialist physicians may distinguish them according to the methods of interaction employed, as following shown: According to its purpose: teleconsultation, telediagnosis, telemonitoring, telecare, teletraining, telerehabilitation. According to the technology employed: mobile health app, telephone, mail, videoconference, chat, messaging within the Electronic Health Record (EHR). According to the interlocutor: physician-patient, physician – physician, tele-training. According to the timing of execution: synchronous (interlocutors interact simultaneously), asynchronous (interlocutors interact at different times). The methodologies in patient-physician interactions in telemedicine can be categorized into two main modalities: synchronous live and asynchronous interactions . However, academic studies comparing outcomes of asynchronous and synchronous care are still limited . 5.3.1 Synchronous live interactions Synchronous live interactions involve real-time, instant exchanges between participants within a telemedicine environment. This mode of interaction is widely accepted and facilitates simultaneous transmission of information in both directions. This mode also allows healthcare professionals to evaluate patients face-to-face and gain crucial information about their care and disease status. Examples of synchronous live interactions: Teleconsultations: between healthcare professionals or between a healthcare professional and a patient using synchronous information and communication technology platforms such as video, chat, and phone. Teleconsultations can be employed as an alternative to face-to-face consultations. Teletherapy: remote therapy sessions, such as physiotherapy, occupational therapy, psychology, and speech therapy, accomplished between a therapist and a patient through synchronous ICT communication. Remote monitoring: digital solutions, such as smartphone apps or web portals, to enable healthcare professionals to remotely monitor patient health data, such as blood pressure, electrocardiogram (ECG), and glucose levels. This technology makes it possible to intervene at the right timing and contributes to the prevention of hospitalization or urgent hospital admission. Remote monitoring has great potential in the continuous monitoring and prevention of exacerbation in chronic diseases. Remote monitoring is primarily asynchronous, but it can sometimes be combined with synchronous teleconsultations. 
5.3.2 Asynchronous interactions Asynchronous interactions, or “store-and-forward” technology, facilitate the interaction of participants at separate time intervals in telemedicine. Asynchronous telemedicine services include various forms of communication, such as emails, secure text messaging, or services that allow both parties to engage at different times. This approach benefits healthcare professionals as they have the flexibility to review patient materials or communications on their own schedule. Asynchronous interactions enable patients to access healthcare services at their convenience in their preferred settings. Asynchronous approaches are particularly relevant in fields such as dermatology, radiology, orthopedics, ophthalmology, and cosmetic surgery where image and video sharing are often required. However, there are also advantages in Internal Medicine consultations where an asynchronous approach can be utilized following a holistic patient-centered approach. Examples of asynchronous approaches include: Remote patient monitoring (telemonitoring): includes registration, transmission, processing of body parameters such as vital signs and medical management through electronic systems. Wireless devices, wearable or implantable sensors, and medical apps can be integrated. Chronic diseases can be managed according to the patient’s needs. Most aspects are asynchronous, but synchronous elements, such as video consultations, can be integrated. Current innovations include the integration of Artificial Intelligence and Machine Learning algorithms for monitoring and early detection, e.g., in cardiac arrhythmias and hearth insufficiency . Remote interpretation telemedicine includes authorized access to healthcare data by healthcare professionals to interpret at any time and location. 5.3.3 E-messaging E-messaging, or chat-based interactions, involves exchanging messages via electronic devices such as tablets and mobile phones with the use of mobile networks and the Internet. Technologies employed for e-messaging include Short Message Services (SMSs) and applications such as FaceTime, Line, Messenger, WeChat, WhatsApp, and Viber. Approved and General Data Protection Regulation (GDPR)-compliant services should be constantly used to secure transmission of patient personal health data, vital signs, physiologic data, diagnostic images, and self-reports to healthcare professionals. These technologies allow healthcare professionals to review and deliver consultations, diagnoses, and treatment plans at a later time, as well as support patient compliance, monitoring, prevention, treatment, and appointment reminders. Privacy and data security are essential in e-messaging technologies. National Health Services (NHS) provided comprehensive guidelines for e-messaging services in Europe. Compliance with Europe’s General Data Protection Regulation ensures patient information and maintains data privacy . 5.3.4 Self-directed care mechanisms Self-directed care mechanisms, which can be synchronous or asynchronous, include self-management that allows individuals to obtain healthcare information and schedule patient appointments at any time and location. In addition, self-management includes diagnostic tools, video tutorials, educational resources, and the ability to self-assess health indicators . Personal alarm systems, such as an alarm button or a wristband, enable patients to promptly contact response call centers in the event of a fall, personal injury, accident, or other critical emergencies . 
The following list provides examples of various telemedicine applications in the field of Internal Medicine . Complex chronic patient care during episodes of exacerbation. Hospitalization at home. Telemonitoring of vital signs in exacerbation. Video consultations with different specialists. Addressing uncertainties in treatment modalities for individuals with chronic conditions, such as health education and health literacy. New patients referred through teleconsultation, e.g., consultations with the primary care doctor related to analytical alterations, the treatment of chronic diseases. New patients evaluated with no physical examination. Periodic medical checks of stable chronic pathologies. Older adult patients with access restrictions. Intensive follow-up following hospital discharge. Individual or group training consultations via video call. During teleconsultations, a standardized protocol is essential for conducting teleconsultations as it facilitates the acquisition of all relevant information required . The next list summarizes teleconsultation steps: 5.3.5 Pre-consultation Inform the patients about the necessary technical requirements for the consultation. Recommend the patients to take notes and have questions ready during the consultation. Specify estimated time and type of the consultation. Prepare the consultation by reviewing the clinical history and complementary tests. 5.3.6 During the consultation Identify the patients. This is accomplished through either familiarity with the patients or by presenting the patients’ electronic health card or Identity Card card to the camera. Request consent for the consultation. Communicate messages in an orderly manner. Allow patients to express their doubts. Verify that the information has been fully comprehended. Review the agreements and alerts on possible warning signs and mode of action. Do not record too long video consultation hours. Prefer software with end-to-end encryption on videos. 5.3.7 Post consultation Document that the consultation was accomplished by video. Document the relevant aspects of the consultation including the recommendations for further treatment, re-consultation, and/or referral to another health care provider. In addition to standardized conduct of teleconsultation, specific warning signs should be carefully evaluated to protect patient safety and prevent the potential for reduced accuracy of remote visits compared to in-person visits. A summary of warning signs is shown in the following list . Issues in understanding relevant medical information. Sudden worsening of clinical symptoms. Appearance of new symptoms that require physical examination. Signs of clinical instability or unexpected evolution. Need for hospital admission or emergency care. Need to communicate a poor prognosis or negative news. Situations that generate anxiety for the patient or the family. New patients with complex diagnoses. Uncooperative patients. Synchronous live interactions Synchronous live interactions involve real-time, instant exchanges between participants within a telemedicine environment. This mode of interaction is widely accepted and facilitates simultaneous transmission of information in both directions. This mode also allows healthcare professionals to evaluate patients face-to-face and gain crucial information about their care and disease status. 
Examples of synchronous live interactions: Teleconsultations: between healthcare professionals or between a healthcare professional and a patient using synchronous information and communication technology platforms such as video, chat, and phone. Teleconsultations can be employed as an alternative to face-to-face consultations. Teletherapy: remote therapy sessions, such as physiotherapy, occupational therapy, psychology, and speech therapy, accomplished between a therapist and a patient through synchronous ICT communication. Remote monitoring: digital solutions, such as smartphone apps or web portals, to enable healthcare professionals to remotely monitor patient health data, such as blood pressure, electrocardiogram (ECG), and glucose levels. This technology makes it possible to intervene at the right timing and contributes to the prevention of hospitalization or urgent hospital admission. Remote monitoring has great potential in the continuous monitoring and prevention of exacerbation in chronic diseases. Remote monitoring is primarily asynchronous, but it can sometimes be combined with synchronous teleconsultations. Asynchronous interactions Asynchronous interactions, or “store-and-forward” technology, facilitate the interaction of participants at separate time intervals in telemedicine. Asynchronous telemedicine services include various forms of communication, such as emails, secure text messaging, or services that allow both parties to engage at different times. This approach benefits healthcare professionals as they have the flexibility to review patient materials or communications on their own schedule. Asynchronous interactions enable patients to access healthcare services at their convenience in their preferred settings. Asynchronous approaches are particularly relevant in fields such as dermatology, radiology, orthopedics, ophthalmology, and cosmetic surgery where image and video sharing are often required. However, there are also advantages in Internal Medicine consultations where an asynchronous approach can be utilized following a holistic patient-centered approach. Examples of asynchronous approaches include: Remote patient monitoring (telemonitoring): includes registration, transmission, processing of body parameters such as vital signs and medical management through electronic systems. Wireless devices, wearable or implantable sensors, and medical apps can be integrated. Chronic diseases can be managed according to the patient’s needs. Most aspects are asynchronous, but synchronous elements, such as video consultations, can be integrated. Current innovations include the integration of Artificial Intelligence and Machine Learning algorithms for monitoring and early detection, e.g., in cardiac arrhythmias and hearth insufficiency . Remote interpretation telemedicine includes authorized access to healthcare data by healthcare professionals to interpret at any time and location. E-messaging E-messaging, or chat-based interactions, involves exchanging messages via electronic devices such as tablets and mobile phones with the use of mobile networks and the Internet. Technologies employed for e-messaging include Short Message Services (SMSs) and applications such as FaceTime, Line, Messenger, WeChat, WhatsApp, and Viber. Approved and General Data Protection Regulation (GDPR)-compliant services should be constantly used to secure transmission of patient personal health data, vital signs, physiologic data, diagnostic images, and self-reports to healthcare professionals. 
These technologies allow healthcare professionals to review and deliver consultations, diagnoses, and treatment plans at a later time, as well as support patient compliance, monitoring, prevention, treatment, and appointment reminders. Privacy and data security are essential in e-messaging technologies. National Health Services (NHS) provided comprehensive guidelines for e-messaging services in Europe. Compliance with Europe’s General Data Protection Regulation ensures patient information and maintains data privacy . Self-directed care mechanisms Self-directed care mechanisms, which can be synchronous or asynchronous, include self-management that allows individuals to obtain healthcare information and schedule patient appointments at any time and location. In addition, self-management includes diagnostic tools, video tutorials, educational resources, and the ability to self-assess health indicators . Personal alarm systems, such as an alarm button or a wristband, enable patients to promptly contact response call centers in the event of a fall, personal injury, accident, or other critical emergencies . The following list provides examples of various telemedicine applications in the field of Internal Medicine . Complex chronic patient care during episodes of exacerbation. Hospitalization at home. Telemonitoring of vital signs in exacerbation. Video consultations with different specialists. Addressing uncertainties in treatment modalities for individuals with chronic conditions, such as health education and health literacy. New patients referred through teleconsultation, e.g., consultations with the primary care doctor related to analytical alterations, the treatment of chronic diseases. New patients evaluated with no physical examination. Periodic medical checks of stable chronic pathologies. Older adult patients with access restrictions. Intensive follow-up following hospital discharge. Individual or group training consultations via video call. During teleconsultations, a standardized protocol is essential for conducting teleconsultations as it facilitates the acquisition of all relevant information required . The next list summarizes teleconsultation steps: Pre-consultation Inform the patients about the necessary technical requirements for the consultation. Recommend the patients to take notes and have questions ready during the consultation. Specify estimated time and type of the consultation. Prepare the consultation by reviewing the clinical history and complementary tests. During the consultation Identify the patients. This is accomplished through either familiarity with the patients or by presenting the patients’ electronic health card or Identity Card card to the camera. Request consent for the consultation. Communicate messages in an orderly manner. Allow patients to express their doubts. Verify that the information has been fully comprehended. Review the agreements and alerts on possible warning signs and mode of action. Do not record too long video consultation hours. Prefer software with end-to-end encryption on videos. Post consultation Document that the consultation was accomplished by video. Document the relevant aspects of the consultation including the recommendations for further treatment, re-consultation, and/or referral to another health care provider. 
In addition to the standardized conduct of teleconsultations, specific warning signs should be carefully evaluated to protect patient safety and mitigate the potentially reduced accuracy of remote visits compared to in-person visits. A summary of warning signs is shown in the following list : Issues in understanding relevant medical information. Sudden worsening of clinical symptoms. Appearance of new symptoms that require physical examination. Signs of clinical instability or unexpected evolution. Need for hospital admission or emergency care. Need to communicate a poor prognosis or negative news. Situations that generate anxiety for the patient or the family. New patients with complex diagnoses. Uncooperative patients. Enhancing the benefits of telemedicine applications In the European region, harmonized guidance on the use of telemedicine among specialist physicians is lacking. A telemedicine sharing protocol for European specialists does not exist at the time of this writing. Numerous countries, concentrated in particular in Southern Europe, had insufficient operational and legislative tools to rapidly introduce telemedicine services in outpatient specialist care . Telemedicine can significantly reduce readmissions when monitoring patients with chronic diseases . However, the inability to conduct a complete physical examination during a teleconsultation is potentially a major barrier to the development of remote consultation services . The application of telemedicine devices, such as e-stethoscopes or video cameras, and artificial intelligence algorithms will increase the possibilities of telemedicine in the future. Such development leverages existing experience from fields such as teledermatology, which has successfully integrated digitally enabled clinical examination of the skin . Additionally, expressions of empathy can support trust during a patient-physician encounter, and the frontier of digital empathy may be paramount in sustaining such constructs in telemedicine visits . A hybrid model could be considered for long-term care in both primary care and specialist care and would also need evaluation over the long term. This model allows alternating in-person appointments at the health facilities and teleconsultation appointments . A similar model employing telephone follow-up visits has been used in many clinical trial protocols, significantly reducing the number of health center visits and hospitalizations . 5.4.1 Noncommunicable chronic disease and multimorbidity care Various academic studies have demonstrated that telemedicine is not inferior to in-person consultations in the management of patients with heart failure, hypertension, and diabetes . Telemedicine can effectively prevent exacerbations, hospitalizations, and disease progression . However, the efficacy of telemedicine compared to in-person visits depends on the specific medical field and the patient characteristics. In addition, real-time interactive consultation may be more beneficial than delayed consultation . Monitoring therapy adherence via telemedicine tools is essential. Telemedicine tools include a range of devices, such as continuous vital sign monitors, digital reminders, ingestible sensors, video observation, and smartphone applications. Trials evaluating the effectiveness of telemedicine tools have been conducted in China, India, Italy, Belarus, and the United States .
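As a purely illustrative sketch of the kind of rule-based telemonitoring logic described above, the following Python fragment flags a possible decompensation from remotely transmitted weight and blood-pressure readings. The thresholds, field names, and the VitalSign structure are hypothetical and are not taken from any cited trial, guideline, or telemedicine platform; real systems would rely on clinically validated criteria, certified devices, and clinician review.

```python
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

@dataclass
class VitalSign:
    day: date
    weight_kg: float
    systolic_mmhg: int

def exacerbation_alert(readings: List[VitalSign],
                       weight_gain_kg: float = 2.0,
                       window_days: int = 3,
                       systolic_low: int = 90) -> Optional[str]:
    """Return a warning string if the hypothetical alert rules are met, else None."""
    if not readings:
        return None
    readings = sorted(readings, key=lambda r: r.day)
    latest = readings[-1]
    # Rule 1 (hypothetical): rapid weight gain within a short window.
    baseline = [r for r in readings
                if r is not latest and (latest.day - r.day).days <= window_days]
    if baseline and latest.weight_kg - min(r.weight_kg for r in baseline) >= weight_gain_kg:
        return "Weight gain above threshold: contact care team."
    # Rule 2 (hypothetical): hypotension on the most recent reading.
    if latest.systolic_mmhg < systolic_low:
        return "Low systolic blood pressure: contact care team."
    return None

if __name__ == "__main__":
    history = [
        VitalSign(date(2024, 3, 1), 78.0, 118),
        VitalSign(date(2024, 3, 2), 78.4, 115),
        VitalSign(date(2024, 3, 3), 80.3, 112),
    ]
    print(exacerbation_alert(history))  # prints the weight-gain warning
```

In practice, such logic would sit behind the monitoring platforms discussed above and would route alerts to the responsible clinician rather than to the patient alone.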
5.4.2 Aging in place with telemedicine The identification of older adult patients with mild cognitive impairment or dementia, who may be at a high risk of acute conditions, can be eased by mobile technologies and telemedicine. Telemedicine solutions should be customized for older adults to be user-friendly and potentially automated . The introduction of telemedicine can reduce the financial burden on public expenditures related to the older adult segment . Telemedicine improves the reach and efficiency of public healthcare resources and encourages collaboration among healthcare professionals and patients/caregivers. In addition, this approach contributes to reduced hospitalization rates and associated risks such as falls, healthcare-associated infections, and compensation claims, as well as to improved treatment adherence . Appropriate utilization of emergency services and optimal ward utilization can also benefit from such technological enhancements, because various preventive and real-time monitoring actions can be performed remotely, eliminating the necessity for patients to physically visit a healthcare facility . This is especially useful for older adults and the frail population, who account for a large share of inappropriate healthcare service utilization . This could also apply to territorial integration between acute hospital wards and intermediate care facilities, such as rehabilitation or palliative care structures. Their timely coordination is paramount in easing the burden of the discharge process in hospital wards . Nevertheless, the efficacy of telemedicine depends on individual digital literacy levels and the development of reliable digital infrastructures . Older adults would especially benefit from telemedicine, as the continuous monitoring of vital parameters can slow the progression or exacerbation of chronic conditions . Telemedicine can also build a sense of community, especially for isolated patients. In conclusion, the integration of human intelligence and telemedicine can produce increasingly personalized medicine, the identification of risk factors, and the extrapolation of patient risk curves. Telemedicine has also proved to be effective in counteracting geriatric depression .
Roles and responsibilities of other healthcare members Health Information Technologies (HIT) have the potential to improve the quality of interprofessional and team care coordination, benefiting both patients and healthcare professionals. Specifically, HIT can support shared decision-making, access to care information (such as open notes) and care services (such as synchronous remote telehealth services), and health education. HIT can enhance team care much like an additional member of the healthcare team, automating routine or tedious tasks so that human agents can focus on providing humanized healthcare. Beyond routine tasks such as scheduling or administrative aspects of care, HIT can further evolve to enable previously unfeasible models of care, such as hospital-at-home care or intensive remote monitoring in selected conditions. Augmented intelligence provides humans with actionable data and information, enhancing human intelligence and decision-making . When planning novel care models, it is essential to engage HIT developers and clinical informaticians with healthcare training, such as physicians, pharmacists, nurses, and other relevant professionals. In addition, involving patients and their advocates in the design process can also be beneficial. This inclusive approach guarantees the ethical and equitable design of healthcare systems . Methods to enhance clinical decision-making in telemedicine Despite growing political support for telemedicine systems, their standardization within clinical practice has been hampered by concerns about their effectiveness, cost-effectiveness, and user acceptance .
Telemedicine makes it possible to provide healthcare services regardless of geographical constraints. Telemedicine and its associated technologies enable a switch from the movement of individuals to the flow of information . Telemedicine possesses several positive attributes, such as reduced entry barriers, established health services, integration of primary and specialty care, delivery of care through smart devices in patient homes, patient preference, and convenience. These factors are particularly significant for fragile and vulnerable populations . In addition, telemedicine favors the integration of local health systems and hospitals by facilitating communication between internal medicine specialists and general practitioners. Challenges and benefits of health technology assessment application to telemedicine technologies Telemedicine offers benefits in various cases by easing the load on healthcare infrastructure and personnel and by ensuring timely and adequate care to patients who face mobility issues or are geographically distant from appropriate medical facilities . However, additional telemedicine dimensions requiring evaluation concern the ethical and social aspects of telemedicine, such as the patient-physician relationship, data confidentiality and security, informed consent, and patient and caregiver satisfaction. The most suitable telemedicine devices should be carefully selected, procured, and connected with medical professionals for evaluation. While technology has the potential to improve patient access and health outcomes, not all technological innovations achieve their intended purposes . Thus, the investigation of different telemedicine technologies is necessary to prioritize those that are efficient and impactful. The Health Technology Assessment (HTA) process plays a crucial role in evaluating the adequacy of telemedicine technologies. HTA carries out a systematic assessment to determine the suitability and effectiveness of various telemedicine approaches . Ethical and legal considerations Various types of regulations fall within the jurisdiction of European law, including primary and secondary regulations, as well as soft law in the form of guidelines and communications issued by the European Commission. With reference to primary law, the Treaty on the Functioning of the European Union (TFEU) plays a central role . Article 56 of the TFEU prohibits any restrictions on the freedom to provide services, while Article 57 of the TFEU defines the very notion of a service. Medical care falls within the scope of the Treaty as it regulates the free movement of services. As for secondary law, Regulation (EU) 2016/679, known as the General Data Protection Regulation (GDPR), and Directive 2011/24/EU on patients’ rights in cross-border healthcare are the main reference instruments. The GDPR concerns the protection of personal data, including health and genetic data, and their free movement, while Directive 2011/24/EU emphasizes the rights of patients in cross-border healthcare . Furthermore, this Directive aims to provide clear regulation for the phenomenon known as “medical tourism.” Recitals 19 and 20 of its preamble already impose an obligation to inform patients receiving cross-border healthcare about the applicable rules. Upon request, healthcare professionals are also required to provide specific information about the healthcare benefits they offer and the treatment options available. Directive 2011/24/EU further clarifies the information obligations of healthcare professionals under Article 4 .
According to this directive, healthcare professionals should offer relevant information to support individual patients in making informed decisions, including details on treatment options, availability, quality, and safety of healthcare services, as well as prices for specific benefits. At the same time, Article 4 of Directive 2011/24/EU requires Member States to ensure that healthcare professionals on their territory apply the same fee structure for patients from other Member States as for domestic patients in comparable medical situations. If no comparable prices exist for domestic patients, healthcare professionals should charge a price based on objective and non-discriminatory criteria. This approach is explained by the need to establish standards for telemedicine services to preserve patients’ and medical personnel’s safety and protection. In summary, this approach is consistent with solutions planned at the EU level. In 2018, the European Commission announced ongoing efforts to provide citizens with secure access to high-quality digital health and welfare services. A communication on the digital transformation of health and social care has been published, outlining three key areas for further action. The first area focuses on actions to ensure secure access to and sharing of health data for citizens. The European Commission plans to establish an e-health digital service infrastructure that allows for the exchange of e-prescriptions and patient data between healthcare professionals in order to facilitate access to cross-border healthcare. Development is underway to establish a European electronic health record exchange format accessible to all EU citizens. The second area stresses the importance of better data for research, disease prevention, and personalized healthcare. The third area highlights the use of digital tools to empower citizens and provide person-centered care. Digital services should be scaled up to enable individuals to manage their health effectively. Consequently, the proposed telemedicine standards align closely with these adopted assumptions . Weaknesses and threats 6.1 Limitations of telemedicine Specific limitations may prevent the adoption, implementation, and expansion of telemedicine and its supporting technologies. Extensive training is required to familiarize patients with video teleconsultations and the use of assistive technologies. Physicians also require targeted technical, clinical, and communication training tailored to their specific subspecialty needs. Limited access to broadband and internet facilities is a significant barrier, especially in remote areas and under-resourced settings . Reliable broadband access is essential for telemedicine services, but its quality is often inadequate in rural clinics and for patients residing in such areas. Legal restrictions and ambiguity about permissible practices in telemedicine have created a cautious attitude among telemedicine professionals. In addition, certain medical conditions are not adequately addressed within existing healthcare legislation. The pricing structure for virtual consultations and video surveillance in hospitals remains unclear, leaving questions as to whether they will be fully reimbursed or classified as shorter visits at a discounted rate. Physician licensing and telemedicine infrastructures pose additional concerns, especially in resource-scarce settings. Telemedicine cannot replace many essential medical procedures and is not universally accessible to all patients.
Various patient groups may be further marginalized by healthcare technologies; for example, people whose language(s) are not concordant with those of the telemedicine clinician and people with disabilities may be excluded or face challenges in using telemedicine . The effectiveness of telemedicine depends on its successful integration into the existing hospital and healthcare system within a local context, adequate preparation and training of medical professionals, and patient awareness and acceptance of telemedicine tools. 6.2 A vision for the future of telemedicine in Internal Medicine To speculate on the future of telemedicine, various future scenarios emerge from the EFIM Telemedicine Working Group’s overview of the academic literature . The most probable scenario implies the emergence of a hybrid system in which telemedicine augments traditional healthcare services, enhancing efficiency and adaptability to evolving patient care needs in a local context. The goals and measured outcomes of such hybrid models would be to ensure high-quality, accessible, equitable, and efficient care, holding the entire pathway of care services to a similar standard of health outcomes regardless of the level of technology integration into healthcare services. Four possible scenarios are expected to emerge within the hybrid system, considering the evolving behavior of different stakeholders. 6.2.1 Scenario 1: best-case scenario In the best-case scenario, all aspects related to telemedicine have significantly improved since 2022. The use of telemedicine has increased significantly, with physicians increasingly incorporating it into their practices. Research and development efforts have reduced barriers to use and increased technology efficiency and security. User-friendly platforms have been developed, making patients and physicians increasingly rely on telemedicine. In addition, innovative approaches have explored the expansion of telemedicine across different medical specialties by managing virtual and face-to-face components of appointments. Overall, telemedicine is widely adopted, well understood, and proven to be efficient and effective in this scenario. 6.2.2 Scenario 2: worst-case scenario In the worst-case scenario, all aspects surrounding telemedicine have deteriorated since 2022. Certain variables have reverted to pre-COVID-19 practices, and significant investments in research and development have not materialized. Consequently, little progress has been made in making telemedicine technology more accessible, secure, or inclusive of minority groups. As a result, patients and physicians have become discouraged, and telemedicine is seen as a last resort rather than an integral part of healthcare. 6.2.3 Scenario 3: physician pushback scenario This scenario is similar to the best-case scenario, except that physicians are more reluctant to adopt telemedicine. This scenario may arise because of changes in physician perceptions over time or because telemedicine places additional burdens on physicians. However, ongoing research and development efforts may reverse this reluctance and make physicians more proactive about telemedicine. Lower barriers to use and high patient willingness to engage have the potential to move this scenario toward a best-case situation. 6.2.4 Scenario 4: effort to improve scenario Scenario 4 is similar to the Scenario 2 worst-case scenario but differs in terms of important ongoing research and development efforts.
However, barriers to access remain high and patient willingness to engage with telemedicine is low. Consequently, this scenario is likely to head toward a worst-case situation . According to these hypotheses, the primary factors influencing future scenarios in healthcare will be the propensity of physicians and patients to adopt new technologies and to redefine the doctor-patient relationship. Regardless of whether future scenarios are positive or negative, technological advancement will continue. However, it is important to note that the development of technology alone is not sufficient to facilitate the establishment of a new patient care model. 6.3 EFIM position on telemedicine and recommendations Building on the scenario analysis, the EFIM Working Group proposes the following recommendations for telemedicine implementation: Clinical Care Standards: Guarantee that clinical care standards for telemedicine are consistent with traditional office visit standards, encompassing all aspects of diagnosis and treatment decisions. Clinical Judgment: Use clinical judgment in establishing the scope and extent of telemedicine applications, especially in the diagnosis and treatment of specific patients and chronic conditions. Authorization and Reimbursement: Approve and reimburse live interactive telemedicine in Internal Medicine in a manner similar or equivalent to traditional in-person visits, subject to commitment to the principles outlined. Definition of Roles and Responsibilities: Define the roles, expectations, and responsibilities of providers involved in Internal Medicine telemedicine, including source and remote locations. Development of Models of Care: Advance models of care in telemedicine in which Internal Medicine specialists, patients, primary care providers, and other healthcare team members work together to improve the value of healthcare delivery in a collaborative way. Compliance with Technical Standards: Maintain appropriate technical standards in the telemedicine delivery process, at both the source and the remote location. Investigation of Improvement Methods: Consider ways to extend the utility of telemedicine, including the use of patient explainers, community resources, providers, ancillary tests, and additional technologies. Quality Assurance Processes: Apply quality assurance processes to telemedicine care delivery models, with the intent of capturing process measures, patient outcomes, and patient/provider experiences. Data Management Time Recognition: Acknowledge the time required for data management, quality processes, and other aspects of care delivery related to telemedicine within a value-based care delivery model. Compliance with Professional and Ethical Standards: Ensure strict compliance with professional and ethical standards in the use of telemedicine services and equipment, safeguarding patient access, quality, and value of care. Billing Transparency: Promote billing transparency for telemedicine services, and help patients, providers, and others understand payer reimbursement throughout the entire process. Research and Impact Assessment: Recognize the probable rapid expansion of telemedicine use in Internal Medicine and broader telehealth applications, highlighting the necessity of further research to evaluate the impact and outcomes of these technologies.
Conclusion Further investigation is necessary to evaluate the optimal use of telemedicine in the field of Internal Medicine. Based on existing scientific evidence, the European Federation of Internal Medicine (EFIM) recommends increased utilization of these innovative methods to provide adequate care for complex patients with multiple chronic conditions. Given the ongoing epidemiological shift and rapid technological advancements, EFIM believes that the significant adoption of telemedicine is critical in providing comprehensive care for Internal Medicine patients. FP: Conceptualization, Writing – review & editing, Writing – original draft. MF: Investigation, Writing – original draft. SK: Software, Writing – review & editing. KK: Data curation, Writing – review & editing. TL: Methodology, Writing – original draft. ISG: Supervision, Writing – review & editing. SS: Formal analysis, Writing – original draft. MR: Formal analysis, Writing – original draft. AS: Project administration, Writing – original draft. FR: Writing – original draft, Writing – review & editing. CD: Project administration, Writing – original draft. AV: Resources, Writing – original draft. VB: Validation, Writing – review & editing. NM: Resources, Writing – review & editing. DD: Funding acquisition, Writing – review & editing. RGH: Visualization, Writing – review & editing.
Radiomics-Driven CBCT Texture Analysis as a Novel Biosensor for Quantifying Periapical Bone Healing: A Comparative Study of Intracanal Medications
1f50e26d-9586-4f3a-81c0-f6ae1c337a6f
11852422
Dentistry[mh]
In the field of endodontics, the primary objective is to remove or significantly diminish the microbiota within the root canals by means of biomechanical preparation in order to foster the healing of periradicular tissues . The significance of this objective stems from the established understanding that microorganisms and their metabolic by-products are predominantly responsible for pulpal alterations and the development of periapical lesions, which can lead to pulp necrosis and infection . Specifically, the prevalence of Gram-negative anaerobic microorganisms has been correlated with root canals that exhibit radiographically visible periapical lesions, a situation exacerbated by the release of endotoxins during bacterial death or multiplication . These endotoxins stimulate a series of inflammatory responses, thus contributing to the bone resorption associated with periapical lesions . Among the advancements in endodontic treatment, various irrigation solutions and intracanal medications have been used to combat persistent infection within the root canal system . Sodium hypochlorite stands out for its broad-spectrum antimicrobial activity and ability to dissolve necrotic tissue, making it a widely used irrigation solution . Additionally, calcium hydroxide has been recognized for its antimicrobial efficacy and its role in promoting tissue repair, highlighting the multifaceted approach to eliminating root canal pathogens . Despite these therapeutic advancements, the challenge of completely eradicating microorganisms and endotoxins from the root canal system remains, which emphasizes the importance of effective post-operative interventions. Conventional diagnostic methods, particularly periapical radiography, have been instrumental in the diagnosis, planning, and evaluation of endodontic treatments. However, the limitations of two-dimensional imaging in accurately detecting and measuring periapical lesions have necessitated the exploration of more sophisticated diagnostic tools . Cone beam computed tomography (CBCT) has emerged as a superior alternative to conventional radiography, offering detailed three-dimensional images without the overlap of anatomical structures associated with two-dimensional imaging . This study introduces texture analysis as a novel approach to assess bone repair in teeth with periapical lesions following endodontic treatment. Texture analysis, a computer-assisted technique, examines the patterns and distributions of pixel intensities within digital images to quantitatively assess structural changes . By using texture analysis, this study aims to evaluate the effectiveness of two different types of intracanal medication in the healing process. This innovative methodology not only enhances the understanding of bone healing dynamics but also contributes to optimizing endodontic treatment outcomes, thus underscoring the potential of texture analysis in promoting advances in endodontic research and practice. This study aimed to evaluate the effectiveness of two intracanal medications in promoting periapical bone healing following endodontic treatment by using radiomics-enabled texture analysis of CBCT images as a novel biosensing technique. This approach seeks to quantitatively assess and differentiate tissue changes, potentially enhancing the monitoring of endodontic treatment outcomes.
This study was approved by the Ethics Committee of the São Paulo State University (UNESP) according to protocol number 50377321.1.0000.0077 and conducted in accordance with the ethical principles for medical research involving human subjects as described in the Declaration of Helsinki. Prior to inclusion in this study, written informed consent was obtained from all subjects. This retrospective study used a sample of images from a previous research project. The original study focused on determining the estimated time for bone repair in endodontically treated teeth with periapical lesions and comparing the volume of periapical lesions after the use of two types of intracanal medication. 2.1. Patient and Tooth Selection for Endodontic Treatment Thirty-four single-rooted teeth with pulp necrosis and periapical lesions were selected from patients referred to the Endodontics Clinic of the Department of Restorative Dentistry at the Institute of Science and Technology of São José dos Campos, UNESP, for endodontic treatment. The selection was based on pre-established inclusion and exclusion criteria. Before treatment, we evaluated and recorded the condition of each tooth’s crown and its occlusal state. Exclusion criteria encompassed patients who had used antifungals and/or antibiotics (up to 3 months before the study), who were unavailable for radiographic and tomographic follow-up 3 months after treatment completion, or who had teeth with periodontal disease or root fractures. Patients with a history of systemic diseases that could affect bone metabolism, such as osteoporosis or diabetes mellitus, were excluded from the study. We also excluded patients who were pregnant, had a history of bisphosphonate use, or were undergoing radiation therapy in the head and neck region. Teeth that experienced significant changes in crown condition or occlusal state during the study period were excluded from the final analysis to minimize potential confounding factors. 2.2. CBCT Image Acquisition CBCT scans were acquired by using an i-CAT Next Generation scanner (Imaging Sciences International, Hatfield, PA, USA) at the Radiology Clinic of the Institute of Science and Technology of the UNESP. The scanning protocol was as follows: field of view (FOV) of 6.0 cm × 16.0 cm encompassing the dental arch of interest and voxel size of 0.25 mm. The average acquisition time was 14.0 s. To study the variation in bone formation in the periapical region, all the patients underwent CBCT examinations at two different treatment stages: T1: immediately after the completion of endodontic treatment. T2: three months after treatment completion. 2.3. Intracanal Medication for 14 Days After biomechanical preparation, the teeth were divided into two groups (n = 17) depending on the intracanal medication (ICM) used, as follows: Group 1: calcium hydroxide (Biodinâmica, Ibiporã, PR, Brazil) + 2% chlorhexidine gel (Concepts V—Ultradent Products, South Jordan, UT, USA). Group 2: Ultracal XS ® (Ultradent Products, Inc.). In Group 1, the medication combination (calcium hydroxide + 2% chlorhexidine gel) was prepared in a 1:1 volume ratio (toothpaste consistency) and delivered into the root canal by using files and Lentulo spirals (Dentsply/Maillefer Instruments SA, Ballaigues, Switzerland) until complete root canal filling. The teeth were then sealed with a layer of pure calcium hydroxide, followed by a layer of Coltosol (Vigodent, Rio de Janeiro, RJ, Brazil) and a temporary restoration by using glass ionomer cement (Vidrion R—S.S.
White Artigos Dentários Ltd., Rio de Janeiro, RJ, Brazil). In Group 2, Ultracal XS ® was inserted by using the NaviTip from the kit, with additional insertion by means of manual files and Lentulo spirals to ensure complete filling. The assignment of medications to each tooth was performed using a simple alternating method. As patients were enrolled in the study, their teeth were alternately assigned to either Group 1 or Group 2. This method ensured an equal distribution of teeth between the two groups while maintaining a straightforward and easily replicable allocation process. The intracanal medication was maintained for a period of 14 days in both groups. 2.4. Texture Analysis CBCT images in DICOM (Digital Imaging and Communications in Medicine) format were extracted from the database and imported into a high-performance notebook computer (MacBook Pro, Cupertino, CA, USA; Intel ® Core i5, 2.4 GHz, 4 GB, 1067 MHz, DDR3 processor) running Microsoft Windows. An evaluator with ten years of experience in CBCT interpretation reviewed the images and selected central sections that clearly displayed the lesions. For each tooth, three image slices were chosen: the central slice of the lesion and two adjacent slices (one in the lateromedial direction and one in the superoinferior direction). This approach was adopted to ensure consistent volumetric representation of the lesion for texture analysis, balancing lesion representativeness with computational efficiency. These slices were processed and converted to BMP (bitmap) format by using OnDemand3D software, version 1.0 (CyberMed Inc., Seoul, South Korea) . Next, a second evaluator, well-versed in CBCT image analysis and blinded to clinical and diagnostic information, conducted the texture analysis. The BMP images were imported into MaZda software (version 4.6) for texture feature calculation. A circular region of interest (ROI) of 44 pixels in diameter was manually delineated at the center of the lesion (point marked in red) on the frontal image to ensure that only the lesion tissue was included . The center of the periapical lesion was determined at the intersection of the lateromedial and superoinferior lines. This study used a gray-level co-occurrence matrix (GLCM), a square matrix with dimensions equal to the number of gray levels in the image, to reveal the spatial distribution of gray levels in the image texture . This mathematical approach, based on Haralick’s method, calculates parameters according to the frequency with which specific pairs of pixel values occur in the image . Eleven texture parameters were extracted: angular second moment (AngScMom), contrast, correlation (Correlat), difference of entropy (DifEntrp), difference of variance (DifVarnc), entropy, inverse difference moment (InvDfMom), sum of average (SumAverg), sum of entropy (SumEntrp), sum of squares (SumOfSqs), and sum of variance (SumVarnc). These were calculated for two inter-pixel distances (d1 = 1, d2 = 2) and four image directions (horizontal, vertical, 45°, and 135°). The distances were arranged in four directions, resulting in the following positions: S(1,0), S(0,1); S(2,0), S(0,2); and S(3,0), S(0,3) . The selection of these specific texture parameters and analysis methods was based on their ability to provide a comprehensive characterization of bone tissue texture in CBCT images.
This approach, as demonstrated by Gonçalves et al. , allows for a thorough examination of the spatial relationships and distribution of gray levels within the region of interest. The use of multiple inter-pixel distances and directions enables the capture of texture information at different spatial scales and orientations, which is fundamental for analyzing the complex patterns of bone microarchitecture. The GLCM method, from which these parameters are derived, is particularly effective in quantifying subtle changes in bone structure that may not be apparent through conventional image analysis. By employing this multifaceted texture analysis approach, this study aimed to detect and quantify changes in bone texture associated with the healing process following different endodontic treatments, potentially improving our ability to assess treatment outcomes and periapical bone regeneration. To illustrate the complete workflow of the texture analysis process and the quantification of periapical bone healing, we developed a schematic diagram . This diagram summarizes the main steps of our method, from CBCT image acquisition to the final interpretation of results, highlighting the key parameters analyzed and how they relate to the assessment of bone healing. 2.5. Statistical Analysis An exploratory data analysis was conducted by using summary measures (mean and standard deviation) and graphical representations. Time points and medications were compared for each parameter by using ANOVA with rank transformation, as the parameters did not exhibit a normal distribution. The significance level for all analyses was set at 5%. Statistical analyses were performed using R software (R Core Team, 2023; R: A Language and Environment for Statistical Computing, R Foundation for Statistical Computing, Vienna, Austria).
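The study computed its texture features with MaZda. As a rough, non-equivalent illustration of the same idea, the Python sketch below uses scikit-image to build co-occurrence matrices for a circular ROI (radius 22 pixels, corresponding to the 44-pixel-diameter ROI described above) at inter-pixel distances 1 and 2 and the four standard directions, and derives a few Haralick-type descriptors. scikit-image exposes only a subset of the eleven parameters listed in Section 2.4 (angular second moment as "ASM", contrast, correlation, and homogeneity as an analogue of the inverse difference moment), and the file name, ROI handling, and center coordinates here are placeholders rather than the study's actual procedure.

```python
import numpy as np
from skimage import io
from skimage.feature import graycomatrix, graycoprops

def circular_mask(shape, center, radius):
    """Boolean mask of a circular ROI (True inside the circle)."""
    rows, cols = np.ogrid[:shape[0], :shape[1]]
    return (rows - center[0]) ** 2 + (cols - center[1]) ** 2 <= radius ** 2

def glcm_features(image_8bit, center, radius=22, levels=256):
    """Compute a few GLCM descriptors inside a circular ROI.

    Pixels outside the circle are zeroed and the ROI is cropped to its bounding
    box; this is a simplification (it assumes the ROI lies fully inside the
    image) and is not the MaZda procedure used in the study.
    """
    mask = circular_mask(image_8bit.shape, center, radius)
    r0, r1 = center[0] - radius, center[0] + radius + 1
    c0, c1 = center[1] - radius, center[1] + radius + 1
    roi = np.where(mask, image_8bit, 0)[r0:r1, c0:c1]

    distances = [1, 2]                                  # d1 = 1, d2 = 2
    angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]   # 0 deg, 45 deg, 90 deg, 135 deg
    glcm = graycomatrix(roi, distances=distances, angles=angles,
                        levels=levels, symmetric=True, normed=True)

    props = ["ASM", "contrast", "correlation", "homogeneity"]
    # Each property is a (len(distances) x len(angles)) array, one value per
    # distance/direction combination, mirroring the per-direction analysis.
    return {p: graycoprops(glcm, p) for p in props}

if __name__ == "__main__":
    # 'lesion_slice.bmp' is a placeholder for one of the exported BMP slices.
    img = io.imread("lesion_slice.bmp", as_gray=True)
    img = (img * 255).astype(np.uint8) if img.max() <= 1.0 else img.astype(np.uint8)
    features = glcm_features(img, center=(img.shape[0] // 2, img.shape[1] // 2))
    for name, values in features.items():
        print(name, values.round(4))
```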
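The rank-transformed ANOVA described in Section 2.5 was run in R in the study. As a minimal sketch of a comparable analysis in Python, assuming a long-format table with hypothetical column names (value, medication, time) rather than the study's actual dataset, one could rank the parameter values and fit a two-way ANOVA with statsmodels; note that this simple sketch does not model the repeated measurements on the same teeth.

```python
import pandas as pd
from scipy.stats import rankdata
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

def rank_anova(df: pd.DataFrame) -> pd.DataFrame:
    """Two-way ANOVA on rank-transformed values (medication x time point).

    'df' is assumed to hold one texture-parameter value per row with columns
    'value' (numeric), 'medication' ('CHX' or 'Ultracal'), and 'time' ('T1'/'T2').
    These column names are illustrative, not taken from the study's data.
    """
    df = df.copy()
    df["rank_value"] = rankdata(df["value"])  # rank transform across all observations
    model = smf.ols("rank_value ~ C(medication) * C(time)", data=df).fit()
    return anova_lm(model, typ=2)             # Type II sums of squares

if __name__ == "__main__":
    # Tiny fabricated toy table, used purely to show the call pattern.
    toy = pd.DataFrame({
        "value": [0.12, 0.15, 0.11, 0.18, 0.22, 0.19, 0.25, 0.21],
        "medication": ["CHX", "CHX", "CHX", "CHX",
                       "Ultracal", "Ultracal", "Ultracal", "Ultracal"],
        "time": ["T1", "T2", "T1", "T2", "T1", "T2", "T1", "T2"],
    })
    print(rank_anova(toy))
```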
This study included 34 patients of both sexes, of whom 80% were female. Thirteen patients were treated with CHX, whereas 11 were treated with the Ultracal medication. In Group 1, eight patients (62%) were female, and in Group 2, all were female. Given that no high correlation was observed between the directions of the same parameter, all directions were analyzed separately.
and show the descriptive measures and comparison between medications and time points for the 11 parameters as follows: angular second moment, contrast, correlation, difference of entropy, difference of variance, entropy, inverse difference moment, sum of average, sum of entropy, sum of squares, and sum of variance. Statistically significant differences were observed between the medications for the following parameters: S(0,1) InvDfMom ( p -value = 0.043): Ultracal medication showed a greater post-treatment reduction compared with CHX. S(0,2) DifVarnc ( p -value = 0.014): Ultracal medication showed a post-treatment reduction, whereas CHX showed an increase. S(0,2) DifEntrp ( p -value = 0.004): Ultracal medication showed a post-treatment reduction, whereas CHX showed an increase. The capacity to analyze the dimensional reduction of inflammatory periapical lesions through endodontic treatment follow-up, whether by two-dimensional images (i.e., periapical radiographs) or three-dimensional examinations (i.e., cone-beam computed tomography), has been previously studied by using linear or volumetric measurements, respectively . Changes in the internal imaging features of alveolar bone related to inflammatory periapical lesions reflect either a reduction or an increase in bone structure or a combination of both. A quantitative reduction in bone structure is observed as an increase in radiolucency due to a decrease in the number and density of the existing trabeculae. Conversely, bone augmentation is seen as an increased radiopacity (i.e., sclerosis) resulting primarily from an increase in thickness, density and number of trabeculae . Nevertheless, there is a gap in the literature regarding the qualitative evaluation of the bone healing process of inflammatory periapical lesions in relation to the medicinal approach to be used in endodontic treatment. In this context, the present study introduces a novel methodology in which an investigation of inflammatory periapical lesions was conducted according to the type of intracanal medication and after 3 months of endodontic treatment. This was achieved through texture analysis of CBCT images of these lesions, a method allowing for comparison between the effects of two ICMs, namely, CHX and Ultracal. The choice of CBCT images for this study was based on their ability to provide a more reliable assessment without structure overlapping, which could mask the real content of periapical lesions, as occurs in conventional radiographs . The results indicated that among the 11 texture parameters analyzed by using GLCM for periapical bone lesions, 3 of them [i.e., InvDfMom, DifEntrp, and DifVarnc] showed statistically significant differences between the two study groups, considering the time elapsed since the completion of endodontic treatment. These parameters behaved similarly, with lower values in the periapical lesion regions of Group 2 (Ultracal) than in Group 1 (CHX). In other words, in patients treated with Ultracal, the periapical bone that formed at the sites of previous inflammatory lesions showed qualitative characteristics that these texture parameters distinguished from those of the group treated with CHX. To understand these results, one must consider the meaning of each of these three parameters and how each relates to the properties of the studied medications; these relationships may account for the observed differences and enable a quantitative evaluation of the bone.
It should be remembered that the texture analysis technique allows this type of analysis to be performed, as it is an analytical method based on markers [parameters] and behavior of pixels within a determined ROI in the images by comparing the values of each marker at different distances correlated to pixels located in the central sites of the ROI . Image analysis of the bone corresponding to periapical lesion areas of teeth treated with Ultracal medication revealed, after three months of post-endodontic treatment, a lower value of InvDfMom compared with that in Group 1. This parameter governs the degree of homogeneity in the distribution of image gray levels, as its values decreased in the lesion regions of Ultracal-treated teeth compared with those treated with CHX. Therefore, it can be inferred that the bone in the region showed heterogeneous patterns compared to the newly-formed bone in the periapical regions of teeth treated with calcium hydroxide + 2% chlorhexidine gel [Group 1]. This suggests that bone trabeculae and medullary spaces failed to become organized so that conclusive aspects of a more standardized scar tissue could potentially indicate tissue with increased medullary spaces, fewer bone trabeculae or even fibrous tissue [fibrous scar]. These findings are reinforced by Costa et al. , who analyzed dental implant stability by evaluating texture patterns of the bone site at the implant placement bed. In their study, InvDfMom was a texture marker showing reduced values in sites where implant torque values were lower, suggesting less dense bone for implant osseointegration and indicating components similar to those of immature bone tissue or even fibrous tissue. Analogous to InvDfMom, this study showed reduced values of DifEntrp for periapical bone lesions in Group 2. This parameter indicates differences in the disorganization of gray level distribution, in which a difference in entropy [rather than pure entropy] indicates that the greater the disorder, the lower its value. Our results corroborate those observed by De Rosa et al. , who investigated the potential of texture analysis techniques to differentiate between apical radicular cysts and periapical granulomas. They found that various texture parameters, including DifEntrp, could be used for such differentiation in CBCT images as their lower values were statistically significant for granuloma lesions, thus indicating a higher likelihood of fibrous tissue exhibiting this behavior. Similar findings were observed in studies by Queiroz et al. and Gonçalves et al. , who evaluated CBCT images to assess the ability of texture markers to identify affected versus healthy tissues in bone regions bordering medication-related osteonecrosis and furcation lesions, respectively. Their findings confirmed this technique’s potential as a promising means of qualitative image evaluation besides allowing pure optical analysis. In both studies, texture parameters [including lower values of DifEntrp and InvDfMom] indicated that the tissue was sufficiently altered compared with the unaffected bone for the same parameters in the characterization of inflammatory tissue, more closely resembling a contaminated bone. In our study, this can be interpreted as a reduced ability of Ultracal to promote pure bone repair of inflammatory periapical lesions. DifVarnc is a texture parameter, like those previously described, showing statistically significant differences between the groups by characterizing the dispersion of gray level differences in the image. 
Higher values indicate greater dispersion of gray levels, whereas lower values indicate the opposite. Considering bone analysis, high dispersion of gray levels in the segmented region indicates a balanced presence of trabeculae and medullary spaces, characterizing a more organized and vascularized bone tissue with histological aspects corresponding to type II bone. Conversely, lower values of DifVarnc [i.e., less dispersion of gray levels] characterize a bone pattern compatible with type IV bone in osteogenic terms, which is quite spongy, with fewer trabeculae and predominantly lower density regions, potentially representing less mineralized bone or even fibrous tissue. The previously detailed results indicate that the texture analysis technique used in CBCT images allowed a new qualitative approach to assess the effects of two intracanal medications [i.e., CHX and Ultracal] on the bone neoformation process of periapical lesions. This approach signals that in lesions associated with Ultracal medication, the resulting bone aspect showed less homogeneity and greater structural disorganization three months after endodontic treatment, including an aspect resembling more closely a tissue less likely to match that of bone tissue but rather fibrous tissue or immature bone. The two types of ICMs addressed in this study were chosen due to their abilities relative to calcium hydroxide in neutralizing endotoxins by elevating the pH of the medium [i.e., alkalinizing action], which alters the cytoplasmic membrane of bacteria [i.e., antibacterial action] due to high pH, and by stimulating mineralization through alkaline phosphatase [i.e., mineralization induction] . However, it should be emphasized that despite the presence of calcium hydroxide in both groups, CHX-treated teeth included its association with 2% chlorhexidine gel, whereas Ultracal-treated teeth had the presence of 35% calcium hydroxide in aqueous solution. This fact may precisely be an indicator related to the difference found through texture analysis of the qualitative bone aspects in post-endodontic treatment lesions, as calcium hydroxide associated with chlorhexidine can potentiate antimicrobial effects on Gram-positive and Gram-negative bacteria while preserving its biocompatibility , which does not occur with calcium hydroxide alone in an aqueous medium. Although this fact has already been scientifically proven, our study becomes important as it presents a qualitative analytical approach to the effects of these two ICMs on the healing process of the corresponding periapical lesions through an objective method already validated in different imaging modalities, including conventional radiographs, computed tomography and magnetic resonance imaging . Therefore, texture analysis has not been used until now as a tool for endodontic evaluation of periapical lesions, thus demonstrating the novel nature and importance of the present study. It is important to note that the present study has some limitations. First, the sample size of 34 single-rooted teeth (17 per group), while sufficient to detect significant differences in several texture parameters, may limit the generalizability of our findings. A larger sample size in future studies could potentially reveal more subtle effects and increase the robustness of our conclusions. 
Additionally, only single-rooted teeth were treated with two different ICMs, not considering factors such as age, gender, stress, nutrition, vitamin intake, quality of coronal sealing, as well as hypertension, osteoporosis, and diabetes mellitus, among others. These factors could additionally influence the periapical healing process . Furthermore, all CBCT scans in this study were performed using a single-machine model. It is important to acknowledge that relative shooting values and image characteristics may differ across various CBCT machine brands and models, which could affect the generalizability of our texture analysis results. Future multi-center studies utilizing different CBCT machines could help validate the robustness of our findings across various imaging platforms and further strengthen the applicability of this texture analysis approach in clinical practice. This study demonstrates the potential of radiomics-enabled texture analysis of CBCT images as a novel biosensing technique for quantitative assessment of periapical bone healing. Our findings indicate that this method can effectively differentiate between tissue characteristics resulting from different intracanal medications. Specifically, teeth treated with CHX exhibited more uniform features consistent with organized bone tissue, while those treated with Ultracal showed less homogeneity, suggesting fibrous or immature tissue. This radiomics-based approach not only highlights the efficacy of CHX as an intracanal medication but also showcases the analytical capability of CBCT-based texture analysis as a promising biosensing platform for non-invasive, quantitative evaluation of endodontic treatment outcomes.
Assessing the impact of clerkships on the growth of clinical knowledge
88f32142-418e-4cc1-a3cd-b4bd1fc90d0d
11703539
Internal Medicine[mh]
Assessing the impact of clerkship rotation requires standardized assessment tools and testing procedures, either before and after the clerkship or across multiple repeated measures . However, since clinical knowledge is multidimensional, it is challenging to quantify the learning growth from the clerkship experience. Without validated and reliable assessment tools or methods, meaningful measurement of growth is not possible. Moreover, scheduling every student’s clerkship rotation at the same time is not administratively possible, further complicating the assessment of the impact of clerkship programs on the growth of clinical knowledge. This study is grounded in experiential learning theory, which posits that learning is a cyclical process of concrete experience, reflective observation, abstract conceptualization, and active experimentation . During clinical clerkships, medical students engage in this cycle, applying their theoretical knowledge to real-world patient encounters, observing expert clinicians, and reflecting on their experiences to refine their understanding and clinical decision-making. The experiential learning framework suggests that the varied clinical experiences encountered during clerkships can have differential impacts on students’ disciplinary knowledge development. Formative assessment is an evaluative process used to monitor student learning and provide ongoing feedback that instructors can use to improve their teaching and students to improve their learning . The Comprehensive Clinical Science Examination (CCSE) is a formative assessment tool that gives students feedback on their progress in clinical domains, such as internal medicine, surgery, and pediatrics. By analyzing students’ CCSE scores before, during, and after their clerkship experiences, we aimed to elucidate the differential effects of various clinical rotations on acquiring disciplinary knowledge. Experiential learning theory and its application are commonly used in clinical rotations and clerkships. Building on the foundation of Dewey, Lewin, Piaget, and Knowles, Kolb conceptualized the work of learning from experience with four different abilities: (1) concrete experience, (2) reflective observation, (3) abstract conceptualization, and (4) active experimentation . These four experiential learning abilities formed a cycle and greatly influenced current clerkship teaching and curriculum. The current medical school curriculum reform trend includes early clerkship experience and longitudinal integrated clerkships . Since the problem-based curriculum was implemented in the early 1980s, medical schools’ curriculum models have necessitated that knowledge be evaluated in tandem with clerkship learning. Nevertheless, few empirical studies have looked at how students benefit from their clerkship learning experience, or how clerkship content might be better aligned with their cognitive learning styles. Previous studies have shown that students’ performance improves incrementally, both on multiple-choice examinations of relevant knowledge for clerkships, and on self-assessments of competency after their clerkship rotation . However, medical schools vary widely in the delivery and sequencing of their disciplinary clerkship rotations and the evaluation of students during their third-year rotations , making it challenging to determine which clerkships provide the most significant positive impact on the development of future clinicians. 
Additionally, grading rubrics across different clerkships even within an individual school are not necessarily comparable , further complicating the evaluation of any given clerkship’s effectiveness. Standardized assessment plays a vital role in progress testing . Established and stable reliability and validity of examination scores minimize the chances that the impact of assessments gets confounded . Especially when the assessment is measured multiple times, the sources of variation can include form-difficulty variations, mode effects, and/or item-difficulty variations. The National Board of Medical Examiners (NBME) CCSE is a standardized assessment tool that can serve as a reliable and valid measure of progress in competency-based medical education . The CCSE is designed to provide students with formative feedback on their progress in clinical domains, such as internal medicine, surgery, and pediatrics. By analyzing students’ CCSE scores before, during, and after their clinical clerkship experiences, researchers can elucidate the differential impacts of various rotations on the growth of students’ disciplinary knowledge and clinical competencies. Using a well-established, standardized assessment like the CCSE offers several advantages. First, the CCSE has been extensively validated, and its psychometric properties are well-documented, ensuring that the scores reflect meaningful and reliable measures of clinical knowledge and skills. Second, the CCSE’s consistent format and content across multiple administrations allows for the longitudinal tracking of student performance, enabling researchers to identify patterns and trends in the development of disciplinary knowledge throughout clinical clerkships. Finally, the CCSE’s comprehensive coverage of key clinical domains aligns with the multidimensional nature of clinical knowledge, providing a more holistic assessment of student learning compared to narrowly focused, single-subject examinations. Thus, CCSE can serve as such a tool in competence-based progress tests. Drawing on experiential learning theory in medical education, which emphasizes a cyclical process of concrete experience, reflective observation, abstract conceptualization, and active experimentation, this study delves into the impact of clinical clerkships on the growth of medical students’ disciplinary knowledge. Experiential learning theory posits that during clinical clerkships, students engage in a dynamic cycle of applying theoretical knowledge in real-world settings, observing expert clinicians, reflecting on experiences, and actively refining their clinical decision-making skills . This framework suggests that the diverse clinical encounters students experience during clerkships can have varying effects on the development of their disciplinary knowledge . This study analyzes students’ performance on the CCSE before, during, and after their clerkship experiences to unveil the differential impacts of different clinical rotations on acquiring disciplinary knowledge. Integrating experiential learning theory in medical education underscores the importance of hands-on experiences and reflective practice in shaping students’ clinical competencies and understanding. This study aimed to quantify the impact of clinical clerkships on the growth of medical students’ disciplinary knowledge. 
Using the Comprehensive Clinical Science Examination (CCSE) as a formative assessment tool, we examined students’ performance before, during, and after their clerkship experiences to evaluate the differential effects of various rotations on the acquisition of disciplinary knowledge. Study participants This study’s participants were 155 third-year medical students in the College of Human Medicine at Michigan State University who matriculated in 2016. CCSE is required when the students are in their third year (Fall 2018, Spring 2019, and Summer 2019) in the MD program, so every student is a study participant. Measurements The CCSE was used to assess the participants’ disciplinary clinical knowledge trajectory. They were required to take it twice per semester over the three semesters of the third year of medical school, which count toward their grades each semester. Numeric scores are not available in individual CCSE reports. Disciplinary scores were digitized and extracted from the bar interval charts of the individual CCSE reports using image processing techniques. An example of a bar interval chart can be seen in . Specifically, each bar interval chart was extracted as an image and read in as an array of pixels that stored brightness, color, and distance. We extracted the coordinates of the lower and upper ends of the bars in the array and used their middle points as the CCSE disciplinary scores in the study. The first author wrote the image processing technique using R language, and the digitized numbers were validated with the unregistered version of the software PlotDigitizer . To aid understandability, the ends of the chart were scaled such that the disciplinary scores range from 0 to 100. The same digital extraction procedure was repeated for all students and across all six measures. Except for the surgery clerkship, which lasts eight weeks, the major disciplinary clerkships—internal medicine, psychiatry, pediatrics, and obstetrics and gynecology (ob/gyn)—are four weeks each. All students must undertake all five clerkships after completing the United States Medical Licensure Examination (USMLE) Step 1 at the end of their second year but do so in different orders. Statistical methods Because the students’ rotation schedules through their clerkships were not the same, we defined a time scale oriented around each clerkship separately: i.e. as consisting of Phase 1, Phase 2, and Phase 3, indicating students’ performance in disciplinary clinical knowledge before, during, and after the relevant disciplinary clerkship, respectively. Segmented regression analysis (also called piecewise regression or broken-stick regression) was then used to quantify the pairwise differences in disciplinary knowledge among the three phases . The impacts of the five major disciplinary clerkships were analyzed separately, and each impact was modeled and quantified by the differences in the regression intercept of each phase . For model simplicity, we specified the same growth instead of phase-specific growth across the three phases. The disciplinary scores were digitized from the CCSE reports using the png package, and the segmented-regression models were built using the lmerTest package , in R version 3.6.3. All statistical analyses were conducted using R, a language and environment for statistical computing developed by the R Core Team and supported by the R Foundation for Statistical Computing (Vienna, Austria) . 
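A minimal sketch of the segmented-regression model described above, for a single discipline, is given below. The variable names (student, exam_time, phase, score) and the simulated data are assumptions introduced only to make the example runnable; the study fitted analogous lmerTest models to the digitized CCSE disciplinary scores, with the phase indicators defined relative to each student's own clerkship schedule.

# Sketch of a piecewise ("broken-stick") model with a common slope across phases
# and phase-specific intercepts, plus a random intercept per student.
library(lmerTest)

set.seed(2016)
n_students <- 155
d <- expand.grid(student = factor(1:n_students), exam_time = 1:6)
# Phase relative to the clerkship (fixed at the same times for all students here, for brevity)
d$phase <- cut(d$exam_time, breaks = c(0, 2, 4, 6),
               labels = c("before", "during", "after"))
d$score <- 50 + 1.5 * d$exam_time +                 # common growth across phases
  ifelse(d$phase == "during", 6, 0) +               # intercept jump at clerkship start
  ifelse(d$phase == "after", 8, 0) +                # intercept jump after the clerkship
  rnorm(nrow(d), sd = 5) +                          # residual noise
  rep(rnorm(n_students, sd = 3), times = 6)         # student-level random intercept

fit <- lmer(score ~ exam_time + phase + (1 | student), data = d)
summary(fit)   # phaseduring = Phase 2 vs. 1, phaseafter = Phase 3 vs. 1
# The Phase 3 vs. Phase 2 intercept difference can be read off by releveling:
summary(lmer(score ~ exam_time + relevel(phase, "during") + (1 | student), data = d))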
This study was conducted using de-identified data, which exempts it from informed consent requirements under the U.S. Department of Health and Human Services Common Rule (45 CFR 46.104(d)(4)) and the HIPAA Privacy Rule (45 CFR 164.514), both of which allow for the use of de-identified information without obtaining consent from individuals. A designated honest broker is used to deidentify curricular and student evaluation data collected as a normal part of the medical school’s educational programs. According to the Michigan State University’s Human Research Protection Program’s determination, these data are not considered human subject data (IRB# STUDY 00007478). To justify the sample size for our segmented regression analysis, we conducted a power calculation using a simulation-based approach in R. We used the intercept change ( β = 2) as the effect size and fixed the breakpoint at the midpoint of the measuring points. Each subject provided six measuring points, resulting in 930 observations ( n = 155 subjects). We simulated the data under the alternative hypothesis, where there was a change in the intercept at the breakpoint while the slope remained constant. Random noise with a standard deviation of five was added to the simulated data to reflect variability. For each of the 1000 simulations, we fitted a linear regression model with an indicator variable for the time after the breakpoint. The power of the test was estimated by calculating the proportion of simulations where the p -value for the intercept change was less than the significance level ( α = 0.05). The results indicated an estimated power of 0.821, suggesting that our study design has a high probability of detecting the specified effect size at the given significance level.
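The simulation-based power calculation described above can be reproduced, under the stated assumptions, with a short R sketch. The baseline intercept and the common slope used to generate the data below are arbitrary choices (they do not affect the power for the intercept-change term); everything else follows the description in the text.

# Sketch: power for detecting an intercept change of 2 at the midpoint breakpoint,
# with 155 subjects x 6 measurements, residual SD = 5, and alpha = 0.05.
set.seed(1)
n_subj <- 155; n_times <- 6; beta <- 2; sigma <- 5; alpha <- 0.05

one_sim <- function() {
  time  <- rep(1:n_times, times = n_subj)
  after <- as.integer(time > n_times / 2)          # indicator: after the breakpoint
  y     <- 50 + 1 * time + beta * after + rnorm(n_subj * n_times, sd = sigma)
  fit   <- lm(y ~ time + after)                    # constant slope, intercept change only
  summary(fit)$coefficients["after", "Pr(>|t|)"]
}

p_values <- replicate(1000, one_sim())
mean(p_values < alpha)   # estimated power; the text reports approximately 0.82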
shows the descriptive statistics across the six measures. The average scores in all disciplines showed increasing trends, but those in psychiatry were generally the highest. presents the regression discontinuity estimates for each disciplinary clerkship. is a plot of the observed growth and the modeled piecewise regression line for each discipline. Phase 2 vs.
Phase 1 To capture change in disciplinary knowledge right after the start of the disciplinary rotation, we compared the regression intercepts before a given clerkship and during it (i.e. Phase 2 vs. Phase 1 in ). This revealed that students’ average scores increased the most in ob/gyn ( β = 11.193, p < .0001), followed by psychiatry ( β = 10.005, p = .001), pediatrics ( β = 6.238, p < .0001), internal medicine ( β = 1.638, p = .30), and surgery ( β = − 2.332, p = .10). However, the increases were only statistically significant in the first three disciplines; knowledge changes in internal medicine and surgery were not significantly different from zero ( p > .05). Phase 3 vs. Phase 2 When we compared the regression intercepts of knowledge during a clerkship and after it (i.e. Phase 3 vs. Phase 2 in ), we found that the students’ average scores improved the most in psychiatry ( β = 7.649, p = .008), followed by ob/gyn ( β = 4.175, p = .06), surgery ( β = 4.106, p = .007), and pediatrics ( β = 1.732, p = .32). However, the observed changes were only statistically significant for psychiatry and surgery. Phase 3 vs. Phase 1 The regression intercepts of knowledge difference between Phase 3 and Phase 1 indicated that disciplinary knowledge increased significantly ( p < 0.0001) pre- to post-clerkship in all disciplines except surgery. As compared to their CCSE disciplinary scores from before the relevant clerkship, students’ average post-clerkship score gains were 4.65 points in internal medicine ( p < .0001), 7.97 points in pediatrics ( p < .0001), 15.37 points in ob/gyn ( p < .0001), and 17.65 points in psychiatry ( p < .0001). Students’ surgery scores only increased 1.77 points after their surgery clerkships ( p = 0.34), which was not statistically different from zero. The impact of the rotation was found to be incremental. Students’ growth in disciplinary clinical knowledge scores before and after their clerkship also reflected their learning curves in the third year of medical school. For internal medicine, although the students’ changes between Phase 2 vs. Phase 1 and Phase 3 vs. Phase 2 were not statistically different from zero, the change between Phase 3 and Phase 1, 4.65 points, was a statistically significant improvement. Ob/gyn’s score-change trajectory was similar to that of pediatrics: i.e. a large increase occurred when students started the rotation, reflecting the impact of the ob/gyn and pediatrics rotations on the corresponding clinical knowledge. Interestingly, however, the change in surgery scores across phases differed from the other four disciplines. On average, students’ surgical knowledge had a non-significant drop after they started the rotation, but then returned to their original pre-rotation level. This suggests that the surgery clerkship did not significantly affect the students’ surgical-knowledge growth. The underlying reasons for this are discussed in the next section. The ‘step’ shown in between Phases 1 and 3 clearly shows the impact of the psychiatry rotation. Except in psychiatry, the sampled students’ performance steadily grew between Phase 1 and Phase 3. In psychiatry, though the regression intercept difference between Phase 1 and Phase 3 showed a significant increase, growth within these two phases was zero.
Several formative assessment methods with good psychometric properties , have been proposed for use in the clerkship settings. Given that curriculum reform in medical schools is focused on the transition to an integrated curricular structure, assessment and evaluation methods need to be changed accordingly.
This study has proposed and demonstrated the utility of segmented regression analysis for quantifying the impact of clerkships on students’ clinical knowledge in various medical disciplines and examining their knowledge growth before and after the clerkship-rotation period, using formative assessment data. The results provide helpful information for medical schools regarding how their students acquire clinical knowledge while preparing for the USMLE Step 2 clinical knowledge examination . The implications of its findings are particularly important to instructors and proctors investigating and seeking to revise the content of clerkship activities. These findings suggest that certain clerkship experiences may be more effective than others in promoting the acquisition of clinical knowledge. Medical schools could use this information to optimize their clerkship structures and activities to better support student learning. Additionally, the analytical approach used in this study could be adapted to evaluate the effectiveness of other curricular interventions, such as tutoring programs or intersession activities. The methods we have presented can be readily applied to other similar purposes; this approach could be utilized to measure the effectiveness of curriculum activities, such as tutoring or intersessions . Before conducting this study, we performed pair-comparison and sequence analyses to examine the order effect of the rotation schedule on disciplinary knowledge. Similar to the previous study , the results showed that students’ disciplinary knowledge was not affected by their rotation order. This finding led us to design separate analyses for each discipline. However, if it were to later emerge that the rotation order or rotation schedule did affect students’ disciplinary knowledge, we would recommend using multivariate segmented regression analysis with disciplinary order included as an interaction term. We also conducted linear mixed segmented regression analysis with phase-specific growth to find the best model. The results did not indicate that phase-specific growth rates significantly differed across the three phases. For simplicity’s sake, we, therefore, specified the growth rate as the same across the phases. The observed steady growth in clinical knowledge in the third-year contrasts sharply with the varying growth rate in students’ medical knowledge generally observed in the first two years of medical school. To our knowledge, it has not been the topic of any previous literature. As such, the learning trajectory we identified can serve as validation evidence for further research. Our evaluation results showed the impact that the clerkships can have on the students’ assessment performance. This may be varied due to medical schools’ clerkship content, schedules, workloads, lengths, and designs. However, because the methods used in this research have no causal-inference implications, the reasons behind the observed growth and declines in clinical knowledge should be discussed further with the students and the educators involved. For example, our finding that students’ CCSE surgery scores dropped during their surgery rotation could have been because that clerkship’s learning was not aligned with the examination content or because its workload was so heavy that students did not have enough time to prepare for the exam; or because this batch of students’ interest in acquiring surgery-related clinical knowledge was low. 
Another interesting result involved the flatness of students’ growth rate in psychiatry in Phase 1 and Phase 3. This indicated that, although the clerkship experience helped improve their psychiatry exam scores, it might be the only source of psychiatry knowledge in their third-year medical education. Again, such a finding warrants further qualitative or program-evaluation research among curriculum developers, clinical educators, clerkship communities, medical students, psychometricians, and school leaders. Quantifying the effectiveness of rotation training is not intended as a criticism of the conduct of disciplinary clerkships or to examine a specific clinical task , but rather to assess the relationship between such clerkships and students’ disciplinary clinical knowledge over time . Clinical knowledge is a multidimensional latent construct and assessing it on a discipline-specific basis has always been challenging for medical schools. Carrying out research using our methodology could also allow schools to initiate conversations with the clerkship community on preparing students to be better future physicians in practice. The paper assessed and quantified clerkships’ impact using segmented regression analysis. However, it should be noted that it only used CCSE disciplinary scores to measure students’ clinical learning outcomes. What clerkship experience brings to students is beyond what CCSE can cover and measure; indeed, most of its benefits may be unmeasurable. As part of the current trend of ‘data booming’, we can expect a warning system to be built that will provide an in-depth, dynamic evaluation of students’ learning outcomes in clerkship activities. Such a system would allow early detection of students’ difficulties and thus facilitate prompt, appropriate assistance to bridge the gap between clerkship experience and learning outcomes. During periods of crisis, such as the COVID-19 pandemic, young healthcare students and residents face additional challenges that can impact their knowledge advancement. The increased stress, anxiety, and workload can detract from their ability to focus on learning and retaining clinical knowledge. Moreover, disruptions to clinical rotations and educational activities can result in missed learning opportunities and reduced hands-on experience. These factors may lead to uneven knowledge acquisition and a potential decline in clinical competence. However, the post-pandemic era offers an opportunity to integrate adaptive measures permanently into medical education. These measures include leveraging digital tools like virtual patient simulations and telemedicine practices, which proved instrumental during the pandemic, to complement traditional clinical experience. Instructors and curriculum designers can also focus on fostering resilience and adaptability by incorporating training on coping strategies and mental health resources into medical education programs. Such innovations can mitigate the impact of future crises and ensure the continued growth of medical students’ clinical competencies and readiness for practice. For instance, Moldovan et al. highlights the significant impact of the COVID-19 pandemic on orthopedic residents in Romania, emphasizing the need for adaptive measures in medical training during such times . Limitations of the study A key limitation of this study is its reliance on standardized test scores as the sole measure of student learning. 
While these scores objectively assess disciplinary knowledge, they do not capture other important domains of clinical competence, such as communication skills and professionalism. Future research should consider incorporating multiple assessment methods to gain a more comprehensive understanding of the impact of clerkships on overall clinical development. Another limitation is the potential impact of the COVID-19 pandemic on the study participants’ learning experiences and knowledge acquisition. The disruptions to clinical rotations and educational activities during this time may have influenced the observed results, and further research is needed to understand the long-term implications of the pandemic on medical education. Beyond the challenges posed by the pandemic, additional limitations of the research design warrant consideration, particularly those related to the observational nature of the study and its reliance on a single institutional cohort. A primary limitation of this study lies in its observational design, which inherently limits causal inferences. The absence of randomization or a control group to account for potential confounding factors, such as differences in clerkship sequence or student-specific characteristics, reduces the ability to isolate the effects of clerkships on disciplinary knowledge. Additionally, while segmented regression analysis offers robust insights into phase-specific changes, it assumes linear trends within each phase, potentially oversimplifying non-linear growth patterns in clinical knowledge acquisition. Another limitation is the reliance on a single cohort of students from a single institution, which may limit the generalizability of the findings to other medical schools with different curricular structures, demographic profiles, or educational environments. Future research should address these limitations by employing study designs incorporating randomization or matched comparisons to control for confounding variables and strengthen causal inferences. Expanding the study to include multiple institutions with diverse student populations and curricular models would enhance the generalizability of the findings. Additionally, integrating mixed-method approaches, such as combining quantitative assessments with qualitative interviews, could provide a more holistic understanding of how clerkships impact clinical knowledge and other competencies, such as communication skills, professionalism, and teamwork. Longitudinal studies tracking students’ clinical performance post-graduation could also shed light on the long-term effects of clerkship experiences. Finally, examining innovative assessment tools, such as virtual simulations and competency-based evaluations, could further refine our understanding of how clerkships contribute to medical education outcomes.
This study evaluated the effectiveness of clinical clerkships on the growth of medical students’ disciplinary knowledge using their CCSE scores as a formative assessment . The results indicate that the impact of clerkships varied across different disciplines, with students showing the greatest knowledge gains in obstetrics and gynecology, psychiatry, and pediatrics, and the least gains in surgery. Overall, this study contributes valuable insights into the varying effectiveness of clinical clerkships across different medical disciplines. The findings can inform efforts to enhance the design and implementation of clerkship experiences to better support the growth of medical students’ clinical knowledge and competencies.
Epidermal distribution of tetrodotoxin-rich cells in newly hatched larvae of
f71054ac-9fb3-441e-82d6-673d9380ab1b
11541287
Anatomy[mh]
Pufferfish of the genus Takifugu , which are widely consumed in Japan under strict food hygiene management, possess tetrodotoxin (TTX), known as “pufferfish toxin” (Noguchi et al. ). Pufferfish are thought to accumulate TTX in their bodies through the food chain, including TTX-bearing animals, such as flatworm and ribbon worm, starting with bacteria as a primary producer (Noguchi and Arakawa ). In practice, grass puffer Takifugu alboplumbeus juveniles feed on TTX-bearing flatworms and incorporate composition of TTX and its analogs (Itoi et al. ; Ueda et al. ). Tissue distribution patterns of TTX are variable among Takifugu species, with all species having at least some accumulations of TTX in the liver and ovary (Noguchi and Arakawa ). Some species accumulate TTX in the skin for predator protection (Kodama et al. , ). In the toxicification process in adult tiger puffer Takifugu rubripes , TTX is mainly absorbed from the gastrointestinal tract into the vascular system (Matsumoto et al. , ). Subsequently, it is quickly transferred to the liver in an unbound or carrier-bound state (Matsumoto et al. , , ). Part of TTX is transferred and accumulated in the skin of males and in the ovary of females via the liver in “torakusa”, a hybrid of tiger puffer T. rubripes and grass puffer T. alboplumbeus (Wang et al. ). Similarly, it has been confirmed that excess TTX in the liver is transferred to the epidermis of tiger puffer juveniles (Ikeda et al. ). TTX in the ovary of Takifugu species will be localized on the larval body surface in association with embryogenesis after fertilization. It has been reported that some potential predatory fish respond to TTX in the skin of newly hatched pufferfish larvae and immediately spit them out, suggesting that it is clear that pufferfish larvae utilize TTX as a defensive substance against predatory fish (Itoi et al. , ). It was reported that adult grass puffer localizes TTX in sacciform cells, basal cells and mucous cells in the epidermis (Itoi et al. ), and tiger puffer localize exclusively in basal cells (Ikeda et al. ; Okita et al. ). On the other hand, the density and the types of the TTX-rich cells on the body surface of pufferfish larvae remains unclear. Therefore, in this study, we focused on newly hatched puffer larvae, in which TTX-rich cells have been reported (Itoi et al. , ). Although conventional whole-mount staining could not be applied to thicker samples, we improved the technology for the tissue clearing treatment to suppress background fluorescence. Subsequently, whole-mount immunohistochemistry (IHC) was performed to observe TTX location throughout the epidermis in three dimensions. In addition, some methods of staining were performed to detect cell nuclei, mucous cells and ionocytes. These allowed us to elucidate the characteristics and distribution of TTX-rich cells on pufferfish larvae. Reagents and Chemicals TTX (purity ≥ 90%), paraformaldehyde (PFA) solution in phosphate buffer and clear, unobstructed brain/body imaging cocktails and computational analysis (CUBIC)-1 were purchased from FujiFilm Wako (Osaka, Japan). Anti-TTX monoclonal antibody was kindly provided from Dr. Kentaro Kawatsu (Osaka Prefectural Institute of Public Health, Osaka, Japan). VectaFluor Excel Amplified Anti-Mouse IgG, DyLight 594 Antibody Kit (containing normal horse serum, secondary antibody, and VectaFluor Reagent) and VECTASHIELD ® Vibrance™ Antifade Mounting Medium were obtained from Vector Laboratories, Inc. 
(CA, USA) and the anti-Na + /K + -ATPase (NKA) α-subunit (α5) antibody from the Developmental Studies Hybridoma Bank (IA, USA). The α5 antibody is a mouse monoclonal antibody against the avian NKA α-subunit and has been widely used to detect branchial NKA (Inokuchi et al. ). 4′,6-Diamidino-2-phenylindole, dihydrochloride (DAPI) solution was from Dojindo Laboratories (Kumamoto, Japan), wheat germ agglutinin (WGA), Alexa Fluor 488 Conjugate from Thermo Fisher Scientific (MA, USA) and Alcian Blue - PAS Stain Kit from ScyTek Laboratories (UT, USA). Pufferfish Larvae Larvae of the tiger puffer Takifugu rubripes were obtained by artificial insemination of wild pufferfish parents that were captured by a set-net in the coastal waters off Noto, Ishikawa, Japan, conducted at Kanazawa University in April – May 2023. TTX (9.2 ± 1.1 μg/g) was detected in the ovaries from T. rubripes females which were used in the artificial insemination (Fig. ). Larvae were fixed with 4% PFA solution in phosphate buffer (pH 7.4) on the day of hatching and stored at 4 °C until processing. Larvae of the grass puffer Takifugu alboplumbeus were obtained by artificial fertilization of wild pufferfish parents that were captured from the coastal waters off Kamogawa, Chiba, Japan, in June 2023. It has been reported that TTX is detected in the ovaries of all spawning T. alboplumbeus females at the sites where the pufferfish were collected for this study (Asano et al. ). The 0 day post-hatch (dph) larvae were fixed and stored in the same method as tiger puffer. Whole-Mount IHC Samples were rinsed with phosphate buffered saline (PBS, pH 7.4), and immersed in a solution containing CUBIC-1/PBS (1:1, v/v) for 1 h, and then immersed in a CUBIC-1 solution overnight at room temperature (RT) as per the manufacturer’s protocol. Blocking was performed with 2.5% normal horse serum for 1 h at RT. IHC against TTX was performed using a 96-well cell culture plates (Thermo Fisher Scientific) with 2.0 µg/mL mouse anti-TTX monoclonal antibody (Kawatsu et al. ) overnight at RT. Samples were incubated at RT for 15 min with Amplifier Antibody, followed by 20 min at RT with VectaFluor Reagent. Samples were stained with 1 mg/mL DAPI solution (1:100) for 20 min followed by PBS wash. Samples were stained with 1.0 mg/mL WGA, Alexa Fluor 488 Conjugate (1:7) for 10 min. Some samples were stained as a negative control which was a sample with an absorbed antibody treated with a high concentration of TTX (10,000-fold in molar ratio to anti-TTX antibody) for 3 h at RT instead of anti-TTX antibody. The absorbed antibody was used after dilution with PBS without post-absorption purification. To visualize ionocytes, mouse anti-NKA α-subunit antibody (1:500) was used in place of anti-TTX antibody, and its negative control was PBS in place of anti-NKA α-subunit antibody. To compare the staining characteristics of mucous cells, WGA-stained samples were subjected to PAS staining using the Alcian Blue - PAS Stain Kit, excluding the Alcian Blue and Mayer’s Hematoxylin staining steps referred to the manufacturer’s protocol. Fluorescence Microscopy Analysis Treated samples were sealed in VECTASHIELD ® Vibrance Antifade Mounting Medium on a glass slide. Observation of immunoreactivity image was done with an All-in-One Fluorescence Microscope BZ-X810 (Keyence, Osaka, Japan). The three dimensional (3D) images were acquired using the Z-stack and sectioning functions available with the microscope. 
Subsequently, the entire surface was two-dimensionally visualized using the stitching and full focus functions. Then, the level correction function was used to adjust the brightness levels.
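As a rough orientation for the quantities involved in the absorption control described above, the following minimal Python sketch back-calculates the molar concentration of the working antibody solution and the amount of TTX corresponding to a 10,000-fold molar excess. The molecular weights used (IgG about 150 kDa, TTX 319.27 g/mol) and the assumption that absorption is performed at the 2.0 µg/mL working concentration are ours for illustration and are not taken from the study.

IGG_MW = 150_000.0   # g/mol, typical IgG (assumed, not from the study)
TTX_MW = 319.27      # g/mol, tetrodotoxin

def molar_conc(mass_conc_ug_per_ml, mw_g_per_mol):
    # convert a mass concentration in ug/mL into a molar concentration in mol/L
    return mass_conc_ug_per_ml * 1e-6 / mw_g_per_mol * 1000.0

antibody_mol_per_l = molar_conc(2.0, IGG_MW)   # anti-TTX antibody used at 2.0 ug/mL
ttx_mol_per_l = 10_000 * antibody_mol_per_l    # 10,000-fold molar excess for absorption
ttx_ug_per_ml = ttx_mol_per_l * TTX_MW * 1000.0

print(f"antibody ~ {antibody_mol_per_l * 1e9:.1f} nM; "
      f"TTX for a 10,000-fold excess ~ {ttx_ug_per_ml:.0f} ug/mL")
# prints: antibody ~ 13.3 nM; TTX for a 10,000-fold excess ~ 43 ug/mL
# The stated counterstain dilutions follow the same arithmetic:
# DAPI 1 mg/mL at 1:100 gives 10 ug/mL; WGA-Alexa Fluor 488 at 1:7 gives roughly 0.14 mg/mL.

If the absorption was instead carried out at a higher antibody concentration before dilution, the absolute TTX amount would scale accordingly; the sketch only illustrates the order of magnitude implied by the protocol.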
Whole-mount IHC against TTX revealed specific signals throughout the epidermis of T. rubripes and T. alboplumbeus as magenta spots (Fig. ). As shown in Fig. , section-like observation using the 3D display demonstrated that TTX signals were detected only in the epidermis. Intense fluorescence with the anti-TTX antibody was detected in small cells, about 5 µm in diameter, located in the outermost layers of the epidermis (Fig. ). No signal specific to TTX was observed in the central part of the cell, including the cell nucleus, which was stained with DAPI, a dye that binds even very small amounts of DNA and emits blue fluorescence. Signals from WGA staining were scattered uniformly in the epidermis of the whole body except for the eyes (Fig. ), and section-like observation using the 3D display showed that WGA-positive cells were detected only in the epidermis (Fig. ). Intense fluorescence with WGA staining was detected in epidermal cells with a diameter of 10–20 µm (Fig. ). WGA-positive cells were located at different sites from the TTX-rich cells. In thin tissues, such as the tail, cells were stained magenta by PAS staining (Fig. ). On the other hand, in the head and abdomen, it was difficult to observe cells stained by PAS staining because of the tissue thickness. Therefore, we compared the same area stained by WGA and PAS staining in the tail. As a result, the cells with a diameter of approximately 10–20 µm that were stained magenta by PAS staining corresponded to the cells stained by WGA staining (Fig. ). Whole-mount IHC targeting NKA demonstrated that the specific signals of ionocytes were concentrated around the yolk sac membrane as yellow spots (Fig. ). Intense fluorescence with the anti-NKA antibody was remarkable in large epidermal cells with a diameter of about 30 µm. The openings of the ionocytes were not located directly above the DAPI-stained nuclei (white arrowheads in Fig. ). Pufferfish larvae of the genus Takifugu possess maternally derived TTX on their body surface, and are thought to use it to avoid predation (Itoi et al. , ). However, it was not known which parts of the body surface of the pufferfish larvae were covered with TTX-rich cells, and the details of the cells that retained TTX were not clear. IHC with anti-TTX antibodies in the skin of Takifugu species has shown that TTX is contained in sacciform cells, mucous cells, and basal cells of adult grass puffer (Itoi et al. ), basal cells of juvenile tiger puffer (Okita et al. ), glands of the pear puffer Takifugu vermicularis (Mahmud et al.
) and gland-like structures of adult fine-patterned puffer Takifugu flavipterus (Sato et al. ). Whole-mount IHC in this study revealed that TTX-rich cells cover the entire body, except the eyes, and that these cells are small and distinct from mucous cells and ionocytes, suggesting that these cells are pufferfish-specific or specific to TTX-bearing fish. The possibility that TTX-rich cells correspond to sacciform cells of adult pufferfish was also ruled out, as the results showed that TTX-rich cells were significantly smaller than mucous cells, which in turn are smaller than sacciform cells of adult pufferfish (Itoi et al. ; Tsutsui et al. ). This suggests that TTX-rich cells in Takifugu larvae cannot be identified based on the information from previous studies of pufferfish. Adult Acanthopterygii species, including pufferfish, generally have epithelial cells, mucous cells and sacciform cells in the epidermis, and the epidermis and dermis are separated by a basal cell layer (Takashima and Hibiya ). In fish larvae, sacciform cells are not observed, with exceptions in flatfish (Sarasquete et al. ; Padrós et al. ). The epidermis of newly hatched larvae of Acanthopterygii species has been observed in marine flatfish (Roberts et al. ; Sarasquete et al. ; McFadzen et al. ; Campinho et al. ; Padrós et al. ) and in fresh- or brackish-water fish such as tilapia (Shiraishi et al. ; Hiroi et al. , , ; Uchida et al. ; Kaneko and Shiraishi ). The epidermis of newly hatched larvae is generally composed of 1 or 2 cell layers, with epithelial (squamous) cells, mucous cells, mucus-free cells that have the same morphology as mucous cells (mucous cell-like cells), and ionocytes (chloride cells; mitochondria-rich cells), and is separated from the dermis by a basal cell layer. Views on mucous cell-like cells vary, with descriptions as sacciform/mucous cells (Sarasquete et al. ) or saccular/sacciform cells (Padrós et al. ), and these secretory cells are the same size as or larger than mucous cells. On the other hand, the TTX-rich cells observed on the body surface of pufferfish larvae were remarkably smaller and more densely packed than mucous cells, suggesting that they differ from the known secretory cells described above. Ionocytes are known to be large cells, although they have not previously been observed in pufferfish larvae (Hiroi et al. ; Inokuchi et al. ). The ionocyte openings of pufferfish larvae appeared to be similar in size (5–10 µm) to the TTX-rich cells observed in this study, but no nucleus was observed directly below the ionocyte openings, in contrast to the TTX-rich cells, which contain a nucleus. These results suggest that TTX-rich cells of Takifugu larvae are different from any cells previously targeted for observation in larval fish skin. TTX-rich cells are expected to secrete TTX exocrinally against predators, although no details of their openings were observed at this time and there is no direct evidence of exocrine secretion. Further studies are required. An unknown TTX-binding substance may be involved in TTX localization in pufferfish larvae, although the process of TTX localization in their skin has not been reported. The TTX-specific IHC signals observed in this study suggest that TTX is retained in the tissue in a protein-bound form, because free TTX would have been washed out during the fixation process, as shown by Yonezawa et al. . In practice, the cavity in the gland and the gland-like structure of adult pufferfish do not exhibit staining in IHC with anti-TTX antibody (Mahmud et al.
; Sato et al. ). In addition, the TTX-rich cells in this study were WGA- and PAS-negative. PSTBP, which is known as a carrier protein of TTX, is a PAS-positive glycoprotein (Yotsu-Yamashita et al. ), and it has been suggested that the major protein in the WGA-bound fraction is an isoform derived from a PSTBP-like gene (Zhang et al. ). These findings indicate that the TTX observed in this study may bind to a substance other than the PSTBP-like protein. Further investigation along this line will be necessary. In conclusion, our study attempted to collect basic knowledge to clarify the localization mechanism of maternal TTX in the skin of pufferfish larvae. TTX-rich cells in pufferfish larvae were found only on the body surface and did not correspond to basal cells, mucous cells, or ionocytes. Our data suggest that the TTX-rich cells observed in pufferfish larvae cannot be classified as ionocytes or as known secretory cells such as mucous cells, sacciform cells, and saccular cells. In the future, we aim to identify TTX-rich cell types using electron microscopy analysis and single-cell transcriptome analysis. This will provide insights into the accumulation of TTX, the presentation mechanisms of TTX in pufferfish larvae, and the retention/localization of TTX in tissues across their life stages. Below is the link to the electronic supplementary material. Supplementary file1 (PDF 4829 KB)
How does pharmacological and toxicological knowledge evolve? A case study on hydrogen cyanide in German pharmacology and toxicology textbooks from 1878 to 2020
6c59971f-6e02-47d4-a8cd-af7eab809aa7
11522135
Pharmacology[mh]
Medical students use pharmacology and toxicology textbooks to prepare for examinations in medical school, which is why the contents presented in the books on therapeutic and adverse effects, indications, and contraindications of drugs form the knowledge base of future doctors. However, little is known about how pharmacological knowledge develops and changes over time in textbooks. The importance of portraying current knowledge and clinical practice in textbooks is shown by the correlation between insufficient intravenous fluid prescribing knowledge and practices by junior doctors in the UK and inadequate treatment of the topic in medical textbooks (Powell et al. ). A recent analysis of German-language pharmacology and toxicology textbooks on the antihypertensive drug reserpine has revealed that they substantially lag behind clinical practice (Misera and Seifert ). An evaluation of the portrayal of a specific drug or poison over the entire history of pharmacology has not yet taken place. Therefore, in this case study, we assessed the presentation of hydrogen cyanide based on sixteen German-language textbooks over almost 150 years. For interpretation of the history, current knowledge on hydrogen cyanide is presented below. Hydrogen cyanide (HCN) is a weak acid that forms water-soluble salts (cyanides) on contact with alkalis, including potassium cyanide (KCN) and sodium cyanide (NaCN) (Aktories et al. ). Hydrogen cyanide can be absorbed via the respiratory tract, gastrointestinal tract, or the skin and mucous membranes (Graham and Traylor ). Hydrogen cyanide binds to the trivalent iron of cytochrome oxidase, which is part of the mitochondrial respiratory chain, and thus inhibits cellular respiration and the production of ATP (Graham and Traylor ). As a result, cellular hypoxia develops and the ATP concentration decreases, causing metabolic acidosis (Graham and Traylor ). Hydrogen cyanide is metabolized by rhodanese, which is primarily found in the liver and muscle, and the inactive metabolite is subsequently eliminated renally (Graham and Traylor ). HCN is naturally found in bitter almonds and in the kernels of stone fruits such as apricots, peaches, and plums, as well as in lima beans (Bolarinwa et al. ; Graham and Traylor ). Hydrogen cyanide intoxications can occur through the inhalation of combustion gases or as part of therapy with sodium nitroprusside (Brunton and Knollmann ; Graham and Traylor ). The current use of hydrogen cyanide is limited to non-medical applications such as the chemical and metal industries, for example for galvanization and steel hardening, to produce blue dyes, in photography, and as a pesticide (Marquardt et al. ; Graham and Traylor ). HCN can also be misused in criminal applications (murder, mass murder, and suicide) (Marquardt et al. ; Graham and Traylor ). During the Second World War (1939–1945), hydrogen cyanide was used by the Nazis under the name Zyklon B for the genocide of Jews in the gas chambers of concentration camps (Embar-Seddon and Pass ; Graham and Traylor ). Symptoms of acute hydrogen cyanide poisoning include a smell of bitter almonds when inhaled, headache, dizziness, confusion, tachypnea, tachycardia, dyspnea, and apnea, progressing to coma and death (Hendry-Hofer et al. ; Graham and Traylor ). Hydroxocobalamin is considered the treatment of first choice for hydrogen cyanide poisoning (Aktories et al. ). Dimethylaminophenol can also be used for therapeutic induction of methemoglobin in cases of intoxication (Aktories et al. ).
Sodium thiosulfate can be given supportively to accelerate the body’s own detoxification by providing sulfur (Marquardt et al. ; Aktories et al. ). Ventilation with oxygen is indicated as part of the symptomatic treatment of hydrogen cyanide intoxication (Marquardt et al. ). The administration of sodium hydrogen carbonate is also suitable for correcting the metabolic acidosis (Marquardt et al. ). Selection of textbooks One textbook per decade was analyzed as an example to compare the content presented for hydrogen cyanide in pharmacology and toxicology textbooks from 1878 onwards (Table ). The selection criteria included that the textbooks must be intended for medical students and doctors and be published in German language. A further selection criterion was the availability of the textbooks. Analyzing the data Figure illustrates the methodological approach used to examine the data. Tables – show the analysis categories with encodings and the detailed results for the textbook groups. The scope of hydrogen cyanide-related pages in the pharmacology and toxicology textbooks was analyzed. The scope of the categories (structure, molecular mechanism of action, occurrence, effects, resorption, areas of application, lethal dose, acute symptoms of intoxication, treatment of hydrogen cyanide poisoning, and recommended therapeutic preparations) was determined. The pharmacology and toxicology textbooks were divided into textbook groups chronologically: 1878–1901 (Textbook group 1), 1919–1944 (Textbook group 2), 1951–1986 (Textbook group 3), 1997–2020 (Textbook group 4). The content of the textbooks was analyzed. The average range of the categories was calculated. An inductive approach was chosen to digitize the data. This procedure makes it possible, after the verbatim transfer of the primary information from the textbooks into the associated categories and subsequent definition of the encodings, to present all the collected content in a comparable form. All encodings within an analysis category were assigned numbers, which in turn were assigned to the textbooks that listed the contents of the corresponding encodings.
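The coding step described above lends itself to a simple tabular representation. The following minimal Python sketch is purely illustrative: the textbook numbers, encodings, and page counts are invented placeholders (only the 4-of-403-pages example is taken from the results below), and the assumption of four textbooks per group is ours; it is meant only to show one way the encoding-to-textbook assignment, the range of a category per textbook group, and the relative share of poison pages could be computed.

# category -> encoding number -> set of textbooks (1-16) that mention that encoding
codings = {
    "Occurrence": {1: {1, 2, 5}, 2: {3, 9, 14}},    # e.g. 1 = "bitter almonds", 2 = "stone fruit kernels"
    "Treatment": {1: {4, 7}, 2: {9, 12, 15, 16}},   # e.g. 1 = "atropine", 2 = "sodium thiosulfate"
}

# chronological textbook groups (here assumed to contain four textbooks each)
groups = {1: range(1, 5), 2: range(5, 9), 3: range(9, 13), 4: range(13, 17)}

def category_range(category, group):
    # number of distinct encodings of a category mentioned by at least one textbook of a group
    books = set(groups[group])
    return sum(1 for mentioned_by in codings[category].values() if mentioned_by & books)

def poison_page_share(poison_pages, total_pages):
    # relative share of hydrogen cyanide-related pages in a textbook, in percent
    return 100.0 * poison_pages / total_pages

average_range = sum(category_range("Treatment", g) for g in groups) / len(groups)
print(average_range)                         # average range of the category "Treatment" across groups
print(f"{poison_page_share(4, 403):.1f} %")  # 4 of 403 pages ~ 1.0 %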
Scope of hydrogen cyanide-related content in pharmacology and toxicology textbooks Figure shows the number of pages with content on hydrogen cyanide and its cyanides. Figure shows the total number of pages in the textbooks and the relative proportion of poison pages in the total number of pages in the textbook. The number of pages with hydrogen cyanide-related content reaches its maximum in textbook 9 in 1951 with 14 pages and then drops to 2 pages in textbook 10 in 1964. In the period 1964–2020, the number of substance pages is at a low-to-medium level. The relative share of poison pages in the total number of pages in the textbooks reaches its maximum in textbook 7 in 1933 at 2.86% and shows a further peak in textbook 9 in 1951 at 2.36%. A connection between the increased representation of the poison in the period 1933–1951 and the use of hydrogen cyanide in the form of Zyklon B as a lethal poison in Nazi concentration camps during the Second World War (1939–1945) is possible (Embar-Seddon and Pass ). To test this hypothesis, the textbook "Grundriß der Pharmakologie, Toxikologie (Wehr-Toxikologie) und Arznei-Verordnungslehre" ("Principles of Pharmacology, Toxicology (Military Toxicology) and Drug Prescription") by the German pharmacologist and NSDAP member Heinrich Gebhardt from 1940 was examined (Philippu and Seifert ). However, hydrogen cyanide is only sketchily presented there (4 of 403 pages, 1%) (Gebhardt, ). Range of categories on hydrogen cyanide Figure shows the range of the categories presented in the textbook groups. The categories Recommended therapeutic preparations, Molecular mechanism of action, Effects, Resorption, Areas of application, Acute symptoms of intoxication, and Treatment of hydrogen cyanide poisoning show an above-average range. Therefore, the change of knowledge is the greatest here. A below-average range was noted in the categories Occurrence, Lethal dose, and Structure. Thus, changes in the knowledge on the occurrence, lethal dose, and structure of hydrogen cyanide are small.
Figure shows the information on the structure of hydrogen cyanide by textbook group. No change in the information was determined over the course of the study period. Molecular mechanism of action of hydrogen cyanide Figure shows the information on the molecular mechanism of action of hydrogen cyanide by textbook group. Seventy-five percent of the textbooks of the first textbook group and 50% of the second textbook group mention hydrogen cyanide binding to hemoglobin as the molecular mechanism of action of the toxin. Binding to Fe 3+ of cytochrome oxidase is described in the third textbook group (75%) and fourth textbook group (100%) as the mechanism of action of hydrogen cyanide. This suggests an increase in knowledge regarding the molecular mechanism of action of hydrogen cyanide from the second to the third textbook group. Occurrence of hydrogen cyanide Figure shows the mentions of the occurrence of hydrogen cyanide by textbook group. The textbooks in the first textbook group cite two different sources of hydrogen cyanide. The information in the second textbook group can be assigned to three different types of occurrences. The textbooks in the third textbook group provide information on five sources of hydrogen cyanide. Among the textbooks analyzed in the fourth textbook group, five different sources of hydrogen cyanide and its cyanides are listed. Overall, a trend towards increasing heterogenization of the content presented in the category Occurrence can be identified. However, the occurrence of hydrogen cyanide in seeds is listed most frequently in all textbook groups, which is why the content focus remains unchanged in the period 1878–2020. Pharmacological and toxicological effects of hydrogen cyanide Figure shows the data on the effects of hydrogen cyanide by textbook group. A decrease in the scope of the effects occurred from the first to the second textbook group. From the second textbook group onwards, an increasingly homogenized presentation of the content with a focus on the hydrogen cyanide effects of Inhibited oxygen uptake and utilization in tissues and Inhibition of the respiratory chain and cellular oxidation processes was noted. Resorption of hydrogen cyanide Figure shows the data on the resorption of hydrogen cyanide by textbook groups. The first textbook group mentions a total of 4 different resorption pathways.
From 1919 onwards, there is no further mention of the conjunctival resorption route, which is why the information from the second, third, and fourth textbook groups can each be assigned to 3 different resorption pathways. In the third textbook group, resorption from the gastrointestinal tract and the respiratory tract are described as the most frequent routes of hydrogen cyanide uptake. In the fourth textbook group, the gastrointestinal tract is the most frequently mentioned resorption route. Areas of application of hydrogen cyanide Figure shows the information on the areas of application of hydrogen cyanide by textbook groups. The textbooks in the first textbook group list a total of three different areas of application for hydrogen cyanide, as medical, industrial, and cosmetic areas. The textbooks investigated in the second to fourth textbook group mention hydrogen cyanide applications in two different areas. The second textbook group contains information on medical and industrial applications, while the textbooks in the third and fourth textbook groups each list industrial and criminalistic uses of hydrogen cyanide. However, the use of hydrogen cyanide in the form of Zyklon B as a means of mass murder under National Socialism was only described in one textbook (16). Thus, most textbooks avoid dealing with the darkest episode of German history (and pharmacology), thereby missing an important opportunity to educate medical students and young physicians properly about ethics of pharmacology and toxicology. Acute symptoms of intoxication with hydrogen cyanide Figure shows the data on the acute symptoms of hydrogen cyanide intoxication by textbook groups. The first textbook group list intoxication symptoms from nine systems. The information in the second and fourth textbook groups can each be assigned to seven systems. The acute intoxication symptoms listed in the third textbook group come from eight different systems. In the first textbook group, acute intoxication symptoms related to the cardiovascular system are mentioned most frequently. In the second textbook group, the Cardiovascular system , CNS and PNS , and Other account for most mentions. The textbooks in the third and fourth textbook groups most frequently mention symptoms of CNS and PNS poisoning. Lethal dose of hydrogen cyanide Figure shows how many textbooks provide information on the lethal dose of hydrogen cyanide. Among the textbooks investigated, just two do not contain any information on the lethal dose of hydrogen cyanide (2, 15). Figure shows the lethal doses for HCN and CN − in mg. In the period 1878–1964, the lethal dose for hydrogen cyanide ranges between 42.5 and 55 mg. The textbooks for the period 1977–2020 describe lethal doses between 77 and 80 mg, 125 mg being an exception. In an analysis of the textbooks Lüllmann et al. , Lemmer et al. , and Scholz et al. , the lethal doses given in 67% of the textbooks corresponded to the doses of the period 1878–1964. A similarly increased dose as in textbook 14 could only be found in Scholz et al. . This supports the increased lethal dose in textbook 14 as an exception. Thus, only a slight increase in the lethal doses from 1977 onwards can be observed, and the knowledge regarding the lethal dosage of hydrogen cyanide has remained almost constant. Treatment of hydrogen cyanide poisoning Figure shows the data on hydrogen cyanide treatment by textbook group. 
In the first textbook group, 50% of the textbooks describe oxygen administration and artificial respiration, ammonia odor, atropine, iron oxide hydrate with magnesia, and cold dousing as effective treatments for hydrogen cyanide poisoning. In the second and third textbook groups, 75% and 100% of the textbooks, respectively, provide information on sodium thiosulfate, which makes it the most mentioned treatment option in these textbook groups. The textbooks of the fourth textbook group most frequently list sodium thiosulfate, dimethylaminophenol, and hydroxocobalamin as treatment options. From the second textbook group onwards, the focus of the textbooks is on treatment with sodium thiosulfate. With the start of the fourth textbook group in 1997, dimethylaminophenol and hydroxocobalamin also count among the most recommended treatment options. Recommended therapeutic preparations of hydrogen cyanide Figure shows the information on therapeutic preparations containing hydrogen cyanide. Most mentions come from the first textbook group, followed by a decline in the second textbook group. Textbooks in the third and fourth textbook groups did not provide any information in this category (see supplemental figure ). This shows the decreasing therapeutic relevance of hydrogen cyanide. Table shows the single doses and daily doses given for the recommended therapeutic preparations in textbooks from 1878 to 1921. Most of the information is given for Aqua amygdalarum amararum, as 100% of the textbooks that recommend the preparation give dose details. A decrease in the maximum daily dose can be observed from 1901 onwards. For Aqua laurocerasi, only 75% of the textbooks that recommend the preparation provide information on dosage. Except for 1901, the doses mentioned are lower than for Aqua amygdalarum amararum. The dosage of Aqua amygdalarum amararum diluta is not mentioned in any textbook. Thus, the recommended daily doses of hydrogen cyanide remain 5- to 10-fold below the lethal doses (Fig. ), which is only a small safety margin. Probably, many accidental hydrogen cyanide intoxications occurred, which was the reason for hydrogen cyanide being abandoned as a drug.
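To make the stated margin concrete (a back-of-the-envelope illustration using only the figures given in the text, with the daily dose back-calculated from the stated 5- to 10-fold factor rather than taken from the table): for a lethal dose of roughly 50 mg of hydrogen cyanide, a maximum daily dose 5- to 10-fold lower corresponds to about 5–10 mg, i.e. a ratio of lethal dose to maximum daily dose of approximately 50 mg / (5–10 mg) = 5–10, consistent with the small safety margin noted above.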
Figure shows whether outdated and incorrect content is addressed as such in the pharmacology and toxicology textbooks. In the categories Structure, Molecular mechanism of action, Resorption, Acute symptoms of intoxication, and Lethal dose, 100% of the textbooks from the period 1878–2020 do not discuss any old or incorrect content. Incorrect contents of the categories Effects and Recommended preparations are discussed by 50% of the textbooks of the first textbook group. Twenty-five percent of the textbooks in the first textbook group discuss old content in the category Treatment. Incorrect content in the category Areas of application is discussed the most, as 50% of the textbooks in the first textbook group and 25% of the textbooks in the second textbook group address outdated knowledge.
The category Occurrence is the only category in which outdated knowledge is discussed in modern textbooks from 1997 onwards, as 25% of the textbooks in the fourth textbook group address incorrect content on the occurrence of hydrogen cyanide. Thus, older textbooks are better at discussing advances in scientific concepts than newer textbooks. Newer textbooks tend to simply state current knowledge without discussing the dynamics of knowledge development. Thereby, modern textbooks mostly miss a great opportunity to educate medical students about the history of pharmacology and toxicology, which is a history of changing concepts and facts as well as of human trial and error. Due to the restriction of the analysis to German-language pharmacology and toxicology textbooks, the transferability of the results to the international textbook literature is not guaranteed. Furthermore, we analyzed just one textbook per decade. The information on hydrogen cyanide in the textbooks by Schmiedeberg (2 and 6, the latter from 1921), as well as by Eichholtz (8 and 9, the latter from 1951), is similar. Since the key word index was used as the basis for the analysis and selection of the analyzed pages, only the information on hydrogen cyanide and its cyanides listed on the pages of the key word index was included in the analysis. Pharmacology and toxicology textbooks are used by students and doctors as a learning and reference tool, which is why they are of great importance for medical practice. Using hydrogen cyanide as a case study, we showed that over a period of 150 years, knowledge on the chemical structure, lethal dose, and occurrence of hydrogen cyanide did not change much. In contrast, knowledge of the molecular mechanism of action, recommended preparations, effects, resorption, areas of application, acute symptoms of intoxication, and the treatment of hydrogen cyanide poisoning changed dramatically. From 1878 to 1901, primarily medical applications of the poison are described in the textbooks. From 1919, most of the areas of application listed are of an industrial nature, and from 1951, there is an additional focus on criminal uses of the poison (murder, suicide, mass murder). The clinical obsolescence of the poison is reflected in the lack of mentions of medical applications in all recent textbooks. Accordingly, current textbooks (15, 16) are up to date regarding the areas of application. In contrast, with respect to reserpine, textbooks are lagging behind clinical practice (Misera and Seifert ). Thus, the validity of pharmacology and toxicology textbooks as a source of current information depends on the topic and drug. The highest coverage of hydrogen cyanide in textbooks was found between 1933 and 1951 (7–9). Cyanide was used as “Zyklon B” in the Nazi concentration camps during the Second World War (1939–1945). However, in the pharmacology and toxicology textbooks published during the NS regime, no mention was made of the use of cyanide for mass murder. Evidently, this criminal use of cyanide was kept secret, even in the textbook by Heinrich Gebhardt, who was an active NSDAP and SS member (Philippu and Seifert ). Based on the present study and the study by Misera and Seifert , it will be worthwhile to analyze the presentation of other drugs and poisons in pharmacology and toxicology textbooks. Pharmacological and toxicological knowledge develops non-linearly for different aspects of a given drug, and it cannot be taken for granted that a current pharmacology and toxicology textbook provides current information.
The case study of cyanide shows that pharmacology and toxicology have a long history of errors with respect to mechanism of action and clinical uses that were ultimately corrected but not discussed in a broader historical, ethical, societal, or scientific context. What can be learned from the case study on hydrogen cyanide for the future of pharmacology and toxicology It is often complained these days that the relevance of textbooks in general, and of pharmacology and toxicology textbooks in particular, is decreasing in favor of PowerPoint slides handed out in courses and commercially successful platforms for answering multiple choice exam questions. Our case study on hydrogen cyanide provides important strategies for how pharmacology and toxicology textbooks of the future can be made more appealing to students, thereby offering perspectives for textbook survival. New pharmacology and toxicology textbooks should take up the tradition of textbooks from 100–150 years ago and discuss how non-linearly and dynamically pharmacological and toxicological knowledge evolves and what the reasons behind such developments are. Pharmacology and toxicology textbooks should also be more proactive in discussing the ethical and societal dimensions of pharmacology and toxicology. The case of hydrogen cyanide shows that this has been an almost completely missed opportunity, an embarrassing neglect. These easily implementable suggestions will sharpen the ability of medical students to think critically and provide them with important intellectual tools to shape the future of pharmacology and toxicology. PowerPoint slides and commercial collections of commented multiple choice questions cannot achieve this most important intellectual goal in student education. It is the responsibility of, and a great opportunity for, textbook authors to reverse the increasing de-academization of pharmacological and toxicological education.
Below is the link to the electronic supplementary material. Supplementary file1 (DOCX 74 KB)
Advances in molecular pathology, diagnosis, and treatment of amyotrophic lateral sclerosis
066b9d61-3fd6-47d3-90c0-4829601aed29
10603569
Pathology[mh]
As the most common cause of adult onset motor neuron disease (MND), amyotrophic lateral sclerosis (ALS) is traditionally classified as a neuromuscular disorder because the presenting symptoms are caused by muscle weakness and atrophy. However, clinical, genetic, and molecular discoveries over the past 20 years have challenged this convention. ALS shares features of frontotemporal dementia, a group of neurodegenerative disorders that causes cognitive, behavioral, and motor dysfunction. Nearly half of all patients with ALS have varying degrees of cognitive and/or behavioral impairment, with approximately 15% meeting the diagnostic criteria for frontotemporal dementia. Conversely, about 15% of patients with behavioral variant frontotemporal dementia and 18% of patients with primary progressive aphasia have ALS. These disorders also have overlapping genetics, with hexanucleotide repeat expansion (HRE) in C9ORF72 being the most common genetic cause of ALS, frontotemporal dementia, or both in people of European ancestry. Additionally, abnormal aggregation of transactive response DNA binding protein 43 (TDP-43) or fused in sarcoma (FUS) is present in the cytoplasm of cortical neurons in ALS and frontotemporal dementia. As such, ALS is widely recognized as a complex neurodegenerative disorder in the frontotemporal dementia-MND continuum. The reconceptualization of ALS in this continuum of disorders has allowed for novel approaches toward understanding fundamental disease mechanisms contributing to pathogenesis and has opened new avenues in approaches toward therapy. In this review, we provide a comprehensive summary of the clinical and genetic heterogeneity of ALS and advances in molecular pathology and biomarkers, and we highlight key interventions that improve quality of life. The intended audience includes students, trainees, general neurologists, and neuromuscular subspecialists. The worldwide prevalence and incidence of ALS are estimated to be 4.42 per 100 000 population and 1.59 per 100 000 person years, respectively, and population based studies have shown geographic variation with the highest in western Europe (prevalence 9.62 per 100 000 population and incidence 2.76 per 100 000 person years) and lowest in South Asia (prevalence 1.57 per 100 000 population and incidence 0.42 per 100 000 person years). The incidence and prevalence of ALS are higher in developed regions, and a temporal trend has been observed, with the incidence rising by 0.00013 per year. The prevalence and incidence of ALS is higher in men (prevalence 5.96 per 100 000 population; incidence 1.91 per 100 000 person years) than in women (prevalence 3.90 per 100 000 population; incidence 1.36 per 100 000 person years). We independently did searches using the Boolean search criteria in the PubMed and Embase databases, between January 1990 and December 2022, using search terms such as amyotrophic lateral sclerosis, motor neuron disease, frontotemporal dementia, diagnosis, diagnostic criteria, prognosis, genetics, pathology, biomarker, and treatment. We identified articles published in the English language and selected them for inclusion on the basis of other criteria including relevance, peered review, and study type (randomized controlled trials, systematic reviews and meta-analyses, and observational studies). We prioritized publications in high impact and ALS specific journals published in the past 15 years. Several important publications could not be included owing to the scope of this review. 
We excluded case reports and articles not published in English. Clinical heterogeneity ALS is a clinically heterogeneous disorder , and the biological underpinning of the heterogeneity is poorly understood. Typically, the symptom onset is localized with spread of motor impairment to adjacent muscle groups and/or regions of the neuroaxis. Usually, the progression rate is linear for any given person, but the rate often varies between patients. ALS of spinal onset with weakness first appearing in limb muscles occurs most frequently (two thirds of patients), followed by bulbar onset with initial weakness in lingual and oropharyngeal muscles (a third of patients). Axial or respiratory muscles are rarely the first to be affected. Uncommon subtypes of ALS with spinal or limb onset exist, with atypical patterns of weakness in which the motor impairment tends to be regionally confined early in the disease course. In brachial amyotrophic diplegia or flail arm syndrome, weakness tends to affect proximal upper extremities symmetrically. Similarly, in flail leg syndrome or lower extremity amyotrophic diplegia, weakness is mainly in the lower extremities. Other rarer ALS phenotypic variants include isolated bulbar ALS and hemiplegic ALS presenting with asymmetric hemibody weakness. Although the symptoms rarely remain restricted in these subtypes, the progression is typically slow. Another contributor to phenotypic heterogeneity is the burden of neuronal degeneration in the cortex (upper motor neuron; UMN), brainstem, and spinal cord (lower motor neuron; LMN). Typically, the neurological examination shows UMN and LMN dysfunction, but the contribution of each likely falls on a continuum and can vary with patients having predominantly upper or lower motor signs . One explanation for this variability may be differences in the pattern of spread in the course of the disease. At the extremes are rare phenotypes such as primary lateral sclerosis (PLS) presenting as a pure UMN disorder and progressive muscular atrophy (PMA) presenting as a pure LMN disorder. Some clinical features distinguish PLS from typical ALS. In PLS, symptoms are symmetric and slowly progressive, and they frequently have an ascending pattern of spread. PMA is clinically similar to typical ALS in the rate and pattern of symptom spread, but a subgroup of PMA may have slower disease progression. Prognostic factors associated with longer survival include UMN or LMN predominant symptoms, flail arm variant, and younger age at onset. Factors associated with shorter survival include bulbar and/or respiratory onset, comorbid frontotemporal dementia, poor nutritional status, neck flexion weakness, and older age at onset. Cognitive and behavioral dysfunction Although ALS is synonymous with MND, cognitive and/or behavioral dysfunction are recognized core clinical features. Neuropsychological abnormality is associated with faster disease progression and shorter survival and occurs more frequently in advanced disease. Motor symptoms that alert patients and their care givers to a neurological disorder may overshadow antecedent or concurrent neuropsychological symptoms. Because cognitive or behavioral changes may be obscured by motor dysfunction, validated screening tests specific for ALS such as the Edinburgh Cognitive and Behavioral ALS Screen or ALS Cognitive Behavioral Screen are recommended in all patients. If the screening test is abnormal, a more extensive neuropsychological evaluation can determine the cognitive and/or behavioral changes. 
ALS specific behavioral measures such as the Motor Neuron Disease Behavior Scale, the ALS-FTD-Questionnaire, or the Frontal Behavioral Inventory-ALS Version can be used to characterize and assess the severity of behavioral dysfunction. The findings of these tests can classify patients as having ALS with cognitive impairment, ALS with behavioral impairment, ALS with combined cognitive and behavioral impairment, ALS with frontotemporal dementia (ALS-FTD) , or none of the above (no cognitive or behavioral impairment). Approximately half of all patients with ALS will show impairment on a comprehensive assessment, with approximately 5% classified as ALS with combined cognitive and behavioral impairment, 8% as ALS with behavioral impairment, 17% as ALS with cognitive impairment, and 15-20% as ALS-FTD. The most commonly affected cognitive domain in ALS is executive function, with abnormal verbal fluency being a consistent and sensitive marker even after control for bulbar motor dysfunction. In patients who develop cognitive symptoms, impaired word fluency is an early finding. Other features of executive dysfunction such as mental inflexibility, inattention and disinhibition, or inability to plan or problem solve can emerge as the disease progresses. Impairment in multiple cognitive domains is less common in ALS and, when present, tends to involve language or memory and may be confounded by co-pathology such as Alzheimer’s disease. Isolated amnestic syndrome is not a feature in ALS and should prompt evaluation for an alternative cause. Overall, progression of cognitive dysfunction in ALS is slow and may remain stable over time. Behavioral abnormalities are a frequent neuropsychiatric feature in ALS, apathy being the most common. Others include disinhibition, perseverative behavior, change in food preferences, loss of empathy, or impaired social cognition including emotional processing. Pathological crying and laughing, also known as emotional lability or pseudobulbar affect, is present in approximately one in three patients with ALS and is associated with gray and white matter pathology in the cortico-cerebellar network. Pathological crying and laughing does not correlate with neuropsychological measures and should be distinguished from other cognitive and behavioral symptoms. Diagnostic criteria and disease progression measures ALS is a clinical diagnosis requiring findings of progressive motor neuron dysfunction in the absence of an alternative diagnosis. In typical ALS, few tests are needed to support the diagnosis and exclude mimics because other disorders rarely mimic ALS perfectly. The most common tests obtained in the diagnostic process are electrophysiology to establish a lower motor neuronopathy and neuroimaging of the brain and spine to exclude mimics causing structural abnormalities as a cause for UMN dysfunction. Routine laboratory studies are frequently obtained to exclude other causes of a patient’s symptoms and are typically normal in ALS. The first widely used criteria in ALS were the El Escorial criteria, which aimed to provide a standardized diagnostic framework to conduct clinical research, with subsequent revisions and updates improving sensitivity allowing for earlier enrollment in clinical trials. Although El Escorial and Awaji criteria are useful in clinical research, they are hampered by the heterogeneity of ALS and do not capture the full disease spectrum. 
For example, cognitive impairment and behavioral impairment are not included in these criteria, and pure lower motor neuron variants are excluded. Furthermore, patients with ALS do not necessarily progress through the El Escorial categories of diagnostic certainty and may never attain the criteria for clinically definite ALS. The ALS-frontotemporal spectrum diagnostic (ALS-FTSD) criteria proposed by Strong and colleagues use three diagnostic axes to define MND, cognitive and behavioral dysfunction, and other non-motor features. Although the ALS-FTSD criteria more fully incorporate the ALS-FTD spectrum disorders, they still rely on the El Escorial and Awaji criteria to define MND. The recently proposed Gold Coast criteria attempt to simplify the diagnosis and recognize the potential utility of emerging biomarkers; however, further validation in different populations will be needed before routine use in clinical care or research.

Variability in the rate of symptom progression and in survival represents a major obstacle in ALS clinical trials. Disease progression, most often measured by the decline in the revised ALS functional rating scale (ALSFRS-R) over time, differs considerably between patients, and survival ranges from less than one year to more than 10 years. Respiratory failure is the most common cause of death. Clinical factors that contribute to heterogeneity in survival include age at symptom onset, sex, site of symptom onset (bulbar versus spinal), time to diagnosis, respiratory measures, pre-symptomatic body mass index, cigarette use, genetics, and the diagnosis of FTD. Neurofilament, a biomarker of neurodegeneration, is a predictor of the progression and prognosis of ALS. A recently proposed survival prediction model for ALS identified eight prognostic predictors and generated five different survival groups applicable to European patients at the individual level. This model is an important step toward more effective stratification of patients in clinical studies, but validation in non-European groups is needed, and the model will likely evolve as other predictors are identified.

Clinical staging is an important tool for research and care planning because it captures the extent and severity of disease. The two proposed staging systems are the King's staging system and the Milano-Torino staging system. The King's staging system is defined by the number of body regions affected and by bulbar and respiratory failure, whereas the Milano-Torino system uses the number of impaired domains as delineated by the ALSFRS-R to define successive stages. These systems provide parallel clinical information, using different measures to establish escalating stages, and both have been used to analyze patient population data and are promising endpoints for clinical trials. A limitation of both systems is that cognitive and behavioral change is not captured by staging, although higher disease stage portends more severe cognitive impairment.
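To make the Milano-Torino mapping concrete, a minimal Python sketch of the counting step described above is shown below. It assumes that loss of function in each of the four ALSFRS-R domains has already been judged from the relevant item scores (those item thresholds are deliberately omitted here), so it is an illustration of the principle rather than the full published algorithm.

```python
# Illustrative sketch only: the Milano-Torino (MiToS) stage is the number of
# ALSFRS-R domains in which function has been lost. The determination of
# "lost" for each domain (from specific ALSFRS-R item scores) is assumed to
# have been made elsewhere; stage 5 (death) is assigned clinically.

def mitos_stage(movement_lost: bool, swallowing_lost: bool,
                communicating_lost: bool, breathing_lost: bool) -> int:
    """Return the MiToS stage (0-4) as the count of impaired domains."""
    return sum([movement_lost, swallowing_lost, communicating_lost, breathing_lost])

# Example: loss of function in swallowing and breathing only
print(mitos_stage(False, True, False, True))  # -> stage 2
```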
Molecular pathology

A new chapter in ALS pathology began in 2006 with the discovery of TDP-43 as the major constituent of ubiquitinated aggregates in motor neurons of sporadic ALS and most familial ALS, and in cortical neurons in a subgroup of frontotemporal dementia. TDP-43 staining is routinely done in postmortem tissue to characterize the pathology when ALS is suspected. Abnormal accumulation of the protein as either neuronal or glial cytoplasmic inclusions or aggregates is found in 97% of cases of sporadic ALS. Rarely, TDP-43 pathology is absent, notably in ALS caused by superoxide dismutase 1 ( SOD1 ) or fused in sarcoma ( FUS ) gene mutations. Although accumulation of wild type TDP-43 has become the pathological hallmark of ALS, mutations in TDP-43 itself are rare, found in 4-5% of dominantly inherited familial ALS and 1% of sporadic ALS.
Additionally, cytoplasmic TDP-43 aggregation can be seen in Alzheimer's disease, atypical parkinsonism, dementia with Lewy bodies, and limbic predominant age related TDP-43 encephalopathy, leading to the recognition of this group of neurodegenerative disorders as TDP-43 proteinopathies. TDP-43 was first discovered in 1995, when its function was described as a suppressor of HIV-1 expression. As an RNA/DNA binding protein, it is involved in multiple processes such as RNA processing and maturation, RNA transport, microRNA maturation, and stress granule formation. It normally shuttles between the nucleus and the cytoplasm. The cellular dysfunction leading to TDP-43 aggregation in the cytoplasm and the resultant neurodegeneration is a topic of active research. Both loss-of-function and gain-of-function mechanisms have been proposed. An example of the loss-of-function mechanism is the function of TDP-43 as a repressor of cryptic exon inclusion. As a result of depletion from the nucleus and loss of the repressor function, cryptic exons (exons that are otherwise excluded from the mRNA) are included in at least two known loci, STMN2 and UNC13A , causing reduced protein expression. Of note, cryptic exons are not always shared between species, necessitating the development of new, humanized models. The mild motor neuron degeneration and the inability to replicate the loss of nuclear localization with concomitant cytoplasmic accumulation in animal models have been considered as evidence that these may be late events in the pathogenic cascade.

Genetics

Whereas the vast majority of ALS is classified as sporadic disease—that is, without known history of another family member with either ALS or frontotemporal dementia—approximately 10% is familial ALS, which can be autosomal dominant, autosomal recessive, or X linked. The list of different causative genes for ALS has grown tremendously (>40 genes), mostly owing to advances in sequencing technologies. Genes causing two thirds of familial ALS and 10% of sporadic ALS are known. Only an overview of the genetics of ALS will be provided here, as detailed descriptions of all known ALS associated genes are beyond the scope of this review and can be found in other publications.

SOD1

The earliest understanding of the pathobiology of ALS derived from disease models based on mutations in SOD1 , the first gene discovered in familial ALS in 1993, which accounts for 20% of familial ALS. Several different mechanisms of neurodegeneration have been proposed, including conformational instability of the SOD1 protein, interactions with other proteins, and formation of toxic aggregates, but the exact mechanism remains unclear. The consensus is that the many mutations spanning the whole length of SOD1 confer a toxic gain of function, which has led to the development of silencing of mutant gene expression as a therapeutic approach.

C9ORF72

The largest genetic contributor to familial ALS was not discovered until 2011 because it is a hexanucleotide repeat expansion (HRE) in an intron of a previously unknown gene named after the region of the chromosome where it is located, C9ORF72 (chromosome 9, open reading frame 72). The expansion is typically not detectable by standard sequencing methods and instead requires repeat primed polymerase chain reaction (PCR), which overcomes the limitations of standard PCR at the repeat region, or careful analysis of the drop-off in sequence coverage of the region in standard next generation sequencing.
Intronic nucleotide repeat expansions are known to occur in genes linked to different disorders including myotonic dystrophy, the spinocerebellar ataxias (SCA10, SCA31, SCA36), and Friedreich's ataxia. In unaffected people, the C9ORF72 alleles have approximately two to 25 repeats. In contrast, people with ALS and/or frontotemporal dementia linked to chromosome 9 carry one normal allele and one expanded allele that can have hundreds to thousands of repeats. Somatic instability may further complicate the assessment of the repeat size, as different tissues may have different repeat sizes even within a single patient. The process by which the HRE leads to neurodegeneration is not precisely known; however, three major hypotheses have been proposed: the (G4C2) repeats function similarly to repeat expansions in other disorders (myotonic dystrophy, fragile X tremor/ataxia syndrome), binding and sequestering RNA binding proteins and impairing their ability to regulate RNA targets; the HRE may cause epigenetic changes resulting in decreased C9ORF72 mRNA expression; and an atypical mode of polypeptide translation across expanded repeats despite the absence of an initiating codon, known as repeat associated non-ATG translation, which is also seen in spinocerebellar ataxia 8 and myotonic dystrophy.

Epigenetics

Similar to the growing list of familial ALS linked genes, our knowledge of genetic modifiers in sporadic ALS has increased. Expression levels of genes such as EphA4 , which encodes a tyrosine kinase receptor that regulates developmental axon outgrowth, inversely correlate with age of disease onset and survival. Variants of ANG increase the risk of development of ALS and Parkinson's disease, and variants in NEK-1 (NIMA (never in mitosis gene-A) related kinase-1) were found in nearly 3% of ALS patients. Intermediate length polyglutamine repeats (>23 but <34 repeats) in the ataxin 2 gene ( ATXN2 ) were found in some patients with sporadic ALS, and this finding was later confirmed in additional cohorts. Although interactions between polymorphisms and causative genes of ALS were previously appreciated, the idea that some families with ALS can harbor more than one of these genes was new. Van Blitterwijk et al discovered mutations in more than one ALS linked gene in five out of 97 families. As our knowledge grows, we are likely to find more complex genetic interplay as the basis for disease in individual families. These genes contribute to less than 1% of familial ALS, but their discovery has identified three main cellular functions that, when abnormal, can lead to neuronal degeneration in ALS: RNA/DNA metabolism, protein turnover/autophagy, and cytoskeletal and vesicular regulation. This will hopefully improve understanding of disease mechanisms in the search for treatable targets.
Imaging

Neuroimaging is used to look for structural abnormalities in the central nervous system that can mimic the symptoms and signs of ALS. In most patients with ALS, the brain and spinal cord appear unremarkable or show non-specific abnormalities, including corticospinal tract hyperintensity or motor cortex hypointensity on T2 weighted magnetic resonance imaging (MRI) sequences. Advanced imaging methods are powerful non-invasive research tools that can be used to study and quantify structural, functional, and metabolic abnormalities. For example, voxel based morphometry and surface based morphometry can determine global or regional gray matter atrophy and cortical thinning, and diffusion tensor imaging (DTI) can evaluate the integrity of white matter tracts. Task based and resting state functional MRI can identify differing patterns of blood oxygen level dependent (BOLD) activity to interrogate the connectivity of neural networks, and magnetic resonance spectroscopy (MRS) allows for quantification of neuronal and glial metabolites such as N-acetylaspartate, a marker of neuronal integrity; creatine, a marker of energy metabolism; choline, a marker of cell membranes; and myo-inositol, a glial marker. Other imaging modalities such as positron emission tomography (PET) can show regional changes in brain metabolism by using different receptor ligands. The use of these tools has limitations in research, but technical improvements may prove them useful for group stratification in clinical trials, tracking disease progression, and predicting disease onset in pre-symptomatic carriers of gene mutations. Additionally, they have the potential to provide greater insight into the evolution of pathology.

The most frequent abnormal findings in structural brain imaging studies in ALS are thinning of the motor cortex, atrophy of the precentral gyrus, and loss of structural integrity in the corticospinal tract and the corpus callosum. Morphometric changes correlate with the clinical phenotypes and the site of symptom onset, supporting the hypothesis of focal disease onset. Another clinical-imaging correlation is the association of cognitive and/or behavioral impairment with extra-motor gray matter volume loss and white matter DTI diffusivity changes. The structural abnormalities, however, do not consistently correlate with measures of disease progression such as the ALSFRS-R. More widespread frontal atrophy is associated with faster disease progression and is consistent with the observation that faster disease progression occurs in patients with cognitive and behavioral impairment. Longitudinal imaging is invaluable to track disease progression; however, studies show conflicting findings, with a few studies showing progressive gray and white matter changes over time, whereas others show no discernible changes. The causes of these differences include unequal or small sample sizes, clinical heterogeneity, variable follow-up intervals, and different data acquisition and analysis methods. The role of neuroimaging in tracking clinical progression in ALS remains unresolved, but evolving imaging changes may mirror the spread of pathology (for example, TDP-43).
In ALS, MRS typically shows decreases in N-acetylaspartate or in N-acetylaspartate/creatine or N-acetylaspartate/choline in the motor cortex and brainstem, corresponding to neuronal degeneration. MRS indices correlate with clinical UMN disease burden in some studies, but their association with functional and cognitive measures is less consistent. Additionally, longitudinal MRS studies are often limited by small sample size. Other metabolites such as γ-aminobutyric acid and glutamate have been examined but need further validation. Early PET studies in ALS using the tracer fluorodeoxyglucose show diffusely reduced uptake in the cortex and deep gray nuclei, mostly in patients who have signs of UMN dysfunction. Other studies show hypometabolism in the frontal regions and hypermetabolism in the temporal regions, cerebellum, and upper brainstem. PET ligands binding to the dopamine D2/D3, 5-hydroxytryptamine 1A, and γ-aminobutyric acid A receptors have been examined in ALS, suggesting widespread neuronal dysfunction or degeneration. More recently, interest has been growing in examining the role of neuroinflammation in ALS. This has led to the development of PET ligands that bind to the 18-kDa translocator protein expressed by activated glial cells. Studies using this ligand show increased uptake in the primary motor cortex and frontal regions that also show structural and metabolic abnormalities. Additional studies are needed to understand the complex interactions between neuronal and glial cells in ALS.

In ALS, task based functional MRI shows increased activation of contralateral, and sometimes ipsilateral, brain regions such as the supplementary motor areas, sensorimotor cortex, temporal regions, deep gray nuclei, and cerebellum. These abnormalities are hypothesized to represent adaptive or compensatory responses to the neurodegenerative process. Unlike task based functional MRI, resting state functional MRI shows varying patterns of coherence in the spontaneous BOLD activity. In ALS, the functional connectivity of brain regions can increase, decrease, or be mixed within different brain networks. The variability in findings can be attributed to methodological differences. Regional decreases in functional connectivity in the default mode and sensorimotor networks correlate with greater functional impairment.

Electrophysiology

MUNE, MScanFit, MUNIX

Motor unit number estimation (MUNE) is a promising electrophysiological technique to track disease progression in ALS by estimating the number of motor units in a muscle. Small distal muscles such as the intrinsic hand muscles (that is, abductor pollicis brevis) are examined. The concept of MUNE extends from the observation that incremental increases in the intensity of a stimulus delivered to the motor nerve result in stepwise increases in the amplitude of the compound muscle action potential (CMAP) recorded at the innervated muscle. If the average size of a single motor unit potential contributing to the CMAP can be determined, an estimate of the motor units in that nerve can be calculated by dividing the maximal CMAP amplitude by the average single motor unit potential amplitude. Different methods to determine the average amplitude used to calculate MUNE have been developed on the basis of this principle and applied in clinical studies in ALS. Across studies using different methods, MUNE declines with disease progression and correlates with functional rating scales.
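The MUNE calculation described above amounts to a single division; the short sketch below illustrates it with hypothetical amplitude values that are not drawn from any study.

```python
# Minimal illustration of the MUNE calculation described above.
# The amplitudes are hypothetical example values, not data from any study.

max_cmap_amplitude_mv = 8.0    # maximal compound muscle action potential (mV)
mean_smup_amplitude_mv = 0.08  # average single motor unit potential (mV)

mune = max_cmap_amplitude_mv / mean_smup_amplitude_mv
print(f"Estimated number of motor units: {mune:.0f}")  # -> 100
```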
More recently, the MScanFit MUNE method was developed to estimate the number of motor units from an objective stimulus response curve. MScanFit MUNE may be more accurate, reliable, and easier and quicker to perform, and may detect motor neuron loss earlier, than other MUNE methods. One limitation is the accessibility of nerves to peripheral stimulation, precluding its use to assess larger proximal muscles. The ease of applying this technique and the ability to perform the study using standard electromyography machines make MUNE an attractive biomarker. Motor unit number index (MUNIX) is an electrophysiological method that estimates the number and size of motor units by recording the maximum CMAP and epochs of the surface electromyographic interference pattern at varying force levels. MUNIX can easily be used to assess proximal and distal muscles and has been shown to track disease progression in ALS clinical trials. MUNIX values also decline before the development of muscle weakness and may be more sensitive for detecting early motor neuron loss. However, inter-rater variability across sites may limit the use of this method in clinical trials.

Transcranial magnetic stimulation

Transcranial magnetic stimulation (TMS) is a non-invasive electrophysiological technique that can objectively assess the integrity of the corticospinal motor neurons. Several different parameters can be measured, including motor threshold, motor evoked amplitude, central motor conduction time, cortical silent period, and intracortical facilitation or inhibition. Compared with controls, the motor threshold is decreased and the motor evoked amplitude is increased in early ALS. Other TMS findings include reduced duration of the cortical silent period with increasing stimulation intensity, reduced short intracortical inhibition, and increased intracortical facilitation. Collectively, these abnormalities reflect altered cortical excitability. The resting threshold and central motor conduction time have been shown to correlate with clinical findings of UMN dysfunction and are suggested to be useful for tracking disease progression. Further studies are needed to establish TMS as a robust biomarker of disease progression.

Electrical impedance myography

Electrical impedance myography (EIM) is mostly a research tool to assess the health of the muscle. It is based on recording the voltage that results from applying a weak, high frequency electrical current across the sampled muscle without inducing myofiber or neuronal action potentials. The volume conduction properties of the muscle depend on how strongly muscle resists or conducts alternating electrical current (conductivity) and on its ability to store electrical charge (relative permittivity). These properties have been shown to differ in health and disease in murine models and patients. EIM was first used to evaluate the muscle in Duchenne muscular dystrophy and has more recently been used in ALS. EIM parameters correlate moderately with standard ALS disease progression measures and MUNE. Surface EIM can be done at home, requires minimal training, is painless and repeatable, and has been assessed as an exploratory endpoint in different clinical trials. A motivated patient with ALS can collect surface EIM data more frequently than is done in standard clinical trials, thereby reducing the number of patients needed for a study.
An in-depth discussion of the advantages and limitations of both surface and needle EIM can be found in a recent review and subsequent letters to the editor. Overall, EIM is a tool in development that may aid ALS patients and researchers in tracking disease progression.

Fluid based biomarkers

Significant efforts are being made to evaluate and validate ALS biomarkers of various types (diagnostic, prognostic, predictive, and pharmacodynamic). Neurofilament light chain (NfL) and phosphorylated neurofilament heavy chain have been examined in cerebrospinal fluid and serum in patients with ALS as markers of neuronal injury. Although concentrations are lower in serum than in cerebrospinal fluid, serum is more accessible and can be measured reliably using technologically advanced methods such as single molecule array technology (Simoa). In Simoa, single molecules are trapped individually in wells, followed by a digital readout of the beads bound to their targets, giving increased sensitivity for detecting protein at subfemtomolar concentrations (the digital counting principle is sketched at the end of this section). Higher neurofilament concentrations tend to correspond to faster disease progression, a feature that can be explored in the design of future clinical trials. A study of pheno-converters (pre-symptomatic carriers of causative genes for ALS who develop symptoms of ALS or frontotemporal dementia) showed increased NfL concentrations occurring at least a year before clinical disease. Once disease begins, serum NfL concentrations are stable longitudinally, allowing their use as a pharmacodynamic marker. More recently, adding two related plasma microRNAs (miR-181a-5p and miR-181b-5p) to NfL concentrations was shown to improve survival prognostication (higher concentrations correlated with shorter survival), especially in the patient group with intermediate (59-109 pg/ml) NfL concentrations. Similar approaches combining protein and/or RNA biomarkers are likely to be useful owing to their enhanced prognostic power. As an easily accessible biofluid, urine is an attractive option for biomarker screening. The p75 neurotrophin receptor is found on the surface of apoptotic motor neurons and Schwann cells. During normal processing, the extracellular domain of the p75 molecule (p75ECD) is cleaved and becomes detectable in urine. It is elevated in ALS and increases further with disease progression, thereby making it a putative biomarker. Unlike neurofilaments, p75ECD increases at the time of pheno-conversion and not before, thereby serving as a potential marker of pheno-conversion. As markers of neuroinflammation, chitinases and related proteins (CHIT-1, CHI3L1, CHI3L2) increase in the cerebrospinal fluid as ALS progresses. Although chitinases may be a proxy for neuroinflammation, their use as a biomarker may be hampered by accessibility (poor correlation between serum and cerebrospinal fluid concentrations) and by polymorphisms that may decrease the protein concentration.
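As a rough illustration of why a digital readout gains sensitivity, the sketch below applies generic digital-assay (Poisson) counting to invented well counts. It shows the general principle only; the numbers are hypothetical and the calculation is not the vendor's exact algorithm.

```python
import math

# Illustrative digital-counting principle behind single molecule arrays:
# beads are isolated in femtoliter wells and each bead-containing well is
# read simply as "on" or "off". Assuming Poisson loading of target molecules
# onto beads, the average number of molecules per bead can be recovered from
# the fraction of "on" wells. All values below are hypothetical.

wells_with_bead = 50_000   # wells containing a bead
wells_on = 1_200           # wells showing signal ("on")

fraction_on = wells_on / wells_with_bead
avg_molecules_per_bead = -math.log(1 - fraction_on)  # Poisson correction

print(f"Fraction of active wells: {fraction_on:.3%}")
print(f"Average target molecules per bead: {avg_molecules_per_bead:.4f}")
```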
Significant efforts are being made to evaluate and validate ALS biomarkers of various types (diagnostic, prognostic, predictive, and pharmacodynamic). Neurofilament light chain (NfL) and phosphorylated neurofilament heavy chain have been examined in cerebrospinal fluid and serum in patients with ALS as a marker of neuronal injury. Although concentrations are lower in serum than in cerebrospinal fluid, serum is more accessible and can be measured reliably using technologically advanced methods such as single molecule array technology (Simoa). In Simoa, single molecules are trapped individually in wells followed by a digital readout of beads that are bound to their targets, leading to increased sensitivity for detecting protein at subfemtomolar concentrations. Higher neurofilament concentrations tend to correspond to faster disease progression, a feature that can be explored in the design of future clinical trials. A study of pheno-converters (pre-symptomatic carriers of causative genes for ALS who develop symptoms of ALS or frontotemporal dementia) showed increased NfL concentrations occurring at least a year before clinical disease. Once disease begins, serum NfL concentrations are stable longitudinally, allowing for their use as a pharmacodynamic marker. More recently, combining two related plasma micro-RNAs (miR-181a-5p and miR-181b-5p) with NfL concentrations has been shown to improve survival prognostication (higher concentrations correlated with shorter survival), especially in the patient group with intermediate (59-109 pg/ml) NfL concentrations. Similar approaches of combining protein and/or RNA biomarkers will be useful owing to enhanced prognostic power. As an easily accessible biofluid, urine is an attractive option for biomarker screening. The p75 neurotrophin receptor is found on the surface of apoptotic motor neurons and Schwann cells. During normal processing, the ectodomain of the p75 molecule (p75ECD) is cleaved and becomes detectable in urine. It is elevated in ALS and increases further with disease progression, thereby making it a putative biomarker. The p75ECD, unlike neurofilaments, increases at the time of pheno-conversion and not before, thereby serving as a potential marker of pheno-conversion. As a marker of neuroinflammation, chitinases and related proteins (CHIT-1, CHI3L1, CHI3L2) increase in the cerebrospinal fluid as ALS progresses. Although chitinases may be a proxy for neuroinflammation, their use as a biomarker in therapies may be hampered by accessibility (poor serum and cerebrospinal fluid correlation) and polymorphisms that may decrease the protein concentration. Multidisciplinary care The cornerstone of ALS care is an integrative approach because of the clinical and psychosocial complexities. A common care model in the US consists of an ALS specialist, nurse, pulmonologist, speech and language pathologist, nutritionist, physical therapist, occupational therapist, and social worker in one clinic visit (as “one stop shop”).
Other clinicians with critical roles include a psychiatrist, neuropsychologist, genetics counselor, and gastroenterologist. The ALS team collaborates and seamlessly coordinates care with the primary care clinician and other community or home based health service providers. Additional support is achieved by referral to ALS/MND organizations. This patient centric model of care enhances engagement of patients and care givers in treatment and confers benefits such as improved quality and efficiency of care, access to health and governmental agency services, quality of life, and survival. Disease modifying therapies In the past 20 years, most trials evaluating ALS therapeutics aiming to slow or arrest the neurodegenerative process have failed to show efficacy. These therapies have primarily targeted excitotoxicity, oxidative stress, mitochondrial dysfunction, protein homeostasis, nucleocytoplasmic transport, neuroinflammation, cell death, cytoskeletal integrity, axonal transport, DNA repair, RNA metabolism, and stress granule regulation. As new trials are planned, a collaborative effort has been made to identify contributors to the failure of studies such as clinical and biological heterogeneity. Critical future steps for the global ALS community to accelerate successful development of ALS therapy include ensuring equity of access, optimizing study design and analysis, endpoint harmonization, and data sharing. Three disease modifying drugs are approved by the US Food and Drug Administration (FDA) with a primary indication for the treatment of ALS. Riluzole, an anti-glutamatergic drug, increases survival and slows the decline in muscle testing score. The most common side effects are asthenia, gastrointestinal symptoms, and an increase in liver enzymes. Edaravone, a free radical scavenger that acts to decrease oxidative stress, modestly slows ALS disease progression. Edaravone is not approved for ALS treatment in Europe, and its role in ALS therapy continues to be a contested topic. The combination of sodium phenylbutyrate and taurursodiol, targeting mitochondrial dysfunction, endoplasmic reticulum stress, and cell death, was approved by the FDA in 2022. Pulmonary intervention Pulmonary system complications are common in ALS, and respiratory failure is the most frequent cause of death. Pulmonary studies relevant in ALS care include spirometry, nocturnal pulse oximetry, arterial blood gas, polysomnography, maximal inspiratory pressure/maximal expiratory pressure, transdiaphragmatic pressure, and sniff nasal pressure. Serial evaluations are essential to identify respiratory muscle weakness and allow for early interventions using non-invasive ventilation, which has been shown to prolong survival with improved quality of life. Mechanical insufflation-exsufflation is routinely used by ALS patients to augment weak cough to clear airway secretions; however, no systematic study has evaluated the benefits of this intervention. Respiratory muscle training to improve cough and swallowing is an area of active research. Diet and nutritional intervention Weight loss (specifically fat loss) has been shown to correlate with decline in ALSFRS-R scores, and most patients are advised to adapt their diet to maintain a weight close to their premorbid state. Weight loss is multifactorial and associated with decreased food intake due to dysphagia, impaired limb dexterity in handling utensils, hypermetabolism (in about 50% of patients), loss of appetite, and fatigue.
Extremes in body mass index (<18, >40) were associated with shorter survival, and best survival was observed for body mass index maintained in the 30-35 range. The consensus is that the diet should include fiber, carotenes, fruits, and antioxidants. However, little consensus exists on the high calorie nutritional source—that is, carbohydrates versus polyunsaturated fats. Clinical guidelines recommend discussion of gastrostomy tube insertion for patients who have symptomatic dysphagia, prolonged eating time, negative caloric balance, unintentional weight loss of greater than 5-10%, and, in some cases, declining respiratory status (forced vital capacity approaching 50%). The benefits of a gastrostomy tube vary depending on proper patient selection, timing of the procedure, careful management of the insertion process, and post-procedure tube management. ALS patients often ask about over-the-counter supplements and vitamins alone or in combinations. The ALS Untangled ( www.alsuntangled.com ) initiative has reviewed the evidence for many vitamins and supplements and is an excellent guide for patients, care givers, and clinicians. Unfortunately, most clinical trials have not shown slower ALS progression. A recent phase 3 trial of ultrahigh dose methylcobalamin (50 mg) compared with placebo showed a modest slowing in clinical deterioration in treated patients, and evidence suggests that vitamin E may be protective against development of ALS.
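The nutritional and respiratory thresholds cited in this section (body mass index below 18 or above 40 as a risk marker, unintentional weight loss of greater than 5-10%, and forced vital capacity approaching 50% predicted when timing the gastrostomy discussion) lend themselves to a simple structured check. The sketch below is purely illustrative Python: the function, its field names, and the example patient are hypothetical, the thresholds are taken from the text above, and it is not a clinical decision tool.

```python
def flag_nutrition_review(bmi, weight_loss_pct, fvc_pct_predicted,
                          symptomatic_dysphagia=False, prolonged_eating_time=False):
    """Collect reasons, based on the thresholds quoted above, to discuss diet support
    or gastrostomy tube insertion. Illustrative only."""
    reasons = []
    if bmi < 18 or bmi > 40:
        reasons.append("BMI at an extreme (<18 or >40) associated with shorter survival")
    if weight_loss_pct > 5:
        reasons.append("unintentional weight loss greater than 5-10%")
    if fvc_pct_predicted <= 50:
        reasons.append("forced vital capacity approaching 50% predicted")
    if symptomatic_dysphagia:
        reasons.append("symptomatic dysphagia")
    if prolonged_eating_time:
        reasons.append("prolonged eating time")
    return reasons

# Hypothetical patient: BMI 21, 8% unintentional weight loss, FVC 55% predicted, dysphagia present
print(flag_nutrition_review(21, 8, 55, symptomatic_dysphagia=True))
```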
The modest effects of the current FDA approved therapeutics for such a devastating disease have spurred a growing pipeline of investigational agents. The development of the ALS platform trial allows for simultaneous testing of multiple agents, using a shared master protocol and central infrastructure. Investigation of at least 50 small molecules with various mechanisms is under way.
The successes of therapeutics targeting pathogenic gene expression such as antisense oligonucleotides have led to growing interest in this technology in genetic forms of ALS. This is realized in the accelerated approval of tofersen by the FDA for treatment of SOD1-ALS in April 2023. Several phase 1-2 trials are under way examining the benefits of different antisense oligonucleotides targeting C9ORF72 and FUS . Vectors for gene therapy using adeno-associated virus to reduce SOD1 concentrations are also being explored in a phase 1 trial. Monoclonal antibodies targeting misfolded proteins are being evaluated in phase 2 clinical trials. The American Academy of Neurology (AAN) practice parameters and European Federation of Neurological Societies (EFNS) guidelines review clinical management of ALS. The AAN parameters provide a comprehensive, systematic, evidence based review of class I-III studies. However, owing to insufficient evidence for the management of certain symptoms (for example, cramps, spasticity, cognitive/behavioral impairment, pain, and dyspnea), no formal recommendations were made in these domains. The EFNS guidelines include book chapters and review papers, and final recommendations were reached by consensus. Both the AAN and EFNS recommend access to a multidisciplinary center and treatment with riluzole. The EFNS guidelines also cover effective communication of the diagnosis and guidelines for genetic testing. Both the AAN and EFNS recommend percutaneous endoscopic gastrostomy placement for symptom progression and weight stabilization before the vital capacity falls below 50% predicted to minimize procedural related risk. Both guidelines recognize non-invasive ventilation to alleviate symptoms of respiratory insufficiency and to prolong survival. The use of invasive mechanical ventilation is discussed, recognizing that this decision varies according to many factors including economic and cultural differences and that, although this prolongs survival, it may not improve quality of life. The AAN practice parameter recognizes the lack of adequate data on drug treatment for cognitive or behavioral impairment in ALS, so no formal recommendations were made. In the past 20 years, considerable progress has been made in basic research on ALS. However, this accumulation of knowledge has been slow to translate into effective therapies, a major source of frustration to patients and care givers. With the inclusion of biomarkers, careful and innovative clinical trial design, and targeting of early disease in pre-symptomatic gene mutation carriers, the field is closer to converting basic science discoveries into disease modifying therapies. For the larger group of patients with sporadic ALS, the hope is that by uncovering gene variants that confer risk, and finding methods to define the predominant mechanism (for example, inflammation versus retro-transposon activation versus oxidative stress pathway) as the driver of disease, a personalized approach similar to that for cancer therapeutics can turn ALS into a chronic disease with limited disability and a dignified life. 
Glossary of abbreviations
AAN—American Academy of Neurology
ALS—amyotrophic lateral sclerosis
ALSFRS-R—revised ALS functional rating scale
ALS-FTD—ALS with frontotemporal dementia
BOLD—blood oxygen level dependent
CMAP—compound muscle action potential
DTI—diffusion tensor imaging
EFNS—European Federation of Neurological Societies
EIM—electrical impedance electromyography
FDA—Food and Drug Administration
FUS—fused in sarcoma
HRE—hexanucleotide repeat expansion
LMN—lower motor neuron
MND—motor neuron disease
MRI—magnetic resonance imaging
MRS—magnetic resonance spectroscopy
MUNE—motor unit number estimation
MUNIX—motor unit number index
NfL—neurofilament light chain
PCR—polymerase chain reaction
PET—positron emission tomography
PLS—primary lateral sclerosis
PMA—progressive muscular atrophy
SOD1—superoxide dismutase 1
TDP-43—transactive response DNA binding protein 43
TMS—transcranial magnetic stimulation
UMN—upper motor neuron
Questions for future research
How can diagnostic criteria for amyotrophic lateral sclerosis (ALS) be improved to fully capture motor and cognitive dysfunction?
What are the genetic contributors to sporadic ALS?
Can diminishing inflammation in the brain and spinal cord of patients with ALS result in slowing of disease progression?
What are the major factors driving disease progression, and can they be modified?
What determines disease presentation (ALS versus frontotemporal dementia (FTD) versus ALS-FTD)?
Short-fiber Reinforced MOD Restorations of Molars with Severely Undermined Cusps
c4c7110e-c65a-487e-addf-1dacead07223
11734284
Dentistry[mh]
Upon approval by the Ethics Review Committee of the University of Southern California (Los Angeles, CA) (proposal #HS-21-00568), 36 caries-free maxillary third molars without signs of occlusal wear were collected from an oral surgery clinic, scaled, pumiced, and stored in 0.1% thymol solution (Aqua Solutions; Deer Park, TX, USA). Only specimens with few or no cracks were chosen. The roots were embedded up to 3 mm below the cementoenamel junction (CEJ) using acrylic resin (Palapress vario, Heraeus Kulzer; Hanau, Germany) and mounted in a stainless-steel positioning jig. The process of “enamel-crack tracking” was carried out during the whole experiment, in which each surface of the tooth was photographed under standardized conditions at 1.5X magnification (Nikon Z50 with a Nikkor 85-mm macro lens) using transillumination (IL-88-FOI Microscope Light Source, Scienscope; Chino, CA, USA). A new set of images was taken after 24 h and at 7 days post-restoration to detect new cracks. In order to evenly distribute the teeth according to their size and shape, all specimens were organized into groups of three (triplets with similar buccolingual and mesiodistal dimensions) and subsequently randomly re-assigned to groups (n = 12) which received (1) a layered direct composite (Gradia Direct, GC); (2) a fiber-reinforced composite resin base (dentin-shade everX Flow, GC) layered with direct composite (Gradia Direct); or (3) a fiber-reinforced composite resin base (dentin-shade everX Flow) and a CAD/CAM inlay (Cerasmart 270, GC). Tooth Preparation A high-speed electric handpiece and tapered diamond burs (Brasseler; Lemgo, Germany) were used to prepare a standardized MOD slot-type defect with 5-mm bucco-palatal width and 5-mm depth. A round diamond bur (801-014 and 801-010) was used to remove all the dentin from underneath buccal and lingual cusps, creating a severe undercut, undermining the enamel to a residual thickness of 1 mm (measured with a Precision Metal Caliper, Buffalo Dental; Syosset, NY, USA). A 0.5- to 1-mm 45-degree bevel at the cervical and proximal angles was created with a spherical fine-diamond bur for direct restorations only. After preparation completion, photographic enamel-crack tracking was performed to determine if preparation caused any damage to the specimens. For the CAD/CAM inlay preparations, immediate dentin sealing (IDS) was performed on the freshly cut dentin using a three-step etch-and-rinse dentin adhesive (Optibond FL, Kerr; Orange, CA, USA) according to a standardized protocol. Etching and bonding during IDS was extended to the internal undermined enamel. The adhesive was polymerized for 20 s at 1000 mW/cm² (VALO Curing Light, Ultradent; South Jordan, UT, USA) followed by the placement of dentin-shade everX Flow to fill the undercuts and create a 1-mm coverage of the pulpal floor. This was then polymerized for 20 s and an additional 10 s under an air-blocking barrier (KY Jelly, Johnson & Johnson; Montreal, QC, Canada). The enamel margins were re-finished with a spherical fine-diamond bur (Brasseler) to remove excess adhesive resin.
Composite resin blocks (Cerasmart 270, GC) were milled, carefully adjusted under a microscope (Leica MZ 125, Leica Microsystems; Wetzlar, Germany), and mechanically polished. The fitting surface of all restorations was air abraded (RONDOflex plus 360, KaVo; Biberach, Germany) using 30-µm silica-modified aluminum oxide (Rocatec Soft, 3M Oral Care; St Paul, MN, USA) for 10 s at a distance of 10 mm and a pressure of 30 psi, followed by immersion in distilled water in an ultrasonic bath for 2.5 min and air drying. Silane (Ultradent) was applied for 20 s, air- and heat-dried at 100°C for 1 min (D.I.-500, Coltene; Altstätten, Switzerland). The prepared tooth surface was air abraded to clean and reactivate the IDS layer using 30-µm silica-modified aluminum oxide, followed by etching for 30 s with 35% phosphoric acid (Ultra-Etch, Ultradent) and abundant rinsing and drying. Adhesive resin (Optibond FL Adhesive, Kerr) was applied to both surfaces (tooth and inlay) and left unpolymerized until the luting material (Gradia Direct, GC) – preheated for 5 min in a Calset warmer (AdDent; Danbury, CT, USA) – was inserted into the preparation, followed by complete seating of the inlay. Composite resin excess was removed, and each surface was light polymerized for a total of 60 s (20 s per surface, repeated 3 times), with an additional 10 s under an air-blocking barrier. The margins were mechanically polished. For SFRC direct composite restorations, dentin and enamel were bonded using the same three-step etch-and-rinse adhesive (Optibond FL; Kerr), which was light polymerized for 20 s at 1000 mW/cm² (VALO Curing Light). A standardized natural layering technique was applied in seven increments. The proximal walls were raised with two 2-mm-thick enamel-shade increments (Gradia Direct, GC). Approximately 2.5 mm of the remaining Class-I defect (including the undercuts) was filled using fiber-reinforced composite (dentin-shade everX Flow) and light polymerized. Final layering was performed using 3 increments that were individually polymerized, first the floor, followed by the enamel cusps. Special attention was given to strictly emulating the cuspal inclination and occlusal anatomy of the CAD/CAM inlays previously designed. Each increment was polymerized for 20 s at 1000 mW/cm², and final light polymerization was performed for 10 s under an air-blocking barrier (KY Jelly, Johnson & Johnson). Finishing procedures were the same as for the previous group. The same technique was used for the direct control group, except SFRC was substituted by 3 increments of conventional composite resin (Gradia Direct, GC), for a total of 10 increments for the whole restoration. The restorative design for each group is presented schematically in . Accelerated Fatigue Test Restored specimens were kept in distilled water at ambient temperature for 1 week. Enamel crack tracking by transillumination and photography was performed for each tooth surface and followed by the fatigue test in an artificial mouth using a closed-loop electrodynamic system (Acumen 3, MTS Systems; Eden Prairie, MN, USA). All fatigue tests were continuously recorded and monitored using transillumination and a macro video camera (Canon Vixia HF S100, Canon USA; Melville, NY, USA).
The cusp slope was prepared flat, and the load point was equidistant from the palatal cusp tip and the central groove. Isometric contraction forces (load control) were applied at a 30-degree angle to the tooth’s long axis. The load chamber was filled with distilled water to submerge the sample during testing. A cyclic load was applied at a frequency of 5 Hz, starting with a load of 200 N and increasing by 100 N every 2000 cycles. Specimens were loaded until fracture, and the number of endured cycles and the failure mode of each specimen were recorded. After the test, each sample was evaluated by transillumination and optical microscopy (Leica MZ 125; Leica Microsystems) at a 10:1 magnification (two-examiner agreement). Enamel-Crack Tracking To detect new enamel cracks, specimens were evaluated 3 times during the experiment at 1.5X magnification (Nikon Z50 with a Nikkor 85-mm macro lens) under standardized conditions with transillumination (IL-88-FOI Microscope Light Source, Scienscope): before and 24 h after tooth restoration and 1 week after restoration. In cases of doubt, a two-examiner agreement was sought and analyzed under an optical microscope at 10:1 magnification (Leica MZ 125, Leica Microsystems). Special care was taken to differentiate pre-existing cracks from those created by polymerization shrinkage. Cracks were classified into 3 categories based on previous studies: (a) no cracks visible, (b) visible cracks smaller than 3 mm, and (c) visible cracks larger than 3 mm. Statistical Analysis The fatigue resistance of the groups was evaluated using the Kaplan-Meier analysis (survived cycles) for the accelerated fatigue test. The post-hoc log-rank test was used to compare the influence of the restorative procedure on the fatigue resistance of the teeth at a significance level of 0.05. The data were analyzed with statistical software (SPSS 23, SPSS; Chicago, IL, USA).
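The staircase loading protocol above (5 Hz; 200 N starting load; an increase of 100 N every 2000 cycles until fracture) maps the number of endured cycles directly onto the load step at failure. A minimal Python sketch of that bookkeeping, with an illustrative cycle count that is not taken from the study data:

```python
def load_at_cycle(cycle, start_load_n=200, step_n=100, cycles_per_step=2000):
    """Nominal load (N) applied at a given cycle under the staircase protocol."""
    return start_load_n + step_n * (cycle // cycles_per_step)

def summarize_failure(endured_cycles, frequency_hz=5):
    """Load step reached and test duration for a specimen failing after `endured_cycles`."""
    return {
        "failure_load_n": load_at_cycle(endured_cycles),
        "test_minutes": endured_cycles / frequency_hz / 60,
    }

# Illustrative specimen failing after 18,500 cycles:
# 18,500 // 2,000 = 9 completed steps -> 200 + 9*100 = 1,100 N; about 62 min at 5 Hz
print(summarize_failure(18_500))
```

Keeping the outcome in endured cycles, rather than a single static fracture load, is what allows the survival analysis described in the statistical section.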
Survival after the accelerated fatigue test was significantly different for all 3 groups (Kaplan-Meier followed by the log-rank test, p < 0.001). The best performance was documented for inlays with an SFRC base, followed by direct SFRC composite resin restorations. The direct control group had the worst survival rate. No new cracks were observed after tooth preparation. After restoration and 1 week of water storage, the crack propensity was higher for the direct control group (83%), followed by the direct SFRC (66%) and CAD/CAM inlay groups (0% of shrinkage-induced cracks larger/smaller than 3 mm). Failure mode was evaluated with the aid of transillumination and magnification to classify the fracture as reparable, possibly reparable (below the CEJ but above the acrylic resin base, possibly reparable with additional interventions such as margin elevation or periodontal surgery), and irreparable. Irreparable failure, i.e. below the acrylic resin-base limit, affected 100% of the direct control specimens, but ranged between 17% and 42% for the inlay with SFRC base and direct SFRC composite resin groups, respectively.
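The group comparison reported above (Kaplan-Meier on survived cycles followed by a log-rank test at the 0.05 level) can be reproduced on any cycles-to-failure data set. A minimal sketch in Python, assuming the third-party lifelines package; the cycle counts below are fabricated solely to show the call pattern (the study itself used SPSS), and the group labels are mine:

```python
from lifelines import KaplanMeierFitter
from lifelines.statistics import multivariate_logrank_test

# Fabricated cycles-to-failure for three restorative groups (every specimen fractured,
# so all events are observed rather than censored).
cycles = [14000, 15500, 13000, 18000, 19500, 17000, 21000, 23000, 22000]
groups = ["direct", "direct", "direct",
          "direct_SFRC", "direct_SFRC", "direct_SFRC",
          "inlay_SFRC", "inlay_SFRC", "inlay_SFRC"]
events = [1] * len(cycles)

km = KaplanMeierFitter()
for g in set(groups):
    idx = [i for i, grp in enumerate(groups) if grp == g]
    km.fit([cycles[i] for i in idx], event_observed=[events[i] for i in idx], label=g)
    print(g, "median survived cycles:", km.median_survival_time_)

result = multivariate_logrank_test(cycles, groups, events)
print("log-rank p-value:", result.p_value)  # compared against the 0.05 threshold
```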
The present study assessed the accelerated fatigue strength and enamel-crack propensity of MOD direct composite restorations of molars with severely undermined cusps (with and without short-fiber reinforced composite) compared to CAD/CAM composite resin inlays with SFRC. The null hypotheses are rejected because (1) a significant difference in mechanical performance and failure mode between the restorative techniques was found, and (2) enamel-crack propensity (induced by shrinkage stress) was not the same across all groups. This in-vitro study allowed a high level of standardization for all procedures by controlling tooth dimensions, exact preparation dimensions, loading steps, loading configuration, and occlusal morphology. The innovative randomly reassigned multiplets method described in the material and methods section was part of this effort to eliminate confounding variables. Such level of standardization would simply be impossible in a clinical study, in which the number of confounding variables is such (patients’ masticatory and dietary habits, individual caries susceptibility, as well as the need for multiple operators and evaluators, etc.) that differences between groups are often masked. True fatigue tests at low loads and high cycles are extremely time-consuming because more than 1,000,000 cycles are necessary before observing any failure. The accelerated fatigue test used in this work, originally introduced by Fennis et al, is therefore the most relevant means of assessment because it replicates the clinical mode of failure in a reasonable amount of time. The Acumen 3 (MTS) electrodynamic system used here features a rigid load frame and a direct-drive linear motor providing highly precise load and motion control. Previously published methods and load protocols from works comparing large direct and CAD/CAM MOD restorations performed at the same facility , , were used. The angle of force was modified to 30 degrees and applied to the supporting cusp using a composite resin cylinder (Filtek Z100; 3M Oral Care) as an antagonist. This increased the stress to the restoration and simulated an extreme load scenario (non-working contact). Extreme loads were used (far in excess of physiological masticatory forces), and all specimens survived the first half of the experiment (up to 1000 N), demonstrating outstanding survival rates. Despite the severe undermining of the cusps and unsupported enamel, the results of the present study align with previously published works performed at the same facility, in which large MOD Filtek MZ100 CAD/CAM inlays, Cerasmart CAD/CAM inlays, and Gradia Direct semi-direct inlays yielded the best performance. Cerasmart 270 is filled with 78wt% silica- and barium-glass nanoparticles, while MZ100 is filled with 85wt% spheroidal zirconia-silica nanofillers. The same adhesive (Optibond FL; Kerr) and protocol was used in all those experiments, including the immediate dentin sealing technique (IDS), which may also account for the high performance achieved. In fact, in a study by Hofsteenge et al, inlays with IDS (Optibond FL) and overlays without IDS did not differ in terms of fracture strength, in addition to which inlays had always more favorable fracture modes than onlays. The direct techniques in the present experiment were not able to match the performance of the CAD/CAM inlay (for both accelerated fatigue and crack propensity), as was the case for one of the comparable experiments (including sandwich techniques and the use of fiber patches). 
The use of the everX Flow short-fiber reinforced dentin base, however, was able to improve the fatigue resistance and failure mode within the direct groups. The extreme loads required to fracture the restored teeth speak for the ability of the fiber-reinforced base to function in a high-stress–bearing area and its potential ability to match the toughness of dentin. , However, the flowable SFRC base (everX Flow) had only a limited effect on crack propensity. This effect was very significant when using the original SFRC (everX Posterior) in a comparable experiment, in which no shrinkage-induced cracks >3 mm were found even using the direct technique (without undermining the cusps). This is explained in part by the significant difference in polymerization volumetric shrinkage and shrinkage stress between the original everX Posterior (-1.52%) and the newer, flowable version everX Flow (-2.58%). , Absence of shrinkage-induced cracking is not a surprise with inlays; however, in direct techniques, the very limited incidence of cracks reflects the good performance of everX Posterior. The flowable composite everX Flow was developed with the idea of facilitated placement, which was obtained at the price of the fiber length:width critical-aspect ratio (20-30 instead of 70 for the original everX Posterior). This allowed increasing the fiber content to 25% (instead of 5%-15% for the original), but also required a modification of the resin matrix and reduction in barium-glass filler content, possibly explaining the increased shrinkage rate. It is the essence of BRD (Biomimetic Restorative Dentistry) to mimic tooth structure and as such, SFRCs constitute the most biomimetic dentin replacements because of their superior fracture toughness. Natural dentin is reinforced by collagen fibers that can stop and deflect cracks initiating in enamel. The future of SFRCs might lie in the combination of millimeter- and micrometer-scale fibers with aspect ratios of 20-70, the so-called “hybrid SFRC”. Preliminary results by Lassila et al are extremely encouraging: experimental “hybrid” composite resins reinforced with different fiber lengths had statistically significantly greater mechanical performance in terms of fracture toughness (4.7 MPa m 1/2 ) and flexural strength (155 MPa) compared to other tested composites. In agreement with existing data obtained under the same conditions, , it can be stated that the SFRC direct and CAD/CAM inlay restorations also presented more favorable failure types. No catastrophic failures were observed in Cerasmart 270 inlays with a SFRC base, as was shown in the previous studies when using MZ100 inlays. , Among the limitations of the present study was the absence of a Cerasmart 270 inlay group without SFRC (inlay control group). The SFRC base can primarily be considered a crack-arresting layer or dentin replacement. Thus, its thickness might contribute to the fatigue performance and failure mode. In the present study, the SFRC was applied in a relatively thin layer of 1-2.5 mm instead of a bulk or core layer. Further studies should consider thicker layers or even a full restoration made of SFRC, the limitation of which could be the surface polishing and surface degradation due to the exposition of the fibers. There are two important outcomes of this study. The first is the combination of absence of major cracks induced by shrinkage, best fatigue performance, and most favorable failure types for the inlays with SFRC base. 
While cost effectiveness for the patient might be a limiting factor, from a clinical standpoint, it is undeniable that occlusion and morphology are optimized with inlays rather than direct techniques. Second, the positive effect of the SFRC base on the shrinkage, performance, and failure mode of direct restorations must be mentioned. Directly layered restorations constitute a viable alternative due to the simplicity of the procedure, not just because it is an inexpensive technique. The challenge of restoring large MOD defects with severely undermined cusps was assessed using 3 restorative approaches (direct with and without SFRC base, and CAD/CAM inlay with a SFRC base). Restorations with SFRC bases yielded excellent mechanical performance even above physiological masticatory loads. Large MOD defects, however, are best restored using CAD/CAM inlays with an SFRC base for optimal strength, reduced shrinkage-induced cracks, and most favorable failure mode. When a low-cost restoration must be chosen instead, the SFRC base will significantly improve the performance and failure mode of directly layered restorations.
Physiology Brings Relevance to Biological Research: A Vision for
7688f413-5743-4fe1-ab71-f5e2321d462f
11104523
Physiology[mh]
Impact of pharmacogenetics on aspirin resistance: a systematic review
e1c69724-d741-4e20-b028-663269ec476b
10014202
Pharmacology[mh]
Cardiovascular disease (CVD) is the leading cause of mortality worldwide, with all healthcare systems facing this very challenging issue. The World Health Organization (WHO) estimates that 31% of deaths worldwide are due to CVD, with ∼ 17.7 million CVD-related deaths in 2015. Approximately 7.4 million of these deaths were due to heart disease and 6.7 million deaths were due to stroke. Platelet activation plays an important role in the development of CVD. Acetylsalicylic acid (ASA), commonly known as aspirin, is an irreversible inhibitor of platelet cyclooxygenase (COX), which prevents the formation of thromboxane A2 from arachidonic acid and, therefore, prevents the formation of this activating agent of platelet aggregation and vasoconstriction. Aspirin is a widely used antiplatelet for primary and secondary prevention of CVD, such as stroke and heart attacks. Nevertheless, several patients may still experience treatment failure with ASA and an increased risk of recurrent stroke events. There are several contributing factors for treatment failure, including medication adherence, drug-drug interactions, aspirin-independent thromboxane A2 synthesis, and genetic variations. Even low daily aspirin doses (in the range between 75 and 150 mg) are able to suppress biosynthesis of thromboxane, inhibiting the accumulation of platelets and reducing the risk of CVD. However, aspirin does not always prevent the formation of thromboxane A2 due to failure to inhibit platelet COX. Because of that, not all individuals respond to antiplatelet therapy in the same way. In this sense, genetic mutations have been related to aspirin resistance (AR) and may cause a reduction or increase in drug absorption and metabolism, contributing to AR. Aspirin resistance can be diagnosed by clinical criteria or by laboratory tests. Clinically, the patient has a new episode of CVD despite the regular use of aspirin. In the laboratory, the failure of aspirin to inhibit platelet function can be seen with the Platelet Function Analyser (PFA-100) or light transmission aggregometry (LTA), for example. The field of pharmacogenetics, which aims to match specific pharmacological therapies to genetic characteristics with the intention of providing greater efficiency, is a constant target of research. Therefore, several studies have been published about candidate genes associated with the genetic predisposition to resistance to ASA, such as COX-2, GPIIIA, and P2Y1. Resistance to antiplatelet therapy and the indiscriminate use of ASA can increase rates of recurrence and mortality from cardiovascular diseases, such as stroke. Hence, the aim of the present study was to perform a systematic literature review to determine the impact of genetic variants on AR. The present systematic review was established according to the recommendations of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement published by Moher et al. (2019). The following five databases were systematically screened: MEDLINE/PubMed, Cochrane, Scopus, LILACS, and SCIELO. The research was restricted to a period of 10 years (December 2009 to December 2019) and the following search terms were applied: "Aspirin AND Resistance AND Polymorphism" and "Aspirin AND Resistance AND Genetic variation". Eligibility criteria Only articles published in English were included in this search.
Also, only articles describing the relation between AR, proven by laboratory tests or a new case of CVD, and polymorphisms or genetic variations were included in the present systematic review. The final articles included (n = 21) in the present review were 20 case-controls and 1 cohort. Assessment of risk of bias The authors, using the combined search terms and based on the inclusion criteria, conducted the primary literature search. At this first stage, titles and abstracts were screened. All reports that appeared in accordance with the inclusion criteria were full-text screened. All studies that did not comply with pre-established eligibility and inclusion requirements were excluded. In a second step, the researchers independently evaluated whether the full-texts previously selected followed the inclusion criteria. In case of disagreement between two authors, a third author was consulted, and a consensus was reached by a meeting between them. Furthermore, to assess and minimize the presence of potential biases, the Risk of Bias in Systematic Reviews (ROBIS) method was used as a reference. Data extraction and synthesis In the primary literature search, a total of 290 articles were found: 178 in SCOPUS, 104 in MEDLINE/Pubmed, 5 in Cochrane, 2 articles in LILACS, and 1 in SCIELO. Of those, 19 were duplicated. Hence, 271 articles were screened for reading of title and abstract, 216 of which were excluded for not meeting our inclusion criteria. In the next step, the authors independently reviewed 65 full-text articles. Then, 44 articles were excluded for not meeting our inclusion criteria. So, in the end, 21 articles were included in the present systematic review.
In the 21 final articles selected, a total of 10,873 patients were analyzed, of which 3,014 were aspirin resistant and 6,882 were aspirin sensitive (some articles brought semiresistance values and were disregarded, and another 2 articles did not classify their patients as sensitive and not sensitive). Of the 21 articles studied, 11 included patients with a cerebrovascular event, totaling 4,835 patients. The other 10 articles mostly analyzed cardiac outcomes. We also emphasize that the clinical conditions of the evaluated patients were varied among the articles, with some articles evaluating patients with > 1 disease: ischemic stroke (10 articles), coronary artery disease (9), peripheral arterial disease (3), acute vascular event (1), age > 80 years old (1), adults (1), and hypertension (1). Most of the patients in the selected articles are from the Asian continent (9 from China, 4 from India, 2 from Turkey, and 1 from Jordan), and regarding the other works, 3 articles are from the American continent (all from the United States of America), 1 from the European continent (Belgium), and 1 from the African continent (Tunisia). Among the resistance analysis methods, 4 articles used clinical outcome and 17 used platelet aggregation measurement. Among those who performed platelet aggregation measurement, the most common method was LTA (8 articles), followed by PFA-100 system (3), thromboelastography platelet mapping assay (TEG) (2), VerifyNow (2), PL-11 platelet analyzer (1), TXB2 elisa kit (1) and urinary 11-dehydro TXB2 (1), with some articles using > 1 method. In , we detail the following information from the 21 final articles included in the present review: Type of article, country, clinical condition, sample number, number of aspirin resistant patients, number of aspirin sensitive patients, gene, risk allele, protective allele, genetic variant, p-value, Odds Ratio (OR), CI, resistance assessment method, and daily aspirin dose. In addition, we have highlighted in a separate table the genetic variants with relevant results for AR . As for relevance, of the 64 genetic variants evaluated by the articles, 14 had statistical significance ( p < 0.05; 95%CI). Among them, the following polymorphisms have had concordant results so far: rs1371097 (P2RY1 ), rs1045642 ( MDR1 ), rs1051931 and rs7756935 ( PLA2G7 ), rs2071746 ( HO1 ), rs1131882 and rs4523 ( TBXA2R ), rs434473 ( ALOX12 ), rs9315042 ( ALOX5AP ), and rs662 ( PON1 ). In turn, these genetic variants differ in real interference in AR: rs5918 ( ITGB3 ), rs2243093 ( GP1BA ), rs1330344 ( PTGS 1), and rs20417 ( PTGS2 ). To study the relationship between polymorphisms and AR, it is necessary to consider the resistance analysis mode, which can be performed in two ways: clinical or laboratory. In the first, the patient is considered resistant if there is a negative outcome (death or stroke for example). In the second, several types of tests can be used, such as PFA-100, VerifyNow Aspirin, TEG, PL-11 platelet analyzer, serum and urinary TXB2, LTA, and multiplate analyzer. However, it is important to highlight that the measurement of platelet response to aspirin is highly variable, likely due to differing dependence of the arachidonic acid pathway between techniques. In our research, the most used laboratory method was the LTA, which is considered the gold standard for testing platelet function. The relationship between polymorphisms and AR has been described by Yi et al. 
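The associations discussed below are reported as odds ratios (OR) with 95% confidence intervals. As a reminder of how such values arise from a case-control genotype table, here is a minimal Python sketch; the 2x2 counts are made up for illustration and do not correspond to any of the cited studies:

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and 95% CI for a 2x2 table:
       a = resistant with risk allele, b = resistant without,
       c = sensitive with risk allele, d = sensitive without."""
    or_ = (a * d) / (b * c)
    se_log_or = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = exp(log(or_) - z * se_log_or)
    upper = exp(log(or_) + z * se_log_or)
    return or_, (lower, upper)

# Made-up counts: 40/60 resistant and 25/75 sensitive patients carry the risk allele
print(odds_ratio_ci(40, 60, 25, 75))  # OR = 2.0 with a CI of roughly 1.09-3.66
```

An interval that excludes 1.0, as in this made-up example, is what the reviewed studies treat as a statistically relevant association.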
This study assessed the interaction with PTGS1 (rs1236913 and rs3842787), PTGS2 (rs689466 and rs20417), TXAS1 (rs194149, rs2267679, and rs41708), P2RY1 (rs701265, rs1439010, and rs1371097), P2RY12 (rs16863323 and rs9859538), and ITGB3 (rs2317676 and rs11871251) gene variants. In the laboratory analysis, only rs1371097 of the P2RY1 gene, comparison CC x TT + CT, obtained statistical relevance ( p = 0.01), even after adjusting for other covariates ( p = 0.002; OR = 2.35; 95%CI: 1.87–6.86). In addition, using the generalized multifactor dimensionality reduction (GMDR) method, the following 3 sets of gene-gene interactions were significantly associated with AR: rs20417CC/rs1371097TT/rs2317676GG ( p = 0.004; OR = 2.72; 95%CI: 1.18–6.86); rs20417CC/rs1371097TT/rs2317676GG/AG ( p = 0.034; OR = 1.91; 95%CI: 1.07–3.84); rs20417CC/rs1371097CT/rs2317676AG ( p = 0.0025; OR = 2.28; 95%CI: 1.13–5.33). These high-risk interactive genotypes were also associated with a bigger chance of early neurological deterioration ( p < 0.001; Hazard Ratio [HR] = 2.47; 95%CI: 1.42–7.84). Peng et al. (2016) also assessed genes related to thromboxane and others. The analyzed polymorphisms were ABCB1 (rs1045642), TBXA2R (rs1131882), PLA2G7 (rs1051931 and rs7756935) and PEAR1 (rs12041331–rs1256888). There was statistical significance for 3 of them: rs1045642 ( p = 0.021; OR = 0.421; 95%CI: 0.233–0.759), rs1131882 ( p = 0.028; OR = 2.712; 95%CI: 1.080–6.810) and rs1051931–rs7756935 ( p = 0.023; OR = 8.233; 95%CI: 1.590–42.638), while Wang Z. et al (2013) researched the association with TBXA2R (rs4523), ITGB3 (rs5918), P2RY1 (rs701265), and GP1BA (rs6065) polymorphisms. The only polymorphism significantly associated with AR was rs4523 ( p = 0.001; OR = 4.479; 95%CI = 1.811–11.077). Another study that assessed the TBXA2 and glycoprotein genes was done by Gao et al. GP1BA (rs6065), ITGB3 (rs5918), P2RY1 (rs701265), and TBXA2R (rs4523) genetic variations were researched, but only TBXA2R (rs4523) polymorphism was related ( p = 0.01). In addition, Patel et al. also studied the ITGA2B/ITGB3 polymorphisms. They analyzed the relationship with CYP2C19 (rs4244285) and ITGA2B /I TGB3 (rs5918) polymorphisms. However, no association was observed ( p = 0.171 and p = 0.960, respectively). Moreover, still in the scope of glycoprotein genes, Derle et al. conducted a study with 208 patients with vascular risk factors. ITGB3 (rs5918) polymorphism was screened, and the results showed that there was no significant difference in the presence of the C allele between the groups ( p = 0.277). In addition, in the relationship between the presence of the C allele and atherothrombotic stroke, no significant difference was found ( p = 0.184). A study by Wang B et al. also analyzed the rs5918 (PLA1/A2 ) polymorphism of the ITGB3 gene. All 214 patients in the aspirin sensitive group had the PLA1/A1 genotype and no patients with PLA2/A2 were found. However, of the 236 patients in the AR group, 12 had PLA1/A2 heterozygous genotype ( p = 0.002), finding a statistically significant differenc. In the study by Pamukcu et al., 13 polymorphisms of 10 different genes were tested, including ITGB3 . The genes F5 (rs6025, rs1800595), F2 (rs1799963), F13A1 (rs5985), FGB (rs1800790), SERPINE1 (rs1799889), ITGB3 (rs5918), MTHFR (rs1801133, rs1801131), ACE (rs1799752 - Ins/Del), APOB (rs5742904), and APOE (rs429358 - C112R and C158A) were evaluated. However, there was no significant result for any polymorphism (p > 0.05). 
Furthermore, in the case-control study by Voora et al., 11 polymorphisms of 11 different genes were assessed: GNB3 (rs5443), ITGA2 (rs1126643), ITGB3 (rs5918), GP6 (rs1613662), GP1BA (rs2243093), PEAR1 (rs2768759), VAV3 (rs6583047), F2R (rs168753), THBS1 (rs2228262), PTGS1 (rs3842787), and ADRA2A (rs1800544). When comparing the groups, no association was found ( p > 0.05). Another study that examined some of the same genes was conducted by Al-Azzam et al.: GP1BA (rs2243093), ITGA2 (rs1126643), and PTGS2 (rs20417). Of these, only the GP1BA (rs2243093) variant was associated ( p = 0.003), when analyzing the presence of the C allele. Additionally, Wang et al. (2017) conducted a study on the following polymorphisms: ITGA2 (rs1126643) and PTGS2 (rs20417). The authors found no association: p = 0.21 for rs1126643 and p = 0.69 for rs20417. Moreover, Yi et al. used Matrix-Assisted Laser Desorption/Ionization-Time Of Flight (MALDI-TOF) to link PTGS1 (rs1236913 and rs3842787) and PTGS2 (rs689466 and rs20417) with AR. The analysis showed no statistically significant relationship for the individual variants. Statistical significance appeared only when the gene-gene interaction (rs3842787 and rs20417) was evaluated: rs3842787/CT + rs20417/CC ( p = 0.016; OR = 2.36; 95%CI: 1.12–6.86), rs3842787/TT, CT + rs20417/CC ( p = 0.078; OR = 1.36; 95%CI: 0.82–2.01), and rs3842787/CT + rs20417/GC ( p = 0.034; OR = 1.78; 95%CI: 1.04–4.58). Note, however, that the second combination is not statistically significant, as its CI includes 1 and its p-value exceeds 0.05. Another study that investigated polymorphisms of the PTGS1 (rs1888943, rs1330344, rs3842787, rs5787, rs5789, rs5794) and PTGS2 (rs20417, rs5277) genes was conducted by Li et al.; in addition to these two genes, a genetic variant of the HO1 gene (rs2071746) was also tested. As a result, only two genetic variations were associated with AR. The rs2071746 polymorphism ( HO1 gene) had statistical significance for the TT genotype ( p = 0.04; OR = 1.40; 95%CI = 0.59–3.30) and the T allele ( p = 0.04; OR = 1.70; 95%CI = 1.02–2.79), while rs1330344 ( PTGS1 gene) had significant results only when G was the risk allele and analyzed separately ( p = 0.02; OR = 1.77; 95%CI = 1.07–2.92). Still on the PTGS1 gene, Fan et al. investigated several polymorphisms (rs1888943, rs1330344, rs3842787, rs5787, rs5789, and rs5794), but rs1330344 was the only one significantly related to AR ( p = 0.01; OR = 1.82; 95%CI = 1.13–2.92; allele value), and only in the LTA + TEG analysis. Moreover, another case-control study, by Chakroun et al., investigated the relationship between the rs3842787 polymorphism of the PTGS1 gene and AR. Patients carrying the allele showed no statistically significant difference using CEPI-CT ( p = 0.1) or uTxB2 ( p = 0.43). Sharma et al. evaluated 3 polymorphisms of 3 different genes, PTGS2 (rs20417), ALOX5AP (rs9315042), and ABCB1 (rs1045642), to assess their role in AR. The research was performed in 3 different studies, and all obtained statistical significance for the CC genotype of rs20417 ( p = 0.016; OR = 3.157; 95%CI: 1.241–8.033), the GC genotype of rs20417 ( p < 0.001; OR = 2.983; 95%CI: 1.884–4.723), and the rs9315042 variant ( p < 0.001; OR = 2.983; 95%CI: 1.884–4.723). 
For the variant rs1045642, 2 comparisons were made: one comparing cases and controls, for the TT x CC genotypes ( p < 0.001; OR = 2.27; 95%CI: 1.64–3.168) and for the TT x CT + CC genotypes ( p < 0.001; OR = 1.72; 95%CI: 1.335–2.239), and the other comparing aspirin resistant and aspirin sensitive participants ( p = 0.012; OR = 1.85; 95%CI: 1.142–3.017). Another study that tested the ALOX genes was done by Carroll et al. The study tested 4 genetic variants: rs434473 and rs1126667 of the ALOX12 gene, rs4792147 of the ALOX15B gene, and rs3892408 of the ALOX15 gene. Only the rs434473 polymorphism obtained a significant p -value ( p = 0.043). Furthermore, Yeo et al. analyzed variants of the PTGS1 (rs10306114, rs3842787, rs5788, and rs5789), ITGA2 (rs1126643, rs1062535, and rs1126643), ITGB3 (rs5918), GP6 (rs1613662), P2RY12 (rs1065776), and F13A1 (rs5985) genes, but only rs662 ( A576G ) of the PON1 gene was significantly associated with AR ( p = 0.005). Lastly, a study by Strisciuglio et al. included 450 noncarriers of the T2238C polymorphism (rs5065, NPPA gene) and 147 carriers. The authors concluded that there was no statistically significant difference between the groups, either in overall CAD patients ( p = 0.7) or in the diabetic group ( p = 0.6). As limitations of the present study, we highlight the nonuniform methodologies of the analyzed articles, as well as population differences; these divergences made it difficult to compare the results of the articles. Among the studies, the clinical conditions, the methods used to assess resistance, and the aspirin dosages differed considerably. A meta-analysis was not performed because of this high clinical and methodological heterogeneity of the findings. Despite the heterogeneity in methodology and results, it is clear that some polymorphisms have been studied more than others; rs1126643 ( ITGA2 ), rs3842787 ( PTGS1 ), rs20417 ( PTGS2 ), and rs5918 ( ITGB3 ) were the most frequently investigated. In conclusion, pharmacogenetics is an expanding area that promises therapy tailored to the individual patient (personalized medicine) for better control of diseases, including cardiovascular diseases such as stroke. Finally, further studies are needed to better understand the association between genetic variants and AR and, therefore, to enable the practical application of the findings.
Intrauterine Device Use: A New Frontier for Behavioral Neuroendocrinology
b47fb35c-c65b-47ea-ad87-c8b3790c0bf8
9352855
Physiology[mh]
There has been a recent uptick in the biopsychological study of hormonal contraceptives, partially reflecting women’s increased scientific participation and funding emphases . Indeed, hormonal contraceptives are not only important for their contraceptive and medical benefits, but also as a natural experiment for exogenous sex hormone influences on the brain, cognition, and behavior, which are severely under-studied domains of women’s health. Hormonal contraceptives come in many forms, with intrauterine devices (IUDs) being the most-used worldwide (159 million users; ). Oral contraceptives (OCs), however, are the most widely-studied form, likely owing to their prevalence in North America and Europe . Thus, there are perplexing knowledge gaps regarding neuroendocrine links to cognition and behavior in IUD users. This paper presents vital considerations for filling these gaps and illustratively showcases how multimodal study designs and person-specific methods have the potential to accurately reflect the heterogeneity present, but often erroneously ignored, among all women, particularly in relation to ovarian hormone influences (e.g., ). Most empirical research on hormonal contraceptives considers users to be homogenous, and thus, combines women using different forms (e.g., IUDs, OCs, and implants; – , ). This is problematic because hormonal contraceptives have varying exogenous hormone constituents and doses, and thus, have varying influences on endogenous hormone levels. For instance, combined OCs contain a synthetic estrogen (usually ethinyl estradiol) and a progestin varying in androgenicity, from anti-androgenic to highly androgenic. In many monophasic formulations, women receive stable doses of both hormones for 21 days followed by a placebo for 7 days (although schedules vary). In many triphasic formulations, women receive consistent doses of ethinyl estradiol for 21 days with progestin doses increasing slightly every 7 days for 3 weeks, followed by a placebo for 7 days. The pills alter endogenous ovarian hormone secretion through negative feedback mechanisms and prevent pregnancy by inhibiting ovulation. Most IUDs, however, release a relatively constant dose of the progestin levonorgestrel, which is moderately-to-highly androgenic, for up to three or five years . They prevent pregnancy by instigating local changes to reproductive biology (e.g., in tissue within the endometrial cavity), and their systemic impacts on endogenous ovarian hormone levels (especially because they do not contain estradiol), and on brain function and behavior, are unclear. Their reported side effects, however, include acne, headaches, and breast tenderness , and women using IUDs have shown risks for depression similar to OC users , suggesting that effects may be systemic. Thus, there is significant heterogeneity among hormonal contraceptives. This heterogeneity is exacerbated by the established heterogeneity in women’s neuroendocrine function, including in receptor sensitivity and in lifestyle factors that affect hormone function . It is, therefore, not surprising that research on the neural, cognitive, and behavioral consequences of hormonal contraceptive use offers only a few consistent results. One of them concerns depression, as noted above . Another concerns OCs and spatial skills. OC progestin androgenicity has been positively associated with three-dimensional (3D) mental rotations performance , which shows a large gender difference in which men, on average, outperform women (see ). 
There is also indication that OC ethinyl estradiol dose is inversely related to mental rotations performance . These findings broadly align with reviews and recent empirical work suggesting that high androgens (and perhaps progestogens) as well as low estradiol may facilitate mental rotations performance in women . There are, surprisingly and unfortunately, no studies that focus on mental rotations performance (or any aspect of cognition) in IUD users as a homogenous group; when they are studied, IUD users are combined with other hormonal contraceptive users, increasing heterogeneity and limiting inferences (e.g., , ). The neural substrates underlying hormonal contraceptives and mental rotations performance are also not well-understood (see ). Generally, functional magnetic resonance imaging (fMRI) studies show that mental rotation tasks engage occipital and parietal regions and some temporal and frontal regions, especially in the right hemisphere, and these regions are linked to gender differences in task performance . Men typically recruit visual and parietal regions more strongly than women, and women tend to engage frontal regions, such as the inferior frontal gyrus, more than men. These differences are thought to be related to gender differences in strategy use . They may also be linked to testosterone and progesterone, but especially to estrogen, as the hormones have been shown to modulate brain activity underlying spatial task performance across the natural menstrual cycle . It is necessary to emphasize, however, that this extant literature overwhelmingly relies on traditional neuroscience methods; studies come from a functional localization perspective and focus on task-related brain regions identified through cognitive subtraction . For instance, focus might be on parietal activation during mental rotations versus passive viewing, determined by averaging brain activity across trials and participants, often regardless of their hormone milieus. Although they have led to important findings, these methods can also result in null or inaccurate findings because the brain operates as a network (e.g., different parietal regions communicate with several different frontal regions during rotation; ), hormone milieus vary within and between individuals , and people are heterogeneous in their cognition and behavior . A person-specific neural network perspective could overcome these limitations. For instance, although the default mode network, which includes midline and lateral parietal regions as well as the medial prefrontal cortex , is more active during rest than tasks, it contributes to cognitive function and task performance . Women also appear to have greater connectivity (i.e., synchrony) of default mode regions during rest than do men . Interestingly, no work has examined the interplay between the default mode network and a set of regions constituting a putative mental rotations network, especially in relation to sex hormones. There is a pressing need for future research to examine the neuroendocrine underpinnings of links to behavior and cognition, such as mental rotations performance, in IUD users. In doing so, it is vital to conduct multimodal investigations that assess links among hormone levels (e.g., circulating, inferred from hormonal contraceptive dosing, or otherwise marked by hormone activity levels), brain function, and behavior (e.g., mental health reports or cognitive task performance), and to consider heterogeneity among women in those links. 
Multimodal Data Collection To illustrate the feasibility and utility of a multimodal person-specific approach, data from 11 IUD users is briefly presented ( M age =28.37, SD age =5.40; 55% White, 27% Asian, 18% Black; 73% non-Hispanic). Participants are from an ongoing fMRI study that was conducted with approval from the University of Michigan Institutional Review Board; all participants provided informed consent. All participants were using slow-release IUDs containing the androgenic progestin levonorgestrel (nine were using Mirena ® , one Kyleena ® , and one Skyla ® ). They had been using the IUDs for at least the past three months and had no reproductive health issues (e.g., polycystic ovary syndrome) or previous pregnancies. They were also not using medications containing sex hormones. Among other study procedures, participants completed a 60-minute online monitored survey and received a 60-minute MRI scan. The morning of the scan, they provided approximately 2mL of saliva, which was collected via passive drool within 30 minutes of waking. Saliva samples were assayed using high sensitivity estradiol, progesterone, and testosterone enzyme-linked immunosorbent assay kits according to manufacturer instructions by the Core Assay Facility at the University of Michigan. They were assayed in duplicate and averaged for analyses. See for details, including assay sensitivities and intra-assay coefficients of variation. The top third of shows means and standard deviations (in pg/mL) for all three hormones. These hormone levels do not appear to be suppressed, as are hormone levels in OC users (e.g., ); in fact, progesterone in IUD users may be elevated compared to both naturally cycling women and OC users (e.g., ). Thus, these data are consistent with insinuations that IUDs have systemic effects. During each scan, participants completed two unique runs of a slow event mental rotations task . Each run contained 16 trials during which participants determined whether a pair of 2D or 3D objects formed from small blocks were accurate rotations of each other. The 3D condition was based on the traditional Shepard and Metzler task , and the 2D condition controlled for basic visual processing, decision-making, and rotation. Task timing is shown in . Each run lasted 4 min 24s, and correct responses were recorded. Behavior is vital to the interpretation of brain function, and the middle third of shows that IUD users correctly identified whether the rotated 2D or 3D objects were the same in 75% of trials, on average. Neuroimaging data were acquired using a GE Discovery MR750 3.0 Tesla scanner with a standard coil (Milwaukee, WI). Structural data consisted of 208 slices from a T1 SPGR PROMO sequence (TI=1060ms, TE=Min Full, flip angle=8°, FOV=25.6 cm, slice thickness=1mm, 256x256 matrix, interleaved). Before the task, a fieldmap was acquired using a spin-echo EPI sequence (TR=7400ms, TE=80ms, FOV=22.0cm, 64x64 matrix, interleaved). Functional data consisted of 40 interleaved slices collected during an EPI sequence (TR=2000ms, TE=25 ms, flip angle=90°, FOV=22.0 cm, slice thickness=3mm, 64x64 matrix, 134 volumes). Standard preprocessing was conducted, as described in . Blood oxygen level-dependent (BOLD) time series were then extracted from ten regions of interest (ROIs) with 10mm diameters, four that constituted the default mode network (DMN) and six that constituted a putative mental rotations network (MRN; see for central coordinates), following past work . 
Individual differences in anatomical structure were addressed by intersecting ROIs with participants’ binarized grey matter masks (generated using FSL’s FAST; ). Time series from the two runs were concatenated after processing. Person-Specific Functional Connectivity Person-specific connectivity analyses were conducted on the mental rotations task-related fMRI data in order to reveal potential individual differences in the neuroendocrinology of IUD use. Specifically, the BOLD time series for each participant was submitted to group iterative multiple model estimation (GIMME), which has been validated in extensive largescale simulations (e.g., ). Details can be found in tutorials and empirical applications (e.g., , , ). Briefly, GIMME uses a data-driven approach based on Lagrange Multiplier tests to add directed contemporaneous (same-volume) or lagged (from one volume to the next) connections to participants’ null networks (with no connections). In this application, GIMME added group-level connections (reflecting systematic effects of IUDs) that were significant for at least 75% of the sample to the networks of all women in the sample, followed by individual-level connections (reflecting heterogeneity) for each woman until the model fit well according to standard indices. All connections (i.e., whether at the group- or individual-level) were fit uniquely to each woman’s data, and thus, have individualized weights. Each participant’s network was then characterized by its overall complexity (i.e., number of connections) as well as its subnetwork densities (i.e., number of network connections divided by complexity): within the MRN, within the DMN, and between the MRN and DMN. The 11 person-specific neural networks generated by GIMME fit the data well, as indicated by average fit indices: χ 2 (109.55)=554.17, p <.001, RMSEA=.121, SRMR=.036, CFI=.957, NNFI=.926. presents the network for one individual IUD user. Black nodes represent MRN ROIs, and blue nodes represent DMN ROIs. The network reflects homogeneity, as it prioritized contemporaneous (solid lines) and lagged (dashed lines) group-level connections consistent across all IUD users, which are shown as thick black lines. Notice that most group-level connections are between contralateral ROIs (e.g., left and right parietal, lateral parietal, and superior parietal) or ROIs in the same network; only a few are between ROIs in different networks (e.g., from the posterior cingulate cortex to the left inferior frontal gyrus). Heterogeneity was also reflected in the contemporaneous and lagged individual-level connections unique to this participant, which are shown as thin gray lines in . For this woman, complexity was 33, and the MRN and DMN densities were 33 and 15, respectively, with a 21 between-network density. The bottom third of shows average complexity and network densities across all IUD users, and also graphically shows the average densities. As expected for these task-related fMRI data, the density of connections within the MRN was greater than within the DMN or between the two networks. Finally, shows how neural network densities were related to multimodal study data, including in-scanner mental rotations task performance and endogenous hormone levels. 
Task performance was positively related to overall neural network complexity, especially to the density of the MRN, as well as to progesterone and even testosterone levels, which were correlated with each other, consistent with the androgenic pharmacokinetic properties of the progestin-based IUDs being used by this sample. The density of the DMN, however, was inversely correlated with all hormones. 
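To make the network summary metrics used above concrete, the following minimal R sketch shows how a person-specific network could be summarized once its connections are known; the ROI labels and the binary connection matrix are hypothetical stand-ins, not the GIMME output from this study, and densities are expressed here as fractions of the total number of connections.

# Hypothetical directed connection matrix: entry [i, j] = 1 when a
# contemporaneous or lagged path from ROI j to ROI i was estimated
rois <- c("lPar", "rPar", "lSPL", "rSPL", "lIFG", "rIFG",   # putative MRN
          "PCC", "mPFC", "lLatPar", "rLatPar")              # DMN
set.seed(1)
A <- matrix(rbinom(100, 1, 0.25), 10, 10, dimnames = list(rois, rois))
diag(A) <- 0                                  # ignore self-connections

mrn <- rois[1:6]
dmn <- rois[7:10]
complexity <- sum(A)                          # total number of connections

# Subnetwork densities: share of all connections within or between sets
mrn_density     <- sum(A[mrn, mrn]) / complexity
dmn_density     <- sum(A[dmn, dmn]) / complexity
between_density <- (sum(A[mrn, dmn]) + sum(A[dmn, mrn])) / complexity

round(c(complexity = complexity, MRN = mrn_density,
        DMN = dmn_density, between = between_density), 2)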
IUD users provide a novel and promising natural experiment for neuroendocrinological research and are prevalent worldwide , but they remain understudied. Research on IUD users–as an independent group not combined with other hormonal contraceptive users–is necessary and feasible. It is necessary because IUDs have functional properties that inherently differ from those of other hormonal contraceptives, such as OCs, which suppress endogenous hormone levels and inhibit ovulation . In fact, the illustrative data presented here indicate that circulating progesterone may be enhanced in IUD users. More work is sorely needed to determine the extent to which salivary assays of endogenous hormones reflect or are modulated by intrauterine administrations of synthetic hormones, and this work must consider different data collection methods (e.g., saliva versus serum) and analysis approaches (e.g., ELISA versus mass spectrometry; ). Moreover, research with IUD users is arguably more feasible than research on ovarian hormones via menstrual cycle phase comparisons in naturally cycling women or even via active versus placebo pill comparisons in OC users, as it does not require repeated assessments or phase monitoring, which is not only difficult, but often inaccurate . When studying the neural consequences of the interplay between exogenous and endogenous hormones in IUD users, behavioral assessments and heterogeneity are vital to consider. Regarding behavior, it is prudent to examine behaviors that have already been linked to hormonal contraceptives outside of the scanner, such as mental rotations performance, in order to reveal underlying neural mechanisms . Utilizing tasks that maximize power is also important for detecting robust and reliable effects . The mental rotations task used in this feasibility demonstration was statistically powerful because it contained 3D (experimental) and 2D (control) conditions instead of a control condition that did not require rotation (see ). Regarding neural heterogeneity, multivariate connectivity analyses that incorporate individual differences (see ), or better yet, person-specific effects, are well-suited to capturing multimodal associations in IUD users; in this way, GIMME has particular utility . As seen in the illustrative analysis within this paper, GIMME mapped connections among ROIs in the MRN and DMN in a data-driven way, such that only the most meaningful ROI connections were added to participants’ individualized networks. Specifically, if model parameters indicated that certain connections were statistically informative for most IUD users, then those connections were estimated uniquely in all women’s networks based on their own time series. Thus, GIMME provided group-level inferences without averaging! 
This has incredible utility for future studies of IUD users–and of other heterogenous samples–as human neuroendocrine processes are unique due to individual differences in biology (e.g., hormone receptor sensitivity; ), psychology (e.g., emotion; ), and context (e.g., modulation by stress; ). Averaging across these heterogeneous samples can falsely exaggerate findings, cancel out effects, or distort inferences . Person-specific networks, though time-intensive and complex, are more likely to accurately reflect neuroendocrine nuances. Conclusions The goal of this paper was to highlight the value of IUD users as a natural experiment for studying both exogenous and endogenous sex hormone links to gendered neurocognition (namely, mental rotations), by utilizing multimodal research designs and person-specific approaches to the analysis of fMRI data. Future investigations should focus on IUD users as an independent group; it may rarely be appropriate to combine IUD users with OC users to create a general “hormonal contraceptive” group. Future investigations should also triangulate hormonal, neural, and behavioral data, and analyze these data in ways that accurately reflect heterogeneity within IUD users, who have unique neuroendocrine milieus. Indeed, effects of IUD use are likely to be both systemic within women, and unique to individual women. This means that future investigations are important for both revealing ovarian hormone influences on the brain and behavior, and for advancing multimodal and person-specific methods within behavioral neuroendocrinology. The datasets presented in this article are not readily available because data use agreements need to be established. Requests to access the datasets should be directed to Adriene Beltz, [email protected] . The studies involving human participants were reviewed and approved by University of Michigan IRB (Health Sciences and Behavioral Sciences). The patients/participants provided their written informed consent to participate in this study. AB conceptualized and directed the study with critical input from KK and JJ; MD helped collect the data; AB and MD analyzed the data with critical input from NC; AB, MD and NC drafted the manuscript; all authors provided critical revisions and approved the final version. AB was supported by the Jacobs Foundation, MD and NC by National Institutes of Health (NIH) grant T32HD007109 (PIs: C. Monk, V. McLoyd, and S. Gelman), KK by NIH grant R01MH111715, and JJ by NIH grants P20RR015592 (PI: T. 
Curry) and K12DA014040 (PI: E. Wilson). The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
A probability model for estimating age in young individuals relative to key legal thresholds: 15, 18 or 21-year
b3097157-7329-421e-9043-fab2ff284103
11732925
Dentistry[mh]
There are many shortcomings in all medical age assessments that are being applied in different countries. No current method can determine an exact chronological age (CA) due to the individual variations in biological development. Still, there are practical needs to assess age in various legal contexts with minimal error rates. Age estimation is relevant for pre-trial detention and sentencing in criminal cases as well as part of the evaluation in asylum processes to protect the rights and privileges of minors. In its practical guide, the European Asylum Support Office (EASO) recommends using the least intrusive examination methods for medical age assessment, with radiation-free procedures argued to be preferable in children and young adults. The lack of validated or standardized methods has led countries within and outside the EU to choose various methods of medical age assessment . In addition, the mission differs slightly between countries in terms of the questions that are expected to be answered as well as which party carries out the task. In many nations, adopting a minimum age concept is a prevalent strategy aimed at minimizing the risk of misclassifying minors. However, this strategy overlooks the potential drawbacks of erroneously classifying adults as minors. Such consequences include misallocation of resources intended for minors to adults and hindrance to the proper administration of justice, as adults may escape prosecution in criminal cases. Probability methods provide a most likely age distribution based on a large reference population rather than an indeterminable CA. The overall approach to providing a probability of an individual being below or above a certain age includes, as a first step, examining the developmental stages of a selected skeletal component together with the wisdom tooth, and then comparing these to the age distribution of the reference population of the same sex and developmental stages. The probabilities are supplemented with the margin of error, represented by the minor portion of the reference population distribution in relation to the chosen age threshold. The order of magnitude of the margin of error reflects the certainty level of the assessment. Notably, there is a knowledge gap regarding how one can objectively use multiple anatomical locations and statistical models to estimate the age of an individual more accurately. Having validated models ensures fairness and accuracy as far as possible in legal proceedings. This study seeks to develop and present a validated statistical model for estimating an age relative to key legal thresholds (15, 18, and 21 years) based on skeletal (CT-clavicle, radiography-hand/wrist, MR-knee) and dental (radiography-third molar) developmental stages. Data included in the model A literature search was conducted to identify scientific studies investigating hand/wrist, third molar, distal femur or clavicle maturity in relation to age. After removal of duplicate articles and categorization based on title and abstract, full text articles were read and the following exclusion criteria were applied: 1) Imaging method other than radiography (hand/wrist, third molar), MRI (distal femur), CT (clavicle). 2) Incomplete data: the study does not present all the data needed to recreate individual-based data. 3) Different staging than Greulich & Pyle (hand/wrist), Demirjian (third molar), Krämer (distal femur), or Schmeling (clavicle). 
4) The study population does not include ages on both sides of the 15- and 18-year boundaries (distal femur only). 5) Other anatomical structures than the selected indicators. 6) Previously published results, e.g. analysis or review of previous data. 7) Post-mortem study population. 8) Full text not available in English, Swedish, Danish or Norwegian. 9) Study based on data that is not available. 10) Study population includes individuals with a disease that may affect skeletal maturity. 11) Study population has uneven age distribution according to Chi-square test (type 3 data only). All the hand/wrist studies investigated skeletal age based on radiographs where the developmental stages are classified according to Greulich & Pyle . Studies were identified through targeted searches on PubMed using the strategy (skeletal matur* OR ossifi* OR age estimat* OR forensic age OR age asses* OR age determin*) AND (radiography OR radiograph* OR x-ray OR ionizing) AND (Greulich OR Pyle) and Embase, which generated 727 studies. The data included in the model were obtained from 15 hand/wrist studies that met the criteria (Table ). All the dental studies related the development of the third molar in the lower jaw, imaged with plain radiographs and classified by Demirjian, to CA in the study populations. Dental studies were identified from the summaries previously made in BioAlder 1.3 . A total of 58 articles were identified, all of which were read in full text, and 10 studies met the criteria and were included in the model (Table ). The distal femur studies related the development of the upper knee joint (distal femur), examined by magnetic resonance imaging (MRI) with a field strength of at least 1.5T and T1 weighting, to CA after classification according to Krämer 2014 . Studies were identified from Heldring et al. 2022 , supplemented with articles from an internal literature monitoring procedure on distal femur studies. A total of 27 studies were identified and read in full text, and 4 of these met the criteria and were selected for inclusion (Table ). Original clavicle studies in which clavicle development according to Schmeling’s staging (1–5) was studied in relation to CA were identified. This was done by a literature search in PubMed using the string ((skeletal matur* OR ossifi* OR age estimat* OR forensic age OR age asses* OR age determin*) AND (clavicle OR medial epiphysis OR medial end OR medial clavicular epiphysis OR sternal epiphysis OR sternal end) AND (CT scan OR computed tomography OR CT OR scanner OR Schmeling’s method OR “chest radiographs” OR “forensic radiology”), which generated 296 articles; 5 clavicle studies met the criteria for inclusion (Table ). Data extraction and simulating population age distributions The method of data extraction is adapted to how the data is presented in each study. In order to fit the probabilistic model to the datasets, all data must include a list with known CA and corresponding developmental stage for each individual. Type 1 data provide CA together with the developmental stage for each individual, either in a table provided by the authors (type 1a) or extracted from a figure with PlotDigitizer (type 1b) , and hence can be included without recreation. However, datasets where both CA and corresponding developmental stage are not reported for each individual require recreation of individual-based datasets. 
Type 2 data are reported as the frequency of different stages within integer age intervals, either as counts or as fractions together with the total number of individuals for the different intervals. Individual-based data are recreated by calculating the number of individuals with a specific stage in each of the age cohorts, and CAs are assigned randomly within each age interval assuming a uniform distribution. If the minimum and maximum CA for a given developmental stage are provided in addition to the frequency data, the simulated uniform values are further limited to this specified interval. Type 3 data present the number of individuals at each stage, alongside essential statistical measures such as the min, max and lower, median and upper quartile of the CA within each stage (type 3b), or the mean and standard deviation for each stage (type 3a). In the case of type 3a data, a normal distribution is used to generate the individual ages; however, if an age range [ a, b ] is additionally specified for each specific stage by the study, a truncated normal distribution is fitted to the reported values. The truncnorm package (version 1.0–9) in R was used to perform this. For type 3b data, which reports the quantiles of the measured age distributions for each stage, a normal distribution of CA is assumed, for every stage s . A truncated normal is fitted through a numerical optimization process that minimizes the errors between the quantiles of the simulated truncated normal distribution and the quantiles reported in the study. In the full dataset, CA from type 3a and 3b datasets are therefore simulated with either a normal or truncated normal distribution using the estimated parameters as described above. Further details on this approach and the truncated normal can be found in Supplementary . Type 4 data report mean age, standard deviation, and Pearson’s correlation for an age-cohort of both the CA and skeletal age. To simulate populations, the process includes a two-step approach, as described in Bleka et al. . In short, the additional information provided by the Pearson’s correlation coefficient is incorporated by fitting a multivariate normal distribution to the data, including the conditional dependence between CA and stage. The resulting bivariate normal distribution is then used to recreate the CA and the stages for each individual in the study. All resulting statistics in this report are derived from 10,000 simulated populations, unless stated otherwise. 
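As a rough illustration of the recreation steps just described (a sketch, not the authors' code; the counts, quantiles, and age range below are invented), the following R snippet draws uniform ages within integer age cohorts for type 2 data and fits a truncated normal to reported quantiles for type 3b data using the truncnorm package.

library(truncnorm)

# Type 2: counts of one stage per integer age cohort -> uniform ages
cohorts <- data.frame(lower = 14:17, upper = 15:18, n = c(3, 12, 20, 9))
type2_ages <- unlist(mapply(runif, n = cohorts$n,
                            min = cohorts$lower, max = cohorts$upper))

# Type 3b: reported CA quantiles for one stage -> truncated normal fit
reported <- c(q25 = 16.1, q50 = 17.0, q75 = 18.2)   # invented quantiles
a <- 14; b <- 22                                     # reported age range
loss <- function(par) {                              # par = c(mean, log sd)
  fitted <- qtruncnorm(c(0.25, 0.50, 0.75), a, b,
                       mean = par[1], sd = exp(par[2]))
  sum((fitted - reported)^2)
}
fit <- optim(c(17, log(1.5)), loss)                  # quantile matching
type3_ages <- rtruncnorm(100, a, b, mean = fit$par[1], sd = exp(fit$par[2]))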
The probability model The first step in generating the probabilities is to estimate the distribution of developmental stages as a function of age, P(S = s | A), by fitting ordinal/logistic regression models to the datasets of each individual developmental indicator. In the second step, these results are used in Eq. (1),

$$P(A \mid S = s) = \frac{P(S = s \mid A)\,P(A)}{\int_a^b P(S = s \mid y)\,P(y)\,dy} \qquad (1)$$

to obtain the inverse probability of age given stage, P(A | S = s), for each indicator . As this equation only depends on P(S = s | A), assuming a uniform prior, we can find the normalizing factor in the denominator by requiring the total area of the probability density function (PDF) to be one. Finally, we end up with a probability density function P(A | S = s) for each stage or combination of stages s, which can be integrated to find the relevant statistics, such as the probability, for stage s, of being below or above a certain age threshold. This two-step approach, also using re-created population data, was taken to minimize the influence of age mimicry . The probability of being below the 15-, 18- or 21-year thresholds is calculated based on all 10,000 simulations with bootstrapping for each stage, and the 50th percentile is selected as the estimate. From the bootstrap sample, we also determine a 95% confidence interval for the calculated statistics based on the 2.5th and 97.5th percentile. In addition, the probability of the one-year age-cohorts within the assumed age distribution is computed by applying the 50th percentile value from all 10,000 simulated populations. Prior age distribution The selected uniform prior ensures that all information in the posterior distribution is derived from the data, as the purpose is to generate the conditional PDF without any subjective influence. This approach with a non-informative prior requires defined lower and upper limits of the uniform distribution, determined by the assumed age range within the model. Based on the endpoint of the second-to-last stage for hand/wrist, 20 years of age for females and 21 for males was chosen as the upper bound (Roberts et al., 2015) . In order to avoid an increased risk of type 1 errors (identifying children as adults) in the third molar model, the upper limit is set in accordance with Knell et al. (2009) and Olze et al. (2010) , at the age when 50% of the population reaches stage H (21 years for both genders), due to the wide distribution of the second-to-last stage G. The lower bound for both the hand/wrist and the third molar model is set to 7 years for both sexes. Data from clavicle studies typically span ages 10–35, and it is noted that stage 4 of the clavicle can still be detected among 35-year-olds for both genders. Similar to the third molar model, the upper limit for the clavicle model is set at the age when 50% of the population reaches the last development stage (stage 5). Hence, for the clavicle model, the assumed age range was 10–30 years for females and 10–32 years for males. For distal femur, we adopted an age range of 15–21 years, as proposed in Heldring et al. (2022) . 
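To make Eq. (1) concrete, the short R sketch below evaluates the posterior numerically for a single indicator; the logistic curve stands in for a fitted ordinal model and the numbers are purely illustrative, so this is not output from the actual model.

# Grid over the assumed age range (uniform prior => constant P(A))
ages <- seq(7, 21, by = 0.01)

# Illustrative stand-in for P(S = s | A): probability of having reached
# a mature stage s, rising with age
p_stage_given_age <- plogis(1.2 * (ages - 17))

# Eq. (1): renormalize so the posterior density integrates to one
posterior <- p_stage_given_age / sum(p_stage_given_age * 0.01)

# Probability of being below the 18-year threshold given this stage
p_below_18 <- sum(posterior[ages < 18] * 0.01)
round(p_below_18, 3)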
Additional assumptions when combining two indicators In order to obtain an estimate of CA when the stages of several different developmental indicators are combined, we assume that the stages are conditionally independent of each other. Previous probability models similar to this one assume conditional independence between skeletal development and third molar development, based on studies investigating hand/wrist and third molar development . A study comparing models that included or excluded a co-dependence between indicators on a combined dataset concluded that there was no statistically significant improvement in the accuracy of age estimation when including a conditional dependence between indicators . However, this assumption does not apply between skeletal indicators, rendering the calculation of probabilities for those combinations inaccurate. The probability of one skeletal indicator being in stage s_s and the third molar indicator being in stage s_t for a given age can then be expressed as

$$P(S_s = s_s, S_t = s_t \mid A) = P(S_s = s_s \mid A) \cdot P(S_t = s_t \mid A) \qquad (2)$$

assuming conditional independence between the indicators. To obtain the reverse conditional probability, the probability of age given stage (Eq. ) is applied analogously to the calculations in Eq. . For the combined clavicle and third molar model, the upper limit is set to 26.0 years, as the data is truncated at this age for the third molar model. The upper limit is set to 21 years for both females and males for the third molar and hand/wrist combination, as well as for the third molar and distal femur model. In addition, the dichotomous distal femur model in combination with the third molar is based on the age range 15–21 years and includes the relevant Demirjian stages D-H. Model selection Two candidate ordinal regression models, cumulative and continuation-ratio (CR), with either logit or probit linking functions and using either parallel or non-parallel odds ratios, were considered (Supplementary ). This is similar to models previously described in the BioAlder tool . The best model was selected based on the goodness-of-fit of the data for each indicator and gender combination. For each of the 10,000 populations, the Akaike information criterion (AIC) was computed for every model combination, and the final model was selected based on the lowest median AIC value. The choice of AIC was motivated by its ability to penalize the addition of extra parameters estimated in the ordinal model, thereby balancing model complexity. This process was carried out individually for each indicator and gender, yielding a total of 8 distinct models. Both the cumulative and the CR model are equivalent to a simple logistic regression model for indicators with only two separate stages, as in the distal femur model. The model was written in R (Version 4.3.1) . The ordinal/logistic regression models were fitted by applying the vglm function in the VGAM (Version 1.1–9) package . The different conditional PDFs were created by extracting the corresponding parameters from the ordinal/logistic models followed by applying Bayes’ theorem. To calculate the area under the curve of the conditional PDF for a given threshold or one-year cohorts, the integrate function was applied. The method for estimating the prediction intervals (PI) of the CA is described in the Supplementary . Collection of validation populations The access to independent datasets is mainly dependent on other researchers. In our initial search for studies to be included when building the model, we identified studies where data is presented in a format that was not suitable or had a high risk of age mimicry. We invited some of the authors of these studies, and of additional studies found in later searches, to share their primary data (CA, development stage and gender) to be used as independent validation populations (Table ). In addition, an independent study of clavicles with CT was performed. 
The study was retrospective in its design, with all cases extracted from Karolinska University Hospital, Stockholm, and approved by the Swedish Ethical Review Authority (Dnr 2024–00531-01). Individuals aged 17.0 to 25.0 years examined during routine clinical practice and with known CA and sex were selected. Scans with poor image quality and individuals with an injury or a skeletal disease that could affect clavicle development were excluded. Selected scans were subsequently assessed with regard to development stage, in agreement with the Schmeling staging system, on the most developed side, by one radiologist with 14 years of musculoskeletal (MSK) radiology experience and 8 years with a focus on pediatric MSK radiology. Validation of the statistical model with independent datasets We used the true development stages of the independent individual observations for the classification of whether they fall below or above the 15-, 18- or 21-year age threshold limits. This classification process involves selecting a cutoff point for the given probability: probabilities below the cutoff classify the individual as above the threshold, while probabilities above the cutoff classify the individual as below the age threshold. While a common method involves ROC curve analysis to determine an optimal cutoff point that maximizes sensitivity and specificity, the chosen cutoff point of 0.35 was based on being an acceptable error of the mean for a final evaluation. This strategy consequently minimizes type 1 errors (classifying underage individuals as overage) and, as a consequence, will classify more individuals who are over the age threshold as under it than the opposite. The individuals and proportions being correctly or incorrectly classified are visualized and presented in distribution plots, point plots, bar graphs and line graphs (Fig. , , , and , and Supplementary Fig. ). The distribution of the collected validation populations is visualized as an interpolated kernel density estimator (KDE) of the different study distributions and of all the studies combined (Supplementary Fig. (a-b)). The KDE is fitted with the geom_density function in the ggplot2 package . In order to calculate the minimum sample size required to estimate the precision of the models, the pmsampsize function from the pmsampsize package in R was used. The calculation of the minimal sample size needed for external validation of prediction models with a binary outcome (correct or incorrect classification) included a conservative outlook, with a c-statistic of 0.85 and a prevalence of 0.15, meaning that 15% misclassification of events was expected. This resulted in a validation sample size of 195 individuals for males and females, respectively. 
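The classification rule and error tally described above can be sketched in a few lines of R; the probabilities and true ages below are invented for illustration and the snippet is not part of the validation code.

# p_below: model probability of being below 18 years for each case
p_below  <- c(0.92, 0.30, 0.22, 0.05, 0.55)
true_age <- c(16.8, 17.6, 19.2, 23.0, 17.9)

classified_minor <- p_below >= 0.35      # cutoff favors classifying as minor
truly_minor      <- true_age < 18

type1_errors <- sum(truly_minor & !classified_minor)   # minors called adults
type2_errors <- sum(!truly_minor & classified_minor)   # adults called minors
c(type1 = type1_errors, type2 = type2_errors)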
Individual-based data are recreated by calculating the number of individuals with a specific stage in each of the age cohorts, and CAs are assigned randomly within each age interval assuming a uniform distribution. If the minimum and maximum CA for a given developmental stage are provided in addition to the frequency data, the simulated uniform values are further limited to this specified interval. Type 3 data present the number of individuals at each stage, alongside summary statistics: either the minimum, maximum and the lower, median and upper quartiles of the CA within each stage (type 3b), or the mean and standard deviation for each stage (type 3a). In the case of type 3a data, a normal distribution is used to generate the individual ages; however, if an age range [a, b] is additionally specified for each specific stage by the study, a truncated normal distribution is fitted to the reported values. The truncnorm package (version 1.0–9) in R was used to perform this. For type 3b data, which report the quantiles of the measured age distributions for each stage, a normal distribution of CA is assumed for every stage s. A truncated normal is fitted through a numerical optimization process that minimizes the errors between the quantiles of the simulated truncated normal distribution and the quantiles reported in the study. In the full dataset, CA from type 3a and 3b datasets are therefore simulated with either a normal or truncated normal distribution using the estimated parameters as described above. Further details on this approach and the truncated normal can be found in Supplementary . Type 4 data report the mean age, standard deviation, and Pearson’s correlation for an age cohort of both the CA and skeletal age. To simulate populations, the process includes a two-step approach, as described in Bleka et al. . In short, the additional information provided by the Pearson’s correlation coefficient is incorporated by fitting a multivariate normal distribution to the data, including the conditional dependence between CA and stage. The resulting bivariate normal distribution is then used to recreate the CA and the stages for each individual in the study. All resulting statistics in this report are derived from 10,000 simulated populations, unless stated otherwise. The first step in generating the probabilities is to obtain the probability of stage given age, P(S = s | A), by fitting ordinal/logistic regression models to the datasets of each individual developmental indicator. In the second step, these results are used in Eq. (1),
$$P(A \mid S=s)=\frac{P(S=s \mid A)\,P(A)}{\int_a^b P(S=s \mid y)\,P(y)\,dy} \qquad (1)$$
to obtain the inverse probability of age given stage, P(A | S = s), for each indicator . As this equation only depends on P(S = s | A), assuming a uniform prior, we can find the normalizing factor in the denominator by requiring the total area of the probability density function (PDF) to be one.
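As a concrete illustration of the data-recreation steps described above, the following R sketch simulates type 2 and type 3a records; the stage labels, counts, age ranges and distribution parameters are invented for illustration and are not taken from any of the included studies.

    # Type 2: expand reported stage counts within a one-year age cohort,
    # drawing CA uniformly within the cohort
    set.seed(1)
    type2_cohort <- data.frame(
      stage = rep("D", 12),                  # 12 individuals reported in stage D
      ca    = runif(12, min = 14, max = 15)  # CA drawn uniformly within the 14-15 year cohort
    )
    # Type 3a: simulate CA from a reported mean and SD, truncated to a reported age range [a, b]
    library(truncnorm)
    type3a_stage <- data.frame(
      stage = rep(4, 85),
      ca    = rtruncnorm(85, a = 16, b = 24, mean = 19.2, sd = 1.8)
    )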
Finally, we end up with a probability density function P(A | S = s) for each stage/combination of stages s, which can be integrated to find the relevant statistics, such as the probability, for stage s, of being below or above a certain age threshold. This two-step approach, together with the use of re-created population data, was taken to minimize the influence of age mimicry . The probability of being below the 15-, 18- or 21-year thresholds is calculated based on all 10,000 simulations with bootstrapping for each stage, and the 50th percentile is selected as the estimate. From the bootstrap sample, we also determine a 95% confidence interval for the calculated statistics based on the 2.5th and 97.5th percentiles. In addition, the probability of the one-year age cohorts within the assumed age distribution is computed by applying the 50th percentile value from all 10,000 simulated populations. The selected uniform prior ensures that all information in the posterior distribution is derived from the data, as the purpose is to generate the conditional PDF without any subjective influence. This approach with a non-informative prior requires defined lower and upper limits of the uniform distribution, determined by the assumed age range within the model. Based on the endpoint of the second-to-last stage for hand/wrist, 20 years of age for females and 21 for males was chosen as the upper bound (Roberts et al. 2015) . In order to avoid an increased risk of type 1 errors (identifying children as adults) in the third molar model, the upper limit is set in accordance with Knell et al. (2009) and Olze et al. (2010) , at the age when 50% of the population reaches stage H (21 years for both sexes), due to the wide distribution of the second-to-last stage G. The lower bound for both the hand/wrist and the third molar model is set to 7 years for both sexes. Data from clavicle studies typically span ages 10–35, and it is noted that stage 4 of the clavicle can still be detected among 35-year-olds in both sexes. Similar to the third molar model, the upper limit for the clavicle model is set at the age when 50% of the population reaches the last development stage (stage 5). Hence, the assumed age range for the clavicle model was set to 10–30 years for females and 10–32 years for males. For distal femur, we adopted an age range of 15–21, as proposed in Heldring et al. (2022) . In order to obtain an estimate of CA when the stages of several different developmental indicators are combined, we assume that the stages are conditionally independent from each other. Previous probability models similar to this one assume a conditional independence between skeletal development and third molar development based on studies investigating hand/wrist and third molar development . A study comparing models that included or excluded a co-dependence between indicators on a combined dataset concluded that there was no statistically significant improvement in the accuracy of age estimation when a conditional dependence between indicators was included . However, this assumption does not apply between skeletal indicators, rendering the calculation of probabilities for those combinations inaccurate. The probability of one skeletal indicator being in stage s_s and the third molar indicator being in stage s_t for a given age can be expressed as
$$P(S_s=s_s,\,S_t=s_t \mid A)=P(S_s=s_s \mid A)\cdot P(S_t=s_t \mid A) \qquad (2)$$
assuming conditional independence between the indicators. To obtain the reverse conditional probability, the probability of age given stage (Eq. ) is applied analogously to the calculations in Eq. . For the combined clavicle and third molar model, the upper limit is set to 26.0 years, as the data is truncated at this age for the third molar model. The upper limit is set to 21 years for both females and males for the third molar and hand/wrist combination, as well as for the third molar and distal femur model. In addition, the dichotomous distal femur model in combination with third molar is based on the age range 15–21 years and includes the relevant Demirjian stages D-H. Two candidate ordinal regression models, cumulative and continuation-ratio (CR), with either logit or probit link functions and using either parallel or non-parallel odds ratios, were considered (Supplementary ). This is similar to models previously described in the BioAlder tool . The best model was selected based on the goodness-of-fit of the data for each indicator and gender combination. For each of the 10,000 populations, the Akaike information criterion (AIC) was computed for every model combination, and the final model was selected based on the lowest median AIC value. The choice of AIC was motivated by its ability to penalize the addition of extra parameters estimated in the ordinal model, thereby balancing model complexity. This process was carried out individually for each indicator and gender, yielding a total of 8 distinct models. Both the cumulative and the CR model will be equivalent to a simple logistic regression model for indicators with only two separate stages, as in the distal femur model. The model was written in R (Version 4.3.1) . The ordinal/logistic regression models were fitted by applying the vglm function in the VGAM (Version 1.1–9) package . The different conditional PDFs were created by extracting the corresponding parameters from the ordinal/logistic models followed by applying Bayes’ theorem. To calculate the area under the curve of the conditional PDF for a given threshold or for one-year cohorts, the integrate function was applied. The method for estimating the prediction intervals (PI) of the CA is described in the Supplementary . The access to independent datasets is mainly dependent on other researchers. In our initial search for studies to be included when building the model, we identified studies where data is presented in a format that was not suitable or had a high risk of age mimicry. We invited some of the authors of these studies and additional studies found in later searches to share their primary data (CA, development stage and gender) to be used as independent validation populations (Table ). In addition, an independent study of clavicles with CT was performed, as described above.
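As an illustration of the ordinal model fitting and selection described above, the following R sketch fits the candidate models with vglm and compares them by AIC; the data frame dat, with an ordered factor stage and a numeric age column, is an assumed placeholder and not one of the actual datasets.

    library(VGAM)
    fits <- list(
      cumulative_logit = vglm(stage ~ age, family = cumulative(link = "logitlink", parallel = FALSE), data = dat),
      cratio_logit     = vglm(stage ~ age, family = cratio(link = "logitlink", parallel = FALSE), data = dat),
      cratio_probit    = vglm(stage ~ age, family = cratio(link = "probitlink", parallel = FALSE), data = dat)
    )
    sapply(fits, AIC)  # the lowest (median across simulated populations) AIC identifies the selected model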
Data included in the model Observations from approximately 27,000 individuals from 6 geographic regions are included in the model (Table and Supplementary Table ). Selected model We found that the continuation-ratio model with logit link function and a non-parallel slope coefficient provided the best fit for the clavicle and third molar models (both sexes). A continuation-ratio model with probit link function and a non-parallel slope coefficient fitted the data best for the hand/wrist model in both sexes. For distal femur, where only two stages are used (not closed vs. closed), logistic regression with a logit link function was the best fit for both sexes and was used in the final model. A graphic representation of how the fitted parametric regression model relates to the calculated semi-annual proportions of the underlying data (non-parametric), calculated as the fraction of individuals with a specific stage in the simulated datasets, is presented in Supplementary Fig. – .
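To illustrate how a fitted stage-given-age curve is turned into an age-given-stage distribution via Eq. (1), the following minimal R sketch uses an invented logistic curve in place of a fitted model, together with a uniform prior over an assumed age range; none of the numbers correspond to the actual models.

    # Illustrative stage-given-age probability (placeholder, not a fitted model)
    p_stage_given_age <- function(age) plogis((age - 17) / 1.2)
    lower <- 7; upper <- 21                                          # assumed age range (uniform prior)
    norm_const <- integrate(p_stage_given_age, lower, upper)$value
    posterior <- function(age) p_stage_given_age(age) / norm_const   # P(A | S = s)
    integrate(posterior, lower, 18)$value                            # probability of being below 18 years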
We refrained from log-transforming the CA variable to avoid increasing the complexity of the model, as the non-parallel fit already gives the estimated posterior distributions more flexibility, and because of the assumption of normal distributions among stages. This is in contrast to previous models, where a parallel slope coefficient for all models and a log-transformation were applied . We demonstrate that certain third molar stages, fitted with the KDE from one of the randomly generated populations compared with its fitted PDF, appear to be approximately normally distributed (Supplementary Fig. (c-n)) when the influence of age mimicry is low, i.e. where the chronological age of the data is approximately uniformly distributed (Supplementary Fig. (a-b)). Age prediction model The estimated 75% and 95% PIs of CA for the hand/wrist and third molar stages of development are shown in Supplementary Fig. , separately (a) and in combination (b), as the median from 10,000 simulated populations. The age distributions are wider when using a single indicator compared to combining the third molar with hand/wrist, indicating that multifactorial age estimations are more accurate than those based on a single anatomical site. This is also seen for the combination with the distal femur (Supplementary Fig. ) or clavicle (Supplementary Fig. ). The PDFs for hand/wrist, third molar, distal femur, and clavicle, assuming normally distributed ages for each indicator and stage, are shown for males (a-d) and females (e–h) in Fig. . The plots display one randomly selected distribution from the 10,000 generated populations for each stage. Combining indicators From the known probability of being in a stage given age, we derived the conditional PDF for age within this stage by using Bayes’ theorem (Eq. ). The assumption of conditional independence does not apply between skeletal indicators, rendering the three skeletal indicators inappropriate to combine. Hence, the current combinations are third molar with either one of the skeletal indicators. Age distributions for selected combinations are shown in Fig. for males (a and b) and females (c and d). The probability of age in relation to a certain threshold is represented by the part of a specific combination’s distribution being on either side of the age limit. The distribution as well as the probabilities are affected by the chosen upper age limit for each indicator. A sensitivity analysis was performed with several upper age limits (Table , hand/wrist and third molar; Supplementary Table , clavicle; and Supplementary Table , clavicle and third molar). We observe that the probabilities of being under 18 years of age are only minimally affected if the upper age limit is increased for the combination of hand/wrist and third molar (Table ). We also noted that the probabilities of being under the 21-year threshold for stage 4 or 5 in the clavicle model do not vary significantly when changing the upper boundary between 30 and 35 years (Supplementary Table ). This demonstrates that the chosen distribution predicts reliable probabilities. Validation with independent test populations To assess how well the model performs on independent data, a number of datasets for populations of known age have been collected and used for validation (Table ).
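As a minimal illustration of the combination of indicators described above (Eq. (2)), the following sketch multiplies the stage-given-age curves of the third molar and one skeletal indicator before the Bayesian inversion; both curves and all parameter values are invented placeholders, not fitted models.

    p_molar_given_age    <- function(age) plogis((age - 17.5) / 1.1)  # e.g. third molar in a late stage
    p_clavicle_given_age <- function(age) plogis((age - 21.5) / 1.6)  # e.g. clavicle in a late stage
    joint <- function(age) p_molar_given_age(age) * p_clavicle_given_age(age)
    lower <- 10; upper <- 26
    posterior <- function(age) joint(age) / integrate(joint, lower, upper)$value
    integrate(posterior, lower, 21)$value  # probability of being below the 21-year threshold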
Aside from the Swedish collection of a clavicle dataset that was collected specifically for the purpose of the validation of the model, the datasets are from published studies or collections, kindly provided by authors and researchers upon contact. Each indicator was validated separately, except the combination of third molar and hand/wrist where examination and developmental stage were studied in the same individual for one of the datasets . Validation of the third molar model The validation set for third molar included in total 1406 males (Fig. (a)) and 1578 females (Fig. (b)), spanning an age interval between 7–26 years (Table ) and originates from 4 separate datasets (Fig. ). In total, 93% of the male and 87% of the female populations were correctly classified regarding the 18-year threshold, corresponding to the separate model’s total accuracy (Table and Fig. (c-f)). In addition, the model accuracy with regard to the 15-year threshold is 90% for males and 87% for females (Table (a)). The sensitivity (adults identified as adults) of the male third molar model is 90% and specificity (children identified as children) is 95% for the 18-year threshold, while the positive predictive value (identified as adults that are adults) is 91% and the negative predictive value (identified as children that are children) is 94% (Table (a)). The corresponding sensitivity in the female third molar model is 75% and the specificity 94% (Table (a)). Not surprisingly, very early stages cause few errors in the assessments of both the 15-and the 18-year threshold (Fig. (c-f)). Most of the incorrectly classified individuals are in the development stages C-F for the 15-year threshold and D-H for the 18-year threshold in both males (c and e) and females (d and f). These individuals are fewer compared to correctly classified individuals (Fig. (g-h)), and represent both individuals with an age close to the limit and individuals with either early or late third molar development (Fig. (c-f)). The proportion of the independent population being under 15 (orange full line) or 18 (blue full line) years overlaps almost completely with the predicted probabilities (dashed lines) for the model (Fig. (g-h)), for both males (g) and females (h). This demonstrates a high reliability of the probability model. Validation of the hand/wrist model In total, 386 males (Fig. (a)) and 301 females (Fig. (b)), spanning an age interval between 7–26 years and originating from 3 separate datasets (Fig. (a-b)) are included in the independent validation set for hand/wrist. What distinguishes the hand/wrist model from the dental model is that it is suitable for assessing the 15-year threshold but is of limited use for the 18-year threshold as the last developmental stage begins before the age of 18 to a large extent (Fig. (a, e)). In total, 88% of the male and 91% of the female populations were correctly classified regarding the 15-year threshold (Table (b)). Similar to the third molar model, incorrectly classified individuals are not found in the early development stages but have reached skeletal age (SA) 13 up to 18 (Fig. c-f) in both males (c) and females (d). The incorrectly classified individuals are fewer compared to correctly classified (Fig. ) in both males (g) and females (h) except for SA 16 and 17 in females with regard to the 15-year threshold where it is equal (h). With regard to the 18-year threshold, the model has an acceptable precision when it comes to below 18 (Fig. 
(e–f)), while the development stages of hand/wrist do not seem to allow for accurate age estimations above 18 years of age. The proportion of individuals being under 15 (orange full line) or 18 (blue full line) in the independent validation population of the hand/wrist model basically follows the probabilities of being under 15 (orange dashed line) or 18 (blue dashed line) according to the model for males and females (Fig. (g-h)). However, the non-smoothness of the curves reflects the limited number of individuals in some of the SA development stages in the validation population. The sensitivity (aged over 15 identified as aged over 15) of the male hand/wrist model is 81% and the specificity (under 15 identified as under 15) is 92% for the 15-year threshold (Table (b)). The corresponding sensitivity of the female hand/wrist model is 89% and the specificity is 91% for the 15-year threshold (Table (b)). Keeping in mind that the proportion of individuals above 18 years of age in the independent population is limited (Fig. (c-f)), the total accuracy with regard to the 18-year threshold is 93% for the male model and 90% for the female model (Table (b)). Validation of the distal femur model The validation set of the distal femur model included in total 217 males (Fig. (a)) and 217 females (Fig. (b)), spanning an age interval between 12–23 years, and originates from one dataset (Fig. (a-b) and Table ). The distal femur model is based on dichotomous development where the Krämer stages 1–3 are defined as open and 4–5 are defined as closed , rendering the model useful exclusively for the 18-year threshold. In total, 88% of the independent male and 84% of the female population were correctly classified with regard to the 18-year threshold (Table (c)), corresponding to the accuracy. The incorrectly classified individuals are in the minority compared to correctly classified individuals (Fig. ) in both males (e) and females (f). In regard to the 18-year threshold, the model has an acceptable precision when it comes to men (Fig. (c) and (e)), while a closed distal femur in women generates a lower precision (Fig. (d) and (f)). The proportion of individuals being under 18 years of age (blue full line) in the independent population used for validation of the distal femur model basically follows the probabilities of being under 18 years of age (blue dashed line) according to the model (Fig. ) for males (e) and females (f). The sensitivity (adults identified as adults) of the male distal femur model is 82% and the specificity (children identified as children) 96% for the 18-year threshold (Table (c)). The corresponding sensitivity in the female distal femur model is 89% and the specificity 80% (Table (c)). Validation of the clavicle model The validation set of the clavicle model included in total 227 males (Fig. (a)) and 223 females (Fig. (b)), spanning an age interval between 14–30 years, and originates from two datasets (Fig. (a-b) and Table ). Being a skeletal indicator that still develops after 18 years of age renders the clavicle model particularly useful for the 21-year threshold. The validation has been performed for both the 18- and the 21-year threshold. In total, 77% of the male and 85% of the female validation population were correctly classified with regard to the 18-year threshold, and 75% of the males and 78% of the females with regard to the 21-year threshold (Table (d)), corresponding to the accuracy.
The sensitivity (above 21 identified as above 21) of the male clavicle model is 59% and the specificity (below 21 identified as below 21) is 96% for the 21-year threshold (Table (d)). The corresponding sensitivity in the female clavicle model is 64% and the specificity 95% (Table (d)). The incorrectly classified individuals with regard to the 21-year threshold are mainly individuals in development stage 3 (Fig. ) for both males (e and g) and females (f and h). For the 18-year threshold, the incorrectly classified individuals are mainly in development stage 2. The proportion of individuals being under 21 years of age (orange full line) in the independent population used for validation of the clavicle model basically follows the probabilities of being under 21 years of age (orange dashed line) according to the model (Fig. ) for males (g) and females (h), indicating a high reliability of the prediction model. In regard to the 18-year threshold, the validation (blue full line) deviates more from the probabilities according to the prediction model (dashed blue lines), indicating a lower precision compared to the 21-year threshold (orange) (Fig. (g and h)). Validating the model on a test set with both third molar and hand/wrist The precision of the age estimation increases when the results from multiple developmental indicators are combined, which corresponds to how the model is recommended to be used in practice. This means that the results from the separately validated single-indicator models underestimate the real precision achieved in practice. Here, we test our model against one dataset where both third molar and hand/wrist development have been examined in the same individuals, along with CA. The validation data included an independent population of in total 106 males and 116 females (Supplementary Fig. (a-b) and Table ), spanning an age interval between 8–16 years (Supplementary Fig. ). Classification with Demirjian’s method of the lower left third molar together with the Greulich & Pyle grading of the hand skeleton was applied to the individuals in this Lebanese population . The validation of the combined model is limited in that the validation population mostly includes individuals younger than 15 years. However, it is a valuable dataset in that it confirms the higher precision, as demonstrated by a tighter PI compared to single indicators (Supplementary Fig. ), and the high number of correctly classified individuals under 15, represented by a high specificity for both males (Supplementary Fig. (c) and Table (e)) and females (Supplementary Fig. (d) and Table (e)). In total, 96% of the independent male and 97% of the female populations were correctly classified with regard to the 15-year threshold, representing the accuracy (Supplementary Fig. (c-d) and Table (e)).
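The classification metrics reported throughout this validation follow from applying the 0.35 cutoff described in the Methods to the model probabilities and tabulating the calls against the known CA; the following minimal R sketch uses invented example vectors rather than the actual validation data.

    # p_below: model probability of being below the threshold; true_age: known CA (invented examples)
    p_below  <- c(0.92, 0.40, 0.10, 0.28, 0.75)
    true_age <- c(14.2, 17.6, 20.1, 18.4, 16.9)
    threshold <- 18; cutoff <- 0.35
    predicted_below <- p_below >= cutoff   # a cutoff below 0.5 favours "below threshold" calls
    actual_below    <- true_age < threshold
    sensitivity <- sum(!predicted_below & !actual_below) / sum(!actual_below)  # adults identified as adults
    specificity <- sum(predicted_below & actual_below) / sum(actual_below)     # children identified as children
    accuracy    <- mean(predicted_below == actual_below)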
Reliable methods for age estimation in living individuals are of major importance in legal contexts when birth records or other official identification documents are missing. The main aim of this study is to generate and present a validated statistical model for estimating age in living individuals relative to the 15-, 18- or 21-year thresholds. To our knowledge, this is the first model that includes several skeletal indicators combined with third molar development, provides assessments for several age thresholds, and has been validated with independent datasets. It could be argued that our model addresses the knowledge gap concerning the objective utilization of multiple anatomical locations and statistical models to enhance the accuracy of estimating an individual’s age.
The spectrum of methods recommended by the Study Group on Forensic Age Diagnostics in Münster include radiography examination of the hand/wrist and third molars as well as CT clavicle, which may also be supplemented with MRI of distal femur in the future . However, their recommended approach is to add CT clavicle if hand/wrist is fully developed and to use these examinations in a minimal age concept rather than a probability approach. Their recommended methods also include a physical examination and recording of sexual maturity , even though the latter is noticed to be against the EASO recommended guidelines . In the statistical model investigated here, radiography of third molar is combined with either radiography hand/wrist, CT clavicle or MRI distal femur depending on the age threshold of interest. The estimation of age from dental radiographs is one of the most studied and widely used approaches, and the Demirjian staging technique is the most widely used staging method in studies focusing on age estimation . Demirjian’s staging of the wisdom tooth is well suited to assess both the 15- and 18-year threshold (Fig. (b and f). Due to a chosen upper age limit at 21 years for the third molar model, it is not suited to assess the 21-year threshold as a single indicator. However, in combination with the clavicle, a slightly older assumed age distribution has been included in the model that renders it suitable (Fig. ). The higher age as a chosen upper age limit of the third molar in this combination is motivated by the fact that the PI in the combined model is tighter than the clavicle model alone (Supplementary Fig. ). Radiography of the hand/wrist is internationally the most widely applied method to assess skeletal development . The development stages of hand/wrist are suitable for assessing the 15-year threshold in males and females and possibly the 18-year threshold in males, based on the development stage distributions (Fig. (a and e)). The dichotomous distal femur model is suitable for the 18-year threshold in males while an open development stage can be used in women to indicate minority status (Fig. (c and g)). The medial clavicle epiphysis is considered useful for the 21-year threshold due to a continued development until around age 30 (Fig. (d and h)). To create reliable and detailed assessment models, a much larger data set than typically found in a single study is required. The underlying reference population needs to cover all relevant age cohorts that also allow a Bayesian approach to minimize the effect of age mimicry from the underlying studies . Several probability methods have previously been presented in the literature . All these methods have the advantage of relying on larger reference populations when providing age distributions, unlike other assessment approaches that compare with only one limited study population . None of the models will provide a definite age for an individual but in the case of the probability methods, either an age span or a probability of an age in relation to a threshold will be provided, together with an error rate. These probabilities are the base to form the medical component for the overall assessment of an individual’s age. It has been argued that population-specific reference data is needed in age assessments. According to current scientific understanding, the ethnicity or genetic-geographic origin of an individual may not significantly impact the dental- or skeletal maturity . It is noted that a study by Olze et al. 
as well as a review on dental age estimation cautions against possible differences in dental aging between populations and ethnicities. However, as pointed out before and shown in Rolseth et al. , studies might be subject to age mimicry, meaning that the observed difference between populations is likely to reflect differences in the underlying age distributions of the study population rather than inherent differences in development. Factors such as stress or living standard have been suggested to influence skeletal development . Consequently, individuals from lower socioeconomic backgrounds undergoing medical age assessments may face the risk of being estimated as younger than their CA. In line with the approach of the BioAlder tool , we have opted to incorporate a broad spectrum of individuals from chosen studies into the reference population. This decision aims to encompass the widest possible range of biological variations in age-dependent development, striving for thorough coverage. The single studies covering a single geographic region, socio-economic or other possible influencing factors are argued too small to provide reliable reference populations on their own. The total number of individuals included in the model is high (27,000), but is unequally distributed between the included indicators. The number of studies (34) is limited by covering 6 geographic regions and the main limitation factor is the availability of studies focusing on age in relation to development and fulfilling the pre-set criteria. Similar to the previous statistical models , the results in this model are dependent on the assumptions for the underlying age distributions, conditional independence and simulations as well as study selection. Given the inevitable diversity in underlying studies and limited ethnic representation, a key concern that arises when developing a prediction tool is: how accurately does the tool perform for the individuals we intend to predict? The availability of independent complete data sets is scarce, yet essential to perform a validation of the model compared to real world data. The validation of this model with collected independent populations indicates a high accuracy and precision for all indicators, particularly for the third molar model and the distal femur. When combining dental and skeletal indicators, only a few individuals were wrongly classified with regard to the 15-year threshold in the validation of the combined third molar and hand/wrist model. Considering that the age span in this validation set is limited to a population almost exclusively under 15-years of age, it is possible to establish an adequate level of precision for these individuals, but not for individuals over 15. It has been concluded that a multifactorial age estimation is more accurate than one based on a single anatomical site . Multifactorial age estimation is also recommended by the Münster-based AGFAD study group . An important consideration of multifactorial age estimation is the risk of increased ionizing radiation to a young individual which is against the EASO guidelines and ALARA (as low as reasonably achievable) principle. However, the availability of datasets containing concurrent grading of third molars with a skeletal indicator in the same individuals is limited, and efforts to simultaneously measure multiple developmental indicators would allow for more robust estimations of model accuracy. 
The validation with the independent populations has pinpointed and confirmed the predicted development stages that are associated with the highest uncertainties. For instance, 30–40% of the individuals in third molar development stage D in both males and females are wrongly classified with regard to the 15-year threshold (Fig. (g-h)), and this uncertainty agrees with the prediction provided by the model, that these individuals are below 15, with a margin of error of 30% and 35% for males and females, respectively. When applying the model to individuals of unknown age, the degree of certainty in the statement needs to reflect the estimated age distribution and the probability of being below or above the age limit, together with this margin of error, which corresponds to the proportion of the reference population on the other side of the limit. The presented validation allows reliable assessments, together with margins of error, to be provided. To facilitate medical age assessments in routine practice using this complex statistical model, a user-friendly tool is advisable. Such a dashboard has been developed to streamline these assessments by forensic pathologists in Sweden. Dropdown menus allow the assessor to populate the model with the current combination of examinations performed, together with gender and development stages. The corresponding distribution of the reference population is then displayed together with the 95% PI, the probability for the three age thresholds and the probabilities in one-year cohorts. This tool provides the probabilities and the corresponding margin of error. Artificial intelligence (AI) approaches are a promising tool for faster and more accurate radiological age assessments . Methods using AI necessitate a substantial volume of data for construction and are not exempt from the conventional questions inherent in age assessments, such as biologic variation, the socioeconomic dimension or other factors influencing development. An AI tool, based on third molar development in a Brazilian population, presents a binary assessment with high accuracy of being above or below a specific age threshold . In addition, a high-accuracy AI model for age classification with regard to the 18-, 20-, 21- and 22-year thresholds based on clavicle development was recently presented in a Chinese study . Notably, a common feature of these methods is that they achieve a high level of accuracy. Even though additional studies are required, deep learning approaches remain a promising vision for the future following validation on a broader scale. The complex relationship between skeletal or dental development and CA presents an unavoidable barrier to achieving perfect accuracy in age assessment methods . Even though our approach has been to include a broad spectrum of studies performed in different countries and geographic regions in the reference population, the ethnic and socio-economic variation is still limited. The retrospective nature of data collection and the fact that studies are conducted with slightly different protocols and/or data reporting may introduce variations. The evaluation of the accuracy and precision of the probability model is limited by the access to independent validation populations where multiple indicators have been measured. Although one of the models is based on magnetic resonance imaging, which does not involve ionizing radiation, the tool as a whole is not entirely devoid of potentially harmful ionizing radiation.
In summary, our study presents a validated statistical model for estimating age relative to key legal thresholds (15, 18, and 21 years) based on skeletal (CT clavicle, radiography hand/wrist or MR knee) and dental (radiography third molar) developmental stages, allowing reliable assessments with margins of error to be provided. This probability model provides a most likely age distribution based on a large reference population rather than an indeterminable CA. The model-generated probabilities form the medical component of the overall assessment of an individual’s age. While statistical models are by nature complex, the creation of a dashboard may facilitate and streamline individual assessments in routine practice. Although AI approaches are in development, providing a validated probability method addresses a knowledge gap and is of high interest, as no currently available method can provide a reliable CA. Below is the link to the electronic supplementary material. Supplementary file1 (DOCX 484 KB) Supplementary file2 (DOCX 325 KB) Supplementary file3 (DOCX 324 KB) Supplementary file4 (DOCX 464 KB) Supplementary file5 (DOCX 459 KB) Supplementary file6 (DOCX 254 KB) Supplementary file7 (DOCX 255 KB) Supplementary file8 (DOCX 106 KB) Supplementary file9 (DOCX 270 KB) Supplementary file10 (DOCX 234 KB) Supplementary file11 (DOCX 222 KB) Supplementary file12 (DOCX 383 KB) Supplementary file13 (DOCX 19 KB) Supplementary file14 (DOCX 19 KB) Supplementary file15 (DOCX 22 KB) Supplementary file16 (DOCX 28 KB)
Osteoporosis: Molecular Pathology, Diagnostics, and Therapeutics
4607a7b0-690b-40b1-907f-b6fc9fff7dc3
10572718
Pathology[mh]
Osteoporosis is a very common condition affecting over 14 million people in the United States (US) and over 200 million people globally . It is estimated that one in three women and one in five men aged 50 or over will suffer osteoporosis-related fragility fractures . This heavy disease burden translates to staggering economic costs. The current annual economic burden due to osteoporosis is USD 6.5 trillion between the US, Canada, and Europe alone—and this figure is rapidly growing . The annual US economic burden is projected to climb to USD 25.3 billion by 2025 . Osteoporosis is a multifactorial condition characterized by changes in bone homeostasis, which result in reduced bone mass, impaired bone quality, and an increased propensity for fractures . Hormones, cytokines, and growth factors regulate bone homeostasis both directly and indirectly. Peak bone mass is said to be achieved when all these factors are working effectively in conjunction with one another. Thus, it is thought that imbalances in these molecular and cellular processes alter bone homeostasis, driving the pathophysiology of osteoporosis . Other factors such as race, gender, behavior, and diet can also have influences on bone mass and the tendency to develop osteoporosis. As medical advancements continue to lengthen life expectancy, osteoporosis has emerged as a major public health concern. Thus, understanding its cellular and molecular pathophysiology is crucial for appropriate diagnosis and management. In this paper, the (1) normal cellular and molecular mechanisms of bone homeostasis are discussed, followed by a discussion of the disease state that is osteoporosis, outlining the (2) proposed pathophysiologic mechanisms, (3) diagnostic tools, and (4) current treatment algorithms of this prevalent condition. The goal is to provide a structured up-to-date review on the current understanding of osteoporosis. 2.1. Structural and Cellular Components of Bone Bone is a dynamic, mineralized, multifunctional connective tissue with organic and inorganic components. The organic component of bone is commonly referred to as the “osteoid” and is composed of both collagenous (mainly collagen type I) and non-collagenous (glycosaminoglycans and glycoproteins) proteins . Each of these components—together with hormones, cytokines, and the cellular components of bone—regulate bone metabolism, deposition, mineralization, and turnover. The inorganic component of bone consists mainly of calcium and phosphorus hydroxyapatite crystals, which provide chemical rigidity and structure to the bone and account for 50–70% of bone mass . The remaining bone volume can be attributed to its cellular components, chiefly composed of osteocytes, osteoblasts, and osteoclasts. Each of these cell types plays a dynamic role in the creation and maintenance of bone integrity. Osteocytes are found in the lacunae of the matrix and have a mechano-sensory function, maintaining homeostasis through the transmission of mechanical forces into chemical signaling pathways using various signaling molecules and proteins . Osteoblasts are derived from undifferentiated mesenchymal cells and primarily function in bone formation, growth, and maintenance . These multinucleated giant cells are synchronized by the chemical signaling pathways regulated by osteocytes . Finally, osteoclasts are multinucleated giant cells primarily responsible for bone resorption. They are produced by the fusion of hematopoietic stem cells derived from monocytic precursors. 
They function primarily to resorb bone, preparing the osteoid matrix for bone formation . The cellular, organic, and inorganic components of bone are arranged in specific microstructural units, termed osteons, which are composed of a harversian canal, lamellae, lacunae, and canaliculi, all arranged in a concentric pattern . Macroscopically, bone is further organized into distinct structures, giving rise to two major types of bone within the adult skeleton: cortical and trabecular bone. Cortical bone comprises approximately 80% of the adult bone mass, whereas trabecular bone makes up the remaining 20%. Cortical bone is dense and has a relatively low turnover rate of 3%. It functions mainly to maintain the multiaxial strength and integrity of the bone. In contrast, trabecular bone is highly porous, with a relatively high turnover rate of 26%. It is more metabolically active than cortical bone. 2.2. Bone Homeostasis The human skeletal system is a specialized, dynamic organ that requires continuous remodeling to maintain its structural and mechanical integrity. Bone remodeling begins in early fetal life and constantly functions thereon to strengthen and replenish the skeletal system as it sustains physical loads. Remodeling is a complex yet coordinated process that involves the aforementioned cell types, and the organic and inorganic components of bone . Additionally, there are various proteins and signaling molecules that are also involved and act to further regulate bone homeostasis . Impairment in this process may lead to mechanical and structural bony pathologies, including osteoporosis . Bone remodeling can be separated into five phases: (1) activation; (2) resorption; (3) reversal; (4) formation; and (5) termination. A summary of the most important aspects of each of these phases is detailed in below. In the (1) activation phase, bone remodeling is initiated by local mechanical or systemic hormonal signals. During this phase, local (TGF-β, macrophage colony-stimulating factor (M-CSF), and receptor activator of NF-κB ligand (RANKL)) and systemic (vitamin D, calcium, parathyroid hormones (PTHs), estrogen, androgen, and glucocorticoids) regulators and transcription factors promote resorptive osteoclastogenesis. RANKL interacts with the RANK receptor (forming the RANKL-RANK complex) on osteoclast precursors, potently inducing differentiation into multinucleated osteoclasts. The osteoblast expression of M-CSF also promotes osteoclast survival and maturation. Additionally, during this stage, osteoblasts release chemokines to recruit osteoclast precursors and matrix metalloproteinases (MMPs) to further prepare the bone surface for remodeling . During the (2) resorption phase, mature osteoclasts secrete MMPs to digest both mineral and organic bone matrices. This process involves the creation of Howship’s resorption lacunae, which are small spaces or pits in the bone. These lacunae are covered by canopy cells—flattened cells covering the surface of the bone. The size and shape of the lacunae are indicative of the activity of osteoclasts and the degree of bone resorption. Osteoprotegerin (OPG) can block RANK-RANKL complex formation and reduce resorption by inhibiting osteoclast differentiation and promoting apoptosis . The (3) reversal phase is responsible for the crucial coupling of osteoclastic and osteoblastic activity at the site of remodeling. It begins with the apoptosis of mature osteoclasts. Osteoblasts are then directed to the resorption site in preparation for bone formation. 
Local molecules such as TGF-β play a pivotal role in attracting and preparing osteoblasts to initiate bone formation . In the (4) formation phase, local and systemic regulators, such as Wnt, sclerostin, and PTH, induce osteoblastogenesis in bone. During this phase, osteoblasts deposit unmineralized osteoid until the area of previously resorbed bone is replaced. Bone formation is then completed as osteoid is gradually mineralized through the incorporation of hydroxyapatite. The balance between sclerostin, Wnt, and PTH is essential in bone formation. At rest, osteocytes express sclerostin, which prevents Wnt signaling (an inducer of bone formation) in osteoblasts. However, during bone formation, sclerostin expression is inhibited by PTH or mechanical stress, which allows for Wnt-induced bone formation to progress . In the (5) termination phase, the rates of bone formation and bone resorption equilibrate, and the remodeling cycle is terminated. The process of termination is completed through a series of yet undetermined termination signals. Bone mineralization also continues during this phase . 2.3. Molecular and Local Regulation Bone remodeling is governed by both hormonal/chemical and mechanical signals. Systemic regulators of bone include estrogen, growth hormone, thyroid hormones, glucocorticoids, and androgens. Thyroid hormones are essential for normal musculoskeletal development, maturation, metabolism, structure, and strength, as they promote bone turnover by influencing osteoblast and osteoclast activities. Glucocorticoids prolong osteoclast survival and reduce bone formation by increasing osteoblast apoptosis. At high doses, PTH increases bone resorption indirectly by promoting RANKL/M-CSF expression and inhibiting OPG expression . At lower doses, PTH induces bone formation by promoting an increased survival, proliferation, and differentiation of osteoblasts. Other systemic regulators include vitamin D3, calcitonin, insulin-like growth factor (IGF), prostaglandins, and bone morphogenetic proteins . Local regulators of bone remodeling include cytokines, growth factors, sirtuins, protein kinases such as the mechanistic target of rapamycin (mTOR), forkhead proteins, M-CSF, Wnt, sclerostin, and the RANK/RANKL/OPG system. Each of these signaling molecules plays a different role in the phases of bone remodeling . Sirtuins inhibit sclerostin activity to promote Wnt signaling and bone formation. The increased activity of mTOR translates to increased osteoclastic activity and the release of cathepsin K. The microenvironment within bone is such that all these systemic and local regulators are delicately balanced and tightly regulated. Altered intracellular signaling milieus lead to pathological outcomes .
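Because the net outcome of each remodeling cycle depends on the balance between these formation- and resorption-promoting signals, a toy calculation can make the coupling concept concrete. The sketch below is purely illustrative, assuming arbitrary rate constants and a simple linear dependence of resorption on the RANKL/OPG ratio; it is not a validated biological model, but it shows how even a modest, sustained shift in that ratio produces progressive net bone loss.

```python
# Illustrative toy model of remodeling balance (not a validated biological model).
# Assumption: per-cycle resorption scales linearly with the RANKL/OPG ratio,
# while formation is held constant; all rate constants are arbitrary.

def simulate_bone_mass(cycles: int, rankl_opg_ratio: float,
                       formation_rate: float = 1.0,
                       resorption_coeff: float = 1.0,
                       initial_mass: float = 100.0) -> float:
    """Return relative bone mass after a number of remodeling cycles."""
    mass = initial_mass
    for _ in range(cycles):
        resorption = resorption_coeff * rankl_opg_ratio  # osteoclast-driven loss
        formation = formation_rate                        # osteoblast-driven gain
        mass += formation - resorption                    # net change per cycle
    return mass

# Balanced remodeling (ratio = 1.0): bone mass is maintained.
print(simulate_bone_mass(cycles=50, rankl_opg_ratio=1.0))   # 100.0
# A modest, sustained shift toward RANKL (ratio = 1.2): progressive net loss.
print(simulate_bone_mass(cycles=50, rankl_opg_ratio=1.2))   # 90.0
```

Under these assumed rates, a balanced ratio leaves bone mass unchanged after 50 cycles, whereas a sustained 20% excess of resorptive signaling removes roughly 10% of the starting mass over the same interval.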
Osteoporosis is a disorder characterized by decreased bone mass, density, quality, and strength. It is caused by imbalances in the process of bone remodeling that favor mesenchymal stem cell (MSC) senescence and a shift in differentiation potential toward adipogenesis over osteogenesis. In this pathological state, bone loses its structural integrity and becomes more susceptible to fractures . This imbalance is primarily linked to variations in the activity levels of osteoclasts and osteoblasts. Osteoporosis can be classified into two major groups: primary and secondary osteoporosis. Primary osteoporosis includes conditions for which there is no underlying medical etiology. These include idiopathic and involutional osteoporosis. Idiopathic osteoporosis occurs mostly in children and young adults and continues to have no known etiopathogenesis . Involutional osteoporosis affects both men and women and is known to be closely related to aging and hormonal imbalances. Involutional osteoporosis can be further classified into Type I and Type II. Type I involutional osteoporosis mostly affects postmenopausal women and is often referred to as “postmenopausal osteoporosis”. This condition affects women between 51 and 71 years of age and is characterized by rapid bone loss . Type II involutional osteoporosis—often referred to as “senile osteoporosis”—mostly affects those above 75 years of age. This condition is characterized by a primarily trabecular and cortical pattern of bone loss . Secondary osteoporosis occurs due to an underlying disease or medication use, and it accounts for less than 5% of all cases of osteoporosis . Traditional pathophysiological models of osteoporosis have emphasized the endocrine etiology of the condition. Estrogen deficiency and the resultant secondary hyperparathyroidism, as described by these models, coupled with an inadequate vitamin D and calcium intake, have been touted as the key determinants in the development of osteoporosis . The postmenopausal cessation of ovarian function and subsequent decreases in estrogen levels have been known for decades to be key events in the acceleration of bone loss. The effects of estrogen loss are mediated by the direct modulation of osteogenic cellular lineages via the estrogen receptors on these cells. Specifically, decreased estrogen simultaneously increases osteoclast activity and decreases osteoblast activity, producing a metabolic imbalance that favors bone resorption. Similarly, it is known that nutritional imbalances, specifically in vitamin D and calcium, can also promote bone resorption . However, emerging research on bone homeostasis suggests that the pathophysiological mechanisms of osteoporosis extend beyond this unilateral endocrine model . Rather, more dynamic models are being explored as the pathophysiological drivers behind the disease. An important discussion point is the use of animal models for osteoporosis. Because of the similarity in pathophysiologic responses between the human and rat skeleton, the rat is a valuable model for osteoporosis. Rats are safe to handle, accessible to experimental centers, and have low costs of acquisition and maintenance . Through hormonal interventions (such as ovariectomy, orchidectomy, hypophysectomy, and parathyroidectomy), immobilization, and dietary manipulations, the laboratory rat has aided the development and understanding of the pathophysiology of osteoporosis . 3.1.
Osteoimmunological Model The osteoimmunological model is a relatively novel one that centers on the interplay between the immune system and the skeletal system . It has become increasingly clear that the immune and skeletal systems share multiple overlapping transcription factors, signaling factors, cytokines, and chemokines . Osteoclasts were the first cells in the skeletal system discovered to serve immune functions . Some of the first insights into osteoimmunological crosstalk were gained by Horton et al. (1972), who explored the interactions between immune cells and osteoclasts leading to musculoskeletal inflammatory diseases . The authors found that, in the pathophysiology of rheumatoid arthritis, the stimulation of bone resorption by osteoclasts is exclusively mediated by Th17 cells, which produce IL-17 to stimulate RANKL expression . The osteoimmunological pathophysiological framework for osteoporosis is further strengthened by studies conducted by Zhao et al. (2016), who showed that osteoporotic postmenopausal women express increased levels of proinflammatory cytokines (TNF, IL-1, IL-6, or IL-17) when compared to their non-osteoporotic counterparts . Cline-Smith et al. (2020) further supported this model by demonstrating a relationship between the loss of estrogen and the promotion of low-grade T-cell-regulated inflammation . Regulatory T cells (Treg) have also been found to have anti-osteoclastogenic effects within bone biology through the expression of the transcription factor FOXP3 . Accordingly, Zaiss et al. (2010) found that the transfer of Treg cells into T-cell-deficient mice was associated with increased bone mass and reduced osteoclast numbers . B cells also play a role in the pathophysiology of osteoporosis. Panach et al. (2017) showed that B cells produce small amounts of both RANKL and OPG and modulate the RANK/RANKL/OPG axis . 3.2. Gut Microbiome Model Another rapidly expanding model for the pathophysiology of osteoporosis explores the influence of the gut microbiome (GM) on bone health. It is now widely accepted that the GM influences the development and homeostasis of both the gastrointestinal (GI) tract and extra-GI tissues. GM health also affects nutrient production, host growth, and immune homeostasis . Moreover, complex diseases, such as diabetes mellitus (DM), transient ischemic attacks, and rheumatoid arthritis, have all been linked to changes in the GM . Ding et al. (2019) showed that germ-free mice exhibit increased bone mass, suggesting that a correlation exists between bone homeostasis and the GM . This correlation was further demonstrated by the findings of Behera et al. (2020) that the modulation of the GM through probiotics and antibiotics affects bone health . Though the relationship between the GM and bone health is still being explored, various mechanisms have been proposed to explain this close “microbiota–skeletal” axis . One such mechanism stems from the relationship between the GM and metabolism. The GM has been shown to influence the absorption of nutrients required for skeletal development (i.e., calcium), thereby affecting bone mineral density . Additionally, nutrient absorption is thought to be influenced by GI acidity, which is directly regulated by the GM . Moreover, the microbial fermentation of dietary fiber to short-chain fatty acids (SCFAs) also plays an important role in the regulation of nutrient absorption in the GI tract. Whisner et al. (2016) and Zaiss et al.
(2019) recently reported that the consumption of different prebiotic diets (that can be fermented to SCFAs) was associated with an increased GI absorption of dietary calcium . Beyond their influence on the GI tract, SCFAs have emerged as potent regulators of osteoclast activity and bone metabolism . SCFAs have protective effects against the loss of bone mass by inhibiting osteoclast differentiation and bone resorption . SCFAs are amongst the first examples of gut-derived microbial metabolites that diffuse into systemic circulation to affect bone homeostasis . The GM also modulates immune functions. It is believed that the GM’s effects on intestinal and systemic immune responses, which, in turn, modulate bone homeostasis as described above, provide yet another link between the GM and the skeletal system. Bone-active cytokines are released directly by immune cells in the gut, absorbed, and then circulate to the bone; these cytokines play a pivotal role in the GM–immune–bone axis . Finally, it is understood that the bone-forming effect of intermittent PTH signaling closely depends on SCFAs—specifically, butyrate, a product of the GM. Li et al. (2020) provided evidence for butyrate acting in concert with PTH to induce CD4+ T cells to differentiate into Treg cells. Differentiated Treg cells then stimulate the Wnt pathway, which is pivotal for bone formation and osteoblast differentiation, as discussed above . Interventions that focus on probiotics and targeting the GM and its metabolic byproducts may be a potential future avenue for preventing and treating osteoporosis. 3.3. Cellular Senescence Model Cellular senescence describes a cellular state induced by various stressors, characterized by irreversible cell cycle arrest and resistance to apoptosis . Senescent cells produce excessive proinflammatory cytokines, chemokines, and extracellular matrix-degrading proteins, known as the senescence-associated secretory phenotype (SASP) proteins . The number of senescent cells increases with aging and has been linked to the development of age-related diseases, such as DM, hypertension, atherosclerosis, and osteoporosis . Farr et al. (2016) explored the role of cellular senescence in the development of osteoporosis . These authors found that there is an accumulation of senescent B cells, T cells, myeloid cells, osteoprogenitors, osteoblasts, and osteocytes in bone biopsy samples from older, postmenopausal women compared to their younger, premenopausal counterparts, suggesting that these cells become senescent with age . Further studies conducted by Farr et al. (2017) suggest a causal link between cellular senescence and age-related bone loss by showing that the elimination of senescent cells or the inhibition of their produced SASPs had a protective and preventative effect on age-related bone loss . These findings suggest that targeting cellular senescence through “senolytic” and “senostatic” interventions may hold therapeutic promise. 3.4. Genetic Component of Osteoporosis Bone mineral density is a heritable trait, with twin studies attributing up to 80% of its variance to genetic factors. Single-nucleotide polymorphisms in specific genes, in addition to polygenic and multiple gene variants, have been identified . Makitie et al. summarized up to 144 different genes that have been reported to be linked to variance in bone mineral density . As an example, Zheng et al. showed that rs11692564 had an effect of +0.20 SD on lumbar spine BMD .
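Because individual variants such as rs11692564 are reported as per-allele effects on BMD in SD units, their combined contribution is often summarized as a simple additive genetic score. The sketch below illustrates that arithmetic only; apart from the +0.20 SD effect quoted above, the variant names, effect sizes, and genotypes are hypothetical placeholders rather than values from any published BMD score.

```python
# Generic additive genetic score: sum of (effect-allele count x per-allele effect).
# Effect sizes are expressed in SD of lumbar spine BMD; all values below are
# placeholders for illustration except rs11692564 (+0.20 SD, quoted above).

effect_sizes_sd = {
    "rs11692564": 0.20,   # effect quoted in the text
    "rsEXAMPLE1": -0.05,  # hypothetical variant
    "rsEXAMPLE2": 0.03,   # hypothetical variant
}

genotype_allele_counts = {  # copies of the effect allele carried (0, 1, or 2)
    "rs11692564": 1,
    "rsEXAMPLE1": 2,
    "rsEXAMPLE2": 0,
}

bmd_genetic_score_sd = sum(
    genotype_allele_counts[snp] * effect for snp, effect in effect_sizes_sd.items()
)
# 1*0.20 + 2*(-0.05) + 0*0.03 = +0.10 SD
print(f"Additive genetic score: {bmd_genetic_score_sd:+.2f} SD of lumbar spine BMD")
```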
It is well known that the WNT pathway plays a role in bone homeostasis, as it promotes bone cell development, differentiation, and proliferation . Dysregulation of its signaling leads to disorders of bone mass, such as osteoporosis-pseudoglioma syndrome, Pyle’s disease, and van Buchem disease . PLS3 is another recently identified gene linked to early-onset osteoporosis; its variants alter osteocyte function through an abnormal cytoskeletal microarchitecture and impaired bone mineralization . Finally, there are several genes that have a known effect on changes in the bone extracellular matrix. COL1A1 and COL1A2 mutations are associated with osteogenesis imperfecta; XYLT2 mutations lead to spondyloocular syndrome; and FKBP10 and PLOD2 mutations lead to Bruck syndrome 1 and 2, respectively .
Currently, the diagnosis of osteoporosis primarily relies on the assessment of bone mass through bone densitometry, also known as dual-energy X-ray absorptiometry (DEXA) . The test uses low-dose X-rays to measure the density of bones, typically in the spine, hip, and wrist. An individual’s DEXA score is calculated from the measured bone density values, and future fracture risk is then estimated . Bone strength can be quantified using bone mineral density (BMD) and/or bone quality. While tools exist to accurately quantify BMD, the accurate measurement of bone quality within the clinical setting remains elusive. Thus, measurement of the BMD is the most effective method for determining the rate of bone loss and monitoring disease progression . Bone mineral content (BMC), in turn, is the BMD integrated over the projected bone area . Peak bone mass is the amount of bony tissue present at the end of skeletal maturation . Men tend to have higher peak bone mass than women, and African-American males and females have a higher peak bone mass than their Caucasian counterparts . The World Health Organization (WHO) Expert Committee classification of BMD values is as follows: (i) normal: T-score ≥ −1 SD; (ii) osteopenia: T-score between −1 and −2.5 SD; (iii) osteoporosis: T-score ≤ −2.5 SD; and (iv) established osteoporosis: T-score ≤ −2.5 SD plus a fragility fracture . For premenopausal women, men under 50 years of age, and children, the Z-score (relative to normal subjects of the same age and sex) is used instead, with values above −2.0 considered normal . This classification is widely accepted as a diagnostic criterion, with sensitivity and specificity close to 90% .
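Operationally, the classification above is simple arithmetic: the measured BMD is converted into a T-score (the number of SDs from the young-adult reference mean) or, for the groups just noted, a Z-score (SDs from the age- and sex-matched mean), and the WHO cut-offs are then applied. The sketch below illustrates this calculation; the reference mean and SD are placeholder values, not data from any particular reference population or densitometer.

```python
# DEXA score arithmetic and the WHO cut-offs described above.
# Reference mean/SD values are placeholders; real reference data are
# population-, device-, and site-specific. A Z-score uses the age- and
# sex-matched mean and SD in place of the young-adult reference values.

def t_score(bmd: float, young_adult_mean: float, young_adult_sd: float) -> float:
    """T-score: SDs between a measured BMD and the young-adult reference mean."""
    return (bmd - young_adult_mean) / young_adult_sd

def who_category(t: float, fragility_fracture: bool = False) -> str:
    """Apply the WHO Expert Committee cut-offs to a T-score."""
    if t >= -1.0:
        return "normal"
    if t > -2.5:
        return "osteopenia"
    return "established osteoporosis" if fragility_fracture else "osteoporosis"

# Example: lumbar spine BMD of 0.78 g/cm^2 against placeholder reference values.
t = t_score(bmd=0.78, young_adult_mean=1.00, young_adult_sd=0.11)
print(f"T-score = {t:.1f} -> {who_category(t)}")   # T-score = -2.0 -> osteopenia
```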
In addition to bone densitometry, general blood and urine tests can provide important information about an individual’s overall health and any underlying conditions that may be contributing to osteoporosis. These markers are particularly useful for identifying metabolic bone diseases, as they can provide information not directly obtained through bone density measurements . If ancillary testing is indicated, then an array of bone turnover markers (BTMs) can be measured. BTM testing detects peptides produced during bone matrix formation and degradation. Examples of bone formation markers include alkaline phosphatase (ALP) and osteocalcin (OC), which quantify osteoblastic activity. Of note, ALP has low sensitivity and specificity in metabolic bone disorders since it is secreted by various tissues, including the liver, bone, and placenta . In contrast, there are BTMs specific to bone resorption. Degradation markers, such as pyridinoline (Pir) and deoxypyridinoline (Dpir), are proxy measurements for osteoclast activity. The most often measured resorption markers in clinical practice for the diagnosis of osteoporosis are the C-terminal telopeptide of type I collagen (ICTP), β-CrossLaps (β-CTX), and the N-terminal telopeptide of type I collagen (NTX) . Several algorithms have been developed to estimate a patient’s future fracture risk . The most commonly used algorithm in the US is the Fracture Risk Assessment Tool (FRAX). The FRAX and other such algorithms allow for the estimation of the 10-year risk of major osteoporotic fragility fractures (vertebral, hip, distal radius, or proximal humerus) . The clinical risk factors that are utilized in these predictive algorithms include age, sex, prior history of osteoporotic fracture, femoral neck BMD, body mass index (BMI), glucocorticoid use, parental history of hip fracture, secondary causes of osteoporosis, smoking history, and alcohol consumption . Novel Diagnostic Approaches In a recent study, the assessment of BMD using Hounsfield unit (HU) measurements from computed tomography (CT) scans was correlated with DEXA scan results . This study established that glenoid and proximal humerus HU can reliably be measured and correlated with patients’ DEXA . Earp et al. (2021) further concluded that the utilization of opportunistic HU values obtained from shoulder CT scans obtained for other purposes could assist in the earlier detection of abnormal bone density, offering an additional way to identify patients who may benefit from further diagnostic testing and potential treatment . This study was the first of its kind and shows considerable promise for a novel, CT-based diagnostic approach to osteoporosis .
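The opportunistic CT approach described above essentially reduces to averaging HU values over a defined region of a scan acquired for another indication and comparing the result against study-derived reference values. The sketch below shows only that measurement step; the synthetic volume, the region of interest, and any downstream cut-off are assumptions made for illustration and are not taken from the cited study.

```python
# Illustrative computation of an opportunistic mean Hounsfield unit (HU) value
# within a region of interest (ROI) of a CT volume. The array and mask are
# synthetic placeholders; study-specific cut-offs from the literature would be
# needed to flag patients for follow-up testing.
import numpy as np

def mean_roi_hu(ct_volume_hu: np.ndarray, roi_mask: np.ndarray) -> float:
    """Mean HU over the voxels flagged by a boolean ROI mask."""
    if ct_volume_hu.shape != roi_mask.shape:
        raise ValueError("CT volume and ROI mask must have the same shape")
    return float(ct_volume_hu[roi_mask].mean())

# Toy example: a synthetic 3D volume with a small cubic ROI.
rng = np.random.default_rng(0)
volume = rng.normal(loc=120.0, scale=40.0, size=(64, 64, 64))  # synthetic HU values
mask = np.zeros_like(volume, dtype=bool)
mask[20:30, 20:30, 20:30] = True                               # hypothetical ROI
print(f"Mean ROI HU: {mean_roi_hu(volume, mask):.1f}")
```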
Osteoporosis has been dubbed “the silent killer of the 21st century”, as there are minimal clinical signs of the condition prior to patients suffering fracture . The rapidly evolving understanding of the pathophysiology of osteoporosis has led to diagnostic and therapeutic advances. The primary goal of most treatment options for osteoporosis is to reduce the risk of fractures and the subsequent associated morbidity and mortality . The management of osteoporosis includes both non-pharmacological and pharmacological approaches . 5.1. Non-Pharmacological Treatment Options The non-pharmacological management of osteoporosis includes various lifestyle and dietary interventions, which aim to upregulate bone production and inhibit bone resorption . It has long been established that regular weight-bearing exercise stimulates bone production, increases bone strength, and is protective against fractures . Children and young adults who are consistently active reach higher peak bone masses than those who are not . In their systematic review, Howe et al. (2011) found that the most effective type of exercise for increasing femoral neck BMD was “high-force” exercise such as progressive resistance training . LeBoff et al. (2022) highlighted the importance of weight-bearing exercises (in which bones and muscle work against gravity with feet and legs bearing body weight) and detailed the importance of a “multicomponent program” to adequately strengthen bone in patients with osteoporosis . A multicomponent program should include progressive resistance training, balance training, back extensor strengthening, core stabilizers, cardiovascular conditioning, and impact or ground-reaction forces to stimulate bone . Smoking has been shown to influence bone health both directly and indirectly. Animal studies have shown that exposure to smoking can change the ratio of RANKL/OPG and lower levels of OPG, thus influencing osteoclast function . Alcohol intake also affects bone: Cheraghi et al.’s meta-analysis demonstrated that persons consuming 1–2 drinks daily had a 1.34 times increased risk of developing osteoporosis, and those who drank more than two drinks daily had a 1.63 times increased risk of developing osteoporosis . This is hypothesized to be secondary to decreased bone remodeling due to lower levels of osteocalcin and C-telopeptide of type 1 bone collagen . Therefore, smoking cessation and moderation of alcohol intake should be considered as non-pharmacological interventions to address osteoporosis. The Canadian Multicenter Osteoporosis Study showed that an increased intake of protein and nutrient-dense foods, such as fruits, vegetables, and whole grains, was associated with a lower fracture risk . Similar trends were seen in different diets among different cultures. Studies of Asian diets, which are low in dairy products, have shown that an increased consumption of large quantities of dark green vegetables may provide an adequate daily calcium dose . When compared to an omnivore diet, a vegan diet had a higher prevalence of vitamin D insufficiency, although bone loss was comparable after 2 years between these two groups . The Framingham Heart Study showed that, among men, a diet high in fruit, vegetables, and cereal was associated with a higher bone density . 5.2. Pharmacological Treatment Options Before beginning pharmacological treatment for osteoporosis, individuals should be assessed to identify any secondary causes of the condition. If identified, these secondary causes should be addressed in concert with the osteoporosis. When beginning osteoporosis pharmacological therapy, it is also imperative to monitor BTMs to ensure the effectiveness of the treatment regimen . There are several pharmacological options available for treating osteoporosis: (1) calcium and vitamin D supplementation; (2) antiresorptive agents (i.e., bisphosphonates and denosumab); (3) hormonal agents (i.e., estrogen, testosterone, and PTH analogues); and (4) novel therapies (romosozumab and Dickkopf-1 (Dkk1) inhibitors) . 5.2.1.
Calcium and Vitamin D Supplementation In many cases of osteoporosis, dietary sources of calcium and vitamin D are inadequate. Additionally, the natural, physiological processes of aging systemically affect the body’s ability to naturally absorb calcium and vitamin D. Thus, it is recommended for those with, or at risk of developing, osteoporosis to supplement with additional vitamin D and calcium. Recent studies have found that calcium and vitamin D supplementation reduced the risk of hip fracture by 30% and the total fracture risk by 12–15% . At least 700 International Units (IU) of vitamin D is needed for improving physical function and preventing falls and fractures. Supplementing calcium for a maximum total daily calcium intake of 1000 to 1200 mg has also been recommended. Even patients with osteoporosis on vitamin D supplementation should have regular lab work to ensure that 25-hydroxy vitamin D levels of more than 50 nmol/L are maintained. 5.2.2. Antiresorptive Agents Bisphosphonates Bisphosphonates are a class of medications that bind strongly to hydroxyapatite, inhibiting osteoclast-mediated bone resorption and increasing BMD . Multiple studies have firmly established that bisphosphonates reduce the risk of fractures in a wide range of patients, including those who are extremely frail . In spite of their clinical benefits, bisphosphonate use has also been associated with a number of adverse effects, which include gastrointestinal symptoms, bone/joint pain, esophageal ulceration, and, rarely, osteonecrosis of the jaw (the highest risk of which is in patients with cancer) . The prolonged use of bisphosphonates (5+ years) has also been associated with an increased risk of atypical femur fractures . Given this risk, it is imperative to evaluate individuals on prolonged bisphosphonate treatment regimens on an individual basis, with drug holidays and alternative treatment options considered following use for 5+ years . Denosumab Denosumab is a fully human monoclonal antibody that decreases osteoclastic activity by inhibiting RANKL . The 2011 international, randomized, placebo-controlled Fracture Reduction Evaluation of Denosumab (FREEDOM) study showed a reduction in fracture incidence of 68% for vertebral fractures, 40% for hip fractures, and 20% for non-vertebral fractures in the first three years in postmenopausal women taking denosumab . Denosumab is used as an alternative to bisphosphonates when they are not tolerated or are contraindicated. Treatment with denosumab is usually 5–10 years in duration, after which the antiresorptive effects rapidly decrease. Consequently, atypical fracture risk increases in a manner similar to the prolonged bisphosphonate risk . Other adverse effects include hypocalcemia, skin rash, an increased risk of bacterial infections, and osteonecrosis of the jaw. 5.2.3. Hormonal Agents Estrogen and Selective Estrogen Receptor Modulators (SERMs) Estrogen regulates bone remodeling by blocking RANKL and by increasing OPG production through binding to the ERα receptor, the predominant estrogen receptor in bone. In normal conditions, estrogen’s inhibitory effect on osteoclast activity helps maintain a balance in the bone remodeling process. However, prolonged estrogen treatment can cause serious side effects, such as breast cancer, deep vein thrombosis (DVT), and stroke . To mitigate these issues, SERMs (such as raloxifene and lasofoxifene) were developed to provide the benefits of estrogen while minimizing the associated adverse effects .
They are mainly used for treating and preventing osteoporosis in postmenopausal women after first-line options have been exhausted . They have been shown to be particularly effective in decreasing vertebral fracture risk, though they do decrease the risk of all fragility fractures to some extent . The adverse effects from the prolonged use of SERMs are similar to those associated with estrogen use—namely, an increased risk of breast cancer, DVT, and stroke—though they occur much more rarely than with estrogen . Additionally, the sudden discontinuation of SERMs following prolonged use can result in rebound increases in bone remodeling, which can, in turn, lead to increased bone loss. Thus, when treatment is discontinued, patients should transition to another treatment agent immediately . PTH Analogues Just as low, pulsatile doses of PTH stimulate bone growth, so, too, does the timed administration of PTH analogues such as teriparatide. Teriparatide is a synthetic version of PTH, and it functions as an anabolic agent in bone (as opposed to antiresorptive agents, which have been previously discussed). Recent studies have shown that PTH analogues effectively increase BMD and lower the risk of vertebral fractures. These agents are used as another treatment option when first-line therapies fail . It is contraindicated to use these medications in patients with Paget’s disease, skeletal metastases, or previous bone radiation therapy. Additionally, the adverse effects from these therapies include nausea, myalgia, arthralgia, headache, and dizziness . Prolonged, unregulated use is also associated with increased bone resorption. Thus, the use of PTH analogues should be restricted to a duration of two years . 5.2.4. Novel Therapies Romosozumab Romosozumab is a newly approved monoclonal antibody targeting sclerostin. Approved by the Food and Drug Administration (FDA) in April 2019, it uniquely exhibits the ability to stimulate bone formation while simultaneously reducing bone resorption. It achieves this by upregulating the Wnt pathway while also indirectly dampening RANKL-mediated osteoclastogenesis, thereby downregulating bone resorption . Romosozumab is a potent treatment option, both alone and in combination with other drugs . Currently, it exists in an injectable form and is recommended for women without a high risk of cardiovascular disease. However, its anabolic effect is temporary and wanes over time . Phase III trials have shown an increase in bone mineral density and a decrease in vertebral and hip fractures. These findings were based on two studies: the Fracture Study in Postmenopausal Women with Osteoporosis (FRAME trial) and the Active-Controlled Fracture Study in Postmenopausal Women with Osteoporosis at High Risk (ARCH study) . In the FRAME study, romosozumab use showed a 73% reduction in new vertebral fractures compared with a placebo . The ARCH study showed a 48% lower risk of new vertebral fractures in the group that received romosozumab and alendronate than in the alendronate-only group . However, major cardiac events were observed in the ARCH study, while headaches, arthralgia, and injection site reactions were also observed . Romosozumab has an estimated cost of USD 1825 a month, which is similar in cost to denosumab and conjugated drugs .
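Taken together, the pharmacological options above follow a rough sequence: antiresorptives first, alternatives when those are not tolerated or fail, and explicit duration limits for each class. The sketch below condenses that narrative into a single illustrative helper; it is a simplification for illustration only, not a clinical guideline, and the function and parameter names are invented for the example.

```python
# Illustrative condensation of the treatment sequence described above.
# Not a clinical guideline: real decisions weigh many more patient factors.

def suggest_pharmacologic_option(bisphosphonate_tolerated: bool,
                                 bisphosphonate_years: float,
                                 first_line_failed: bool,
                                 postmenopausal: bool,
                                 high_cv_risk: bool) -> str:
    # Calcium and vitamin D supplementation accompany every regimen (see above).
    if not bisphosphonate_tolerated:
        return "denosumab (alternative when bisphosphonates are not tolerated)"
    if bisphosphonate_years >= 5:
        return "consider a drug holiday or an alternative agent (atypical fracture risk)"
    if first_line_failed:
        if postmenopausal and not high_cv_risk:
            return "consider romosozumab or a PTH analogue (limit PTH analogues to 2 years)"
        return "consider a PTH analogue or a SERM after first-line options are exhausted"
    return "continue bisphosphonate with monitoring of bone turnover markers"

print(suggest_pharmacologic_option(True, 2.0, False, True, False))
```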
5.3. Orthopedic Management of Fragility Fractures The therapeutic principles of osteoporotic fracture management include fracture reduction, immobilization, physical therapy, and anti-osteoporosis treatment . Management involves a combination of all four principles to achieve optimal outcomes. Reduction procedures should be performed carefully to avoid further harm and to allow for early mobilization and rehabilitation once the fracture is stabilized. Anti-osteoporosis treatment is also important to prevent worsening of the underlying osteoporosis and fracture-related complications . The treatment plan varies based on the patient’s specific fracture, degree of osteoporosis, and overall health. The focus should be on tissue repair and functional rehabilitation rather than anatomical fracture reduction . For patients who require surgical intervention, orthopedic surgeons must keep in mind that fragility fractures tend to heal much more slowly than do traumatic fractures. To prevent complications, surgery should involve minimal trauma to the surrounding tissues and aim to best restore the articular surface if a fracture extends into the joint . Addressing Common Fragility Fractures: Vertebral Fractures: The most common osteoporotic fractures occur within the vertebral column, with 85% of patients experiencing some level of pain and the remaining 15% being asymptomatic . In cases of mild midline back or paraspinal pain, no neurological deficits, and minimal vertebral compression (less than one-third vertebral height loss), non-surgical treatment is recommended. Minimally invasive surgery is preferred for patients with neurological deficits, severe vertebral compression (more than one-third vertebral height loss), damage to the posterior vertebral wall, and significant pain that does not respond to conservative treatment . Hip Fractures: Osteoporotic hip fractures primarily occur in the femoral neck and intertrochanteric area and are marked by high rates of deformity, disability, delayed recovery, and elevated mortality. Regarding femoral neck fractures, treatment options may include non-surgical or surgical methods depending on the patient’s individual characteristics and goals of care. Most US orthopedic surgeons manage femur fractures operatively if the patient and family are amenable, though this is less often the case in Europe. For minimally displaced or impacted fractures in patients with extremely poor health, non-surgical treatments, such as bed rest with weighted traction, brace immobilization, and nutritional support, may be considered . Surgical options for femoral neck fractures include external or internal fixation, hemiarthroplasty, total hip arthroplasty, or proximal femoral replacement. Prompt surgical treatment within 24–48 h of injury has been shown to reduce patient morbidity and mortality . Proximal Humerus Fractures: For nondisplaced proximal humerus fractures, non-surgical treatment is the preferred option. This can involve the use of a sling or shoulder immobilizer. In cases of displaced fractures in a highly functional patient, surgical management should be considered and can involve open reduction and internal fixation (ORIF) or prosthetic replacement . Distal Radius Fractures: Osteoporotic fractures of the distal radius are often comminuted and can involve the articular surface, leading to deformities and chronic pain. Initial treatment should be aimed at closed manual reduction and casting/splinting, ensuring proper restoration and alignment of the articular surface and normal positioning of the wrist. In cases of unstable fractures or an inadequate manual reduction, ORIF may be required to more precisely restore the articular surface .
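For vertebral fractures in particular, the criteria listed above (degree of vertebral height loss, neurological deficits, posterior wall involvement, and response to conservative care) amount to a fairly mechanical triage rule. The sketch below restates those criteria as code purely for illustration; the naming and threshold handling are assumptions, and real decisions depend on the full clinical picture.

```python
# Restatement of the vertebral-fracture triage criteria described above.
# Illustrative only; not a substitute for clinical judgment.

def vertebral_fracture_plan(height_loss_fraction: float,
                            neuro_deficit: bool,
                            posterior_wall_damage: bool,
                            pain_refractory_to_conservative_care: bool) -> str:
    severe_compression = height_loss_fraction > 1 / 3  # more than one-third height loss
    if (neuro_deficit or severe_compression or posterior_wall_damage
            or pain_refractory_to_conservative_care):
        return "minimally invasive surgery preferred"
    return "non-surgical treatment recommended"

# Mild compression, no deficits, pain controlled conservatively:
print(vertebral_fracture_plan(0.25, False, False, False))  # non-surgical treatment recommended
```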
Atypical Femur Fractures: Atypical femoral shaft fractures occur from the prolonged use of antiresorptive agents, some of which have been mentioned above (bisphosphonates, denosumab, and some SERMs) . The prolonged use of these substances alters the balance between bone resorption and bone formation, favoring bone formation at first but then shifting over time to favor bone resorption. Ideally, the medical treatment courses that include bisphosphonates should not exceed five years . The management of an atypical femur fracture is similar to that of a hip fracture; surgery within 24–48 h is recommended in elderly patients . However, controversy exists on how to manage the contralateral, nonfractured femur in a patient who has sustained an atypical femur fracture due to the prolonged use of antiresorptive agents. Imaging studies are recommended for patients on these agents who present with hip, thigh, or groin pain. Conventional radiography, CT, DEXA, and MRI are all modalities that should be explored in these scenarios . In individuals with an atypical femur fracture who have been treated with bisphosphonates, immediate discontinuation of the medication is advised. Supplemental calcium and vitamin D should be provided as needed. For patients with incomplete fractures and persistent pain for three months despite medical management, prophylactic intramedullary surgical nail fixation is recommended to prevent complete fractures . Pharmacologically, interventions to promote bone healing and formation should also be considered. Teriparatide has been shown to promote fracture healing, even in cases of nonunion . In a retrospective case–control study, Miyakoshi et al. (2015) observed a reduction in healing time and an increased union rate with the use of teriparatide . Surgical management through intramedullary nailing or plating is recommended for patients who sustain complete fractures . Atypical femur fractures affect the contralateral leg in 28% of cases, with the interval between fractures ranging from one month to four years . Thus, adequate study of the contralateral leg is mandatory, as recommended by the European Medicines Agency (EMA) and the FDA . The assessment of the contralateral femur should be performed during the initial hospitalization, with the aim of promptly determining appropriate treatment or preventive measures for the contralateral fracture, as detailed below. An X-ray evaluation of the entire contralateral femur is recommended, even in the absence of prodromal pain . CT, DEXA, and/or MRI may also be used if clinical suspicion is high and conventional radiographs are unrevealing .
(2022) highlighted the importance of weight-bearing exercises (in which bones and muscle work against gravity with feet and legs bearing body weight) and detailed the importance of a “multicomponent program” to adequately strengthen bone in patients with osteoporosis . A multicomponent program should include progressive resistance training, balance training, back extensor strengthening, core stabilizers, cardiovascular conditioning, and impact or ground-reaction forces to stimulate bone . Smoking has been shown to influence bone health indirectly and directly. Animal studies have shown that exposure to smoking can change the ratio of RANKL/OPG and lower levels of OPG, thus influencing osteoclast function . Cheraghi et. al’s meta-analysis demonstrated that persons consuming 1–2 drinks daily had a 1.34 times increased risk of developing osteoporosis, and those who drank more than two drinks daily had a 1.63 times increased risk of developing osteoporosis . This is hypothesized to be secondary to decreased bone remodeling due to lower levels of osteocalcin and C-telopeptide of type 1 bone collagen . Therefore, smoking cessation should be considered as a non-pharmacological intervention to address osteoporosis. The Canadian Multicenter Osteoporosis Study showed that an increased intake of protein and nutrient-dense foods, such as fruits, vegetables, and whole grains, was associated with a lower fracture risk . Similar trends were seen in different diets among different cultures. Asian diets, which are low in dairy products, have shown that an increased consumption of large quantities of dark green vegetables may provide an adequate daily calcium dose . When compared to an omnivore diet, a vegan diet had a higher prevalence of vitamin D insufficiency, although bone loss was comparable after 2 years between these two groups . The Framingham heart study showed that, among men, a diet high in fruit, vegetables, and cereal was associated with a higher bone density . Before beginning pharmacological treatment for osteoporosis, individuals should be assessed to identify any secondary causes of the condition. If identified, these secondary causes should be addressed in concert with the osteoporosis. When beginning osteoporosis pharmacological therapy, it is also imperative to monitor BTMs to ensure the effectiveness of the treatment regimen . There are several pharmacological options available for treating osteoporosis: (1) calcium and vitamin D supplementation; (2) antiresorptive agents (i.e., bisphosphonates and denosumab); (3) hormonal agents (i.e., estrogen, testosterone, and PTH analogues); and (4) novel therapies (romosozumab and Dickkopf-1 (Dkk1) inhibitors) . 5.2.1. Calcium and Vitamin D Supplementation In many cases of osteoporosis, dietary sources of calcium and vitamin D are inadequate. Additionally, the natural, physiological processes of aging systemically affect the body’s ability to naturally absorb calcium and vitamin D. Thus, it is recommended for those with, or at risk of developing, osteoporosis to supplement with additional vitamin D and calcium. Recent studies have found that calcium and vitamin D supplementation reduced the risk of hip fracture by 30% and the total fracture risk by 12–15% . At least 700 International Units (IU) of vitamin D is needed for improving physical function and preventing falls and fractures. Supplementing calcium for a maximum total daily calcium intake of 1000 to 1200 mg has also been recommended. 
Even patients with osteoporosis on vitamin D supplementation should have regular lab work to ensure that 25-hydroxy vitamin D levels of more than 50 nmol/L are maintained. 5.2.2. Antiresorptive Agents Bisphosphonates Bisphosphonates are a type of medication that strongly binds to hydroxyapatite, inhibiting osteoclast-mediated bone resorption and increasing BMD . Multiple studies have well established that bisphosphonates reduce the risk of fractures in a wide range of patients, including those who are extremely frail . In spite of its clinical benefits, bisphosphonate use has also been associated with a number of adverse effects, which include gastrointestinal symptoms, bone/joint pain, esophageal ulceration, and, rarely, osteonecrosis of the jaw (the highest risk of which is in patients with cancer) . The prolonged use of bisphosphonates (5+ years) has also been associated with an increased risk of atypical femur fractures . Given this risk, it is imperative to evaluate individuals on prolonged bisphosphonate treatment regimens on an individual basis, with drug holidays and alternative treatment options considered following use for 5+ years . Denosumab Denosumab is a humanized monoclonal antibody that decreases osteoclastic activity by inhibiting RANKL . The 2011 international, randomized, placebo-controlled Fracture Reduction Evaluation of Denosumab (FREEDOM) study showed a reduction in fracture incidence of 68% for vertebral fractures, 40% for hip fractures, and 20% for non-vertebral fractures in the first three years in postmenopausal woman taking denosumab . Denosumab is used as an alternative to bisphosphonates when they are not tolerated or are contraindicated. Treatment with denosumab is usually 5–10 years in duration, after which the antiresorptive effects rapidly decrease. Consequently, atypical fracture risk increases in a manner similar to the prolonged bisphosphonate risk . Other adverse effects include hypocalcemia, skin rash, an increased risk of bacterial infections, and osteonecrosis of the jaw. 5.2.3. Hormonal Agents Estrogen and Selective Estrogen Receptor Modulators (SERMs) Estrogen regulates bone remodeling by blocking RANKL and by increasing OPG production by binding to the ERα receptor—a receptor mainly found in bones. In normal conditions, estrogen’s inhibitory effect on osteoclast activity helps maintain a balance in the bone remodeling process. However, prolonged estrogen treatment can cause serious side effects, such as breast cancer, deep vein thrombosis (DVT), and stroke . To mitigate these issues, SERMs (such as raloxifene and lasoxifene) were developed to provide the benefits of estrogen while minimizing the associated adverse effects . They are mainly used for treating and preventing osteoporosis in postmenopausal women after first-line options have been exhausted . They have been shown to be particularly effective in decreasing vertebral fracture risk, though they do decrease the risk of all fragility fractures to some extent . The adverse effects from the prolonged use of SERMs are similar to those associated with estrogen use—namely, an increased risk of breast cancer, DVT, and stroke—though they occur much more rarely than with estrogen . Additionally, the sudden discontinuation of SERMs following prolonged use can result in rebound increases in bone remodeling, which can, in turn, lead to increased bone loss. Thus, when treatment is discontinued, patients should transition to another treatment agent immediately . 
PTH Analogues Just as low, pulsatile doses of PTH stimulate bone growth, so, too, does the timed administration of PTH analogues such as teriparatide. Teriparatide is a synthetic version of PTH, and it functions as an anabolic agent in bone (as opposed to the antiresorptive agents discussed previously). Recent studies have shown that PTH analogues effectively increase BMD and lower the risk of vertebral fractures. These agents are used as another treatment option when first-line therapies fail . It is contraindicated to use these medications in patients with Paget’s disease, skeletal metastases, or previous bone radiation therapy. Additionally, the adverse effects from these therapies include nausea, myalgia, arthralgia, headache, and dizziness . Prolonged, unregulated use is also associated with increased bone resorption. Thus, the use of PTH analogues should be restricted to a duration of two years . 5.2.4. Novel Therapies Romosozumab Romosozumab is a newly approved monoclonal antibody targeting sclerostin. Approved by the Food and Drug Administration (FDA) in April 2019, it uniquely exhibits the ability to stimulate bone formation while simultaneously reducing bone resorption. It achieves this by upregulating the Wnt pathway while also inhibiting RANKL signaling, thereby downregulating this latter pathway . Romosozumab is a potent treatment option, both alone and in combination with other drugs . Currently, it exists in an injectable form and is recommended for women without a high risk of cardiovascular disease. However, its anabolic effect is temporary and wears off over time . As of 2021, it has reached Phase III trials, which have shown an increase in bone mineral density and a decrease in vertebral and hip fractures. These were based on two studies: the Fracture Study in Postmenopausal Women with Osteoporosis (FRAME trial) and the Active-Controlled Fracture Study in Postmenopausal Women with Osteoporosis at High Risk (ARCH study) . In the FRAME study, romosozumab use showed a 73% reduction in new vertebral fractures compared with a placebo . The ARCH study showed a 48% lower risk of new vertebral fractures in the group that received romosozumab and alendronate than in the alendronate-only group . However, major cardiac events were observed in the ARCH study, while headaches, arthralgia, and injection site reactions were also observed . Romosozumab has an estimated cost of USD 1825 a month, which is similar in cost to denosumab and conjugated drugs .
The therapeutic principles of osteoporotic fracture include fracture reduction, immobilization, physical therapy, and anti-osteoporosis treatment . Management involves a combination of all four principles to facilitate optimal outcomes. Reduction procedures should be performed carefully to avoid further harm and to allow for early mobilization and rehabilitation once the fracture is stabilized. Anti-osteoporosis treatment is also important to prevent worsening of the underlying osteoporosis and fracture-related complications . The treatment plan varies based on the patient’s specific fracture, degree of osteoporosis, and overall health. The focus should be on tissue repair and functional rehabilitation rather than anatomical fracture reduction . For patients who require surgical intervention, orthopedic surgeons must keep in mind that fragility fractures tend to heal much more slowly than do traumatic fractures. To prevent complications, surgery should involve minimal trauma to the surrounding tissues and aim to best restore the articular surface if a fracture extends into the joint . Addressing Common Fragility Fractures: Vertebral Fractures: The most common osteoporotic fractures occur within the vertebral column, with 85% of patients experiencing some level of pain and the remaining 15% being asymptomatic . In cases of mild midline back or paraspinal pain, no neurological deficits, and minimal vertebral compression (less than one-third vertebral height loss), non-surgical treatment is recommended. Minimally invasive surgery is preferred for patients with neurological deficits, severe vertebral compression (more than one-third vertebral height loss), damage to the posterior vertebral wall, and significant pain that does not respond to conservative treatment . Hip Fractures: Hip osteoporotic fractures primarily occur in the femoral neck and intertrochanteric area and are marked by high rates of deformity, disability, delayed recovery, and elevated mortality. Regarding femoral neck fractures, treatment options may include non-surgical or surgical methods depending on the patient’s individual characteristics and goals of care. Most US orthopedic surgeons manage femur fractures operatively if the patient and family are amenable, though this is less often the case in Europe.
For minimally displaced or impacted fractures in patients with extremely poor health, non-surgical treatments, such as bed rest with weighted traction, brace immobilization, and nutritional support, may be considered . Surgical options for femoral neck fractures include external or internal fixation, hemiarthroplasty, total hip arthroplasty, or proximal femoral replacement. Prompt surgical treatment within 24–48 h of injury has been shown to reduce patient morbidity and mortality . Proximal Humerus Fractures: For nondisplaced proximal humerus fractures, non-surgical treatment is the preferred option. This can involve the use of a sling or shoulder immobilizer. In cases of displaced fractures in a highly functional patient, surgical management should be considered and can involve open reduction and internal fixation (ORIF) or prosthetic replacement . Distal Radius Fractures: Osteoporotic fractures of the distal radius are often comminuted and can involve the articular surface, leading to deformities and chronic pain. Initial treatment should be aimed at closed manual reduction and casting/splinting, ensuring proper restoration and alignment of the articular surface and normal positioning of the wrist. In cases of unstable fractures or an inadequate manual reduction, ORIF may be required to more precisely restore the articular surface . Atypical Femur Fractures: Atypical femoral shaft fractures occur from the prolonged use of antiresorptive agents, some of which have been mentioned above (bisphosphonates, denosumab, and some SERMs) . The prolonged use of these agents alters the balance between bone resorption and bone formation, favoring bone formation at first but then shifting over time to favor bone resorption. Ideally, medical treatment courses that include bisphosphonates should not exceed five years . The management of an atypical femur fracture is similar to that of a hip fracture; surgery within 24–48 h is recommended in elderly patients . However, controversy exists on how to manage the contralateral, nonfractured femur in a patient who has sustained an atypical femur fracture due to the prolonged use of antiresorptive agents. Imaging studies are recommended for users who present symptomatically with hip, thigh, or groin pain. Conventional radiography, CT, DEXA, and MRI are all modalities that should be explored in these scenarios . In individuals with an atypical femur fracture who have been treated with bisphosphonates, immediate discontinuation of the medication is advised. Supplemental calcium and vitamin D should be provided as needed. For patients with incomplete fractures and persistent pain for three months despite medical management, prophylactic intramedullary surgical nail fixation is recommended to prevent complete fractures . Pharmacologically, interventions to promote bone healing and formation should also be considered. Teriparatide has been shown to promote fracture healing, even in cases of nonunion . In a retrospective case–control study, Miyakoshi et al. (2015) observed a reduction in healing time and an increased union rate with the use of teriparatide . Surgical management through intramedullary nailing or plating is recommended for patients who sustain complete fractures .
Atypical femur fractures affect the contralateral leg in 28% of cases, with the interval between fractures ranging from one month to four years . Thus, adequate study of the contralateral leg is mandatory, as recommended by the European Medicines Agency (EMA) and the FDA . The assessment of the contralateral femur should be performed during the initial hospitalization, with the aim of promptly determining appropriate treatment or preventive measures for the contralateral fracture, as detailed below. An X-ray evaluation of the entire contralateral femur is recommended, even in the absence of prodromal pain . CT, DEXA, and/or MRI may also be used if clinical suspicion is high and conventional radiographs are unrevealing . In summary, osteoporosis is a global health concern with significant associated morbidity, mortality, and economic costs. The cellular component of bone includes osteocytes, osteoblasts, and osteoclasts, each playing essential roles in bone integrity and remodeling. Through our understanding of the molecular and cellular components behind bone homeostasis, we have been able to better refine our diagnostic and therapeutic protocols surrounding osteoporosis. Our understanding of the pathophysiology of the disease continues to evolve, involving an array of interconnected models that implicate different components of the human body. The osteoimmunological, gut microbiome, and cellular senescence models provide valuable insights into the multifactorial nature of osteoporosis and highlight new possibilities for future therapeutic approaches. Targeting immune interactions, the gut microbiome, or cellular senescence could potentially lead to more effective treatments and preventive measures for osteoporosis, enhancing bone health and reducing fractures in susceptible populations. The diagnosis of osteoporosis still relies primarily on bone densitometry, specifically DEXA, to assess BMD and estimate future fracture risk. While this method is currently the most effective for monitoring disease progression, accurately measuring bone quality within the clinical setting remains challenging. Additionally, various blood and urine tests can provide valuable information about overall health and underlying conditions contributing to osteoporosis, in conjunction with predictive algorithms that examine contributing environmental and behavioral factors (i.e., FRAX). Novel diagnostic approaches, such as using Hounsfield unit (HU) measurements from computed tomography (CT) scans, show promise in the early detection of abnormal bone density, offering an additional way to identify patients who may benefit from further diagnostic testing and treatment. Treating osteoporosis largely requires a combinatory approach that involves both old and new ways to address the disease. Environmental and behavioral factors, like smoking, alcohol consumption, diet, and weight-bearing exercise, continue to form much of the non-pharmacological approach to the disease. Supplementation with calcium and vitamin D has continued to remain relevant pharmacologically. Anti-hormonal and antiresorptive pharmacological agents are more novel treatment options that have been developed; however, providers should be cautious in their prescription given the profile of some of their side effects.
Novel therapies like romosozumab, a monoclonal antibody targeting sclerostin, show promise in stimulating bone formation while reducing bone resorption. This therapy capitalizes on the osteoimmunological and cellular senescence models of the pathophysiology of osteoporosis. Future pharmacological interventions should keep these models in mind when developing novel therapies. Finally, the orthopedic management of fragility fractures involves a combination of fracture reduction, immobilization, physical therapy, and anti-osteoporosis treatment. Atypical femur fractures, associated with the prolonged use of certain medications, require careful management and may involve surgical intervention and discontinuation of the offending medication. When fragility fractures are sustained, management should be tailored to the individual and the type of fracture. Disease management should extend beyond the affected limb/bone to include consideration of the contralateral limb/bone. Specifically, the potential for contralateral femoral fractures should be managed with great care. Though there is contention around the necessity of this evaluation, we hope that this paper sheds light on the importance of managing these fractures. Additionally, we are hopeful that the algorithm provided in becomes a regular standard of care for physicians managing patients with osteoporosis. As we continue to learn more about this important disease, it is imperative that we constantly revise our diagnostic and management practices to optimize patient outcomes.
Analysis of person-hours required for proton beam therapy for pediatric tumors
902c1377-5417-4b24-8ae7-685c36a3e658
10214988
Pediatrics[mh]
Treatment for pediatric tumors is based on a multidisciplinary approach that combines surgery, chemotherapy and radiation therapy . In recent years, proton beam therapy (PBT) has been recommended for pediatric tumors to reduce late toxicities . However, radiation therapy for pediatric tumors often requires sedation and preparation, and this places a large burden on medical sites . Sedation for patients under 4 years is almost always needed, and therefore, the actual treatment time tends to be longer. In this report, we examined the person-hours required for PBT for adult and pediatric cases to obtain an accurate evaluation of the burden of PBT for pediatric tumors. The subjects were 32 pediatric patients who received PBT at our hospital from January 2010 to April 2011. Data for these patients were compared with those for 90 adult patients who received PBT from January 2022 to November 2022. Written informed consent was obtained in all cases, and the study was approved by the hospital ethics committee (H21–388, Tsukuba Clinical Research & Development Organization). For pediatric cases, the time from entering the irradiation room to leaving the room was measured for each treatment, and the number of staff (radiotherapy technicians and nurses) involved in the treatment was also examined. For adult cases, the number of staff involved in PBT is fixed at three, so only the time from entering to leaving the treatment room was measured. Pediatric patients were classified into sedation and non-sedation cases. Adult patients were classified into three groups: irradiation from two directions without respiratory synchronization, irradiation from two directions with respiratory synchronization, and patch irradiation . Treatment person-hours were calculated as follows: (time from entering to leaving the treatment room) × (number of required personnel). Of the 32 pediatric patients in the study, 12 were treated while under sedation. Of the 90 adult patients, 30 were treated in each group (without respiratory synchronization, with respiratory synchronization, and with patch irradiation). The mean ages of the pediatric patients were 3 (range 1–5) years for those treated with sedation and 5 (3–7) years for those treated without sedation. The mean treatment times (from entering to leaving the treatment room) were 20.7 min (pediatric cases without sedation), 30.1 min (pediatric cases with sedation), 11.5 min (without respiratory synchronization in adults), 15.6 min (with respiratory synchronization in adults) and 23.7 min (patch irradiation in adults). The mean numbers of personnel were 3.08 and 3.77 for pediatric cases without and with sedation, respectively, and the number of personnel for adult cases was fixed at 3. Based on these results, the treatment person-hours were 64.5 and 120.9 min for pediatric patients without and with sedation, respectively, and 34.3, 46.7 and 71.1 min for adult patients without and with respiratory synchronization, and with patch field irradiation, respectively . In pediatric patients, preparation procedures were performed five times on average (range 0–25) before the start of and during PBT. In sedated cases, unlike non-sedated cases, heart rate and oxygen saturation need to be monitored during irradiation, and final adjustments of sedation are also required after the patient has been moved to the treatment bed. It usually takes an additional 5–10 min to perform both steps.
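As an illustration, the per-session person-hours calculation described above can be sketched in a few lines of Python; only the formula (room time multiplied by the number of staff) is taken from the text, and the session records below are hypothetical example values. Note that averaging per-session products generally differs slightly from multiplying the mean room time by the mean staff count, which is one possible reason a reader's reproduced figures may not match the reported means exactly.

```python
# Minimal sketch of the treatment person-hours calculation:
# person-minutes = (time from entering to leaving the room) x (number of staff involved).
# The session records below are hypothetical example values.

sessions = [
    {"case": "pediatric, no sedation", "room_time_min": 20.7, "staff": 3},
    {"case": "pediatric, sedation",    "room_time_min": 30.1, "staff": 4},
    {"case": "adult, no gating",       "room_time_min": 11.5, "staff": 3},
]

def person_minutes(room_time_min: float, staff: int) -> float:
    """Treatment person-hours for one session, expressed in staff-minutes."""
    return room_time_min * staff

for s in sessions:
    pm = person_minutes(s["room_time_min"], s["staff"])
    print(f'{s["case"]}: {pm:.1f} staff-minutes per session')
```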
The preparation person-hours, calculated as [(total preparation time) × (number of required personnel)]/(number of irradiation sessions), were 14.3 min for cases without sedation and 20.5 min for those with sedation. The total person-hours, including the time for preparation, are shown in . In the clinical setting, it is also necessary for a pediatrician and a nurse to accompany a sedated patient during movement from the pediatric ward to the PBT facility, but this is not included in the analysis. PBT has an excellent dose concentration due to its focused energy peak and, therefore, is widely used for various tumors . For pediatric cases and adolescent and young adult patients, PBT is favored over photon radiotherapy due to reduced future adverse effects and secondary cancer . However, treatment for pediatric patients requires more time and effort for sedation and alignment compared with that for adult patients, which is a major negative point in terms of economic efficiency. In this study, the treatment person-hours for PBT for pediatric cases without sedation were 1.9, 1.4 and 0.9 times those for adult patients irradiated without and with breathing synchronization, and with patch irradiation, respectively, and 3.5, 2.6 and 1.7 times the respective adult values for pediatric cases with sedation. Given that patch irradiation is a rare and non-standard method, the treatment person-hours for PBT for a pediatric tumor are generally 1.4–3.5 times those for adult patients. We note that the study period differed between pediatric (2010–11) and adult (2022) cases, but the treatment equipment, irradiation technique and number of medical staff were similar. Various preparation procedures are used for irradiation with PBT for pediatric tumors . With the inclusion of preparation person-hours, treatment of a pediatric case without sedation took about twice as much time as treatment of adult patients (total person-hours: 78.8 vs 34.3–46.7 min) and treatment of a pediatric case with sedation took three to four times as long as that for adult patients (141.4 vs 34.3–46.7 min). In addition, one pediatrician and one nurse needed to make a round trip from the ward to the treatment room for sedated patients (about 20–30 min per irradiation). The time required for this activity was not considered in the analysis; therefore, more person-hours are actually used for pediatric patients who need sedation. Given these transportation person-hours, PBT for pediatric cases in clinical practice is at least two to four times more labor-intensive than that for typical adult cases. However, this treatment is particularly effective in these cases, and further implementation of PBT requires increased support from facilities. In conclusion, the results of this study indicate that PBT for pediatric cases is much more labor-intensive than for adult cases. None declared. This work was supported by the University of Tsukuba. Research data are stored in an institutional repository and will be shared upon request to the corresponding author.
Pharmacogenomic variation and sedation outcomes during early intensive care unit admission: A pragmatic study
e821d451-ef75-4581-9067-b837088e8081
11646075
Pharmacology[mh]
Precision medicine is a tool to improve health outcomes by tailoring disease prevention interventions and treatments. Pharmacogenomics (PGx) is one of the most developed forms of precision medicine and matches medications and/or doses with each individual's genome to maximize effectiveness while minimizing side effects. Clinical practice guidelines have been systematically developed and are readily available from the NIH-supported Clinical Pharmacogenetics Implementation Consortium (CPIC). Currently, there are 26 published guidelines for more than 150 drugs and 25 genes, and they provide clinical recommendations for drug management in patients with PGx variants. Despite the growing utility of PGx, little is known about PGx in the intensive care unit (ICU). Use of PGx to guide therapy may reduce unwanted drug effects and poor outcomes. Sedatives and analgesics are frequently administered to critically ill patients to relieve the discomfort, anxiety, and stress of mechanical ventilation (MV) and to prevent self-injury. The management of sedation is a major challenge in critically ill patients. Scientific evidence and practice guidelines support the goal of lighter levels of sedation to improve ICU outcomes and length of stay. Deep sedation is associated with increased cognitive dysfunction, delirium, and increased mortality. The Richmond Agitation-Sedation Scale (RASS) is a common tool used to measure the level of sedation. The scale takes into account the level of consciousness and motor activity and ranges from −5 (deep sedation) to +4 (agitated). The current Society of Critical Care Medicine guideline-driven sedation goal for most patients on MV is a RASS of 0 to −2, which corresponds to lighter sedation. Genetic variation in pharmacogenes, which regulate drug metabolism and the pharmacodynamic effects of sedative and analgesic drugs, may contribute to poorly controlled sedation, adverse outcomes and failure to efficiently achieve the RASS target. The sedation depth and intensity during the early period (first 48 h) of MV have been shown to be risk factors for negative outcomes such as increased mortality and delirium and decreased in-hospital survival. For some patients, multiple sedation and analgesic drug adjustments are required to determine the correct drug and dose that achieves optimal sedation, which reduces their comfort and increases the complexity of care. The primary aim of this study was to develop preliminary data on the association between the number of altered PGx phenotypes in genes relevant to sedatives and analgesics and sedation outcomes during MV. We hypothesized that patients with altered PGx phenotypes relevant to their sedation and analgesic regimen would spend less time in the target RASS during the initial 24 and 48 h on MV and would achieve their first RASS in the target range later compared to patients with normal phenotypes. We also assessed patients' acceptability to be tested, perceptions and knowledge of PGx, and attitudes toward return of results through a PGx counseling session after discharge. Study design and participants selection This was a prospective, observational, pragmatic PGx association study conducted between 2018 and 2021 at the University of Minnesota Medical Center. Participants were enrolled after obtaining written and informed consent from either the participant, if they were able to pass an IRB-approved University of California, San Diego Brief Assessment of Capacity to Consent (UBACC) tool, or their legally authorized representative.
Participants were included if they were admitted to the surgical ICU (SICU), medical ICU (MICU), or cardiovascular ICU (CVICU) services, were receiving acute MV and sedatives and/or analgesics potentially associated with pharmacogenes, and had an order for a target RASS score of 0 to −2. Participants who were admitted to the ICU from surgery and were deeply sedated (e.g., RASS −4 to −5) were enrolled, but their RASS scores were not included in the analysis until the sedation was lightened through a reduction in sedative administration or dose (typically 2–6 h after ICU admission) and there was a written order for a target RASS score between 0 and −2. Participants were excluded if they were admitted to the ICU with head trauma or other neurologic events that may reduce cognition, had a history of or active liver disease, had undergone liver transplantation, had substance abuse within the past year, were receiving a neuromuscular blocker, or were in a moribund state with planned withdrawal of support. This study was approved by the Institutional Review Board at our institution (IRB STUDY# 0002189). Sedatives and analgesics administration Sedative and analgesic choices, dosing, and duration were at the discretion of the ICU team. The administration, start and stop times, doses, and infusion rates of the nine sedative and analgesic medications (fentanyl, propofol, midazolam, dexmedetomidine, morphine, hydromorphone, ketamine, lorazepam, and haloperidol) observed in our study participants were recorded for the 48 h study period. Data collection and pharmacogenomic testing Demographic and clinical information was collected from the electronic health record. Genotyping was conducted on a CLIA-certified assay using The RightMed® Comprehensive Test from OneOme® (Minneapolis, MN, USA). Six pharmacogenes (CYP2D6, CYP3A4, CYP3A5, CYP2B6, COMT, OPRM1) potentially associated (PharmGKB level of evidence of 3 or higher) with sedative and/or analgesic medications that were administered to at least 10% of study patients (fentanyl, propofol and midazolam were the only agents in more than 10%) were taken from this panel and studied. Genes previously studied for dexmedetomidine (ADRA2A and PRKCB) were not contained on the panel and were not studied. The OneOme PGx test results were not shared with the ICU providers and were not used for clinical decision making. More information on clinical data and DNA collection, and PGx testing is available in the Supplementary Material . A summary of pharmacogenes potentially associated with the sedative/analgesic medications of interest and PharmGKB levels of evidence is shown in the Supplementary Material (S1) Table . A list of the genetic variants taken from the OneOme panel and studied is presented in the Supplementary Material (S1) Table . Altered phenotypes Phenotypes were assigned for each gene using the OneOme genotype-to-phenotype algorithm. Normal phenotypes were defined as those with wildtype/typical function and are defined in the Supplementary Material (S1) Table . The CYP3A5 poor metabolizer phenotype (CYP3A5*3/*3) was considered wildtype because it is the most common phenotype in Caucasians and most doses are based on data derived from this population. For each studied gene, a normal phenotype was assigned a score of zero and an altered phenotype (regardless of the diplotype) was assigned 1.
Then the total number of altered phenotypes was calculated for each participant by summing the number of altered phenotypes, with a maximum of 6 (representing the 6 genes of interest), based on the drugs they were receiving. For example, a patient receiving propofol (relevant gene CYP2B6) and midazolam (relevant genes CYP3A4, CYP3A5) with an altered CYP2B6 and CYP3A5 phenotype was assigned a total number of altered phenotypes of two. A patient receiving only fentanyl (relevant genes CYP3A4, CYP3A5, CYP2D6, COMT, and OPRM1) with an altered CYP2D6, COMT, and OPRM1 phenotype, and normal phenotypes for CYP3A4 and CYP3A5, would have been assigned a total number of altered phenotypes of three. Endpoints The primary endpoint was achieving ≥60% and ≥70% of time within the target RASS range (0 to −2) in both the first 24 and 48 h of MV. RASS was clinically determined by the ICU nurses and documented in almost all cases every 2 h. The linear interpolation method was used to estimate missing RASS scores. For each participant, an overall percentage of time within the target RASS range in both the first 24 and 48 h of MV was calculated by dividing the number of RASS measurements in the target RASS range (0 to −2) by the total number of RASS measurements within that same period of time, and multiplying by 100. Because sedative and analgesic drugs and doses change over time, the primary PGx association analyses focused on the % of time the RASS measurement was in the target range, but only during the periods of time when the patient was receiving propofol, fentanyl and/or midazolam (e.g., a patient may have received propofol and fentanyl for 20 h of the first 24-h period, and only the RASS measurements in those 20 h were analyzed, representing the 24 h analysis). Supplementary Material (S1) Figure represents a schematic diagram of the method used in calculating the percentage within the target range. Other endpoints evaluated were time to first RASS in the target range, defined as hours from the time of intubation to the first RASS measurement in the target range, and adverse drug reactions (ADR), such as delirium or central nervous system changes associated with the sedative regimen and documented in nursing or provider notes, which resulted in the drug(s) being discontinued. Statistical analysis Descriptive statistics such as mean and standard deviation (SD) were determined for continuous variables and frequency and percentages for categorical variables. To account for possible dose effects on the endpoints, total cumulative weight-normalized doses for fentanyl, propofol, dexmedetomidine, and midazolam were calculated for each patient over 24 and 48 h and then stratified by the median into high dose, low dose, or none, if not receiving the agent. Given the rapid onset of action and short duration of action of the studied sedatives and analgesics, the cumulative doses in the first 24 and 48 h were used to represent the dose effect. Multivariate logistic regressions were performed to determine the association between the number of altered PGx phenotypes and achieving ≥60% and ≥70% of time within the target RASS range of 0 to −2 over both the first 24 and 48 h. Because the number of altered PGx phenotypes was normally distributed and equally spaced, it was tested first as a continuous independent variable in the logistic regression due to the ease of results interpretation when ordinal data is treated as continuous.
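To make the scoring above concrete before continuing with the statistical analysis, here is a minimal Python sketch of the per-patient altered-phenotype count; the drug-to-gene mapping mirrors the genes listed in the text for propofol, midazolam, and fentanyl, while the function name and the phenotype calls in the examples are illustrative only.

```python
# Minimal sketch of the per-patient altered-phenotype count described above.
# A gene contributes at most 1 to the score, and only genes relevant to the
# drugs the patient actually received are counted (maximum of 6 genes).

DRUG_GENES = {
    "propofol": {"CYP2B6"},
    "midazolam": {"CYP3A4", "CYP3A5"},
    "fentanyl": {"CYP3A4", "CYP3A5", "CYP2D6", "COMT", "OPRM1"},
}

def altered_phenotype_count(drugs_received, altered_genes):
    """Number of altered phenotypes among the genes relevant to the drugs received."""
    relevant = set().union(*(DRUG_GENES[d] for d in drugs_received))
    return len(relevant & set(altered_genes))

# Worked example 1 from the text: propofol + midazolam, altered CYP2B6 and CYP3A5 -> 2
print(altered_phenotype_count(["propofol", "midazolam"], {"CYP2B6", "CYP3A5"}))

# Worked example 2 from the text: fentanyl only, altered CYP2D6, COMT, OPRM1 -> 3
print(altered_phenotype_count(["fentanyl"], {"CYP2D6", "COMT", "OPRM1"}))
```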
We also tested the number of altered phenotypes as a categorical variable after grouping the number of altered phenotypes as follows: reference group (0 to 1 altered phenotype), group 1 (2 to 3 altered phenotypes) and group 2 (4 to 5 altered phenotypes). A Kaplan–Meier plot and log-rank test were used to assess the association of time to first RASS within the target range with the number of altered phenotypes as a categorical variable (≤1 and >1 altered phenotypes). A Cox proportional hazards (PH) model was used to test the association of the number of altered phenotypes as a categorical variable (≤1 and >1 altered phenotypes) with the event of achieving the first RASS in the target range, allowing adjustment for clinical factors and characteristics. Clinical characteristics and factors such as age, sex, creatinine clearance, ICU unit, baseline RASS score, and cumulative weight-normalized doses (high, low, or none) of fentanyl, propofol, dexmedetomidine, and midazolam were considered a priori as covariates and adjusted for in the logistic regression and Cox PH models. Type of ICU and entering the ICU post-surgery were highly correlated (Chi-square test, p < 0.01); therefore, only ICU unit was included as a covariate to avoid collinearity. The adjusted odds ratio and adjusted hazard ratio (HR) with 95% confidence intervals (CI) were used to report the logistic regression and Cox PH model associations. The Fisher exact test was used to determine the association between ADRs and the number of altered phenotypes (categorized as ≤1 vs. >1). A P-value <0.05 was considered statistically significant. All analyses were conducted using R software (Vienna, Austria). R packages used in the analysis are reported in the Supplementary Material . Return of PGx results and participants perceptions Participants or their legally authorized representatives were asked, at the time of consent, if they would like to receive educational information about PGx and test results and to review them with one of the study pharmacists (DJS and JL) after discharge from the hospital. If yes, a phone call was scheduled and test results were mailed to the participant. The participants completed a 12-question, IRB-approved questionnaire to assess the participant's knowledge of PGx, satisfaction at the time of return of results, and plans to share their results with others (full questionnaire is available in Supplementary Material ).
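For readers who want to see the shape of the primary analysis, the sketch below fits the adjusted logistic regression described above on a synthetic data frame using Python's statsmodels; the study itself was analyzed in R, and every column name, the synthetic data, and the reduced covariate set here are illustrative assumptions rather than the authors' code.

```python
# Hedged sketch of the primary association analysis (not the authors' code):
# logistic regression for achieving >=60% of time in the target RASS range,
# with the altered-phenotype count as a continuous predictor plus covariates.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200  # synthetic cohort size

df = pd.DataFrame({
    "n_altered":     rng.integers(0, 6, n),      # altered-phenotype count (0-5)
    "age":           rng.normal(58, 15, n),
    "male":          rng.integers(0, 2, n),
    "baseline_rass": rng.integers(-5, 1, n),
    "pct_in_target": rng.uniform(0, 100, n),     # % of drug-relevant time with RASS 0 to -2
})

# Primary endpoint: >= 60% of (drug-relevant) time in the target RASS range.
df["target_60"] = (df["pct_in_target"] >= 60).astype(int)

# Adjusted logistic regression (a reduced covariate set is used for brevity).
model = smf.logit("target_60 ~ n_altered + age + male + baseline_rass", data=df).fit(disp=False)
print(np.exp(model.params))  # adjusted odds ratios per one-unit change in each predictor
```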
Participant characteristics ICU admissions were screened (except during the COVID‐19 shutdown) and 86 participants were enrolled. Eight were excluded from the analysis (exclusion reasons are shown in Supplementary Material (S1) Table ) leaving 78 patients. The demographic and baseline characteristics are presented in Table . The average age of the participants was 58 years, and the majority (92%) were Caucasian. Males and females were almost equally represented. The CVICU had the highest number of participants (41%) compared to MICU and SICU with 32.1% and 26.9%, respectively. Thirty‐seven participants (47%) were admitted to the ICU after a surgical procedure. The 78 patients had 1673 RASS measurements with 1447 measurements (86%) occurring when propofol, fentanyl, and/or midazolam were administered. Fentanyl and propofol combination was the most frequently administered regimen (Supplementary Material (S1) Figure ). The overall percentage of time in the target RASS range was highly variable (range 0 to 100%) among our population (Figure ). The median percentage of time in the target RASS range was low; 25% in the first 24 h and 37% at 48 h. The number of individuals with an altered phenotype for COMT ( n = 63, 80.1%), CYP2D6 ( n = 47, 60.3%), and CYP2B6 ( n = 42, 53.8%) was high (Figure ). The median time a patient was receiving one or more sedative/analgesics (fentanyl, propofol, and/or midazolam) with a potentially relevant PGx gene was 24 h [interquartile range (IQR) = 22, 24] and 40 h [IQR = 30, 46] in the first 24 and 48 h of MV, respectively. There was a median of 2 altered phenotypes per patient that were potentially relevant to propofol, fentanyl, and/or midazolam in the first 24 and 48 h. The distributions of the number of altered phenotypes are presented in Supplementary Material (S1) Figure .
Propofol and fentanyl were administered in more than 80% of the patients in the 24- and 48-h periods. In the first 24 and 48 h of acute MV, dexmedetomidine was administered to only approximately 45% and 50% of participants, respectively, and midazolam was used in 27% of participants. Time in the target RASS range and the administered sedatives and analgesics are presented in Table and Table for the first 24 and 48 h, respectively. Association with outcomes The odds of achieving ≥60% and ≥70% of time in the desired target range decreased (range of 4%–54%) with each one-unit increase in the number of altered PGx phenotypes after adjusting for the important clinical factors; however, these associations were not statistically significant (Table ). Similar non-significant trends of decreasing odds of achieving target ranges in groups with a higher number of altered phenotypes (group 1 [2-3 altered phenotypes] and group 2 [4-5 altered phenotypes]) compared to the reference group (0 to 1 altered phenotype) were observed when the number of altered phenotypes was treated as a categorical variable. The results of the logistic regression models with the number of altered phenotypes as a categorical variable are reported in the Supplementary Material (S1) Table . Participants with ≤1 altered phenotypes had a more rapid time to target RASS compared to those with >1 altered phenotypes; however, this was not significant (log-rank test, p = 0.3, Figure ). The median time to first RASS in the target range in participants with ≤1 altered phenotypes ( n = 24) was 5 h compared to 10 h in those with >1 altered phenotypes ( n = 54). Similarly, in the Cox PH model, the group with >1 altered phenotypes had 7.5% lower hazards of achieving the first RASS in the target range after adjusting for the important clinical factors; however, this association was not significant (HR = 0.93, 95%CI = 0.47–1.82, p-value = 0.82). The number of altered phenotypes was not associated with ADRs (Fisher exact, p = 0.26). One (4.16%) and 8 (14.8%) participants developed ADRs in the ≤1 ( n = 24) and >1 ( n = 54) altered phenotype groups, respectively. Return of PGx results and participants perceptions Sixty-nine patients requested to have results returned. Nine died before the return of results and five declined during follow-up calls or emails. Eighteen participants did not respond to at least six telephone calls on three different days over at least 3 weeks or 2 encrypted email requests to reply to an investigator (DJS). Results were returned to 37 participants. Investigators spent an average time of 36 min (range 25–60 min) per patient on the return of results and answering their questions. Scores on the questionnaire after the return of results were high (95%) with only one participant missing more than 1 question. The most missed question (27%) was “If my test result says that I am a ‘poor metabolizer’ it means that all medications given to me will stay in my body longer than other people.” All participants rated the return of the results session as extremely interesting and helpful (highly satisfactory) except one participant, who ranked the counseling as satisfactory. This participant also did not plan to share his individualized PGx information with his doctor; all others said they planned to share results with their doctor (36/37, 97%). Thirty percent of study participants stated they would share their PGx test results with their pharmacist and 22% planned to share their results with their nurse.
Plans to share their PGx information with their family were high, with 75% of responses.
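As a companion to the modelling sketch above, the unadjusted group comparisons reported in these results (a log-rank test for time to first target RASS and a Fisher exact test for ADRs) could be reproduced along the following lines; again, the file and column names are illustrative assumptions, not the study's data.

```python
# Illustrative only: compares time-to-target-RASS and ADR counts between the
# <=1 and >1 altered-phenotype groups; column names are assumptions.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test
from scipy.stats import fisher_exact

df = pd.read_csv("sedation_pgx.csv")
low = df[df["n_altered_phenotypes"] <= 1]
high = df[df["n_altered_phenotypes"] > 1]

# Log-rank test for time to first RASS score in the target range.
result = logrank_test(
    low["hours_to_first_target_rass"], high["hours_to_first_target_rass"],
    event_observed_A=low["reached_target"], event_observed_B=high["reached_target"],
)
print(f"log-rank p = {result.p_value:.2f}")

# Median time to target from Kaplan-Meier estimates for each group.
for label, grp in [("<=1 altered", low), (">1 altered", high)]:
    km = KaplanMeierFitter().fit(grp["hours_to_first_target_rass"],
                                 grp["reached_target"])
    print(label, "median time to target (h):", km.median_survival_time_)

# Fisher exact test for ADRs (2 x 2 table: group x ADR yes/no, adr coded 0/1).
table = [
    [low["adr"].sum(), (low["adr"] == 0).sum()],
    [high["adr"].sum(), (high["adr"] == 0).sum()],
]
odds_ratio, p = fisher_exact(table)
print(f"Fisher exact p = {p:.2f}")
```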
Managing sedation is difficult because individual responses to sedatives can be unpredictable and may be influenced by many factors. Pharmacogenomics may also impact the effectiveness of sedation and analgesia by modifying drug exposure and response. The application of PGx approaches could help in the management of critically ill patients. Conducting PGx research in the ICU is challenging due to the unstable nature of critical illness, rapid changes in medications, and the high medication burden and potential for drug interactions. In addition, many care teams are involved in patient management, and both patient and family are often overwhelmed and unable to consider research participation. Few studies have investigated the association of PGx variation with outcomes in the ICU, and none have reported the attitudes and perceptions of ICU patients toward PGx. Achieving and maintaining a patient's sedation within the target RASS range is an important measure of the effectiveness of sedation management. An ideal sedative regimen reaches the target RASS quickly and maintains the RASS within the target range. The percentage of time in the target RASS range is highly variable, and maximizing it is a main purpose of sedation protocols. Clinically, data show that the percentage of time in the target RASS range varies depending on the population, the specific sedation protocol used, and other factors. One study reported a wide range of mean time within the target RASS (10.7%–27.6%) in the first 48 h, which varied based on the sedative agent selected and the body mass index.
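For readers unfamiliar with the metric, the following toy calculation illustrates how a "percentage of time in the target RASS range" can be derived from timestamped RASS assessments. The target range, timestamps, and scores are invented, and carrying each observed score forward until the next assessment is only one of several possible conventions.

```python
# Toy example: percentage of time in a target RASS range of -2 to 0 over the
# first 24 h, treating each observed score as holding until the next observation.
from datetime import datetime, timedelta

target_low, target_high = -2, 0
observations = [  # (time of RASS assessment, RASS score) - invented values
    (datetime(2021, 1, 1, 0, 0), -4),
    (datetime(2021, 1, 1, 4, 0), -3),
    (datetime(2021, 1, 1, 10, 0), -1),
    (datetime(2021, 1, 1, 18, 0), 0),
]
window_end = observations[0][0] + timedelta(hours=24)

in_target = timedelta()
for (t, score), nxt in zip(observations, observations[1:] + [(window_end, None)]):
    if target_low <= score <= target_high:
        in_target += min(nxt[0], window_end) - t

pct = 100 * in_target / timedelta(hours=24)
print(f"{pct:.0f}% of the first 24 h in target range")  # 58% for these values
```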
In a study by DiCesare et al., which investigated predictors of response to dexmedetomidine in the first 48 h of MV, a sensitivity analysis of the percentage of time in the target range determined that ≥60% was the optimal cut-off. Therefore, we studied two cut-offs for the percentage of time in the target RASS range during the first 24 and 48 h of MV: at least 60% and at least 70% within the target range. Most critically ill patients have complex diseases and preexisting comorbidities and conditions that contribute to prolonged ICU stay and can impact outcomes and response to treatment. This complexity may obscure the PGx effect, or the effect may only be meaningful and observable in patients with the most extreme phenotypes (i.e., ultrarapid or poor metabolizers). In addition, for efficient use of PGx results in the ICU setting, testing must be rapidly available; this remains a major challenge as turnaround time is generally >48 h. The current evaluation and a previous study also conducted in an ICU showed the feasibility of obtaining consent and collecting genetic information from critically ill adults. Most commonly used PGx variants are in drug-metabolizing enzymes and may affect the pharmacokinetics of drugs. Previous studies in non-ICU settings have reported pharmacokinetic PGx associations of fentanyl with CYP3A4/5 , , , , and CYP2D6 , midazolam with CYP3A4/5 , , and propofol with CYP2B6 , but there are few data evaluating whether pharmacokinetic changes translate into altered clinical outcomes. Accordingly, most of these drug-gene pairs have a PharmGKB level of evidence (LOE) of 3, except for fentanyl and CYP3A4, which has an LOE of 2. , , In the current study, we found a non-significant association of the number of altered phenotypes with the study endpoints. There was a non-significant trend toward lower odds of achieving ≥60% and ≥70% of time in the desired target range with an increasing number of altered phenotypes. Although numerically different, there was no significant difference in time to target RASS between individuals with ≤1 altered phenotypes (5 h) and those with >1 altered phenotypes (10 h). Individuals with >1 altered phenotypes had 3.5 times more ADRs than those with ≤1 altered phenotypes (14.8% vs. 4.16%). It is possible that the presence of altered phenotypes that change the pharmacokinetics or pharmacodynamics of sedatives and analgesics increases the risk of ADRs, but we were not able to demonstrate a significant difference. This should be evaluated in a larger sample. In the current study, all participants who had their PGx results returned and received the individual counseling session had satisfactory perceptions and positive attitudes toward PGx. Capturing patients' attitudes and perceptions toward PGx is essential for developing and implementing PGx in clinical settings. Future studies should evaluate the attitudes of ICU medical and nursing providers. The current study has limitations. Our PGx panel was limited to variants with known or probable PGx effects for fentanyl, propofol, midazolam, and other drugs, which may have led to the omission of other important pharmacogenes. We did not study the ADRA2A and PRKCB genes, which have been studied for dexmedetomidine and have an LOE of 3 on PharmGKB, because they are rarely tested on PGx panels and were not available on the panel we used. In our analysis, the primary PGx effect (percentage of time in the target RASS range) is mainly related to fentanyl, propofol, and/or midazolam, which are the primary agents used in our ICUs.
We did not account for concomitant medications that might have drug–drug interactions with the tested sedatives and analgesics, because the study period was short (24 and 48 h) and we assumed the drug interaction effect would be small. The lack of biogeographical genetic diversity in our study participants resulted in only a few diplotypes that could be potentially important (i.e., CYP3A5*1/*1) for fentanyl and midazolam. Because of the many possible combinations of sedatives and analgesics that can be used, confounding effects are difficult to control. Future PGx studies should focus on a single agent and/or a single combination such as fentanyl and propofol. Another limitation is that the study was interrupted by the COVID-19 pandemic: enrollment was halted for 7 months, which reduced the number of patients enrolled, and the pandemic itself affected ICU practice. Future studies with larger sample sizes are needed. An increase in the number of altered phenotypes in pharmacogenes relevant to propofol, fentanyl, and midazolam was non-significantly associated with unfavorable sedation outcomes, such as lower odds of achieving the target RASS range and slower time to target RASS, during the early period of acute MV. The positive attitudes and perceptions of ICU patients and their willingness to participate will help facilitate the advancement and acceptance of PGx testing in the critical care setting. M.E.M., T.T.N., J.L., B.S., Z.R., G.B., D.S., and P.A.J. wrote the manuscript; T.T.N., J.L., B.S., Z.R., G.B., D.S., and P.A.J. designed the research; M.E.M., T.T.N., J.L., D.S., and P.A.J. performed the research; M.E.M. analyzed the data. This research was funded by an Enhance Comprehensive Pharmacist Services to Improve Patient Health Clinical Research Award, and MM was supported by a student award from the National Institutes of Health's National Center for Advancing Translational Sciences, grant UM1TR004405. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health's National Center for Advancing Translational Sciences. The authors declared no competing interests for this work. Data S1. Data S2.
ASNTR’s Venture into a Hybrid Conference: Lessons Learned During the COVID-19 Pandemic
96f83b09-e1db-46db-99ce-aa5571b6fbd2
8532255
Pathology[mh]
Like the annual return of the manatee to warmer waters, nearly a hundred world-class scientists returned to Florida for their annual exchange of ideas, presentation of new developments, and fostering of new collaborations during the 28th Annual Conference of the American Society for Neural Therapy and Repair (ASNTR), held at the Sheraton Sand Key in Clearwater Beach, Florida. Even though the ASNTR meeting was held in person, it offered optional virtual attendance, creating its first hybrid event. The COVID-19 pandemic cancelled the 2020 meeting, but 18 months later the ASNTR, like many scientific meetings, rose to the ongoing challenge by encouraging participants to meet in person or virtually. For these authors, it was our first in-person conference since 2019. Hybrid meetings have become almost essential during the pandemic. Many researchers were unable to attend the ASNTR meeting in person because of international or institutional restrictions. Other researchers were held back because their children were too young to be vaccinated and they did not want to risk the travel. Several clinician scientists reported that the increased number of hospitalized COVID-19 patients would not afford them the time away. Many of these researchers, then, were able to attend virtually. The reasons for virtual attendance at the ASNTR meeting reflect common themes in this new era of scientific conferences. Data collected from international dermatology and oncology conferences found that virtual conferences can: increase international participation in an era of travel restrictions; allow flexible "attendance" with on-demand access; ease conference costs; and reduce the carbon footprint. In addition, virtual conferences give attendees the time and opportunity to really review scholarly work through on-demand access. This could be a substantial benefit to trainees and young investigators, and may help advance educational webinars and meetings , , such as the ASNTR trainee workshop looking at career development in and out of academia. The lack of face-to-face networking, though, may counter some conference benefits. At critical time points in career development, the in-person experience is almost essential for early-stage investigators and graduate students, when the chance to present their own research (in person) brings a more dynamic interaction and the opportunity for unscripted connections with more seasoned investigators. One possible solution to the in-person vs. on-demand dilemma, which could balance the benefits of both, suggest Bousema et al., would be for researchers to alternate their attendance between virtual and in-person events . Since the founding of the ASNTR about 30 years ago by John R. Sladek (University of Colorado Anschutz Medical Campus) and Paul R. Sanberg (University of South Florida), it has been a small but intimate collection of researchers from all levels of academia and industry, loyal to the singular goal of improving neural therapies and transplant approaches. In the past, the casual demeanor of the in-person meeting created an atmosphere where ideas could be shared and collaborations formed as easily over a conference table as under a beach umbrella. The in-person aspect has been a lasting strength of the conference. Thus, moving to a hybrid format proved challenging, but despite this the ASNTR hosted 73 live attendees, including 17 travel award recipients, and over 25 virtual attendees and guests.
The scientific program included live and virtual oral presentations and posters, as well as a data blitz session and a virtual presentation by Zeiss on brain imaging. The meeting was led virtually by Li-Ru Zhao (SUNY Upstate Medical University), as current president, and by president-elect Michael Lane (Drexel University College of Medicine), who hosted in person. This was a unique and successfully executed leadership approach. Dr. Zhao and Dr. Lane were very involved in their roles. The duo oversaw a variety of presentations, some with unique and/or timely insights, such as exploring the brain-gut axis in Parkinson's disease or the neurological sequelae of SARS-CoV-2 infection. As in the past, the range of presented topics was broad and included: Neurodegenerative disorders ○ Alzheimer's disease ○ Parkinson's disease ○ Huntington's disease ○ Fragile X disease ○ ALS ○ Traumatic brain injury ○ Spinal cord injury ○ Ischemic stroke ○ Tauopathies ○ Vascular dementia ○ Angelman syndrome ○ COVID-19 Cell reprogramming and replacement options ○ umbilical cord blood ○ mesenchymal stem cells ○ interspecies chimeric cells ○ induced pluripotent stem cells ○ bone marrow derived stem cells ○ interneurons Technologies ○ hydrogel scaffolding ○ response biomarkers for cell replacement strategies ○ deep brain stimulation ○ optimizing MRIs This year's ASNTR presidential lecture was given by Gabriel de Erausquin (University of Texas Health San Antonio) and was entitled "Could COVID-19 increase your risk of dementia?" . Here, Dr. de Erausquin highlighted the potential for future cognitive decline resulting from SARS-CoV-2 infection. He pointed out that this idea is not without precedent, as CNS degeneration emerged later in some patients infected during the 1918 influenza pandemic. Furthermore, both neurons and glia express ACE2, the receptor by which coronaviruses like SARS-CoV-2 gain access to cells . For the first time since 2019, the Bernard Sanberg Memorial Award was presented to Walter Low (University of Minnesota) for his distinguished career in cell therapy and regenerative medicine. Dr. Low joins an elite group of 21 previous recipients of this award, which was started in 2000 to honor the late father of ASNTR co-founder Paul Sanberg. Nicholas Boulis (Emory University) was invited to give the Roy Bakay Memorial lecture . The privilege of this lecture and award is given to notable clinician scientists who reflect the ideals of the late Dr. Roy Bakay , who passed away in 2013 from cancer. Dr. Boulis' presentation was titled "Development of cellular and molecular surgery for ALS." Additionally, the 2nd annual Paul J. Reier Award was given to Ines Maldonado-Lasuncion (University of Chicago and Vrije Universiteit Amsterdam). This award is so named to honor senior investigator Paul J. Reier, who continues to make a remarkable contribution to spinal cord and neurotrauma research at the University of Florida. The award is unique in that it is presented to an early-career investigator with excellent presentation skills during the ASNTR meeting. Ines Maldonado-Lasuncion, a PhD candidate, gave a talk entitled "Mesenchymal stromal cells, inflammatory priming, and spinal cord repair." In it, she discussed the role of inflammatory priming of mesenchymal stem cells prior to transplant in spinal cord injury. Sponsors for the ASNTR meeting included USF Research and Innovation, the Florida High Tech Corridor Council, Zeiss, and the Marion Murray Spinal Cord Research Center.
Non-profit sponsors included the Lisa Dean Moseley Foundation, the Wings for Life Spinal Cord Research Foundation, Cure CADASIL, and the Dementia Society of America. Funds from NINDS supported trainee education (1R13NS118601-01; Kyle Fink, UCD). The take-away message from the 2021 ASNTR conference, from society members and attendees alike, was that they missed the in-person experience. I, the first author, particularly liked the many spontaneous discussions held outside of formal sessions regarding the translation of research towards patents, licensing, and commercialization among students, faculty, and industry representatives . It could be argued that a small, focused in-person meeting held annually at the same conference venue generates a sense of familiarity that allows for deeper exchange in real time. Even though it is not ideal, the fact that ASNTR successfully supported a hybrid meeting bodes well for other small conferences. A hybrid conference does not have to completely eliminate the greatest strength of the small conference if researchers are willing to alternate between live and virtual attendance. The 29th ASNTR conference is scheduled to return to its standard spring time frame, April 28–May 1, 2022. In-person programs will be held in Clearwater, Florida. The decision to make it a hybrid conference will depend, as in the last two years, on the COVID-19 pandemic.
Tumor-Targeting Peptides Search Strategy for the Delivery of Therapeutic and Diagnostic Molecules to Tumor Cells
9ce2b502-aa03-4dd6-98c6-afdeaadd89cd
7796297
Pathology[mh]
Glioblastoma (GBM) is the most common and aggressive form of brain tumor and is characterized by the least favorable prognosis: the average survival for patients with this diagnosis is 15 months . In modern medical practice, standard methods such as surgery, radiation therapy and chemotherapy are used to treat glioblastoma, but in most cases these methods are ineffective. The low efficiency of glioblastoma treatment is often associated with two characteristic features of this tumor: the invasion of tumor cells into the brain parenchyma, which leads to the emergence of secondary tumor foci, and the high heterogeneity of the tumor. A particular contribution to the resistance of GBM cells to therapy is made by a small population of cells with a highly aggressive phenotype characteristic of cancer stem cells (CSCs) . Targeted therapy, based on drugs that specifically affect particular tumor types, can be a solution to the problem of the low efficiency of current cancer therapies, making it possible to increase the effectiveness of treatment and minimize toxic effects on healthy tissues. The unique properties of cancer cells make it possible to find specific ligands that interact directly with the tumor and enable such a targeted approach. Currently, short peptides are considered promising agents for the delivery of therapeutic and diagnostic molecules to cancer cells: they have high affinity and specificity for the target and penetrate cancer cells more efficiently than larger ligands such as antibodies. One of the promising ways to search for tumor-targeting peptides is the screening of phage peptide libraries in tumor cell cultures in vitro and in xenograft models in vivo . This approach can also address the problem of tumor heterogeneity, since the screening can reveal tumor-targeting peptides that specifically interact with different populations of tumor cells, including CSCs. A targeted approach to CSCs is especially relevant, since characteristics of these cells such as the capacity for self-renewal, differentiation into various cell types, invasion of the brain parenchyma and metastasis determine their resistance to chemotherapy and radiotherapy . Earlier, by screening the phage peptide libraries Ph.D.-7 and Ph.D.-12 (New England Biolabs, Ipswich, Massachusetts, USA), we selected bacteriophages displaying tumor-targeting peptides that provide specific binding of phage particles to human glioblastoma U-87 MG cells in vitro and to the U-87 MG tumor in a xenograft model in vivo . In this work, the Ph.D.-C7C phage peptide library was screened to obtain tumor-targeting peptides against U-87 MG tumor cells with a cancer stem cell phenotype (CD44+/CD133+), and a comparative analysis was carried out of the distribution in the body of mice, and of the specificity of interaction with the U-87 MG tumor, of bacteriophages displaying tumor-targeting peptides selected during biopanning of various peptide libraries in different selection systems. 2.1. Biopanning of Linear Phage Libraries Ph.D.-12 and Ph.D.-7 on U-87 MG Cells and Tumors Earlier, in our laboratory, we screened the phage peptide library Ph.D.-7 in vivo on U-87 MG glioblastoma xenografts in immunodeficient mice. In the course of this work, 102 bacteriophages were selected; the sequences of 27 displayed peptides selected after the third round were identified and analyzed.
When analyzing the sequences of the selected peptides, the highest frequency of occurrence belonged to the sequence HPSSGSA (92), at 25.9% . Additionally, screening of the Ph.D.-12 phage peptide library in vitro on U-87 MG human glioblastoma cells had been performed earlier. In the course of that work, 80 bacteriophages were selected; the sequences of 39 displayed peptides selected after the third round and 37 peptides selected after the fifth round were identified and analyzed. After the fifth round, the sequence SWTFGVQFALQH (26) was found in 24.3% of cases . 2.2. Biopanning of the Circular Phage Peptide Library Ph.D.-C7C In Vivo and In Vitro We carried out in vitro biopanning on cells of the immortalized human glioblastoma cell line U-87 MG using Ph.D.-C7C with the same protocol as for the linear libraries. Three rounds of selection were carried out; the sequences of the displayed peptides providing specific interaction of phage particles with U-87 MG cells were determined by sequencing. After the third round of biopanning, bacteriophages displaying the peptides PVPGSFQ (18C), PTQLHGT (23C), MHTQTPW (19C), TTKSSHS (2C), and ISYLYGR (36C) were selected. The frequency of occurrence of the peptides PVPGSFQ (18C) and PTQLHGT (23C) was 35% and 15%, respectively. The peptides MHTQTPW (19C), TTKSSHS (2C) and ISYLYGR (36C) accounted for 10% of the selected pool of bacteriophages . 2.3. Obtaining a Population of CD44+/CD133+ U-87 MG Cells for Selection of Bacteriophages Displaying Peptides Specific to CSCs To obtain tumor-targeting peptides specific to U-87 MG cancer stem cells (CD44+/CD133+ cells), we screened the cyclic phage peptide library Ph.D.-C7C in vivo. The first two rounds of selection were performed on U-87 MG tumor transplanted subcutaneously into SCID mice. The third round of biopanning was performed on U-87 MG tumor orthotopically implanted into SCID mice. In this case, tumor-bearing mice were intravenously injected with the phage peptide library enriched after the first two rounds; after 24 h of circulation of the library in the body, the animals were euthanized and the tumor was removed. Tumor tissue was homogenized to single cells; tumor cells were stained for the markers CD44 and CD133 and sorted using fluorescence-activated cell sorting (FACS). According to the results of sorting, the number of cells positive for CD44 (CD44+) was 8.9% ( A), positive for both markers (CD44+/CD133+) was 5.53% ( B), and positive for CD133 (CD44−/CD133+) was 0.65% ( C). Next, cells positive for both markers (CD44+/CD133+) were lysed, the lysate was amplified in Escherichia coli, and the sequences of the inserts were determined by Sanger sequencing. According to the sequencing results, only one clone, displaying the MHTQTPW peptide (No. 19C), bound to cancer cells positive for both markers tested. It should be noted that the MHTQTPW peptide had previously been selected in the biopanning on U-87 MG cells in vitro (data not shown). 2.4. Analysis of the Binding Specificity of Bacteriophages Displaying Selected Peptides to Human Glioblastoma U-87 MG Cells We carried out a comparative analysis of the efficiency of binding of the bacteriophages displaying tumor-targeting peptides to human glioblastoma U-87 MG cells by fluorescence microscopy . We have previously shown that the peptide displayed by bacteriophage No. 26 ensures the binding and internalization of the phage particle into AS2 astrocytoma cells, but not into human MG1 glioblastoma cells .
The figure shows fluorescence microscopy of cells incubated with bacteriophages 19C, 36C, 92 and 26, selected from different phage libraries in different screening systems. Phage M13 displaying the peptide YTYDPWLIFPAN, previously selected for MDA-MB 231 cells, was taken as a negative control . No significant differences were found in the efficiency of cell binding among the bacteriophages displaying the studied peptides. Thus, the obtained tumor-targeting peptides are able to provide efficient specific binding of phage particles to U-87 MG glioblastoma cells. 2.5. Analysis of Biodistribution and Specificity of Accumulation of Bacteriophages Displaying Selected Tumor-Targeting Peptides in U-87 MG Tumor Tissue Comparative analysis of the distribution of the peptide-displaying bacteriophages in the body of experimental animals, and of the specificity of their accumulation in U-87 MG xenograft tumors, was carried out by titration of homogenates of the tumor and of control organs (kidney, liver, lungs, brain) after 4.5 h of circulation of the phage particles in the body of the animal. For comparative analysis, bacteriophages No. 26 (Ph.D.-12), No. 19C and No. 36C (Ph.D.-C7C), and No. 92 (Ph.D.-7) were selected. A bacteriophage displaying the unrelated peptide YTYDPWLIFPAN was used as a negative control. The titration data showed that bacteriophage No. 92, obtained by screening the phage peptide library Ph.D.-7 in vivo, accumulated to the greatest extent in the tumor tissue as compared to the control organs: the titer of the bacteriophage in the tumor exceeded its titer in the kidneys by more than 5.5 times, and its titer in the brain, liver and lungs by more than 11 times . Two-way analysis of variance (ANOVA) showed a statistically significant difference ( p ≤ 0.0001) in the accumulation of this bacteriophage in the tumor as compared to the control phage and phages No. 26, No. 19C and No. 36C. Bacteriophage No. 26 also accumulated specifically in the tumor tissue, but to a lesser extent than bacteriophage No. 92; its accumulation differed statistically significantly only from that of the control phage ( p ≤ 0.001). The bacteriophages selected from the cyclic library Ph.D.-C7C, No. 19C and No. 36C, showed the least accumulation in tumor tissue and other organs.
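As a rough, purely illustrative reconstruction of the kind of analysis reported in this section, the following Python sketch computes tumor-to-organ fold enrichment from organ titers and runs a two-way ANOVA on log-transformed titers with phage and tissue as factors. The titer values, organ subset, and data layout are invented placeholders, not the study's data.

```python
# Illustrative only: fold-enrichment and two-way ANOVA on invented titer values.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Plaque-forming units per mL recovered from each organ homogenate (hypothetical).
records = [
    ("No92",    "tumor", 1.1e6), ("No92",    "kidney", 2.0e5), ("No92",    "liver", 9.0e4),
    ("No26",    "tumor", 4.0e5), ("No26",    "kidney", 1.5e5), ("No26",    "liver", 1.0e5),
    ("control", "tumor", 6.0e4), ("control", "kidney", 7.0e4), ("control", "liver", 8.0e4),
]
df = pd.DataFrame(records, columns=["phage", "organ", "titer_pfu_ml"])

# Fold enrichment of each phage in the tumor relative to each control organ.
tumor = df[df.organ == "tumor"].set_index("phage")["titer_pfu_ml"]
for organ in ["kidney", "liver"]:
    other = df[df.organ == organ].set_index("phage")["titer_pfu_ml"]
    print(organ, (tumor / other).round(1).to_dict())

# Two-way ANOVA on log-transformed titers with phage and organ as factors
# (an additive model; replicate animals per combination would be needed to
# estimate the phage x organ interaction in a real analysis).
df["log_titer"] = np.log10(df["titer_pfu_ml"])
model = smf.ols("log_titer ~ C(phage) + C(organ)", data=df).fit()
print(anova_lm(model, typ=2))
```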
The goal of this study was to develop a strategy for searching for tumor-targeting peptides for the delivery of therapeutic and diagnostic molecules to glioblastoma, which is characterized by some degree of heterogeneity. Tumor heterogeneity is due to a small population of cells with a highly aggressive phenotype characteristic of CSCs. To identify CSCs, the levels of CD24, CD29, CD44, CD133 and ALDH1 are most often examined. CD44 and CD133 are considered among the most specific CSC markers. CD44, a transmembrane glycoprotein, is considered one of the most important markers of CSCs . As a result of alternative splicing, post-translational modifications, and partial cleavage by matrix metalloproteinases, multiple CD44 isoforms can exist in the cell . CD44 acts as a co-receptor for several cell surface receptors (EGFR, Her2, Met6, TGFβRI, TGFβRII, VEGFR-2), thus participating in various signaling pathways (Rho, PI3K/Akt and Ras-Raf-MAPK), including those stimulating growth and cell motility. Another characteristic marker of CSCs, CD133 or prominin-1, is a transmembrane glycoprotein with a structure consisting of five transmembrane domains . It is known that CD133 is required to maintain the properties of CSCs, and a low level of this marker in glioblastoma cells negatively affects the cells' capacity for self-renewal and neurosphere formation . The expression level of CD133 on cells is usually low but can vary widely. Thus, in endometrial cancer, CD133 was immunohistochemically detected in 1.3–62.6% of cells, while in colorectal cancer CD133 was expressed in 0.3–82.0% of cells . Although CD133 is considered a marker of CSCs, studies of it as a marker of glioblastoma CSCs remain controversial . Despite the unclear physiological function of CD133 in the pathogenesis of gliomas, mechanisms in which this receptor is involved have been discovered. It has been shown that expression of this receptor increases under hypoxia, as a result of which cells with a CD133-negative phenotype acquire a CD133+ phenotype . Thus, at present, CSCs are considered the most promising targets in the search for specific therapeutic and diagnostic molecules. The use of combination therapy, including standard cytotoxic drugs capable of destroying the main tumor mass and drugs targeting CSCs, can significantly increase the effectiveness of anticancer therapy and improve patient survival .
In this work, in order to develop a strategy for obtaining tumor-targeting peptides against glioblastoma, a comparative analysis was conducted of the binding efficiency of the peptides selected by screening the linear and cyclic phage peptide libraries Ph.D.-7, Ph.D.-12 and Ph.D.-C7C in different selection systems (in vitro and in vivo). We also used a cyclic phage library, in which the peptides displayed on the surface protein p3 have a circular structure owing to the formation of disulfide bridges between the cysteines flanking the insert. It is believed that cyclic peptides are much less susceptible to proteolysis and often exhibit increased biological activity due to their conformational rigidity . As a result of the studies carried out, it was found that all the selected tumor-targeting peptides obtained from the various peptide libraries, both in vitro and in vivo, are able to provide efficient specific binding of phage particles to unenriched U-87 MG glioblastoma cells. Indeed, immunocytochemistry showed that almost all cells in the unenriched U-87 MG population were stained. In the image for phage 19C, which was recovered after lysis of the enriched (CD44+/CD133+) cells and their further amplification, not all cells were stained. One possible explanation is that this peptide (19C) binds to receptors on the stem cell surface that are not present on all cells in the general population, and likely to CD44 only, because according to the cytometry data ( C) the CD44+/CD133+ population is only 5.53%. Additionally, phages No. 26, 92 and 36C were found in the screening on unenriched U-87 MG cells. Another possibility is that, after the CD44+/CD133+ cells were obtained by sorting, the CSCs generated differentiated progeny and lost the markers of stemness. The highest specificity of binding to the U-87 MG xenograft in vivo, as compared to the control organs, was provided by linear tumor-targeting peptides obtained by screening the Ph.D.-12 phage peptide library on the U-87 MG xenograft. Despite the greater stability under physiological conditions and the conformational rigidity that often underlie the high biological activity of cyclic peptides , the specificity of interaction with the U-87 MG xenograft of bacteriophages displaying cyclic peptides, selected on the population of glioblastoma cells expressing CSC markers, turned out to be lower than that of bacteriophages displaying linear peptides. Certain linear peptides are believed to have a conformation recognized by target receptors without the need for cyclization. In addition, a linear peptide conformation can provide more efficient penetration into the cell than a cyclic one, since a large free energy is required for penetration into the cell . Additionally, when studying the distribution and binding of phage particles to a tumor xenograft, we must take into account the fact that the number of CD44+/CD133+ cells inside the xenograft is small. In addition to U-87 MG cells, the tumor contains endothelial and stromal cells. Thus, CSCs are present in the tumor tissue in only small numbers, which explains the absence of significant differences between the binding of the control phage and bacteriophage No. 19C to the U-87 MG xenograft. In short, using the strategy of searching for peptides on a cell population enriched for specific markers (CD44+/CD133+), we encountered some obstacles in further experiments.
Thus, according to the totality of the obtained data, the most effective strategy for obtaining tumor-targeting peptides that provide targeted delivery of diagnostic agents and therapeutic drugs to human glioblastoma tumors is to screen linear phage peptide libraries for glioblastoma tumors in vivo. 4.1. Cell Cultures Cancer cell line U-87 MG was obtained from the Russian cell culture collection (Russian Branch of the ETCS, St. Petersburg, Russia). U-87 MG cells were cultivated in alpha-MEM (Thermo Fisher Scientific, Waltham, MA, USA) supplemented with 10% of fetal bovine serum (FBS) (Sigma, St. Louis, MO, USA), 1 mM L-glutamine, 250 mg/mL amphotericin B and 100 U/mL penicillin/streptomycin. Cells were grown in a humidified 5% CO2–air atmosphere at 37 °C and were passaged with TripLE Express Enzyme (Thermo Fisher Scientific, USA) every 3–4 days. 4.2. Animals Female SCID hairless outbred (SHO-Prkdc scid Hrhr) mice aged 6–8 weeks were obtained from «SPF-vivarium» ICG SB RAS (Novosibirsk, Russia). Mice were housed in individually ventilated cages (Animal Care Systems, Centennial, Colorado, USA) in groups of one to four animals per cage with ad libitum food (ssniff Spezialdiäten GmbH, Soest, Germany) and water. Mice were kept in the same room within a specific pathogen-free animal facility with a regular 14/10 h light/dark cycle (lights on at 02:00 h) at a constant room temperature of 22 ± 2 °C and relative humidity of approximately 45 ± 15%. 4.3. In Vivo and In Vitro Biopanning Biopanning of the phage peptide library (Ph.D.-C7C, New England Biolabs, Ipswich, MA, USA) on U-87 MG glioblastoma cells in vitro was performed as described previously with some modifications , namely. The cells that reached 100% confluence were washed with 4 mL of PBS, then 400 μL of 10 mM EDTA was added to detach the cells from the surface and incubated for 4 min at 37 °C. Then 1 mL of complete growth medium was added and cell suspension was transferred into a falcon with a volume 15 mL. The cells were centrifuged for 3 min at 1000 rpm, the supernatant was removed, the cells were resuspended in 4 mL of PBS, and the centrifugation was repeated. The cells were resuspended in 4 mL of blocking buffer (5% BSA/PBS), incubated for 10 min at 37 °C and centrifuged for 3 min at 1000 rpm. The supernatant was removed, the cells were washed with 4 mL PBS and pelleted by centrifugation (3 min, 1000 rpm). The supernatant was removed, the cells were incubated with 3 mL of a negative selection-depleted phage peptide library for 1 h at 4 °C and centrifuged for 3 min at 1000 rpm. The supernatant was removed, the cell pellet was washed three times with 4 mL of PBS and centrifuged for 3 min at 1000 rpm. The cells were resuspended in 4 mL of growth medium heated to 37 °C to provide conditions for the internalization of bacteriophages into cells, incubated for 15 min at 37 °C and centrifuged for 3 min at 1000 rpm. The cells were then washed three times with 4 mL of PBS. 400 μL of Triple Express was added to the cell pellet to remove non-internalized bacteriophages, incubated for 2 min at 37 °C, 1 mL of complete growth medium was added, and centrifuged for 3 min at 1000 rpm. The supernatant was removed, the cells were washed with 4 mL PBS, and the centrifugation was repeated. Then, the cells were lysed with 1 mL of mQ water for 20 min at room temperature. The cell lysate was centrifuged for 5 min at 14,000 rpm, the supernatant was removed, and the phage suspension (1 mL) was amplified. 
The amplified population of phage particles was used for subsequent rounds of selection. For in vivo screening, we used the previously described methods , to wit. SCID mice with subcutaneously and orthotopic glioblastoma xenograft U-87 MG were injected into the tail vein with 300 μL of a phage peptide library with a concentration of 2 × 10 11 pfu/mL, diluted in saline. The circulation time of the phage library in the bloodstream for mice with subcutaneously glioblastoma xenograft U-87 MG was 5 min; for mice with orthotopic glioblastoma xenograft U-87 MG, the circulation time was 24 h. After the screening time elapsed, the mouse was sacrificed by cervical dislocation, the chest was opened, and 15 mL of saline was perfused through the heart to remove bacteriophages which not binding with the tumor from the bloodstream. The tumor was removed, washed in saline and homogenized in 1 mL PBS containing 1 mM PMSF. The tumor tissue homogenate was centrifuged for 10 min at 10,000 rpm. The pellet was resuspended in 1 mL of blocking buffer (1% BSA), after which centrifugation was repeated under the same conditions. The pellet was resuspended in 1 mL of liquid culture of E. coli ER2738 in the average log-phase with an optical density 0.3 (OD600) for elution of bacteriophages bound to the tumor and incubated for 30 min at 37 °C at 170 rpm. The eluate of phage particles was centrifuged for 5 min at 10,000 rpm. The supernatant was transferred to separate tubes and the enriched phage library was amplified for subsequent rounds of selection. Manipulations on glioblastoma xenograft U-87 MG and monitoring of tumor growth were carried out by employees of «SPF-vivarium» ICG SB RAS. After the third round of selection, phage particles were titrated to obtain individual phage colonies, which were used for DNA isolation according to the manufacturer’s protocol for the phage display peptide library. The sequencing reaction products were determined using an ABI 310 Genetic Analyzer (Applied Biosystems, Foster City, CA, USA) at the Genomics Core Facility of SB RAS using sequencing primers (-96III (5′-CCC TCA TAG TTA GCG TAA CG-3′)). 4.4. Tumor Preparation for Cell Sorting In mice with orthotopical glioblastoma xenograft U-87 MG, a peptide library enriched with in vivo biopanning (2 × 10 11 PFU/mL of phage particles in 500 μL of saline) was injected into the tail vein. After 24 h, the mouse was sacrificed by cervical dislocation and the tumor was removed. The tumor was washed twice with PBS containing 10% penicillin-streptomycin (Sigma-Aldrich, St. Louis, MO, USA), after which it was crushed with a scalpel on a Petri dish, transferred into a falcon with 3 mL of trypsin and incubated in a water bath at 37 °C for 10 min to dissociate the cells. To inactivate trypsin, 3 mL of a trypsin inhibitor from soybeans (Sigma-Aldrich, USA) was added to the cell suspension, after which the cells were centrifuged for 10 min at 800 rpm. The cell pellet was resuspended in NSC medium for neural stem cells (Sigma-Aldrich) until a homogeneous cell suspension was formed. The undissociated pieces of tumor tissue were removed and additionally homogenized. 10 mL of NSC medium was added to the cell suspension, filtered through a filter with a pore size of 40 μm, and centrifuged for 10 min at 800 rpm. The cells were resuspended in 1 mL of NSC medium and incubated for 2 h at 37 °C to restore the proteomic profile of the cells. 4.5. 
Cell Sorting After incubation in NSC medium, cells were incubated in 500 μL blocking buffer containing 10% FBS for 10 min. The cells were then washed with 500 μL PBS and incubated for 45 min on ice with primary antibodies against CD44 labeled with FITC (Abcam, Cambridge, UK) and primary antibodies against CD133 labeled with Alexa Fluor 647 (Abcam), both diluted in 1% FBS in PBS, in 200 μL. The cells were washed twice with 500 μL PBS, resuspended in 500 μL PBS containing 4 μg/mL gentamicin (Thermo Fisher Scientific, Waltham, MA, USA) and passed through a strainer (BD Biosciences, Franklin Lakes, NJ, USA) into flow cytometry tubes (BD Biosciences). The analysis and sorting of cells were carried out on a SONY SH800S Cell Sorter (Sony Biotechnology, San Jose, CA, USA). 4.6. Immunocytochemistry U-87 MG cells were incubated on BD Falcon culture slides to 80–90% confluence, washed with PBS twice, and 100 μL of the selected phage clone (2 × 10 10 PFU/mL) in PBS-BSA Ca/Mg buffer (0.1% BSA, 1 mM CaCl 2 , 10 mM MgCl 2 × 6H 2 O) was added. Cells were incubated with the bacteriophage clone for 2 h at 37 °C and then treated according to the previously described technique with slight modifications , as follows. After incubation at 37 °C, cells were washed three times with 500 μL buffer (100 mM glycine, 0.5 M NaCl, pH 2.5) at room temperature, fixed with 200 μL cold 4% formaldehyde for 10 min and washed twice with PBS. Then, 200 μL 0.2% Triton X-100 was added for 10 min to permeabilize cells, after which the cells were washed twice with 500 μL PBS. Next, cells were incubated with 200 μL mouse Anti-M13 Bacteriophage Coat Protein g8p antibodies (Abcam) diluted in 1% BSA/PBS buffer (1:200) for 45 min at 4 °C and washed four times with 500 μL cold 1% BSA/PBS buffer. Next, cells were incubated with 200 μL secondary Alexa Fluor 647 (Abcam, UK) diluted in 1% BSA/PBS buffer (1:200) for 45 min at 4 °C and washed four times with 500 μL cold 1% BSA/PBS buffer. Then the cells were stained with DAPI (Thermo Fisher Scientific) and analyzed by fluorescence microscopy on an Axio Skope 2 Plus microscope (Zeiss, Oberkochen, Germany) at the Center for Microscopic Analysis of Biological Objects of SB RAS (Novosibirsk, Russia). 4.7. Analysis of the Specificity of Accumulation of Bacteriophages Displaying Selected Peptides in the Glioblastoma Xenograft U-87 MG Mice with a subcutaneously transplanted tumor were injected into the tail vein with 500 μL of bacteriophage (2 × 10 9 PFU/mL) diluted in physiological solution. After 4.5 h of circulation of phage particles in the body, the mouse was sacrificed by cervical dislocation and perfused through the left ventricle of the heart with 15 mL of saline. Then the tumor and control organs (liver, kidney, lungs, and brain) were removed, washed in PBS, and homogenized in 1 mL PBS containing 1 mM PMSF (Sigma Aldrich). The homogenates of tumor tissue and control organs were centrifuged for 20 min at 10,000 g at room temperature to elute bound bacteriophages and were then resuspended. The resulting suspension of phage particles was titrated on agar LB medium supplemented with 1 mg/mL X-Gal and 1.25 mg/mL IPTG. 4.8. Statistical Analysis Two-way ANOVA was used for comparisons of more than two sets of data. Differences were considered to be significant if the p-value was <0.05. Nucleotide sequences of the inserts encoding peptides were analyzed using MEGA X software.
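Alongside the sequence analysis in MEGA X, the clone-frequency figures reported in the Results (e.g., 35%, 15% and 10% for individual peptides) amount to a simple tally over the Sanger-derived inserts. The following Python sketch shows one way such a tally could be computed; the peptide strings and their counts are invented stand-ins (including the placeholder sequence AQNPTSV), chosen only to roughly mirror the reported proportions.

```python
# Toy illustration of tallying displayed-peptide frequencies among sequenced clones;
# the peptide strings and counts below are invented, not the study's data.
from collections import Counter

clone_peptides = (
    ["PVPGSFQ"] * 7 + ["PTQLHGT"] * 3 + ["MHTQTPW"] * 2 +
    ["TTKSSHS"] * 2 + ["ISYLYGR"] * 2 + ["AQNPTSV"] * 4   # last one: hypothetical other clones
)

counts = Counter(clone_peptides)
total = len(clone_peptides)
for peptide, n in counts.most_common():
    print(f"{peptide}: {n}/{total} clones ({100 * n / total:.1f}%)")
```

A plain Counter is sufficient here because the question is only how often each identical insert recurs; alignment or phylogenetic tools such as MEGA X would be needed for anything beyond exact-match counting.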
U-87 MG cells were cultivated in alpha-MEM (Thermo Fisher Scientific, Waltham, MA, USA) supplemented with 10% of fetal bovine serum (FBS) (Sigma, St. Louis, MO, USA), 1 mM L-glutamine, 250 mg/mL amphotericin B and 100 U/mL penicillin/streptomycin. Cells were grown in a humidified 5% CO2–air atmosphere at 37 °C and were passaged with TripLE Express Enzyme (Thermo Fisher Scientific, USA) every 3–4 days. Female SCID hairless outbred (SHO-Prkdc scid Hrhr) mice aged 6–8 weeks were obtained from «SPF-vivarium» ICG SB RAS (Novosibirsk, Russia). Mice were housed in individually ventilated cages (Animal Care Systems, Centennial, Colorado, USA) in groups of one to four animals per cage with ad libitum food (ssniff Spezialdiäten GmbH, Soest, Germany) and water. Mice were kept in the same room within a specific pathogen-free animal facility with a regular 14/10 h light/dark cycle (lights on at 02:00 h) at a constant room temperature of 22 ± 2 °C and relative humidity of approximately 45 ± 15%. Biopanning of the phage peptide library (Ph.D.-C7C, New England Biolabs, Ipswich, MA, USA) on U-87 MG glioblastoma cells in vitro was performed as described previously with some modifications , namely. The cells that reached 100% confluence were washed with 4 mL of PBS, then 400 μL of 10 mM EDTA was added to detach the cells from the surface and incubated for 4 min at 37 °C. Then 1 mL of complete growth medium was added and cell suspension was transferred into a falcon with a volume 15 mL. The cells were centrifuged for 3 min at 1000 rpm, the supernatant was removed, the cells were resuspended in 4 mL of PBS, and the centrifugation was repeated. The cells were resuspended in 4 mL of blocking buffer (5% BSA/PBS), incubated for 10 min at 37 °C and centrifuged for 3 min at 1000 rpm. The supernatant was removed, the cells were washed with 4 mL PBS and pelleted by centrifugation (3 min, 1000 rpm). The supernatant was removed, the cells were incubated with 3 mL of a negative selection-depleted phage peptide library for 1 h at 4 °C and centrifuged for 3 min at 1000 rpm. The supernatant was removed, the cell pellet was washed three times with 4 mL of PBS and centrifuged for 3 min at 1000 rpm. The cells were resuspended in 4 mL of growth medium heated to 37 °C to provide conditions for the internalization of bacteriophages into cells, incubated for 15 min at 37 °C and centrifuged for 3 min at 1000 rpm. The cells were then washed three times with 4 mL of PBS. 400 μL of Triple Express was added to the cell pellet to remove non-internalized bacteriophages, incubated for 2 min at 37 °C, 1 mL of complete growth medium was added, and centrifuged for 3 min at 1000 rpm. The supernatant was removed, the cells were washed with 4 mL PBS, and the centrifugation was repeated. Then, the cells were lysed with 1 mL of mQ water for 20 min at room temperature. The cell lysate was centrifuged for 5 min at 14,000 rpm, the supernatant was removed, and the phage suspension (1 mL) was amplified. The amplified population of phage particles was used for subsequent rounds of selection. For in vivo screening, we used the previously described methods , to wit. SCID mice with subcutaneously and orthotopic glioblastoma xenograft U-87 MG were injected into the tail vein with 300 μL of a phage peptide library with a concentration of 2 × 10 11 pfu/mL, diluted in saline. 
The circulation time of the phage library in the bloodstream for mice with subcutaneously glioblastoma xenograft U-87 MG was 5 min; for mice with orthotopic glioblastoma xenograft U-87 MG, the circulation time was 24 h. After the screening time elapsed, the mouse was sacrificed by cervical dislocation, the chest was opened, and 15 mL of saline was perfused through the heart to remove bacteriophages which not binding with the tumor from the bloodstream. The tumor was removed, washed in saline and homogenized in 1 mL PBS containing 1 mM PMSF. The tumor tissue homogenate was centrifuged for 10 min at 10,000 rpm. The pellet was resuspended in 1 mL of blocking buffer (1% BSA), after which centrifugation was repeated under the same conditions. The pellet was resuspended in 1 mL of liquid culture of E. coli ER2738 in the average log-phase with an optical density 0.3 (OD600) for elution of bacteriophages bound to the tumor and incubated for 30 min at 37 °C at 170 rpm. The eluate of phage particles was centrifuged for 5 min at 10,000 rpm. The supernatant was transferred to separate tubes and the enriched phage library was amplified for subsequent rounds of selection. Manipulations on glioblastoma xenograft U-87 MG and monitoring of tumor growth were carried out by employees of «SPF-vivarium» ICG SB RAS. After the third round of selection, phage particles were titrated to obtain individual phage colonies, which were used for DNA isolation according to the manufacturer’s protocol for the phage display peptide library. The sequencing reaction products were determined using an ABI 310 Genetic Analyzer (Applied Biosystems, Foster City, CA, USA) at the Genomics Core Facility of SB RAS using sequencing primers (-96III (5′-CCC TCA TAG TTA GCG TAA CG-3′)). In mice with orthotopical glioblastoma xenograft U-87 MG, a peptide library enriched with in vivo biopanning (2 × 10 11 PFU/mL of phage particles in 500 μL of saline) was injected into the tail vein. After 24 h, the mouse was sacrificed by cervical dislocation and the tumor was removed. The tumor was washed twice with PBS containing 10% penicillin-streptomycin (Sigma-Aldrich, St. Louis, MO, USA), after which it was crushed with a scalpel on a Petri dish, transferred into a falcon with 3 mL of trypsin and incubated in a water bath at 37 °C for 10 min to dissociate the cells. To inactivate trypsin, 3 mL of a trypsin inhibitor from soybeans (Sigma-Aldrich, USA) was added to the cell suspension, after which the cells were centrifuged for 10 min at 800 rpm. The cell pellet was resuspended in NSC medium for neural stem cells (Sigma-Aldrich) until a homogeneous cell suspension was formed. The undissociated pieces of tumor tissue were removed and additionally homogenized. 10 mL of NSC medium was added to the cell suspension, filtered through a filter with a pore size of 40 μm, and centrifuged for 10 min at 800 rpm. The cells were resuspended in 1 mL of NSC medium and incubated for 2 h at 37 °C to restore the proteomic profile of the cells. After incubation in NSC medium, cells were incubated in 500 μL blocking buffer containing 10% FBS for 10 min. The cells were then washed with 500 μL PBS and incubated for 45 min on ice with primary antibodies against CD44 labeled with FITC (Abcam, Cambridge, UK) and primary antibodies against CD133 labeled with Alexa Fluor 647 (Abcam), both diluted in 1% FBS in PBS, in 200 μL. 
The cells were washed twice with 500 μL PBS, resuspended in 500 μL PBS containing 4 μg/mL gentamicin (Thermo Fisher Scientific, Waltham, MA, USA) and passed through a strainer (BD Biosciences, Franklin Lakes, NJ, USA) into flow cytometry tubes (BD Biosciences). The analysis and sorting of cells were carried out on a SONY SH800S Cell Sorter (Sony Biotechnology, San Jose, CA, USA). U-87 MG cells were cultured on BD Falcon culture slides to 80–90% confluence and washed with PBS twice, and 100 μL of the selected phage clone (2 × 10¹⁰ PFU/mL) in PBS-BSA Ca/Mg buffer (0.1% BSA, 1 mM CaCl2, 10 mM MgCl2·6H2O) was added. Cells were incubated with the bacteriophage clone for 2 h at 37 °C and then treated according to the previously described technique , with slight modifications, as follows. After incubation at 37 °C, cells were washed three times with 500 μL buffer (100 mM glycine, 0.5 M NaCl, pH 2.5) at room temperature, fixed with 200 μL cold 4% formaldehyde for 10 min and washed twice with PBS. Then, 200 μL 0.2% Triton X-100 was added for 10 min to permeabilize the cells, after which the cells were washed twice with 500 μL PBS. Next, cells were incubated with 200 μL mouse Anti-M13 Bacteriophage Coat Protein g8p antibodies (Abcam) diluted in 1% BSA/PBS buffer (1:200) for 45 min at 4 °C and washed four times with 500 μL cold 1% BSA/PBS buffer. Next, cells were incubated with 200 μL of Alexa Fluor 647-conjugated secondary antibodies (Abcam, UK) diluted in 1% BSA/PBS buffer (1:200) for 45 min at 4 °C and washed four times with 500 μL cold 1% BSA/PBS buffer. The cells were then stained with DAPI (Thermo Fisher Scientific) and analyzed by fluorescence microscopy on an Axio Skope 2 Plus microscope (Zeiss, Oberkochen, Germany) at the Center for Microscopic Analysis of Biological Objects of SB RAS (Novosibirsk, Russia). Mice with a subcutaneously transplanted tumor were injected into the tail vein with 500 μL of bacteriophage (2 × 10⁹ PFU/mL) diluted in physiological saline. After 4.5 h of circulation of phage particles in the body, the mouse was sacrificed by cervical dislocation and perfused through the left ventricle of the heart with 15 mL of saline. Then the tumor and control organs (liver, kidney, lungs, and brain) were removed, washed in PBS, and homogenized in 1 mL PBS containing 1 mM PMSF (Sigma Aldrich). The homogenates of tumor tissue and control organs were centrifuged for 20 min at 10,000 g at room temperature, and the pellets were resuspended to elute bound bacteriophages. The resulting suspension of phage particles was titrated on agar LB medium supplemented with 1 mg/mL X-Gal and 1.25 mg/mL IPTG. Two-way ANOVA was used for comparisons of more than two sets of data. Differences were considered significant if the p-value was <0.05. Nucleotide sequences of the inserts encoding peptides were analyzed using MEGA X software.
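The selection and biodistribution readouts above reduce to simple titer arithmetic: plaque counts corrected for dilution and plated volume, recovery per selection round, and tumor-to-organ ratios. The short Python sketch below illustrates these calculations; all plaque counts and titers in it are hypothetical, not data from this study, and tissue-weight normalization is noted only as an optional assumption.

```python
# Minimal sketch of the titer arithmetic behind biopanning rounds and the
# biodistribution comparison. All numbers are hypothetical, not study data.

def titer_pfu_per_ml(plaques, dilution, plated_volume_ml):
    """Titer (pfu/mL) = plaque count / (plated volume x dilution factor)."""
    return plaques / (plated_volume_ml * dilution)

# Example: 120 plaques from 10 uL of a 1e-4 dilution of the tumor eluate
output_titer = titer_pfu_per_ml(120, 1e-4, 0.010)         # 1.2e8 pfu/mL

# Recovery of a selection round = eluted phage / injected phage
input_pfu = 2e11 * 0.300                                   # 300 uL of a 2e11 pfu/mL library
eluted_pfu = output_titer * 1.0                            # 1 mL eluate
print(f"round recovery: {100 * eluted_pfu / input_pfu:.3f} %")

# Biodistribution: fold-enrichment of phage in tumor vs control organs
# (hypothetical eluate titers; could be normalized to tissue weight if recorded)
titers = {"tumor": 5.0e5, "liver": 8.0e4, "kidney": 2.5e4, "lungs": 1.5e4, "brain": 3.0e3}
for organ, t in titers.items():
    if organ != "tumor":
        print(f"tumor/{organ}: {titers['tumor'] / t:.0f}-fold")
```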
Effectiveness of an intervention in increasing the provision of preventive care by community mental health services: a non-randomized, multiple baseline implementation trial
e3902815-45a5-49bd-bf5f-3e94987becdc
4818909
Preventive Medicine[mh]
People with a mental illness experience a disproportionately higher chronic disease burden when compared to the general population and a substantially reduced life expectancy as a consequence . Such poor health outcomes are contributed to by a higher prevalence of modifiable chronic disease health risk behaviours, including smoking , inadequate nutrition , harmful alcohol consumption and inadequate physical activity . Routine care delivery by clinicians to address chronic disease risk behaviours (preventive care) is recommended for all health services , including mental health services . Such care is recommended to involve, at a minimum, clinician assessment of client risk status and, for clients identified as being at risk, provision of advice and referral to specialist preventive care services . Although community mental health services represent a key setting for the provision of preventive care , the provision of such care in this setting is both variable and sub-optimal . Cochrane systematic review evidence supports the efficacy of a range of strategies in improving the provision of recommended elements of clinical care, with such strategies including leadership and consensus , enablement of systems and procedures , training and education , monitoring and feedback , provision of practice change resources such as educational materials and clinical practice guidelines and practice change support such as educational outreach or academic detailing . In intervention trials in general health services, implementation of such strategies has been associated with increases in care delivery for smoking , at-risk alcohol consumption and multiple health risk behaviours . Only one study could be identified that assessed the effectiveness of a practice change intervention in increasing the provision of preventive care for multiple health risks in a community mental health setting . A single group pre-post study was undertaken in two USA services of a 6-month intervention to increase the provision of risk assessment regarding a number of cardiovascular disease risks (tobacco smoking and non-behavioural risks e.g. blood pressure and cholesterol) and the sending of a letter to clients’ primary care providers. The intervention practice change strategies included staff education, an electronic screening tool and a template for a standard communication letter. A random sample of clients’ medical records was audited before ( n = 129) and after ( n = 117) the intervention. The proportion of clients screened for smoking by psychiatrists, mental health nurses and case managers increased from 76 to 89 %, while the proportion of clients for whom a letter was sent to their primary care provider increased from 19 to 32 % . Further research is needed to examine whether a practice change intervention can improve the provision of a broader range of preventive care elements for the most common chronic disease risk behaviours. To address this need, a study was undertaken to determine the effectiveness of a multi-strategic practice change intervention in increasing the provision of three elements of preventive care (risk assessment, brief advice and referral) by community mental health clinicians for four health risk behaviours (smoking, inadequate fruit and vegetable consumption, harmful alcohol consumption and inadequate physical activity). Study design and setting A multiple baseline trial was undertaken involving a 12-month intervention delivered sequentially in two groups of community-based mental health services. 
Outcome data were collected for both groups from 6 months prior to the implementation of the intervention in the first group of services, and continued until 6 months after the completion of the intervention in the second group of services (36-month study period). Further details of the study design and methods have been reported previously . The study was undertaken in a single regional health district in New South Wales, Australia. Ethics approval was obtained from the Hunter New England Human Research Ethics Committee (approval no. 09/06/17/4.03) and the University of Newcastle Human Research Ethics Committee (approval no. H-2010-1116). The trial was registered with the Australian and New Zealand Clinical Trials Registry (ACTRN12613000693729). Participants Community mental health services All community mental health services ( n = 19) in the health district that provided ambulatory care to clients 18 years of age or greater, and were not involved in a pilot of this study, were included and allocated to two service groups ( n = 7; n = 12) based on their geographic location and associated administrative boundaries. The services provided general adult community mental health care, and care for specific client populations, including older persons, psychiatric rehabilitation, early diagnosis, comorbid substance use, eating disorders and borderline personality disorder. Clinicians All clinicians and managers in the eligible services (psychiatrists, psychologists, social workers, dietitians, nurses, occupational therapists and health service managers) received the intervention. The services were staffed with approximately 220 clinicians, predominantly nurses (40 %), psychiatrists (15 %) and psychologists (15 %). Clients All clients who attended a face-to-face individual clinical appointment were eligible to receive preventive care. Clients were eligible to be selected for data collection if they were 18 years or older, had attended at least one face-to-face individual appointment with an eligible service within the previous 2 weeks, had not previously been selected to participate in the study and had not been identified by their clinician as too unwell to participate. Of such clients, additional eligibility criteria were as follows: English speaking, not living in aged care facilities or gaol and being physically and mentally capable of responding to the survey items. Intervention Preventive care Clinicians were asked to routinely provide preventive care based on the recommended ‘2As and R’ model, a model that includes three elements of care : Assessment Assessment of client risk status for each of the four health risk behaviours based on levels of risk defined in Australian national guidelines Brief advice Provision of advice to clients assessed as being at-risk to modify their risk to comply with the Australian national guidelines and the benefits of doing so Referral Offer of a referral for clients with risks to evidence-based state-wide telephone support services for smoking (New South Wales [NSW] Quitline) and for physical inactivity and inadequate nutrition (NSW Get Healthy Service). For all risk behaviours, referral could additionally be provided to the client’s primary care provider (general practitioner or Aboriginal medical service) or local referral options (e.g. 
dietitians, exercise groups and drug and alcohol services) Practice change intervention The following multi-strategic clinical practice change intervention, informed by research and reviews of the clinical practice change literature , was implemented: Leadership and consensus A district-wide policy and key performance indicators regarding the provision of preventive care were implemented based on consultation with health district executives, senior clinicians and managers. Enabling systems and procedures A tool was incorporated into the electronic medical record used by all clinicians to enable standardized assessment and recording of risk status and subsequent provision of preventive care; the automated production of a tailored client risk reduction information sheet and referral letter to the clients’ primary care provider; and prompts to deliver brief advice and referral where clients were identified as at-risk. Clinician and manager training Clinicians and managers were provided online educational competency-based training of approximately 2-h duration, addressing the following: the provision of preventive care, including the ‘2As and R’ model; policy guidelines and performance indicators; and the recording of such care in the standardized electronic tool. Managers were additionally provided with a 2-h, face-to-face training session regarding care delivery performance monitoring and feedback and leadership in preventive care. Monitoring and feedback Modifications were made to the electronic medical record to allow automated production of monthly performance reports regarding the provision of preventive care at the service level. Reports were provided to and discussed with managers monthly. Provision of practice change resources An e-mail helpline and internet resource site were established, and monthly newsletters and tip-sheets and a resource pack including a process flowchart, a guide, information on each risk behaviour, fax-based referral forms for telephone referral services, and a paper-based preventive care assessment tool for use during home visits were distributed to clinicians and managers. Practice change support Project personnel (approximately one full time equivalent per group) were allocated to support intervention delivery, including monthly face-to-face visits with managers and clinicians, and fortnightly support phone calls and/or e-mails to managers. The project personnel discussed the feedback reports and provided both proactive and reactive support to managers and clinicians. Data collection procedures Recruitment Each week, a random sample of 40 eligible adult clients (20 from each of the two groups; approximately 7 % of eligible clients per week) was drawn from the health service electronic medical records. These clients were mailed an information statement and contacted by telephone by trained interviewers, blind to group allocation, to confirm eligibility. Eligible clients were asked to participate in a telephone interview regarding their health behaviour risk status, the preventive care they had received for such risks and a number of demographic and clinical characteristics. The interview was approximately 20 min in length. Measures Client characteristics Clients reported their Aboriginal and/or Torres Strait Islander status, highest education level attained, employment status, marital status and physical or psychiatric conditions for which they had received health care within the previous 2 months. 
Client age, gender, postcode, and the number of community mental health appointments within the last 12 months were obtained from the electronic medical record. Client health behaviour risk status Clients reported their health behaviour risk status for the month prior to seeing their community mental health clinician. Survey items were based on recommended assessment tools and previous community surveys . In line with national guidelines, clients were defined as being at-risk if they reported smoking any tobacco products , consuming less than two serves of fruit or five serves of vegetables per day , consuming more than two standard drinks on average per day or four or more standard drinks on any one occasion or engaging in less than 30 min of physical activity on at least 5 days of the week . Client-reported provision of preventive care Assessment . Clients were asked to report whether, during a community mental health appointment, a clinician had asked about their smoking status, fruit and vegetable intake, alcohol consumption and physical activity (yes, no, don’t know for each). Brief advice . Clients classified as being at-risk for a health risk behaviour(s) based on their self-report were asked whether their community mental health clinician had advised them to modify their behaviour(s) (yes, no, don’t know for each). Referral . Clients classified as having at least one risk were asked whether their community mental health clinician had offered to send their primary care provider a letter summarizing their health behaviour risks and the preventive care provided. Clients classified as at risk for a health risk behaviour(s) were also asked whether their clinician had provided each of the following forms of referral (‘yes, no, or don’t know’): Spoke about the NSW Quitline telephone support service (for smoking); or the NSW Get Healthy Service (for clients with inadequate fruit and vegetable intake or inadequate physical activity); Offered to arrange for a telephone support service (NSW Quitline or NSW Get Healthy Service) to call them; Recommended speaking to their primary care provider about their health risk behaviour(s); and Advised to use any other supports to make changes to their health behaviour(s) (e.g. dietitian, physical activity classes, website). Intervention delivery Project personnel recorded the implementation of each practice change strategy for each service on a monthly basis. Statistical analysis Analyses were undertaken using SAS V9.4. Residential postcode was used to classify client residential geographic location and socio-economic status . Chi square tests were used to compare consenters and non-consenters regarding age group, gender, remoteness, disadvantage and number of appointments . Descriptive statistics were used to describe participating client characteristics, health behaviour risk status and receipt of preventive care. For care receipt items, clients who responded ‘don’t know’ were classified as not having received care. For each of the four behaviours, referral items were combined to create a single variable reflecting receipt of any form of referral. A variable was created to reflect client receipt of assessment for all four risk behaviours. Separate variables were also created to reflect client receipt of brief advice for all behaviours for which they were at risk and receipt of any referral for all behaviours for which they were at risk (‘all risks combined’). 
Separate models were developed to examine change in delivery of each of the three elements of preventive care for each of the four risk behaviours and for all four behaviours combined; and for the delivery of a letter to the client’s primary care provider (16 models in total). Five models were developed for each of the assessment and brief advice outcomes, and six models were developed for the referral outcome. For all models, the intervention effect was defined as the difference in prevalence of preventive care delivery from the baseline to the post-intervention periods, adjusted for service group, time and the number of client visits to the service in the prior 12 months (the latter added to account for any introduced selection bias). Analyses are reported using data collected during the baseline and follow-up periods. While all models were also analysed incorporating the intervention period data, as the results did not differ, the simpler method is presented. A significance level of α = 0.01 was used to adjust for multiple testing. As simple random sampling of community mental health clients was used, there was no need to adjust for clinician, community mental health service or any other natural clustering that occurs within the community. An unadjusted analysis provides an unbiased estimate of the statistics of interest. 
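To make the outcome construction and model specification above concrete, the sketch below shows an analogous analysis in Python (pandas/statsmodels) on a hypothetical client-level extract. The file name, column names and exact parameterization are illustrative assumptions; the study's own analyses were run in SAS V9.4.

```python
# Illustrative only: guideline-based risk flags, a combined assessment outcome,
# and an adjusted logistic regression of the kind described above (hypothetical data).
import math
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("client_survey.csv")   # hypothetical client-level extract

# Risk flags per the national-guideline definitions reported above
df["risk_smoking"] = df["smokes_any_tobacco"] == 1
df["risk_nutrition"] = (df["fruit_serves"] < 2) | (df["veg_serves"] < 5)
df["risk_alcohol"] = (df["avg_drinks_per_day"] > 2) | (df["max_drinks_occasion"] >= 4)
df["risk_activity"] = df["days_30min_activity"] < 5

# Advice/referral outcomes are evaluated only among clients at risk for a behaviour
df["at_risk_any"] = df[["risk_smoking", "risk_nutrition", "risk_alcohol", "risk_activity"]].any(axis=1)

# Outcome: assessed for all four behaviours ('don't know' already coded as 0)
assess_cols = ["asked_smoking", "asked_nutrition", "asked_alcohol", "asked_activity"]
df["assessed_all_risks"] = (df[assess_cols].sum(axis=1) == 4).astype(int)

# period: 0 = baseline, 1 = follow-up; adjusted for service group and prior visits
model = smf.logit(
    "assessed_all_risks ~ period + C(service_group) + n_visits_12m", data=df
).fit(disp=False)

or_followup = math.exp(model.params["period"])
print(f"adjusted OR, follow-up vs baseline: {or_followup:.2f}")
print(model.pvalues["period"] < 0.01)   # judged against alpha = 0.01, as above
```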
Sample characteristics Of the 3764 clients selected to participate, 2817 were able to be contacted by telephone (75 %), and 375 were identified as ineligible upon contact. Of the 2442 eligible potential participants, 1787 (73 %) consented to participate and completed the survey ( n = 805 at baseline, n = 982 at follow-up). There were no significant differences in the characteristics between consenting and non-consenting clients. Characteristics of the sample are presented in Table . 
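The contact and consent rates quoted above follow directly from the reported counts; as a quick check:

```python
# Reproducing the recruitment-flow percentages from the counts reported above.
selected, contacted, ineligible, consented = 3764, 2817, 375, 1787
eligible = contacted - ineligible                       # 2442
print(round(100 * contacted / selected))                # 75 (% of selected clients contacted)
print(round(100 * consented / eligible))                # 73 (% of eligible clients who consented)
```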
Intervention effectiveness For both groups combined, there was a significant increase in the prevalence of one of the 16 outcome measures. From baseline to follow-up, there was an increase in assessment for all risks combined (18 to 29 %; OR 3.55, p = 0.002) (Table ). When examined separately for each of the two service groups, there was an increase in the prevalence of one outcome for group 1. From baseline to follow-up, there was an increase in the assessment of nutrition (18 to 32 %; OR 5.55, p = 0.001). No increases in care were identified for group 2 individually (Table ). Intervention implementation The implementation of intervention strategies is shown in Table . Overall, the intervention strategies were not delivered as intended. On average per month for the two service groups combined, the proportion of services, managers or clinicians that received each strategy ranged from 63 % (performance reports discussed with managers) to 78 % (fortnightly phone/email support). Group 1 received fewer monthly intervention strategies on average. The proportion of services, managers or clinicians that received each strategy in group 1 ranged from 33 % (performance reports discussed with managers) to 69 % (face-to-face visits with clinicians), compared to 72 % (performance reports discussed with managers) to 83 % (fortnightly phone or email support for managers) in group 2 (Table ). Group 2 generally received one-off strategies (training and practice change resources) at an earlier stage during the intervention. For instance, the majority (80 % or more) of services in group 2 had received the resource pack by the end of month 1, compared to the end of month 4 in group 1 (Table ). 
This is the first study to examine the effectiveness of a multi-strategy practice change intervention in increasing the provision of multiple elements of preventive care for multiple chronic disease health risk behaviours within a community mental health care setting. Overall, the study had a limited effect in increasing the provision of elements of care, with an effect observed only for the assessment of risk status for all behaviours combined. Further research is required to identify strategies for improving the delivery of chronic disease preventive care in these settings. One previous study has examined the effectiveness of similar practice change strategies in increasing the delivery of cardiovascular disease risk screening in community mental health services . The single group pre-post study conducted in the USA reported an increase in assessment of smoking status (13 %), and for providing a letter to the clients’ primary care provider (13 %). In comparison, in our controlled trial, we found an effect for assessment across risks, but not smoking, and not for providing a letter to the primary care provider. The absence of a control group in the previous study precludes a direct comparison of effect between the two studies. The intervention in the current study involved the use of practice change strategies previously found to be effective in general health care services but not trialled in mental health services . Importantly, the same intervention strategies were implemented in a contemporaneous study conducted in general community health services (addressing physical health care) within the same health district in which the current study was conducted . That study found, using the same outcome measures and intervention approach, increases in care provision for six out of ten assessment and advice measures of preventive care (assessment of fruit and vegetable consumption, physical activity and for all risks; and brief advice for inadequate fruit and vegetable consumption, harmful alcohol consumption and for all risks ). However, consistent with this trial, no effect was found for provision of any element of smoking care or of referral. The need to address the clinical, professional, cultural and organizational factors that distinguish community mental health service delivery from the delivery of general community health services may have contributed to the contrasting findings. The findings suggest that a greater understanding of the context and barriers to the provision of preventive care in community mental health services is required. Similarly, tailoring of recommendations regarding the provision of care addressing chronic disease risk behaviours that can be operationalized in the context and circumstances of community mental health services also appears warranted, as does tailoring of the practice change strategies to support the delivery of such care. The use of systematic and theory-based methods for identifying barriers and designing interventions, such as the Theoretical Domains Framework , may provide a useful approach to achieving this. No increases in either brief advice or referral were identified for any of the four health risk behaviours. 
Such findings are of significance as any benefit in terms of reduction in risk of chronic disease is dependent upon either or both of these elements of care . Both elements of care have been shown to be effective in reducing the prevalence of health risks for clients of general health services . Previous research has identified a number of barriers to mental health clinician provision of risk advice, including clinician attitudes regarding their role in providing preventive care and a lack of training in how to provide preventive care . Previous research has also identified a lack of referral options as a barrier to mental health clinicians providing referrals . The current study sought to address barriers to both elements of preventive care through a comprehensive suite of practice change strategies including a policy, electronic prompts, fax referral forms to free public evidence-based specialist risk reduction services, automated production of referral letters to primary care providers, clinician training and education, monthly performance monitoring feedback reports and allocated practice change support personnel for 12 months. Notwithstanding the comprehensiveness of these strategies, they may not have been of a sufficient dose (e.g. frequency of contact with allocated practice change support) or of sufficient length. Additional factors also may have impeded the clinicians’ ability to refer clients. In USA primary care services, additional strategies have been found to be effective in increasing referrals to tobacco quitlines and community behavioural counselling services including the use of financial incentives , and automatic, electronic referral processes . However, the effectiveness of these strategies in increasing referrals regarding chronic disease risk behaviours is yet to be examined in community mental health services. The study outcomes should be interpreted in light of a number of its methodological characteristics. First, although the study was conducted across a number of community mental health services in urban, regional and rural locations, all the services were located within one health district, potentially limiting the generalizability of findings to other jurisdictions. Second, the main outcome measure was based on client-reported receipt of preventive care. The extent to which the receipt of such care in this study is either an over- or under-estimate of the care received, particularly amongst people with a mental illness is unknown . Direct comparison between client report outcomes and the monthly performance reports was not possible; however, the authors can confirm that the performance reports were consistent with the pattern of results reported. Third, systematic review evidence has suggested that inadequate implementation fidelity and integrity may be explanatory factors in trials that fail to show an effect . In the current trial, not all intervention strategies were implemented as planned (Table ), and there was inconsistent implementation of the intervention between the two groups. It is unknown what impact this may have had on the trial outcomes. The observed lack of an increase in preventive care provision for almost all outcome measures suggests that an intervention better tailored to the circumstances of community mental health services may be required, or one that is more intensive or includes a longer intervention period, or that an alternative model of delivering preventive care to clients of community mental health services may be required. 
Regardless of the specific approach, the need for a greater understanding of the barriers and facilitators to the provision of preventive care in community mental health services is indicated.
Optimizing Early-stage Clinical Pharmacology Evaluation to Accelerate Clinical Development of Giredestrant in Advanced Breast Cancer
00133429-fa55-436d-bca9-59d676bb5d16
10722959
Pharmacology[mh]
The clinical development journey of novel pharmaceuticals is a complex multistage process, which continues to evolve as our understanding of disease biology deepens, new therapeutic entities and classes of drugs are designed, and regulatory mechanisms adapt to the changing therapeutic landscape. Traditionally, oncology clinical development began with dose-escalation studies that aimed to identify the maximum tolerated dose in small cohorts of patients and provided a preliminary assessment of activity and tolerability. However, increasing the dose for nonchemotherapy agents does not always increase activity. An inappropriately high dose could saturate efficacy and bring unnecessary toxicity. To provide an optimized risk:benefit profile for patients, clinical pharmacology characterization can be a useful tool to elucidate the exposure characteristics of an investigational drug and understand how exposure affects clinical outcomes. One of the objectives of clinical pharmacology is to characterize pharmacokinetics by understanding the absorption, distribution, metabolism, and excretion pathways of the drug in the body. Another major objective is to determine how drug concentrations are altered by intrinsic and extrinsic factors. Intrinsic factors are inherent to a person and include age, sex, race, genomics, and organ function, whereas extrinsic factors are external influences that may affect drug exposure, such as concomitant medications or the type of food ingested around the time of drug administration. Finally, the main goal of clinical pharmacology is to justify the dose based on the relationship between exposure and biological response, which can be assessed in terms of efficacy, safety, or pharmacodynamic biomarkers. Clinical pharmacology findings generated in early studies can inform late-stage study designs. For example, insight into potential interactions with food and concomitant medications can guide decisions on drug administration schedules and prohibited or permitted concomitant medications . Likewise, an understanding of risk related to intrinsic factors defined by age, race, organ function, etc., can guide eligibility criteria for phase III trials . For instance, patients with renal impairment may be more susceptible to toxicities because of altered pharmacokinetics, and potentially require dose modification of drugs eliminated predominantly by the kidneys. Clinically relevant differences between racial or ethnic groups concerning drug metabolism and drug exposure may require dose adjustments . Incorporating these clinical pharmacology objectives into early clinical studies designed to provide preliminary evidence of efficacy and safety in the target population can potentially accelerate late-stage development. In this article, we describe the clinical pharmacology characterization of giredestrant, a highly potent nonsteroidal oral selective estrogen receptor (ER) antagonist and degrader in the context of early-stage clinical development. Giredestrant was designed to optimize ER antagonism and degradation, while minimizing off-target toxicity . Several oral selective estrogen receptor degraders (SERDs) are in clinical development, all with distinct physicochemical and pharmacokinetics properties. Variations in these characteristics, such as unfavorable pharmacokinetic attributes [including nonlinear pharmacokinetics, food effects, or drug–drug interactions (DDIs) with combination partners] may affect dosing, efficacy, and safety . 
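As an illustration of the exposure-response reasoning described above, the sketch below fits a simple Emax model to hypothetical exposure and response values (Python/SciPy). Both the data and the choice of an Emax form are assumptions for illustration only, not giredestrant results.

```python
# Illustrative exposure-response (Emax) fit on hypothetical data.
import numpy as np
from scipy.optimize import curve_fit

def emax(conc, e0, emax_, ec50):
    """E = E0 + Emax * C / (EC50 + C)"""
    return e0 + emax_ * conc / (ec50 + conc)

auc = np.array([200.0, 500, 1200, 2500, 5000, 9000])     # ng*h/mL (hypothetical exposures)
effect = np.array([12.0, 25, 44, 58, 66, 70])            # % change in a PD marker (hypothetical)

(e0, emax_, ec50), _ = curve_fit(emax, auc, effect, p0=[5, 80, 1000])
print(f"E0={e0:.1f}, Emax={emax_:.1f}, EC50={ec50:.0f} ng*h/mL")

# Exposures far above EC50 sit on the plateau: little added benefit from further
# dose escalation, which is the rationale for dose optimization rather than MTD-seeking.
print(emax(4 * ec50, e0, emax_, ec50), emax(8 * ec50, e0, emax_, ec50))
```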
This first-in-human phase Ia/Ib dose-escalation/-expansion study was designed to evaluate the safety, pharmacokinetics, pharmacodynamics, and preliminary antitumor activity of giredestrant. In particular, the pharmacokinetics were evaluated by characterizing the absorption, distribution, metabolism, and excretion of giredestrant, and generated signal-seeking data on the impact of certain intrinsic factors (including race and organ dysfunction) and extrinsic factors (including the effect of food and DDIs) on giredestrant exposure. The primary results from this study have been reported elsewhere . Here we focus on aspects of the study that enabled us to conduct an integrated clinical pharmacology assessment to inform better our late-stage development decisions. Study Design This multicenter nonrandomized open-label dose-escalation and -expansion phase Ia/Ib study (clinicaltrials.gov NCT03332797; GO39932) evaluated the safety, pharmacokinetics, pharmacodynamics, and preliminary antitumor activity of giredestrant alone and in combination with palbociclib in patients with ER-positive (ER + ) HER2-negative locally advanced/metastatic breast cancer. The study design has been described in detail by Jhaveri and colleagues , together with the clinical results. In brief, eligible patients had advanced or metastatic ER + /HER2-negative breast cancer that had recurred or progressed while being treated with adjuvant endocrine therapy (ET) for ≥24 months and/or ET in the incurable, locally advanced, or metastatic setting and derived a clinical benefit from therapy (i.e., tumor response or stable disease for ≥6 months); had not received any other ET, targeted therapy, or chemotherapy within the preceding 2 weeks; and were postmenopausal women (or premenopausal/perimenopausal women simultaneously receiving luteinizing hormone-releasing hormone agonists). To enroll a racially diverse population, the study was performed globally at sites in Europe, North America, Asia, and Australia . In the single-agent dose-escalation stage, giredestrant was administered orally once daily on days 1–28 of each 28-day cycle at 10, 30, 90, or 250 mg. A single dose was administered on cycle 1 day −7, followed by a 7-day pharmacokinetic lead-in for all single-agent giredestrant dose-escalation cohorts. In addition, a combination cohort in the dose-escalation stage explored giredestrant 100 mg once daily (days 1–28) in combination with palbociclib 125 mg once daily (days 1–21), repeated every 28 days. In the expansion stage, single-agent expansion cohorts evaluated giredestrant at doses of 30, 100, and 250 mg once daily. A combination expansion cohort evaluated giredestrant 100 mg with palbociclib 125 mg. Pharmacokinetic Assessments In the single-agent dose-escalation stage, pharmacokinetic samples were collected at cycle 1 day −7 predose and then at 0.5, 1, 1.5, 2, 3, 4, 6, 8, 24, 28, 72, 96, and 168 hours after dosing to characterize the single-dose pharmacokinetic profile and estimate the terminal half-life. Samples to determine the steady-state pharmacokinetic profile were collected at cycle 2 day 1 (after 28 days of daily giredestrant) predose and at 0.5, 1, 1.5, 2, 3, 4, 6, and 8 hours after dosing. Metabolite identification was conducted following the first dose (cycle 1 day −7) and at steady state (cycle 2 day 1) in 6 patients (3 patients at 90 mg and 3 at 250 mg). Plasma samples were pooled across patients and timepoints. 
The cytochrome P450 (CYP) 3A induction potential of giredestrant was evaluated in vivo by assessing 4beta-hydroxycholesterol (4β-HC), an endogenous biomarker formed by CYP3A metabolism in humans . 4β-HC concentration was measured at cycle 1 day −7, cycle 2 day 1, cycle 3 day 1, and cycle 4 day 1 predose in 7 patients (5 patients at 30 mg and 2 at 90 mg). Analysis was conducted ad hoc , in compliance with the informed consent, in patients from whom there was sufficient plasma volume remaining for the assessment after completion of prespecified analyses. Giredestrant concentrations in urine samples were measured at cycle 1 day 1 predose and cycle 2 day 1 at 0–8 hours postdose. Urine samples were collected from 13 patients (7 patients at 30 mg, 5 at 100 mg, 1 at 250 mg). Noncompartmental Analysis Pharmacokinetic parameters for giredestrant were calculated from the plasma concentration–time data according to standard noncompartmental analysis methods using Phoenix WinNonlin (version 8.3.4; Certara USA, Inc.). Noncompartmental analyses were conducted in pharmacokinetic-evaluable patients, defined as those having at least one nonzero postdose plasma concentration. Patients with dose reductions, sparse pharmacokinetic samples, incomplete pharmacokinetic profiles, or incorrect dosing information during pharmacokinetic sampling were not included in the noncompartmental analysis. Bioanalytical Methods The concentrations of giredestrant in plasma were measured using validated LC/MS-MS assays. Giredestrant and its internal standard [(13C8, 15N) GDC 9545] were extracted from human plasma by supported liquid extraction (SLE). Giredestrant concentrations were calculated using a standard curve with a 1/ x 2 linear regression over concentration ranges of 1 to 1,000 ng/mL or 0.1 to 100 ng/mL. The concentrations of giredestrant in urine were measured using a qualified LC/MS-MS assay. Giredestrant and its internal standard were also extracted from human urine by SLE. Giredestrant concentrations were calculated using a standard curve with a 1/ x 2 linear regression over a concentration range of 0.25 to 250 ng/mL. Concentrations of 4β-HC were measured using an LC/MS-MS assay that was developed and validated in human plasma. 4β-HC and its internal standard 4β-HC-d7 were extracted from human plasma by SLE. 4β-HC concentrations were calculated using a standard curve with a 1/ x 2 linear regression over a concentration range of 4 to 100 ng/mL. For plasma, urine, and 4β-HC assays, the mass spectrometer was operated in positive electrospray ionization mode under optimized conditions with multiple reaction monitoring of analytes and internal standards. The precision and accuracy of assays were satisfactory throughout the study. For metabolite identification, plasma samples were extracted by protein precipitation. Parent and metabolites were analyzed by LC and high-resolution MS. The metabolite structures were proposed on the basis of fragmentation patterns and comparison with those of the parent. Food Effect Assessment In the single-agent dose-escalation stage, giredestrant was taken under fasted conditions (fasted for ≥6 hours overnight) until cycle 4 day 1. To explore whether giredestrant had a potential food effect, patients in the single-agent dose-expansion cohorts were able to take giredestrant with or without food, and were asked to complete a food diary at cycle 2 day 1. Patients who had reported fasting for longer than 6 hours before giredestrant dosing were considered as fasted. 
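To make the noncompartmental calculations described above concrete, the following Python sketch computes Cmax, tmax, AUC by the linear trapezoidal rule, and a terminal half-life from a log-linear fit; the sampling times, concentrations, and function names are hypothetical, and the sketch does not reproduce the Phoenix WinNonlin workflow used in the study.

```python
import numpy as np

def nca_parameters(t, c, n_terminal=3):
    """Minimal noncompartmental analysis sketch (illustrative only).
    t: sampling times (h); c: plasma concentrations (ng/mL).
    Returns Cmax, tmax, AUC(0-last) by the linear trapezoidal rule,
    and terminal half-life from a log-linear fit of the last n_terminal points."""
    t, c = np.asarray(t, dtype=float), np.asarray(c, dtype=float)
    cmax, tmax = c.max(), t[c.argmax()]
    auc_last = np.trapz(c, t)            # linear trapezoidal rule over observed points
    slope, _ = np.polyfit(t[-n_terminal:], np.log(c[-n_terminal:]), 1)
    lambda_z = -slope                    # terminal elimination rate constant (1/h)
    t_half = np.log(2) / lambda_z
    return {"Cmax": cmax, "tmax": tmax, "AUC_0-last": auc_last, "t1/2": t_half}

# Hypothetical single-dose concentration-time profile (not study data)
times = [0.5, 1, 2, 3, 4, 6, 8, 24, 72, 96, 168]
conc = [45, 95, 130, 125, 112, 92, 78, 46, 20, 14, 4]
print(nca_parameters(times, conc))
```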
Ethics The study was conducted in accordance with the protocol and the consensus ethical principles derived from international guidelines including the Declaration of Helsinki and Council for International Organizations of Medical Sciences International Ethical Guidelines for Health-Related Research Involving Humans, relevant International Conference on Harmonisation Good Clinical Practice guidelines, and applicable laws and regulations. All patients provided written informed consent. Data Availability Statement Phase I studies are not in scope of the Roche global policy on data sharing. Qualified researchers may submit an enquiry through the data request platform, Vivli, https://vivli.org/ourmember/roche/ ; however, this does not guarantee that the data can be shared. For up-to-date details on Roche's Global Policy on the Sharing of Clinical Information and how to request access to related clinical study documents, see go.roche.com/data_sharing . Anonymized records for individual patients across more than one data source external to Roche cannot, and should not, be linked due to a potential increase in risk of patient reidentification.
Study Population Between November 27, 2017, and January 28, 2021, 175 patients were enrolled in the study: 111 across all single-agent cohorts and 64 in the giredestrant 100 mg combination cohort with palbociclib. Of 175 patients enrolled, 171 were evaluable for pharmacokinetics. In the single-agent dose-escalation phase, there were 29 patients with calculated pharmacokinetic parameters (including 21 with a 7-day single-dose pharmacokinetic lead-in). The data cut-off date was September 17, 2021. Pharmacokinetic Characterization Following oral administration of a single dose under fasted conditions, giredestrant was rapidly absorbed with a median time to maximum concentration (t max ) of 1.75–3.13 hours across the dose range of 10 to 250 mg . The geometric mean half-life after a single dose ranged from 25.8 to 43.0 hours over the same dose range (Table 1; ), supporting once-daily dosing. In general, giredestrant showed dose-proportional increases in plasma exposure in the range of 10 to 250 mg, as measured by maximum plasma concentration (C max ) and area under the plasma concentration–time curve (AUC; ref. ; ). After repeated daily dosing, estimated accumulation ratios were 1.1- to 1.8-fold based on C max and 1.4- to 2.4-fold based on AUC from time 0 to 24 hours (AUC 0–24h ; ). At the clinical dose of 30 mg, the geometric mean [geometric % coefficient of variation (%CV)] of maximum steady-state concentration (C max,ss ) was 266 ng/mL (50.1%) and the AUC 0–24h at steady state (AUC 0–24h,ss ) was 4,320 ng·hour/mL (59.4%; ). The geometric mean (geometric %CV) plasma elimination half-life was 43.0 hours (14.7%). Metabolite identification from plasma samples suggested that glucuronidation and oxidation are potentially the metabolism pathways for giredestrant (Genentech data on file). No abundant metabolites or long-lived circulating metabolites were identified; specifically, no major oxidation metabolites were seen.
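For reference, the accumulation ratios and geometric summary statistics quoted above are conventionally defined as follows; these are standard definitions rather than formulas reproduced from the study report, and x_1, ..., x_n denote individual patients' parameter values.

```latex
% Standard definitions (illustrative): accumulation after repeated once-daily dosing, and
% geometric mean / geometric %CV for a log-normally distributed PK parameter x_1, ..., x_n.
\[
  R_{ac}(C_{\max}) = \frac{C_{\max,\mathrm{ss}}}{C_{\max,\mathrm{single}}},
  \qquad
  R_{ac}(\mathrm{AUC}) = \frac{\mathrm{AUC}_{0\text{--}24\,\mathrm{h,ss}}}{\mathrm{AUC}_{0\text{--}24\,\mathrm{h,single}}}
\]
\[
  \text{geometric mean} = \exp\!\Big(\tfrac{1}{n}\sum_{i=1}^{n}\ln x_i\Big),
  \qquad
  \text{geometric \%CV} = 100\sqrt{\exp\!\big(s^{2}_{\ln x}\big) - 1}
\]
```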
Exploratory assessments of giredestrant concentrations in urine samples over an 8-hour collection period postdose at steady state (cycle 2 day 1) indicated that 0.246% of drug was excreted in urine. Therefore, renal excretion is unlikely to be the major elimination route for giredestrant. Intrinsic and Extrinsic Factors DDI Assessment The mean concentration of 4β-HC showed no apparent increase after multiple doses of giredestrant for up to 84 days (cycle 4 day 1) at both the clinical dose (30 mg) and a supratherapeutic dose (90 mg; ). These results suggest that giredestrant may have low CYP3A induction potential in humans. When giredestrant 100 mg was coadministered with palbociclib 125 mg, giredestrant exposure was generally similar to that observed with single-agent giredestrant . Furthermore, the pharmacokinetics of palbociclib when given in combination with giredestrant were consistent with previously reported values with palbociclib alone . Coadministration of palbociclib and giredestrant revealed no clinically relevant DDI, indicating that dose adjustment is not necessary when combining these two agents. Full details are reported elsewhere . Early Assessment of Food Effect on Giredestrant The exploratory food effect assessment conducted in the single-agent dose-escalation and dose-expansion cohorts showed no remarkable differences in steady-state exposure between fasted and fed patients . Race In the preliminary assessment of the impact of race on giredestrant exposure conducted in the dose-escalation and dose-expansion stage, giredestrant exposures were generally consistent between Asian and White patients .
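The urinary excretion result reported above corresponds to the fraction of the dose recovered unchanged in urine over the collection interval, conventionally defined as shown below; the symbols are generic and are not taken from the study report.

```latex
% Fraction of the dose excreted unchanged in urine over the 0-8 h collection interval.
% A_e: cumulative amount of giredestrant recovered in urine; D: administered dose.
\[
  f_e(0\text{--}8\,\mathrm{h}) = \frac{A_e(0\text{--}8\,\mathrm{h})}{D} \times 100\%
\]
```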
This phase Ia/Ib dose-escalation/-expansion study illustrates opportunities in oncology drug development from a clinical pharmacology perspective, where integration of clinical pharmacology considerations into early-phase clinical trials can accommodate an accelerated oncology development timeline by providing insights for patient eligibility, concomitant medications, and combination regimens for late-stage studies. The enhanced understanding of the pharmacokinetic profile, dose linearity, metabolic pathway, food effect, DDI potential, impact of race, and appropriate dosing when giredestrant is administered with palbociclib informed the design of pivotal trials.
The proposed starting dose for this study was 10 mg once daily, which was projected to be efficacious based on results from nonclinical xenograft models and is approximately 10-fold lower than the calculated maximum recommended starting dose based on the severely toxic dose in 10% of animals from a rat toxicity study (Genentech data on file). In this study, we characterized pharmacokinetics over a wide dose range (10–250 mg) with sufficient exposure separation across different dose levels. Furthermore, the interindividual pharmacokinetic variability observed was in line with that of small molecule drugs in oncology patients . Giredestrant demonstrated rapid oral absorption that was generally dose-proportional over the dose range evaluated. Furthermore, giredestrant achieved higher plasma concentration than fulvestrant, an approved SERD given as an intramuscular injection due to its low bioavailability. To optimize dose selection, three dose levels (30, 100, and 250 mg) were evaluated in the expansion cohort to characterize the safety and clinical activity further and the totality of data was evaluated to select the recommended phase II dose. [ 18 F]-fluoroestradiol PET indicated a high degree of target engagement at all dose levels, including 30 mg . In addition, analysis of circulating tumor DNA in the ESR1 -mutant population showed a consistent reduction in ESR1 variant allele frequency at the 30 mg dose, which was not enhanced at higher doses . Doses above 30 mg provided no additional benefit as measured by clinical benefit rate . Furthermore, bradycardia was a dose-dependent adverse reaction of giredestrant and was more frequent at doses greater than 30 mg . The clinical pharmacology attributes, clinical activity, and safety, together with nonclinical data, supported selection of the 30 mg dose for further development of giredestrant in patients with metastatic and early ER + breast cancer . Full details of the dose selection rationale will be reported separately. The study design also included early assessment of metabolism and excretion through metabolite identification and measurement of giredestrant concentration in urine. The exploratory metabolite identification indicated that an apparently prominent metabolite in humans was absent, which reduced the risk of metabolite-related safety concerns and DDI. The observed minimal renal elimination of giredestrant suggests that renal function is unlikely to have a clinically relevant impact on giredestrant exposure and that the risk for renal transporter-related DDIs with giredestrant is low. In vitro data suggested that giredestrant may induce CYP3A mRNA (Genentech data on file). In the current study, 4β-HC was measured at both the clinical dose (30 mg) and a supratherapeutic dose (90 mg) to enable further evaluation of CYP3A induction potential. 4β-HC has been suggested as an endogenous biomarker for assessing in vivo CYP3A activity due to the low variability in its plasma concentration over time. Furthermore, concentrations of 4β-HC have increased after treatment with several strong CYP3A inducers, as well as moderate (efavirenz) and weak (ursodeoxycholic acid) CYP3A inducers . The exploratory assessment elucidated that there was no apparent increase in 4β-HC levels from baseline over a prolonged period. Furthermore, baseline 4β-HC was within the range observed in previous studies . The prolonged sample collection period was informative because 4β-HC response may be delayed after CYP3A induction due to the long half-life of 4β-HC. 
Although the sample size of the analysis was limited, the negative finding provides additional evidence that giredestrant may have low CYP3A induction risk. We found no clinically relevant DDI between giredestrant and palbociclib, a CYP3A substrate. The low DDI potential observed in this phase I study supported phase III evaluation of giredestrant combined with palbociclib as first-line therapy for metastatic breast cancer in the persevERA trial (NCT04546009). In contrast, another SERD, amcenestrant, induced CYP3A at therapeutic dose levels, with increasing induction observed with increasing dose . The DDI finding resulted in evaluation of reduced-dose amcenestrant with palbociclib in the (recently discontinued) AMEERA-5 trial (NCT04478266). These findings highlight differences in clinical pharmacology attributes between oral SERD candidates. The exploratory food effect assessment in this phase I study showed a lack of food effect on giredestrant exposure and guided the dosing recommendation regarding meal intake in late-stage studies. The exploratory food effect assessment was based on patient-reported food diaries, which provided a convenient method for collecting patient meal information; however, the precision of food diaries depends on the accuracy of patient reporting and may not capture exact details of the timing or amount of each patient's caloric and fat consumption. The lack of food effect was subsequently confirmed in a dedicated food effect study in healthy subjects (NCT04274075). From a practical perspective, the absence of a food effect on pharmacokinetics offers greater flexibility and improves patient convenience. Among the other SERDs in clinical development, a food effect was observed with elacestrant and rintodestrant. Specifically, it appears that food was used to improve rintodestrant bioavailability, and the package insert for elacestrant stipulates that it be taken with food to improve gastrointestinal tolerability . A unique aspect of this phase I study was the preliminary assessment of the impact of race on giredestrant exposure. Early-phase clinical trials often enroll a less diverse population . There are examples where racial differences in efficacy and tolerability have exposed patients to unacceptable toxicity or reduced efficacy . The current phase I study was performed globally, and included a substantial number of Asian and White patients. This approach allowed us to compare exposure in Asian versus White patients, thus reducing the need for a subsequent clinical bridging study between these two populations. Although the current study provided limited data from Black or Hispanic patients, we plan to investigate these populations further through population pharmacokinetic analysis in larger and more diverse studies, such as the phase III lidERA trial (NCT04961996), which has a broader geographical footprint including sites in Africa and Central and South America. Insights generated in the current study informed phase II [aceIERA (NCT04576455; ref. ) and coopERA (NCT04436744; ref. )] and phase III (persevERA and lidERA) trial designs. By carefully integrating clinical pharmacology characterization into this first-in-human study, we were able to support accelerated clinical development, with the hope of bringing a new agent to patients more rapidly. Table S1. Representativeness of study population. Figure S1. Study design. LHRH, luteinizing hormone-releasing hormone.
Transcriptomics and proteomics provide insights into the adaptative strategies of Tibetan naked carps ( Gymnocypris przewalskii )
ad999db4-2693-4a49-a00b-03cd55eba7ee
11837439
Biochemistry[mh]
Salinity-alkalinity is a critical environmental stressor that poses substantial challenges to freshwater fish populations . The adaptability of fish to saline-alkaline water has long been of great interest in biology and ecology. Euryhaline fishes have evolved independently in different lineages of teleosts, suggesting that diverse and complex osmo- and iono-regulatory strategies might be adopted in response to osmotic fluctuations . Hence, these fish represent ideal models for investigating the molecular and evolutionary basis underlying the adaptive responses of fish to saline-alkaline variations. For example, physiological and morphological observations have revealed alterations in gill cell composition and junction, plasma endocrine levels and metabolites in muscle in response to saline-alkaline variation in many euryhaline fishes . Genomic analysis revealed that genes that function in cell junction, ion transport and transcriptional regulation were under positive selection and rapid evolution in the adaptation of cyprinid fishes to seawater and high salinity-alkalinity conditions . Transcriptomic analyses revealed the transcriptional variations of genes involved in ion homeostasis, protein folding and processing, endocrine regulation, and metabolism in diverse fish species, suggesting the orchestration of cellular processes and tissue functions during osmoregulation . These studies indicate that osmoregulation is an essential adaptive change for fish to address salinity‒alkalinity fluctuations; however, how euryhaline fishes maintain osmotic homeostasis at the molecular level is not fully understood . Gills and kidneys are the primary sites for osmo- and iono-regulation in the response of fish to saline-alkaline alteration . To counter the diffusive loss of ions and influx of water under hypotonic conditions, the gills and kidneys activate ion absorption, limit the inflow of water, and produce diluted urine, whereas in high saline-alkaline water they excrete the excess ions . These functional shifts involve morphological remodeling and cellular rearrangement, accompanied by gene expression regulation . The appearance of mitochondrial-rich cells (MRCs) is coupled with variations in the expression of genes involved in transport, cell junctions and metabolism in the branchial epithelium of fish under hypertonic conditions . Ion transporters and channel proteins, such as Na + -K + ATPase (NKA) and Na + /H + exchanger (NHE), as well as cell junction proteins for paracellular transport, such as the claudin, aquaporin and cadherin gene families, exhibit salinity-dependent expression patterns and facilitate the tolerance of fish to alterations in salinity-alkalinity . To maintain the intracellular pH, carbonic anhydrases (CAs) and HCO 3 − transporter genes, such as solute carrier family 4 ( SLC4a ) and solute carrier family 9 ( SLC9a ), are differentially expressed in the gills and kidneys of fish during their transfer from freshwater to saline-alkaline conditions . Considering the high-energy requirements for osmoregulation in teleosts, saline-alkaline shifts impact the metabolism of carbohydrates, lipids, proteins and amino acids .
Salinity-alkalinity changes lead to variations in the expression of genes in gills, which are involved mainly in the pathways of glycolysis/gluconeogenesis, amino acid metabolism and steroid biosynthesis, and differentially expressed genes in the liver are enriched in lipid metabolism, protein processing, and N-glycan biosynthesis, suggesting tissue-specific responses to salinity-alkalinity changes . Moreover, saline-alkaline-adapted fish have been reported to synthesize less-toxic urea and amino acids to avoid ammonia toxicity under high salinity-alkalinity conditions . These energy-rich compounds, such as amino acids and amino sugars, serve not only as energy resources but also as important organic osmolytes to maintain cell volume and osmolality . Therefore, alterations in salinity-alkalinity may reshape the metabolic signatures for osmoregulation and energy supplies in fish. Gymnocypris przewalskii (subfamily Schizothoracinae, genus Gymnocypris ) is an exclusive cyprinid fish that dwells in Soda Lake Qinghai (salinity 12–14 ppt, pH 9.0–9.2, altitude ~ 3200 m) on the Qinghai Tibetan Plateau (QTP) . It migrates to freshwater rivers (salinity ~ 0.02 ppt, pH 8.3–8.6) for spawning annually, demonstrating the ability to cope with a wide range of salinity-alkalinity . Several studies have characterized the genomic and transcriptomic bases of the adaptation of G. przewalskii to high salinity and alkalinity; however, the molecular regulation underlying its tolerance to salinity-alkalinity variation remains elusive . With advances in high-throughput technologies, multiomics analyses have been adopted to reveal the molecular mechanisms involved in the adaptation of fish to environmental variations . In the present study, we conducted morphological, biochemical, transcriptomic and proteomic analyses of the gills and kidneys of G. przewalskii under freshwater (FW) and saline-alkaline lake water (LW) conditions, to identify the key genes and regulatory signatures involved in the adaptation to salinity-alkalinity alterations. Our results illustrate the regulatory basis of the adaptation of G. przewalskii to salinity-alkalinity fluctuations and shed light on the molecular mechanism of euryhalinity in fish. Experimental treatment and sample collection All the animal experiments were conducted according to the guidelines described in the “Guidelines for Animal Care and Use” manual (NWIPB-202206) approved by the Animal Care and Use Committee, Northwest Institute of Plateau Biology, Chinese Academy of Sciences. For experimental treatment, G. przewalskii was artificially reproduced from a wild population and reared under freshwater conditions in the rescue center of G. przewalskii . In total, 28 fish were randomly selected and assigned to two groups: the freshwater group (FW, body weight = 14.8 ± 3.21 g, n = 14) and the saline-alkaline lake water group (LW, body weight = 15.6 ± 3.57 g, n = 14), which were raised in two plastic pools filled with 1500 L of water. The FW group was reared in circulating water throughout the experiment. The lake water was obtained from the nearshore region of Lake Qinghai, where the salinity was 12 ppt and the pH was 9.0, and was transported to the experimental facility. In the LW group, the fish were initially held in freshwater and then transferred to 1:4, 1:2 and 1:1 dilutions of lake water by adding lake water according to volume. The fish were maintained for 48 h at each level before moving to the next level according to a previous report .
Finally, the fish in the LW group were reared in lake water with a final salinity of 12 ppt and a pH of 9.0 for two weeks. During the experiment, both groups were fed twice a day (9:00 and 16:00) with a commercial diet containing 43% protein (Hanye Biotechnology Co., Ltd, China) . The dissolved oxygen (DO) was supplied and monitored during the experiment using YSI ProPlus (YSI Inc., USA), with values of 7.58 ± 0.07 mg/L and 7.41 ± 0.12 mg/L for the FW and LW groups, respectively. The water temperature was maintained at 10–12 °C using a cooling system (HAILI, China), and half of the water was replaced each day. No fish died during the experiment. After treatment, G. przewalskii were euthanized with 200 mg/L MS222 (Sigma, USA) to collect kidney and gill samples. For each treatment, blood samples were collected from the caudal vein of all individuals via a syringe with 1.5 mg/mL EDTA anticoagulant, after which the serum was separated, and the ammonia level was measured. Four gills and three kidneys from each treatment were collected and immediately stored in liquid nitrogen for RNA and protein purification. Four gill samples (the first gill arch from the right side) and kidney (right side) samples from each treatment group were collected for histological examination and transmission electron microscopy (TEM). Gills and kidneys from 10 samples were collected for the examination of urea and amino acid contents. Transmission electron microscopy (TEM) The tissue samples for transmission electron microscopy (TEM) were prepared as previously described . Fresh kidneys and gills were fixed in 2.5% glutaraldehyde fixative (Servicebio Inc., China) for 2–4 h and then washed with PBS. The sample was dehydrated in a series of ethanol solutions, embedded in SPURR resin blocks and polymerized at 37 °C overnight. The sample was placed in a 60 °C oven for 48 h and then on a copper grid and subjected to uranium‒lead double staining (2% uranium acetate saturated alcohol solution, lead citrate, each for 15 min). After drying overnight at room temperature, the sample slices were observed under a transmission electron microscope (Hitachi HT7700). Biochemical examination A blood sample of 1 ml was centrifuged at 3000 × g for 15 min at 4 °C to obtain serum and plasma for the measurement of ammonia. The level of ammonia was examined using a biochemical analyzer (Servicebio, China). The plasma was mixed with 5% trichloroacetic acid and homogenized. The kidney and gill tissues were homogenized, and the pH was adjusted to 1.80–2.00 with LiOH, which was used to determine the amino acid content of each sample. The contents of 19 free amino acids, including aspartate (Asp), asparagine (Asn), glutamine (Gln), glutamate (Glu), threonine (Thr), serine (Ser), glycine (Gly), alanine (Ala), valine (Val), cystine (Cys), methionine (Met), isoleucine (Ile), leucine (Leu), tyrosine (Tyr), phenylalanine (Phe), histidine (His), lysine (Lys), arginine (Arg), and proline (Pro), were determined in the amino acid analyzer (S-433D amino acid analyzer, SYKAM, Germany) with standards (SYKAM, Germany). Transcriptomic sequencing and data analysis RNA-seq library construction Four gill samples and three kidney samples from each of the FW and LW groups were used for RNA purification and library construction.
The RNA samples were extracted using MiniBEST Universal RNA Extraction Kit (TaKaRa, China), and the quality and quantity of the RNA samples were assessed using 1% gel electrophoresis, a Qubit® 2.0 fluorometer (Life Technologies, USA), and Agilent Bioanalyzer 2100 system (Agilent Technologies, USA). The RNA-seq library was constructed according to the manufacturer’s instructions. Briefly, mRNA was captured from 1.5 μg of RNA, which was used for the synthesis of both first- and second-strand cDNA. After fragmentation, purification and PCR amplification, the final cDNA library was obtained, which was sequenced on the Illumina NovaSeq 6000 platform (Novogene Inc., China). Identification of differentially expressed genes Clean data were obtained by removing adapters, poly-N-containing reads (N% > 10%) and low-quality reads containing more than 50% low-quality bases (Q value ≤ 20) from the raw reads via fastp (version 0.18.0) and then mapped to the G. przewalskii genome using HISAT (v2.2.4) . Gene transcription levels were calculated for each sample using Cufflinks (v2.2.1) to obtain FPKM values. Differential expression analysis was performed using the DESeq2 package implemented in R. Genes with p values less than 0.05 and fold changes greater than 1.5 were defined as differentially expressed genes (DEGs). Based on DEGs, gene ontology (GO) enrichment was conducted using clusterProfiler in R , and significantly enriched pathways were identified according to p value less than 0.05. Reverse transcription quantitative PCR (RT‒qPCR) We performed RT‒qPCR experiments to validate the identified DEGs in RNA-seq analysis. The cDNA from four tissue samples in each group was synthesized using PrimeScrip™ RT reagent Kit with gDNA Eraser (TaKaRa, China), with the RT negative control using PCR water instead of the RNA sample. The PCR primers used are listed in Supplementary Table 1. PCR was performed with a cDNA template (1:5 dilution), Aptamer™ qPCR SYBR Green Master Mixture (Novogene, China), and forward and reverse primers (Table S1) in a final volume of 20 μl. The PCR conditions were as follows: 95 °C for 5 min; 40 cycles at 95 °C for 5 s and an annealing temperature of 55–60 °C for 30 s; 95 °C for 10 s; 60 °C for 30 s; and 95 °C for 15 s. We included two types of controls, the RT control and the PCR control, and no amplicon was detected. The housekeeping gene β-actin was used as the internal control to normalize the data , and gene expression was measured using −ΔΔ Ct method . Fold changes (FCs) were calculated between FW and LW, and Pearson’s correlation was used to evaluate the consistency between RNA-seq and RT‒qPCR. Proteomics Sample preparation The same samples used for RNA-seq were used for tandem mass tag (TMT)-based protein proteomics. The tissue samples were homogenized in liquid nitrogen with lysis buffer and then ultrasonicated in ice water for 5 min. The samples were subsequently centrifuged at 12,000 × g at 4 °C for 15 min, after which the supernatants were collected and mixed with 10 mM DTT at 56 °C for 1 h. Iodoacetamide was added, and the mixture was incubated for 1 h at room temperature in the dark. The sample was then completely mixed with 4 volumes of acetone and incubated at -20 °C for 2 h. The precipitate was obtained by centrifugation at 12,000 × g at 4 °C for 15 min, after which it was quantified via a BSA assay (Bradford protein assay kit, Sangon Biotech) according to the manufacturer’s instructions. 
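As a compact illustration of the −ΔΔCt fold-change calculation referred to in the RT‒qPCR paragraph above (with β-actin as the internal control), a minimal Python sketch is given below; the Ct values and sample labels are invented for illustration and are not data from this study.

```python
import numpy as np

def ddct_fold_change(ct_target_ctrl, ct_ref_ctrl, ct_target_trt, ct_ref_trt):
    """2^(-ddCt) fold change of a target gene, normalized to a reference gene
    (e.g., beta-actin) and expressed relative to the control group.
    Inputs are arrays of Ct values per biological replicate (illustrative only)."""
    dct_ctrl = np.mean(np.asarray(ct_target_ctrl) - np.asarray(ct_ref_ctrl))
    dct_trt = np.mean(np.asarray(ct_target_trt) - np.asarray(ct_ref_trt))
    ddct = dct_trt - dct_ctrl
    return 2.0 ** (-ddct)

# Hypothetical Ct values for four FW (control) and four LW (treated) samples
fc = ddct_fold_change(ct_target_ctrl=[24.1, 24.3, 23.9, 24.0],
                      ct_ref_ctrl=[18.0, 18.2, 17.9, 18.1],
                      ct_target_trt=[22.6, 22.9, 22.7, 22.8],
                      ct_ref_trt=[18.1, 18.0, 18.2, 18.0])
print(f"Fold change (LW vs FW): {fc:.2f}")
```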
Identification of differentially expressed proteins Briefly, ~ 120 μg of protein was digested with trypsin at 37 °C overnight and then desalted using a C18 desalting column. The TMT labeling reagent was mixed with the washed supernatant and incubated at room temperature for 2 h. The sample was dissolved in mobile phase A solution (2% acetonitrile, pH 10.0) and fractionated using a C18 column (Waters BEH C18 4.6 × 250 mm, 5 μm) on a Rigol L3000 HPLC system. The separated peptides were analyzed via an EASY-nLCTM 1200 UHPLC system (Thermo Fisher) coupled with a Q Exactive HF-X mass spectrometer (Thermo Fisher), which generated the spectra for each fraction. The raw data were imported into Proteome Discoverer (v2.2) for peptide and protein identification and quantification. Two steps were applied to obtain high-quality proteomic results. First, peptide spectrum matches (PSMs) with confidence greater than 99% and proteins containing at least one unique peptide were maintained. Second, a false discovery rate (FDR) test was employed to remove peptides and proteins with FDRs greater than 1%, and the remaining peptides and proteins with high confidence were used for quantification and functional annotation. The t-test was conducted, and differentially abundant proteins (DAPs) were identified based on p values less than 0.05 and fold changes (FCs) greater than 1.2 . Integrative analysis of the transcriptome and proteome The correlation coefficients between the transcriptome and proteome were calculated based on the FC of the expressed transcripts and proteins between the FW and LW groups using cor package in R. The protein–protein interaction (PPI) network was constructed with genes whose transcription and protein expression were positively correlated. The genes were retrieved from the STRING database ( http://string-db.org/ ), and the protein‒protein interactions were analyzed via Metascape and visualized via Cytoscape . Immunohistochemical examination The tissue samples were washed with PBS twice and dehydrated in a series of 30–100% ethanol. Each sample was treated with xylene two times, embedded in paraffin, and sliced into 5 μm sections. After dewaxing, endogenous peroxidase activity was blocked with serum. The sections were incubated with a primary antibody (1:500 dilution) against anti-Junctional Adhesion Molecule 1/JAM-A rabbit pAb (GB111265, Servicebio, China) and a secondary antibody (1:200 dilution) against Cy3-conjugated goat anti-rabbit IgG (GB21303, Servicebio, China), and the nuclei were stained with DAPI blue (Servicebio, China). Finally, the sections were observed and photographed under a fluorescence microscope (Zeiss, AxioImage A2). Statistical analysis The t-test was conducted to detect significant differences in biochemical parameters between the FW and LW groups. The significant difference was defined as p value less than 0.05, and the extremely significant difference was defined as p value less than 0.01. Before the t-test, we performed the Shapiro–Wilk normality test and F test to examine the normality and variance homogeneity of the data. The Pearson correlation coefficient ( r 2 ) was calculated between the FC of RT-qPCR and that of RNA-seq via GraphPad Systems (Prism 9).
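The statistical workflow described above (normality and variance checks followed by a t-test, plus the Pearson correlation between RT-qPCR and RNA-seq fold changes) can be sketched in Python as follows; the data values are invented, and the scipy function choices are assumptions rather than the exact GraphPad procedures used in the study.

```python
import numpy as np
from scipy import stats

def compare_groups(fw, lw, alpha=0.05):
    """Illustrative two-group comparison: Shapiro-Wilk normality checks, an F test
    for equal variances, then Student's (or Welch's) t-test."""
    fw, lw = np.asarray(fw, dtype=float), np.asarray(lw, dtype=float)
    normal = (stats.shapiro(fw).pvalue > alpha) and (stats.shapiro(lw).pvalue > alpha)
    # Two-sided F test for homogeneity of variances
    f_stat = np.var(fw, ddof=1) / np.var(lw, ddof=1)
    dfn, dfd = len(fw) - 1, len(lw) - 1
    p_f = 2 * min(stats.f.cdf(f_stat, dfn, dfd), stats.f.sf(f_stat, dfn, dfd))
    equal_var = p_f > alpha
    t_stat, p_t = stats.ttest_ind(fw, lw, equal_var=equal_var)
    return {"normal": normal, "equal_var": equal_var, "t": t_stat, "p": p_t}

# Hypothetical blood ammonia values for the FW and LW groups (not study data)
print(compare_groups([46, 52, 49, 55, 50], [68, 75, 71, 80, 77]))

# Pearson correlation between RT-qPCR and RNA-seq fold changes (hypothetical values)
r, p = stats.pearsonr([1.8, 0.6, 2.4, 1.1], [2.0, 0.5, 2.6, 1.3])
print(f"r^2 = {r**2:.2f}, p = {p:.3f}")
```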
przewalskii was artificially reproduced from a wild population and reared under freshwater conditions in the rescue center of G. przewalskii . In total, 28 fish were randomly selected and assigned to two groups: the freshwater group (FW, body weight = 14.8 ± 3.21 g, n = 14) and the saline-alkaline lake water group (LW, body weight = 15.6 ± 3.57 g, n = 14), which were raised in two plastic pools filled with 1500 L of water. The FW group was reared in circulating water throughout the experiment. The lake water was obtained from the nearshore region of Lake Qinghai, where the salinity was 12 ppt and the pH was 9.0 and was transported to the experimental facility. In the LW group, the fish were initially freshwater and then transferred to 1:4, 1:2 and 1:1 dilutions of lake water by adding lake water according to volume. The fish were maintained for 48 h at each level before moving to the next level according to a previous report . Finally, the fish in the LW group were reared in lake water with a final salinity of 12 ppt and a pH of 9.0 for two weeks. During the experiment, both groups were fed twice a day (9:00 and 16:00) with a commercial diet containing 43% protein (Hanye Biotechnology Co., Ltd, China) . The dissolved oxygen (DO) was supplied and monitored during the experiment using YSI ProPlus (YSI Inc., USA), with values of 7.58 ± 0.07 mg/L and 7.41 ± 0.12 mg/L for the FW and LW groups, respectively. The water temperature was maintained at 10–12 °C using cooling system (HAILI, China), and half of the water was replaced each day. No fish died during the experiment. After treatment, G. przewalskii were euthanized with 200 mg/L MS222 (Sigma, USA) to collect kidney and gill samples. For each treatment, blood samples were collected from the caudal vein of all individuals via a syringe with 1.5 mg/mL EDTA anticoagulant, after which the serum was separated, and the ammonia level was measured. Four gills and three kidneys from each treatment were collected and immediately stored in liquid nitrogen for RNA and protein purification. Four gill samples (the first gill arch from the right side) and kidney (right side) samples from each treatment group were collected for histological examination and transmission electron microscopy (TEM). Gills and kidneys from 10 samples were collected for the examination of urea and amino acid contents. The tissue samples for transmission electron microscopy (TEM) were prepared as previously described . Fresh kidneys and gills were fixed in 2.5% glutaraldehyde fixative (Servicebio Inc., China) for 2–4 h and then washed with PBS. The sample was dehydrated in a series of ethanol solutions, embedded in SPURR resin blocks and polymerized at 37 °C overnight. The sample was placed in a 60 °C oven for 48 h and then on a copper grid and subjected to uranium‒lead double staining (2% uranium acetate saturated alcohol solution, lead citrate, each for 15 min). After drying overnight at room temperature, the sample slices were observed under a transmission electron microscope (Hitachi HT7700). A blood sample of 1 ml was centrifuged at 3000 × g for 15 min at 4 °C to obtain serum and plasma for the measurement of ammonia. The level of ammonia was examined using biochemical analyzer (Servicebio, China). The plasma was mixed with 5% trichloroacetic acid and homogenized. The kidney and gill tissues were homogenized, and the pH was adjusted to 1.80–2.00 with LiOH, which was used to determine the amino acid content of each sample. 
The contents of 19 free amino acids, including aspartate (Asp), asparagine (Asn), glutamine (Gln), glutamate (Glu), threonine (THR), serine (Ser), glycine (Gly), alanine (Ala), valine (Val), cystine (Cys), methionine (Met), isoleucine (Ile), leucine (Leu), tyrosine (Tyr), phenylalanine (Phe), histidine (His), lysine (Lys), arginine (Arg), proline (Pro), and histidine (His), were determined in the amino acid analyzer (S-433D amino acid analyzer, SYKAM, Germany) with standards (SYKAM, Germany). RNA-seq library construction Four gill samples and three kidney samples from each of the FW and LW groups were used for RNA purification and library construction. The RNA samples were extracted using MiniBEST Universal RNA Extraction Kit (TaKaRa, China), and the quality and quantity of the RNA samples were assessed using 1% gel electrophoresis, a Qubit® 2.0 fluorometer (Life Technologies, USA), and Agilent Bioanalyzer 2100 system (Agilent Technologies, USA). The RNA-seq library was constructed according to the manufacturer’s instructions. Briefly, mRNA was captured from 1.5 μg of RNA, which was used for the synthesis of both first- and second-strand cDNA. After fragmentation, purification and PCR amplification, the final cDNA library was obtained, which was sequenced on the Illumina NovaSeq 6000 platform (Novogene Inc., China). Identification of differentially expressed genes Clean data were obtained by removing adapters, poly-N-containing reads (N% > 10%) and low-quality reads containing more than 50% low-quality bases (Q value ≤ 20) from the raw reads via fastp (version 0.18.0) and then mapped to the G. przewalskii genome using HISAT (v2.2.4) . Gene transcription levels were calculated for each sample using Cufflinks (v2.2.1) to obtain FPKM values. Differential expression analysis was performed using the DESeq2 package implemented in R. Genes with p values less than 0.05 and fold changes greater than 1.5 were defined as differentially expressed genes (DEGs). Based on DEGs, gene ontology (GO) enrichment was conducted using clusterProfiler in R , and significantly enriched pathways were identified according to p value less than 0.05. Four gill samples and three kidney samples from each of the FW and LW groups were used for RNA purification and library construction. The RNA samples were extracted using MiniBEST Universal RNA Extraction Kit (TaKaRa, China), and the quality and quantity of the RNA samples were assessed using 1% gel electrophoresis, a Qubit® 2.0 fluorometer (Life Technologies, USA), and Agilent Bioanalyzer 2100 system (Agilent Technologies, USA). The RNA-seq library was constructed according to the manufacturer’s instructions. Briefly, mRNA was captured from 1.5 μg of RNA, which was used for the synthesis of both first- and second-strand cDNA. After fragmentation, purification and PCR amplification, the final cDNA library was obtained, which was sequenced on the Illumina NovaSeq 6000 platform (Novogene Inc., China). Clean data were obtained by removing adapters, poly-N-containing reads (N% > 10%) and low-quality reads containing more than 50% low-quality bases (Q value ≤ 20) from the raw reads via fastp (version 0.18.0) and then mapped to the G. przewalskii genome using HISAT (v2.2.4) . Gene transcription levels were calculated for each sample using Cufflinks (v2.2.1) to obtain FPKM values. Differential expression analysis was performed using the DESeq2 package implemented in R. 
Genes with p values less than 0.05 and fold changes greater than 1.5 were defined as differentially expressed genes (DEGs). Based on DEGs, gene ontology (GO) enrichment was conducted using clusterProfiler in R , and significantly enriched pathways were identified according to p value less than 0.05. We performed RT‒qPCR experiments to validate the identified DEGs in RNA-seq analysis. The cDNA from four tissue samples in each group was synthesized using PrimeScrip™ RT reagent Kit with gDNA Eraser (TaKaRa, China), with the RT negative control using PCR water instead of the RNA sample. The PCR primers used are listed in Supplementary Table 1. PCR was performed with a cDNA template (1:5 dilution), Aptamer™ qPCR SYBR Green Master Mixture (Novogene, China), and forward and reverse primers (Table S1) in a final volume of 20 μl. The PCR conditions were as follows: 95 °C for 5 min; 40 cycles at 95 °C for 5 s and an annealing temperature of 55–60 °C for 30 s; 95 °C for 10 s; 60 °C for 30 s; and 95 °C for 15 s. We included two types of controls, the RT control and the PCR control, and no amplicon was detected. The housekeeping gene β-actin was used as the internal control to normalize the data , and gene expression was measured using −ΔΔ Ct method . Fold changes (FCs) were calculated between FW and LW, and Pearson’s correlation was used to evaluate the consistency between RNA-seq and RT‒qPCR. Sample preparation The same samples used for RNA-seq were used for tandem mass tag (TMT)-based protein proteomics. The tissue samples were homogenized in liquid nitrogen with lysis buffer and then ultrasonicated in ice water for 5 min. The samples were subsequently centrifuged at 12,000 × g at 4 °C for 15 min, after which the supernatants were collected and mixed with 10 mM DTT at 56 °C for 1 h. Iodoacetamide was added, and the mixture was incubated for 1 h at room temperature in the dark. The sample was then completely mixed with 4 volumes of acetone and incubated at -20 °C for 2 h. The precipitate was obtained by centrifugation at 12,000 × g at 4 °C for 15 min, after which it was quantified via a BSA assay (Bradford protein assay kit, Sangon Biotech) according to the manufacturer’s instructions. Identification of differentially expressed proteins Briefly, ~ 120 μg of protein was digested with trypsin at 37 °C overnight and then desalted using a C18 desalting column. The TMT labeling reagent was mixed with the washed supernatant and incubated at room temperature for 2 h. The sample was dissolved in mobile phase A solution (2% acetonitrile, pH 10.0) and fractionated using a C18 column (Waters BEH C18 4.6 × 250 mm, 5 μm) on a Rigol L3000 HPLC system. The separated peptides were analyzed via an EASY-nLCTM 1200 UHPLC system (Thermo Fisher) coupled with a Q Exactive HF-X mass spectrometer (Thermo Fisher), which generated the spectra for each fraction. The raw data were imported into Proteome Discoverer (v2.2) for peptide and protein identification and quantification. Two steps were applied to obtain high-quality proteomic results. First, peptide spectrum matches (PSMs) with confidence greater than 99% and proteins containing at least one unique peptide were maintained. Second, a false discovery rate (FDR) test was employed to remove peptides and proteins with FDRs greater than 1%, and the remaining peptides and proteins with high confidence were used for quantification and functional annotation. 
The t-test was conducted, and differentially abundant proteins (DAPs) were identified based on p values less than 0.05 and fold changes (FCs) greater than 1.2.

The correlation coefficients between the transcriptome and proteome were calculated from the FCs of the expressed transcripts and proteins between the FW and LW groups using the cor function in R. The protein–protein interaction (PPI) network was constructed with genes whose transcription and protein expression were positively correlated. The genes were retrieved from the STRING database ( http://string-db.org/ ), and the protein‒protein interactions were analyzed via Metascape and visualized via Cytoscape.

The tissue samples were washed with PBS twice and dehydrated in a graded series of 30–100% ethanol. Each sample was treated with xylene twice, embedded in paraffin, and sliced into 5 μm sections. After dewaxing, endogenous peroxidase activity was blocked with serum. The sections were incubated with a rabbit polyclonal primary antibody against Junctional Adhesion Molecule 1/JAM-A (1:500 dilution; GB111265, Servicebio, China) and a Cy3-conjugated goat anti-rabbit IgG secondary antibody (1:200 dilution; GB21303, Servicebio, China), and the nuclei were counterstained with DAPI (Servicebio, China). Finally, the sections were observed and photographed under a fluorescence microscope (Zeiss, AxioImage A2).
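The two concordance checks described above (RT‒qPCR versus RNA-seq, and transcript versus protein fold changes) both reduce to comparing two vectors of fold changes. The sketch below is a minimal illustration, not the authors' scripts: it derives relative expression with the 2^−ΔΔCt formula from hypothetical Ct values, with β-actin standing in as the reference gene, and then correlates the result against matching (made-up) RNA-seq fold changes.

```python
import numpy as np
from scipy.stats import pearsonr

def ddct_fold_change(ct_target_lw, ct_ref_lw, ct_target_fw, ct_ref_fw):
    """Relative expression (LW vs. FW) by the 2^-ddCt method.

    Ct values are per-group means; the reference gene normalizes loading,
    as beta-actin does in the study. All numbers used below are hypothetical.
    """
    dct_lw = ct_target_lw - ct_ref_lw      # normalize to the reference gene
    dct_fw = ct_target_fw - ct_ref_fw
    ddct = dct_lw - dct_fw                 # LW relative to FW
    return 2.0 ** (-ddct)

# Hypothetical Ct values for a few validated genes
qpcr_fc = np.array([
    ddct_fold_change(24.1, 18.0, 22.9, 18.1),   # e.g. slc9a3
    ddct_fold_change(21.5, 18.2, 23.0, 18.0),   # e.g. glul
    ddct_fold_change(25.0, 18.1, 26.3, 18.2),   # e.g. jam-a
    ddct_fold_change(27.2, 18.0, 25.9, 18.1),   # e.g. rhcg
])
rnaseq_fc = np.array([0.45, 2.6, 2.3, 0.40])    # hypothetical RNA-seq fold changes

# Correlate on a log2 scale so up- and down-regulation are weighted symmetrically
r, p = pearsonr(np.log2(qpcr_fc), np.log2(rnaseq_fc))
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```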
Statistical analysis

The t-test was conducted to detect significant differences in biochemical parameters between the FW and LW groups. Significant differences were defined as p values less than 0.05, and extremely significant differences as p values less than 0.01. Before the t-test, we performed the Shapiro–Wilk normality test and the F test to examine the normality and variance homogeneity of the data. The Pearson correlation coefficient (r²) between the FCs of RT-qPCR and RNA-seq was calculated with GraphPad Prism 9.

Morphological variations in the gills and kidneys of G. przewalskii

Mitochondrial-rich cells (MRCs) and red blood cells (RBCs) appeared in the lamellae of the gill filaments under FW conditions (Fig. a-b), whereas pavement cells (PVCs), pillar cells (PCs) and mucus cells (MCs) containing mucus were present in the lamellae of G. przewalskii in the LW group (Fig. c-d). In the kidney, TEM revealed variations in the junction structure of the renal tubules, with a dense form under the LW condition and a slightly leaky form under the FW condition (Fig. ).

Biochemical responses to saline-alkaline variation

The ammonia content in the blood was significantly greater in the LW group than in the FW group (p < 0.05) (Fig. a). The urea level was significantly lower in the kidney of the LW group (p < 0.01) and did not significantly change in the gills (Fig. b). The total free amino acid content significantly decreased in both the gills and kidneys of the LW group (Fig. c). The contents of essential amino acids significantly decreased in the gills; in the kidney, the levels of Val, Ile, Leu, His and Lys significantly decreased, whereas Phe and Arg increased (Fig. d-f). Among the nonessential amino acids (NEAAs), Asn, Glu and Gln were more abundant in both the gills and kidneys of the LW group than in those of the FW group (Fig. e-g). The remaining NEAAs were lower in the LW group than in the FW group.

Transcriptomic variations under different saline‒alkaline conditions

Overview of the RNA-seq results

The transcriptomic patterns of the gills and kidneys of G. przewalskii were compared between freshwater (FW, 0.22 ppt, pH 7.8) and high saline-alkaline lake water (LW, 12 ppt, pH 9.0). We generated 54,601,180 to 76,292,610 clean reads per sample, with Q30 values greater than 92% (Table S2). The clean reads were aligned to the G. przewalskii genome, and the mapping ratios ranged from 76.54% to 83.51%. These data indicate the high quality of the RNA-seq data, which supports reliable identification of differentially expressed genes (DEGs).

Identification of differentially expressed genes

We identified 1,424 differentially expressed genes (DEGs) in the gills of G. przewalskii, including 831 upregulated genes and 593 downregulated genes, under the LW condition (Table and Table S3).
The transcriptomic analysis of the kidney revealed that 1,554 genes were significantly differentially expressed, including 726 upregulated genes and 828 downregulated genes, under the LW treatment (Table and Table S4). There were 240 common DEGs in both tissues, and their expression clearly separated the FW and LW groups in both tissues, suggesting that the transcriptional variations in the kidneys and gills were induced by salinity-alkalinity changes (Fig. a-b). We selected 10 genes to verify the RNA-seq results via RT‒qPCR. In terms of the fold changes, the Pearson correlation coefficient (r²) between RT‒qPCR and RNA‒seq was 0.84, supporting the accuracy and reliability of the RNA‒seq results (Fig. c).

Functional enrichment analyses of DEGs

To understand the potential functional impact, we performed GO and KEGG enrichment analyses on the DEGs. GO analysis suggested that DEGs in the gills and kidney were overrepresented in functions related to ion transport, acid–base regulation, and immunity (Table S5 and Table S6). Moreover, genes related to ion transport and acid–base regulation, annotated with GO terms such as regulation of pH (GO:0006885), chloride transport (GO:0006821) and antiporter activity (GO:0015297), were downregulated in the LW group (Fig. a-b). The KEGG analysis revealed that 20 and 16 pathways were enriched with DEGs in the gills and kidney, respectively, including 9 and 6 metabolic pathways, such as pyruvate metabolism (ko00620), glycolysis/gluconeogenesis (ko00010) and arginine and proline metabolism (ko00330) (Fig. c-d and Tables S7-8).

Protein expression variations under different saline‒alkaline conditions

TMT-based proteomics generated 416,785 and 433,744 spectra in the gills and kidney, respectively, including 108,375 and 107,239 matched spectra, which resulted in the identification of 53,607 and 48,675 peptides. Overall, 8,364 and 8,306 proteins were identified in the gills and kidneys of G. przewalskii, among which 8,300 were expressed in all samples from both tissues. In the gills, we identified 155 differentially abundant proteins (DAPs), including 55 upregulated and 100 downregulated DAPs, under the LW condition (Table and Table S9). These gill DAPs were enriched in the GO terms tolerance induction (GO:0002507), ATP hydrolysis coupled anion transmembrane transport (GO:0099133) and response to L-glutamate (GO:1902065) (Fig. a and Table S10). In the kidney, we observed the upregulation of 92 proteins and the downregulation of 108 proteins under the LW condition (Table and Table S11). The kidney DAPs were overrepresented in the GO terms regulation of pH (GO:0006885), proton transmembrane transport (GO:1902600) and cellular response to osmotic stress (GO:0071470) (Fig. b and Table S12). These results indicate that both transport and metabolism are involved in the response of G. przewalskii to salinity-alkalinity changes. A Venn diagram revealed 8 common DAPs in both tissues, which clearly differed in expression between the two treatments (Fig. c-d). Among these common DAPs, the abundance of junctional adhesion molecule A (JAM-A, GPPG00046776) was significantly lower in the FW group than in the LW group in both tissues, which was consistent with the immunostaining results (Fig. ).
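The GO term overrepresentation reported above, for both DEGs and DAPs, rests on the kind of hypergeometric test that enrichment tools such as clusterProfiler perform internally. As a rough, self-contained sketch (not the clusterProfiler implementation, and with made-up gene counts), the enrichment p value for a single term can be computed as follows.

```python
from scipy.stats import hypergeom

def enrichment_pvalue(n_background, n_annotated, n_degs, n_degs_annotated):
    """One-sided hypergeometric (overrepresentation) test for a single term.

    n_background     : total annotated genes in the background
    n_annotated      : background genes annotated with the term
    n_degs           : number of DEGs tested
    n_degs_annotated : DEGs annotated with the term
    """
    # P(X >= n_degs_annotated) under sampling without replacement
    return hypergeom.sf(n_degs_annotated - 1, n_background, n_annotated, n_degs)

# Hypothetical counts for a term such as "regulation of pH"
p = enrichment_pvalue(n_background=20000, n_annotated=150,
                      n_degs=1424, n_degs_annotated=25)
print(f"enrichment p value: {p:.3e}")
```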
Integrative proteomic and transcriptomic analyses

In total, 201 and 370 genes in the gills and kidney, respectively, were positively correlated at the transcription and protein expression levels, with Pearson correlation coefficients (r²) of 0.1305 and 0.1511 for the gills and kidney, respectively (Fig. a-b). The positively correlated genes were enriched in GO functions related to cell adhesion, transport activities and metabolism, together with the increased expression of GLUL (glutamate-ammonia ligase, GPPG00030516), suggesting that they were influenced by variations in salinity-alkalinity through the gene-protein network in the gills and kidneys of G. przewalskii (Fig. c-d).

Molecular network for osmoregulation in G. przewalskii

Among these positively correlated genes, genes related to acid‒base homeostasis and transport in the gills and kidney under the freshwater condition, including CA2 (carbonic anhydrase 2, GPPG00087498), GLS (glutaminase kidney isoform, GPPG00014160), SLC9a3 (Na+/H+ exchanger 3, NHE3, GPPG00000195 and GPPG00001973), SLC4a2 (Na+/HCO3− exchanger, AE2, GPPG00004442), SLC4a4 (Na+/HCO3− cotransporter, NBCe1, GPPG00045161), and the Na+-K+ ATPase subunits ATP1a1 (GPPG00073989) and ATP1b1 (GPPG00045014), were highly expressed. In addition, the expression of genes in the renin-angiotensin system, including PRCP (prolylcarboxypeptidase, GPPG00017413), ENPEP (glutamyl aminopeptidase, GPPG00024682 and GPPG00014783) and REN (renin, GPPG00006133), was increased in the FW group, which is related to the reabsorption of Na+ under freshwater condition (Fig. a-b). Under the LW condition, genes involved in the regulation of cell adhesion, including GEF-H1 (Rho/Rac guanine nucleotide exchange factor 2, GPPG00000179), cortactin (GPPG00080785), CDH5 (cadherin-5, GPPG00065431) and SYNPO (synaptopodin, GPPG00004218), were highly expressed in the gills and kidney (Fig. a-b). Increased expression of GLUL (glutamate-ammonia ligase, GPPG00030516) was observed in the LW group, which might play a role in reducing NH3 levels via the synthesis of glutamine. Additionally, genes involved in metabolism presented varied expression levels in the kidneys and gills of G. przewalskii in the comparison of the FW and LW groups. For example, MDH1 (malate dehydrogenase, GPPG00008265) in the TCA cycle and CKB (creatine kinase B, GPPG00024448 and GPPG00034640) and CKM (creatine kinase M, GPPG00030543) for energy homeostasis presented higher expression under the FW condition. The higher expression of CPB1 (carboxypeptidase B, GPPG00062078), SLC1A1 (GPPG00032761) and SLC7A8 (GPPG00056567) indicated a potentially elevated capacity for protein digestion and absorption, which could explain the increased contents of amino acids under FW conditions. In the LW group, genes involved in lipid and fatty acid metabolism, including Apoeb (apolipoprotein Eb, GPPG00079040 and GPPG00001858), NEU3 (sialidase-3, GPPG00029676), SMPD4 (sphingomyelin phosphodiesterase 4, GPPG00012835), CERS1 (ceramide synthase, GPPG00007981) and ALDH3A2 (fatty aldehyde dehydrogenase, GPPG00011297), presented relatively high expression under the LW condition (Fig. b-c).
We also found that genes for N-acetylneuraminic acid biosynthesis, including GFPT2 (glutamine:fructose-6-phosphate aminotransferase, GPPG00034273), GPI (glucose phosphate isomerase, GPPG00045054) and GNE (UDP-N-acetylglucosamine 2-epimerase/N-acetylmannosamine kinase, GPPG00007434) in the gills, and HK2 (hexokinase 2, GPPG00069499) and NANP (N-acylneuraminate-9-phosphatase, GPPG00038766) in the kidney, were upregulated under the LW condition in G. przewalskii (Fig. b-c).
G. przewalskii dwells in Lake Qinghai under high saline-alkaline conditions and migrates to freshwater during annual spawning, demonstrating a strong capacity for osmoregulation. The current study combined morphological, biochemical, transcriptomic, and proteomic analyses to explore regulatory pathways and identify the crucial molecules underlying the adaptation of G. przewalskii to salinity‒alkalinity changes. In the present study, the correlation coefficients between transcriptomics and proteomics were 0.13 and 0.15 in the gills and kidneys, respectively, which is similar to observations in other fish. The discrepancy between the transcriptome and proteome could be explained by posttranscriptional regulation, translation efficiency, and the degradation rates of proteins and mRNAs. Moreover, miRNA‒mRNA interactions have been observed in a gill-derived cell line of G. przewalskii under salinity stress, suggesting the influence of posttranscriptional regulation on the correlation between transcriptomic and proteomic data.

Activation of ion transport and acid‒base regulation

During the transition from high to low salinity, maintaining ion homeostasis is highly challenging for fish. Gill remodeling has been observed in the response of fish to saline-alkaline changes, particularly the appearance of MRCs under hypotonic conditions. MRCs are mainly in charge of ion and solute transport, and transporters and exchangers such as Na+-K+ ATPase (NKA), Na+/H+ exchangers (NHEs) and Na+/HCO3− cotransporters (NBCs) are primarily expressed there for transepithelial transport. ATP1a1 and ATP1b1, which constitute NKA subunits, are expressed in a salinity-dependent manner, and SLC9a3 (NHE3) has been recognized as an important regulator of Na+/H+ exchange. In the present study, we detected MRCs in the gills of G. przewalskii under the FW condition, which coincided with the upregulation of DEGs and DAPs involved in ion transport. The increased expression of SLC9a3, ATP1a1 and ATP1b1 suggested the activation of ion uptake in G. przewalskii under the FW condition (Fig. c), which is consistent with the expression patterns of these genes during the transition of several fish species to soft, low-salinity water, such as rainbow trout, chum salmon, spotted sea bass, and marine medaka. It has been reported that the blood osmolality of G.
przewalskii is close to that of Lake Qinghai water and decreases during the migration to the river. Therefore, the activation of ion absorption genes would contribute to maintaining osmotic homeostasis in G. przewalskii under freshwater condition. Similarly, differential expression of genes involved in ion transport has been identified by comparing freshwater with high saline-alkaline water populations of Leuciscus waleckii, a cyprinid fish dwelling in the soda lake, Lake Dali Nur. Interestingly, genes of the SLC9a and SLC2a gene families and NKA subunits showed higher expression in L. waleckii in high saline-alkaline lake water. Moreover, it has been reported that genes involved in ion uptake, including SLC9a3, ATPa1a and ATPa1b, are more highly expressed in freshwater than in seawater in fish acclimated to tidally changing salinities. These phenomena suggest the involvement of ion absorption in maintaining the osmotic homeostasis of fish experiencing saline-alkaline fluctuations. In addition to ion exchangers and channels, our results suggest the involvement of the renin‒angiotensin system (RAS) in the prevention of water and ion loss in G. przewalskii in FW. The RAS is responsible for the regulation of sodium and water homeostasis. PRCP, ENPEP and REN encode lysosomal pro-X prolylcarboxypeptidase, glutamyl aminopeptidase and renin, respectively, all of which are critical in the RAS. Renin is the rate-limiting enzyme in the RAS and is synthesized mainly in the kidney. Renin-null mice and zebrafish showed defects in renal development and required extra saline injection for survival, indicating that renin is critical for ion reabsorption in the kidney. Renin initiates the RAS cascade to generate Ang II through the processing of angiotensinogen by ACE and aminopeptidases, including ENPEP. PRCP converts Ang II to Ang (1–7), which could trigger the expression of downstream genes, such as ATPase and NKA, to strengthen water and sodium reabsorption in the proximal tubules. The increased expression of REN, ENPEP and PRCP suggested that activation of the RAS, together with increased levels of ion transepithelial transporter genes, might contribute to maintaining ion and water homeostasis in G. przewalskii under FW.

Acid‒base regulation

The high bicarbonate alkalinity of Lake Qinghai requires G. przewalskii to mount an adaptive response to maintain the acid‒base balance during migration from the soda lake to the freshwater river. The genes associated with proximal tubule bicarbonate reclamation were activated in the gills and kidneys under the saline-alkaline condition, which is consistent with observations in rainbow trout and marine medaka. Carbonic anhydrases (CAs) catalyze the interconversion of carbon dioxide (CO2) and bicarbonate (HCO3−) and regulate acid–base homeostasis by secreting H+ and absorbing HCO3− coupled with SLC9a3 (NHE3) and SLC4a4 (NBCe1). The HCO3− transporters SLC4a2 (AE2) and SLC4a4 (NBCe1) and the Na+/H+ exchanger 3 SLC9a3 (NHE3) play crucial roles in the maintenance of intracellular ion and pH homeostasis. Physiological assays have demonstrated that G. przewalskii experiences transient metabolic acidosis during its migration from the lake to freshwater rivers. The activation of CA2, SLC9a3, SLC4a2 and SLC4a4 would facilitate the reabsorption of HCO3− and Cl− as well as the excretion of H+, which contributes to acid–base regulation and ion homeostasis in G. przewalskii in response to saline-alkaline changes.
Nitrogen waste excretion

Ammonia (NH3) is the major form of nitrogenous waste in fish and is excreted into the water through the gill filaments via transporters. Rh glycoproteins, including Rhag, Rhbg and Rhcg, mediate ammonia excretion in fish. The greater transcription of Rhag (GPPG00059145) and Rhcg (GPPG00004431) in the gills of G. przewalskii under FW was consistent with the reported upregulation of Rhag glycoproteins under low pH, which is responsible for NH4+ excretion through the gills of freshwater teleost fish. Additionally, we detected the activation of GLS in G. przewalskii, which could also promote NH4+ and energy production in FW by catalyzing the conversion of glutamine to glutamate. Therefore, G. przewalskii eliminated NH4+ directly through the gills, and the upregulation of GLS could help meet the energy demands of osmoregulation in G. przewalskii under FW. On the other hand, high alkalinity impedes the conversion of NH3 to NH4+ by reducing the availability of H+, which leads to ammonia accumulation and toxic effects on fish. Therefore, saline-alkaline-adapted fish have developed possible mechanisms to convert NH3 to less toxic forms of nitrogenous products, such as amino acids and urea. Consistent with previous reports, urea levels differed significantly between the LW and FW groups of G. przewalskii. Although the contents of most amino acids decreased, the levels of glutamine and glutamate increased with increasing salinity-alkalinity, indicating their importance in the adaptation of G. przewalskii to salinity-alkalinity. GLUL belongs to the glutamine synthetase family and catalyzes the conversion of glutamate and NH3 to glutamine. The activation of GLUL suggested that G. przewalskii reduces the toxicity of NH3 through the generation of glutamine in LW, which was supported by the high levels of glutamate and glutamine. Therefore, G. przewalskii has adopted different strategies for NH3 detoxification: the excretion of NH4+ in FW and the synthesis of glutamine in LW.
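The point that high alkalinity impedes the conversion of NH3 to NH4+ is a quantitative one: ammonia speciation follows the Henderson–Hasselbalch relationship, so at pH 9.0 a much larger fraction of total ammonia remains un-ionized than at pH 7.8. The sketch below illustrates this with an assumed ammonium pKa of about 9.25 (a typical textbook value near 25 °C; the exact value depends on temperature and ionic strength and was not measured in the study).

```python
def nh3_fraction(ph: float, pka: float = 9.25) -> float:
    """Fraction of total ammonia present as un-ionized NH3 at a given pH.

    From Henderson-Hasselbalch: pH = pKa + log10([NH3]/[NH4+]), hence
    [NH3] / ([NH3] + [NH4+]) = 1 / (1 + 10**(pKa - pH)).
    The pKa used here is an assumed textbook value.
    """
    return 1.0 / (1.0 + 10.0 ** (pka - ph))

# Freshwater river (pH 7.8) vs. Lake Qinghai water (pH 9.0)
for label, ph in [("FW, pH 7.8", 7.8), ("LW, pH 9.0", 9.0)]:
    print(f"{label}: {nh3_fraction(ph) * 100:.1f}% of total ammonia as NH3")
```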
Transcellular and paracellular permeability

To avoid the passive influx of ions and loss of water, fish tend to adjust their paracellular permeability through the expression of cell‒cell junction genes. JAM-A is specifically localized at tight junctions and controls junction integrity and paracellular permeability. Compared with the "leaky" junctions under FW, the upregulation of JAM-A suggested strengthened tight junctions in the kidney, which might explain the appearance of dense and intact barriers in the renal tubules. These morphological and expression alterations help protect G. przewalskii from the loss of solutes in high-salinity environments. SYNPO and GEF-H1 reportedly localize at tight junctions and are critical for junction integrity and its protective effects. It has been reported that CDH5 (cadherin 5, VE-cadherin) and cortactin form a complex and play a role in adherens junction assembly and stability. The upregulation of SYNPO and GEF-H1 in the kidney and the increased expression of CDH5 and cortactin suggested the involvement of diverse cell junctions in the response of G. przewalskii to increased salinity and alkalinity, which is consistent with observations in many euryhaline fishes. Moreover, MCs containing mucus appeared in G. przewalskii gills under the LW condition. Mucus acts as a "waterproof" barrier and protects cells from hypertonic stress in fish. N-acetylneuraminic acid, a member of the sialic acid family, has been identified as a component of gill mucus. Coincidentally, the activation of a series of genes in the N-acetylneuraminic acid biosynthesis pathway might be related to the induction of key components of mucus, which would limit the transcellular transport of water and ions through G. przewalskii gills under LW conditions. Therefore, increased cell junctions and mucus production might block the paracellular and transcellular transport of solutes and contribute to the adaptation of G. przewalskii to high salinity-alkalinity.

The metabolic responses

Our analysis revealed that salinity-alkalinity variation resulted in transcriptomic and proteomic changes in genes involved in lipid, protein, amino acid and purine metabolism in G. przewalskii. MDH1 is a metabolic enzyme that fuels the TCA cycle. CK consists of subunits B and M and plays a central role in cellular energy metabolism. The activation of MDH1, PC and CKs might promote the TCA cycle and energy homeostasis to meet the energy demand for ion uptake in G. przewalskii in FW. Genes involved in protein digestion and absorption, such as CPB1, SLC1A1 and SLC7A8, were upregulated, which is consistent with the increased contents of amino acids in the FW group. Amino acids are considered energy supplies for osmoregulation and organic osmolytes that maintain cell volume, and their contents are altered in fish tissues under different salinities. Therefore, the increased amino acid contents might provide energy for ion uptake for osmoregulation under freshwater condition. Our results suggested that genes involved in sphingolipid metabolism, including NEU3, CERS1, and SMPD4, were highly expressed under the saline-alkaline condition. Sphingolipids, such as sphingomyelin and its intermediate product ceramide, are important components of lipid rafts in the cell membrane and are involved in the regulation of ion channels in tilapia and milkfish. The expression profiles indicated that the synthesis of sphingolipids might contribute to the high salinity‒alkalinity adaptation of G.
przewalskii through potential changes in membrane structure.
By applying histological and biochemical analyses, transcriptomics and proteomics, the current study provides new insights into the physiological and molecular regulatory networks that allow the gills and kidneys of G. przewalskii to adapt to a wide range of salinities and alkalinities. Under freshwater condition, G. przewalskii increased the expression of genes involved in ion absorption and acid–base regulation to maintain osmolality and intracellular pH. Under the high salinity-alkalinity lake water condition, G. przewalskii presumably produces glutamine for NH3 detoxification via the upregulation of GLUL and increases the expression of genes involved in cell junctions and mucus production to regulate transcellular and paracellular permeability. Additionally, the increased expression of genes involved in sphingolipid metabolism might lead to potential changes in membrane structure and components, and its role in the adaptation of G. przewalskii to high salinity-alkalinity needs further investigation. In conclusion, the present study investigated the morphological, biochemical and molecular responses of G. przewalskii to alterations in salinity-alkalinity, providing helpful information for understanding the adaptive mechanisms of fish responses to salinity-alkalinity changes at the molecular level.

Supplementary Material 1.
Effect of antacid gastric syrups on surface properties of dental restorative materials: an in vitro analysis of roughness and microhardness
The movement of acid from the stomach to the esophagus is controlled by an anti-reflux mechanism, which includes the lower esophageal sphincter. A disruption in this system leads to a chronic condition known as gastroesophageal reflux disease (GERD). GERD is defined as the pathological passage of stomach contents into the esophagus and, in some cases, into the oral cavity. Its symptoms include heartburn, acid regurgitation, chest pain, chronic respiratory complications, and bronchial asthma. Heartburn is described as an uncomfortable burning sensation in the chest, behind the breastbone, or in the upper abdomen, which may sometimes radiate to the throat. GERD, similar to other gastrointestinal disorders, is highly prevalent globally, exhibiting significant regional variations in its incidence. GERD prevalence has been reported to range from 18.1 to 27.8% in North America, 8.8–25.9% in Europe, 2.5–7.8% in East Asia, 8.7–33.1% in the Middle East, 11.6% in Australia, and 23.0% in South America. Additionally, the incidence rate of GERD is approximately 5 per 1,000 person-years in the UK and the US, while in children aged 1–17 years in the UK this rate is 0.84 per 1,000 person-years. Evidence suggests that GERD prevalence is increasing, particularly in North America and East Asia. The potential impact of GERD on oral health is significant, as the condition can affect not only the stomach and esophagus but also the oral cavity. This suggests that the prevalence of GERD and the associated use of antacid syrups will continue to rise in the coming years owing to lifestyle changes, dietary habits and an aging population, which highlights the need to understand the potential long-term effects of these syrups, particularly on oral health and dental materials. In the treatment of GERD, antacid medications work by neutralizing excess stomach acid and inhibiting proteolytic enzymes, thereby providing symptomatic relief. The effectiveness of these medications depends on their buffering and neutralizing capacities. Manufacturers offer various formulations to improve ease of consumption and efficacy. These formulations often include agents such as calcium carbonate, sodium bicarbonate, and magnesium carbonate, which are effective in neutralizing acid. Some of these active ingredients are also found in toothpastes. For instance, calcium carbonate and sodium bicarbonate are added as abrasive agents due to their stain-removal and whitening properties, as demonstrated by previous studies. Similarly, the abrasive effect of magnesium carbonate on tooth enamel crystals has been shown in studies. Studies have indicated that brushing teeth with water alone causes minimal wear, less than 0.1 microns. Thus, the active ingredients in toothpaste play a significant role in increasing abrasiveness. The interactions of composite resins with the oral environment have been investigated and reported by researchers for many years. To ensure the long-term success of aesthetic restorative treatments, it is desirable to use composite resins that can maintain their gloss over time. In this context, the interactions of composite resins with toothpastes, toothbrushes, chewing tablets, and syrups have frequently been the subject of research. It has been stated that one way to whiten teeth is to remove stains from the enamel surface, which is why abrasives are added to toothpaste formulations.
It has also been shown that these abrasives can lead to changes in the surface roughness and microhardness of composite resins. Studies in the literature have investigated the effects of syrups and effervescent tablets used in the treatment of systemic diseases on the surface roughness and microhardness of composite resins. Additionally, it is known that toothpastes containing acid-neutralizing agents, such as those found in antacid syrups, can alter the surface properties of composite resins. However, to the best of our knowledge, there is no study evaluating the potential effects of antacid syrups on the surface of composite restorative materials. Therefore, this in-vitro study aimed to evaluate the effect of antacid gastric syrups on the surface roughness and microhardness of restorative materials. The null hypotheses of the study were as follows: (1) antacid gastric syrups have no significant effect on the surface roughness of composite resins; (2) antacid gastric syrups have no significant effect on the surface microhardness of composite resins.

This study is an in-vitro laboratory study. Table lists the three composite resins and their components, while Table shows the four antacid gastric syrups along with their components and pH values. The endogenous pH of all drugs was measured using a digital pH electrode meter (Growsan, Growkent, Turkey).

Sample size calculation

The sample size for this study was calculated using the G*Power software (Version 3.1.9.4, Germany). For the one-way ANOVA used to compare the groups, the effect size was set to 0.37 with a confidence level of 95% (α = 0.05) and a statistical power of 80% (β = 0.20). For the repeated measures ANOVA used to analyze time-dependent changes (baseline, day 7, day 15, and day 28), an effect size of 0.16 was used to account for the inclusion of the time factor.

Preparation of samples

Three different types of composites were used in the study: nanohybrid, microhybrid, and giomer (Table ). A total of 150 disc-shaped samples, 50 of each composite type, were prepared in a special Teflon mold (10 mm × 2 mm). After the samples were placed in the molds, they were lightly pressed under a Mylar matrix and cured with a second-generation LED light-curing unit (Guilin Woodpecker Medical Instrument Co., Guilin, China) emitting light at a 470 nm wavelength for 20 s (light intensity 1,200 mW/cm²). The discs removed from the molds were additionally cured on the other side for 20 s. The samples were then polished sequentially with extra-coarse, coarse, medium, and fine discs (OptiDisc, Kerr, FL, USA) using a handpiece running at 15,000 rpm according to the manufacturer's instructions. After polishing, the samples were measured with a digital caliper (Insize, Insize Ltd, China) to ensure that they had the same thickness. The pH of all tested liquids, including distilled water, was measured at a controlled temperature of 25 °C in accordance with standard laboratory protocols. In addition, the samples were stored at 37 °C to simulate oral conditions.

Randomization (blinding)

According to a list created with https://www.random.org , all composite samples were numbered and randomly divided into 15 subgroups. The antacid gastric syrups were administered using the same technique for every subgroup.

Calibration

Surface roughness and microhardness analyses of all samples were performed by a single operator (FO). Accordingly, the operator was trained in the use of the profilometer and the microhardness tester.
Calibration
Surface roughness and microhardness analyses of all samples were performed by a single operator (FO), who was trained beforehand in the use of the profilometer and the microhardness measuring device . For verification, measurements were performed on 10 composite samples excluded from the study. Measurements were repeated at 1-week intervals, and a repeatability agreement of 90% was obtained.

Immersion cycles
The immersion cycle protocol used in this study was adopted to imitate the actual consumption of antacid gastric syrups. The samples were immersed in 10 ml of antacid gastric syrup for 2 min daily, at 24 h intervals, for 28 days to simulate a period of 2 years, and were kept in an incubator at 37 °C . After each immersion cycle, the samples were washed under running water and stored in distilled water until the next cycle. The syrups were renewed before each cycle. The Control group samples were kept in distilled water for 28 days, and the solution was renewed every other day. Surface roughness and microhardness measurements were performed at the beginning of the experiment and on days 7, 15, and 28.

Measurement of surface roughness
Surface roughness measurements of all samples were carried out using a mechanical profilometer (Surftest SJ-210, Mitutoyo, Tokyo, Japan). Measurements were performed at baseline and on days 7, 15, and 28. Measurements were carried out at 3 different points equidistant from the centre of each sample, at a speed of 0.25 mm/s and a cut-off length of 0.80 mm. The arithmetic mean of the obtained values (Ra, µm) was recorded.

Measurement of surface microhardness
The surface microhardness of all samples was measured with a Vickers hardness tester (Shimadzu HMV-2000, Germany) and recorded as the Vickers microhardness number (VHN). Measurements were carried out at baseline and on days 7, 15, and 28. A dwell time of 15 s and a load of 0.49 N were applied to each sample . Hardness was recorded in VHN by making 3 indentations, each 100 μm from the centre of the sample and from one another.

Atomic force microscopy (AFM) and scanning electron microscopy (SEM)
One sample randomly selected from each group was examined with AFM and SEM. For SEM and AFM imaging, areas of the sample surface that were not impacted by microhardness indentations or surface roughness traces were carefully selected to ensure the integrity of the imaging results. An area of 10 × 10 μm was scanned in contact mode at a speed of 1 Hz with an AFM device (Nanomagnetics Instruments, Oxford, UK). The 3D image of the obtained surface and the corresponding average surface roughness value (Ra) are presented together. All imaging was performed from the center of the samples to ensure standardization. XEI Data Analysis Program, version 1.6 (Park Systems Inc.) software was used to obtain surface topography from the AFM images and for roughness calculations. The samples selected for SEM examination were coated with an Au-Pd alloy in a coating device (BAL-TEC 050, Capovani Brothers, USA) to increase conductivity. The samples were then placed in an SEM device (EVO 40, Carl Zeiss, Germany), and images were taken at ×1,000, ×5,000, and ×10,000 magnification at an accelerating voltage of 20 kV.

Statistical analysis
Data were analyzed using IBM SPSS V.27 (Chicago, USA). The Shapiro-Wilk test was used to assess the distribution of the data, the repeated measures ANOVA was used to compare measurements from the beginning of the 28-day experiment with those on days 7, 15, and 28, and the one-way ANOVA was used to compare independent groups. Sphericity was assessed with Mauchly's test.
Tukey and Games-Howell post-hoc tests were applied according to whether the data were homogeneously distributed or not. The significance level was determined as p < 0.05.
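The workflow just described (normality check, between-group comparison, post-hoc testing) can also be outlined outside SPSS. The following Python sketch is purely illustrative: the data frame, group labels, and values are hypothetical, and Games-Howell comparisons would require an additional package (for example, pingouin), since they are not available in scipy or statsmodels.

```python
# Hypothetical outline of the statistical workflow described above (the study itself used SPSS).
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# One row per sample: "group" is the immersion medium, "ra_day28" a day-28 roughness value (µm).
# The numbers below are fabricated demo values, not study data.
demo_values = stats.norm.rvs(loc=0.20, scale=0.05, size=50, random_state=1)
df = pd.DataFrame({"group": ["Control", "GL", "GD", "RD", "MF"] * 10, "ra_day28": demo_values})

# Shapiro-Wilk normality check within each group
for name, sub in df.groupby("group"):
    print(name, "Shapiro-Wilk p =", round(stats.shapiro(sub["ra_day28"])[1], 3))

# One-way ANOVA across independent groups at a single time point
samples = [sub["ra_day28"].to_numpy() for _, sub in df.groupby("group")]
print(stats.f_oneway(*samples))

# Tukey HSD post-hoc comparisons (used when variances are homogeneous)
print(pairwise_tukeyhsd(df["ra_day28"], df["group"]))
```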
Surface roughness analysis
The analysis revealed significant changes in the surface roughness of the giomer composite after 28 days of exposure to antacid stomach syrups and distilled water, as well as in comparison to the Control group ( p < 0.05). Initially, the giomer composite exhibited the highest surface roughness values, while the microhybrid composite had the lowest. However, no significant changes were observed on the surfaces of the microhybrid and nanohybrid composites throughout the 28-day experimental period ( p > 0.05) (Table ). When treated with antacid syrups, the giomer composite showed pronounced reductions in surface roughness across all groups. For instance:
Distilled Water : Significant decreases were observed (F(3,27) = 35.724, p < 0.001, η²=0.799, Observed Power (OP) = 1), with reductions between baseline and day 7 ( p < 0.001, d = 2.820, OP = 0.99), day 15 ( p < 0.001, d = 2.191, OP = 0.99), and day 28 ( p < 0.001, d = 3.236, OP = 0.99) (Table ).
GL syrup : Significant decreases were observed (F(1.556,14.002) = 69.913, p < 0.001, η²=0.886, OP = 1.00), with reductions between baseline and day 7 ( p < 0.001, d = 2.782, OP = 0.99), day 15 ( p < 0.001, d = 3.247, OP = 0.99), and day 28 ( p < 0.001, d = 2.758) (Table ).
GD syrup : Surface roughness decreased significantly (F(3,27) = 47.202, p < 0.001, η²=0.84, OP = 0.99), with reductions observed between baseline and day 7 ( p < 0.001, d = 2.975, OP = 0.99), day 15 ( p < 0.001, d = 2.896, OP = 0.99), and day 28 ( p < 0.001, d = 2.159) (Table ).
RD syrup : A notable decrease was recorded (F(3,27) = 68.307, p < 0.001, η²=0.884, OP = 0.99), with significant differences between baseline and day 7 ( p < 0.001, d = 2.679, OP = 0.99), day 15 ( p < 0.001, d = 3.216, OP = 0.99), and day 28 ( p < 0.001, d = 2.968, OP = 0.99) (Table ).
MF syrup : A significant decline was observed (F(3,27) = 26.294, p < 0.001, η²=0.745, OP = 0.99), with reductions occurring between baseline and day 7 ( p < 0.001, d = 2.967, OP = 0.99), day 15 ( p < 0.001, d = 2.436, OP = 0.99), and day 28 ( p = 0.005, d = 1.546, OP = 0.90) (Table ).
Overall, the giomer composite demonstrated a consistent reduction in surface roughness across all antacid syrup groups and the Control group. These findings suggest that the chemical composition and interaction of antacid syrups with the giomer's surface may have contributed to this effect.

Microhardness analysis
The surface microhardness of the composites showed a significant decrease after 28 days of exposure to antacid gastric syrups, as well as to distilled water (Control group). The findings for each composite and experimental group are summarized below (Table ). Among the antacid syrups, the following results were noted:
Control Group (Distilled Water) : A significant reduction was observed in the nanohybrid composite (F(1.751,15.755) = 4.645, p = 0.03, η²=0.34, OP = 0.99), with significant differences between baseline and day 15 ( p = 0.046, d = 1.078, OP = 0.62), and baseline and day 28 ( p = 0.007, d = 1.483, OP = 0.87) (Table ).
GL syrup : A significant reduction was observed in the microhybrid composite (F(3,27) = 23.934, p < 0.001, η²=0.72, OP = 0.99), with significant differences between baseline and day 15 ( p < 0.001, d = 2.137, OP = 0.99), and baseline and day 28 ( p = 0.001, d = 1.866, OP = 0.97). The giomer composite also showed a significant decrease (F(3,27) = 35.584, p < 0.001, η²=0.79, OP = 0.99), with reductions between baseline and day 15 ( p < 0.001, d = 2.569, OP = 0.99) and baseline and day 28 ( p = 0.001, d = 2.177, OP = 0.99) (Table ).
GD syrup : The microhybrid composite exhibited the highest reduction in microhardness (F(3,27) = 77.096, p < 0.001, η²=0.89). Significant differences were observed between baseline and day 7 ( p = 0.005, d = 1.555), day 15 ( p < 0.001, d = 3.279, OP = 0.99), and day 28 ( p < 0.001, d = 3.955, OP = 1.00) (Table ).
RD syrup : The microhardness of the microhybrid composite decreased significantly (F(3,27) = 21.374, p < 0.001, η²=0.7, OP = 0.99), with notable reductions between baseline and day 15 ( p = 0.024, d = 1.215, OP = 0.72), and baseline and day 28 ( p = 0.004, d = 1.618, OP = 0.92) (Table ).
MF syrup : The most significant reduction in microhardness was observed in the giomer composite (F(3,27) = 79.461, p < 0.001, η²=0.9, OP = 0.99), with differences between baseline and day 15 ( p = 0.001, d = 1.916, OP = 0.98), and baseline and day 28 ( p < 0.001, d = 3.267, OP = 0.99). The microhybrid composite also experienced a significant decrease (F(3,27) = 19.260, p < 0.001, η²=0.68, OP = 0.99), with reductions between baseline and day 15 ( p = 0.003, d = 1.65, OP = 0.93), and baseline and day 28 ( p = 0.006, d = 1.505, OP = 0.88) (Table ).
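For reference, the effect sizes quoted above follow the usual conventions; assuming the standard definitions were applied, partial eta squared for the ANOVA effects and Cohen's d for the pairwise time contrasts are

\[ \eta_p^2 = \frac{SS_{\text{effect}}}{SS_{\text{effect}} + SS_{\text{error}}}, \qquad d = \frac{\bar{x}_{\text{baseline}} - \bar{x}_{\text{day}}}{s_{\text{pooled}}}, \qquad s_{\text{pooled}} = \sqrt{\tfrac{1}{2}\left(s_{\text{baseline}}^{2} + s_{\text{day}}^{2}\right)}. \]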
In summary, the microhybrid composite demonstrated the largest reduction in microhardness across multiple syrups, while the giomer composite also exhibited significant changes, particularly with MF syrup. These results highlight the variable effects of antacid syrups on composite materials depending on their chemical composition and filler content.

SEM analysis
Figures , and show SEM images of the composite groups at ×1,000, ×5,000, and ×10,000 magnification. The images were taken from near the center of the composite samples; areas with scratches or indentations caused by the surface roughness measurements were avoided. Surface images of each composite group in the Control group and after the 28-day experiment are shown. In general, roughness increased in all 3 composite surface images, with an increase in the number of scratches, pits, and residues in the surface layer. On the microhybrid composite surface, there was a marked increase in particle accumulation and irregularity after contact with GL. On the nanohybrid composite surface, the increase in irregularity was limited overall, although it was greater after contact with RD. The changes in the surface roughness of the giomer composite after application of the antacid stomach syrups were considerably greater than in the Control group, and also greater than in the other composite groups. The number of pits and holes on the surface increased after contact with the syrups; of these, the roughness changes caused by the GD and MF syrups were greater than the others.

AFM analysis
Figures , and show 3D AFM images of the composite groups. The images were taken from near the center of the composite samples; areas with scratches or indentations caused by the microhardness measurements were avoided. The surface images of each composite group in the Control group and after the 28-day experiment are shown. Among the microhybrid composite groups, the surface treated with MF syrup showed the highest surface roughness; the Control group and the surfaces treated with the other syrups showed similar roughness values. Among the nanohybrid composite groups, the highest surface roughness was seen in the composite treated with MF syrup, and the lowest in the group treated with GD. Among the giomer composite groups, the highest surface roughness was seen in the composite treated with GD syrup, and the lowest in the Control group.
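For context, the average roughness value reported by both the profilometer and the AFM software is the arithmetic mean deviation of the surface profile. Using the standard definition (not restated in the study), for N measured heights z_i with mean height \bar{z},

\[ R_a = \frac{1}{N}\sum_{i=1}^{N}\left| z_i - \bar{z} \right|. \]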
The aim of this in-vitro study was to evaluate the effects of antacid gastric syrups on the surface roughness and microhardness of composite resins.
At the end of the 28-day experimental period, changes in the surface roughness and microhardness of the composite groups were observed. In this study, a period of 28 days was chosen to simulate the long-term and repetitive effect of antacid gastric syrups on composite surfaces. Composite samples were contacted with syrups for 2 min each day, followed by immersion in distilled water for the remainder of the day. This protocol reflects daily syrup consumption and allows for the assessment of cumulative effects over time. In similar studies involving beverages such as tea and coffee, a 28-day period was equated to approximately 2.5 years of real-life consumption, assuming an average daily contact time of 15 min . Although the contact dynamics of syrups differ from beverages, this standardised 28-day period remains a robust and widely accepted approach in the literature for assessing long-term effects on composites, particularly because of the shorter daily exposure time.

In our study, mechanical profilometry, AFM, and SEM were used to evaluate the changes in surface roughness. The mechanical profilometer scans the indentations and protrusions on the surface with its sensitive tip and creates a 2D profile. Researchers have stated that due to the 3D structure of the surface topography of restorative materials, mechanical profilometers that can make 2D measurements may be insufficient for measuring surface irregularities . The use of 3D imaging with AFM in combination with a mechanical profilometer will provide more objective results in the evaluation of surface roughness . In addition, SEM examination for visual evaluation of microstructural changes, deposits, and deformations on the composite surface at the microscopic level will increase the objectivity of the results obtained .

Antacid gastric syrups are medicines that can be purchased over-the-counter (OTC), and studies examining their effects on oral health are limited . Studies in the literature on antacid drugs have reported that they are effective in preventing wear that may occur on the tooth surface due to gastrointestinal disorders, aspirin intake, etc . These drugs contain agents such as calcium carbonate, sodium bicarbonate, and magnesium carbonate to neutralize the acid . It is also known that these active ingredients are added to toothpaste for their abrasive properties . Studies in the literature have shown that toothpaste containing these active ingredients increases the surface roughness of composites . Suzuki et al. reported that a special slurry containing calcium carbonate was effective in eroding the surface of composites . Madeswaran et al. emphasized that sodium bicarbonate may wear dentin in addition to its anti-erosive effects . The common feature of these substances is that they contain carbonate; while their basic structure neutralizes acidic environments, their fine-grained structure may change the surface properties of tissues by physical abrasion. Lippert et al. reported that calcium carbonate and sodium bicarbonate may be present in toothpaste at ratios of 8–20% and more than 50%, respectively . Thus, 10 ml of toothpaste with a density of 1.3 g/ml may contain an average of 1000–2500 mg calcium carbonate or 6500 mg sodium bicarbonate; those produced for whitening purposes may increase these amounts. Their abrasive effects vary depending on factors such as particle size, application method, and contact time.
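As a rough check of the quantities quoted above (a back-of-the-envelope calculation based on the stated density and percentage ranges, not figures from the cited study):

\[ 10\ \text{mL} \times 1.3\ \tfrac{\text{g}}{\text{mL}} = 13\ \text{g}; \qquad 13\ \text{g} \times (8\text{--}20\%) \approx 1.0\text{--}2.6\ \text{g of calcium carbonate}; \qquad 13\ \text{g} \times 50\% \approx 6.5\ \text{g of sodium bicarbonate}. \]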
The RD and MF syrups used in our study contained 600 mg calcium carbonate, while GD contained 325 mg calcium carbonate and 213 mg sodium bicarbonate (Table ). Therefore, the abrasive effect of antacid gastric syrups may increase with the amount present in the formulation. After 28 days, changes in the surface roughness of the composites were observed. Therefore, hypothesis 1 of the study - antacid gastric syrups have no significant effect on the surface roughness of composite resins - was rejected. Microhybrid and nanohybrid composites showed a slight increase in roughness after contact with antacid syrups, while the giomer composite showed a significant decrease in the mechanical profilometer analysis. Visual evaluation by SEM showed that the irregularities on the surface of the microhybrid and nanohybrid composites increased slightly, while the number of pits and indentations increased significantly in the giomer composite. AFM analysis generally showed an increase in the surface roughness of all groups. Researchers have noted that differences between the measurement methods may be influenced by factors such as the length of the scanned area, the details of the methods, and the film layer that may remain on the surface . In a similar study, Alavi et al. reported the formation of voids on the surface as a result of solution application to the giomer composite . These results are consistent with our findings. The large voids in the SEM images can be attributed to the detachment of S-PRG filler particles of about 5 μm in size in the giomer composite from the surface after contact with antacid syrups. The observation of irregularities in the groups treated with antacid syrups, except for the Control group, suggests that the abrasive agents in these syrups may be responsible for the increase in roughness on the giomer surface. In studies where the surface roughness of the giomer composite was tested, it was shown to increase more than the other composite groups, in line with our findings .

The microhardness test is an effective method for measuring the durability and hardness of a material. It is determined by pressing a static diamond indenter into the material under a defined load for a defined period and measuring the resulting indentation . The Vickers test can be used to measure the hardness of brittle materials. In this test, the indentation is shallower than in other tests, so the measurement is less affected by the surface properties of the material . Due to these advantages, a Vickers hardness tester was used in our study. The results of the analysis show that the microhardness values decreased in the microhybrid composite group after contact with antacid stomach syrups. For this reason, hypothesis 2 of the study - antacid gastric syrups have no significant effect on the surface microhardness of composite resins - was rejected. A similar decrease was observed in some groups of the giomer composite. The chemical structure and content of composite resins are important factors affecting the microhardness value. Karatas et al stated that composites with high filler content have high surface microhardness, while Rodriguez et al stated that the physical and mechanical properties of composites are related to filler content . In our study, the observation of a decrease in the microhardness values of microhybrid composites with lower filler content over time supports similar findings in the literature.
The low microhardness of the giomer composite despite its high filler content may be due to its glass ionomer-based structure as well as mechanical weaknesses caused by water absorption and dissolution of monomers in its organic matrix. This may lead to separation between the polymer chains and a decrease in mechanical properties . The observed decrease in the microhardness values of the microhybrid composite groups after treatment with antacid syrups may be related to the basic nature of the syrups, chemical reactions, and abrasive components. In the literature, several studies have shown that solutions with acidic pH reduce the surface microhardness of composites . To our knowledge, basic solutions have not been shown to reduce surface microhardness. The abrasive substances in the antacid syrups may therefore have contributed to the decrease in microhardness. The surface microhardness of composites has also been shown to decrease after brushing with toothpaste containing similar active ingredients .

In this study, the interaction of commonly used oral composite types with OTC antacid gastric syrups was evaluated. For the evaluation, observational analyses were performed using mechanical profilometry, three-dimensional AFM, and SEM, and the results of all three methods are consistent with each other. The most affected composite group is the giomer composite; this may be due to its S-PRG filler particles. During the 28-day experiment, a slight increase in surface roughness was also observed in the microhybrid and nanohybrid composite groups. In general, a slight microhardness reduction occurred in the tested composites, and these changes differed from those of the Control group (distilled water). As a result, carbonate-based agents in antacid gastric syrups may have played a role in these changes.

This study has some limitations. The first limitation is that the composite types used in the study differ not only in filler type and size but also in matrix composition. This makes it difficult to examine the surface roughness and microhardness changes that occur after exposure to the same factor. Future studies can investigate materials with more homogeneous matrix compositions to further elucidate the role of organic matrices in surface and mechanical changes under similar conditions. Another limitation is that the study did not fully simulate the oral environment; factors such as saliva and pH variation were not taken into account. Future studies that include these factors will allow for more realistic results. The final limitation is that the experiment lasted 2 min a day for 28 days. Assuming an average contact time of 5 s with gastric syrups, this was calculated to correspond to at least approximately 2 years of use. In addition, the increase in the number of individuals using antacid gastric syrups for a long period reveals the need for longer experimental studies.

The active ingredients added to gastric syrups to produce an antacid effect significantly change the surface roughness and microhardness of restorative materials. Giomer composites in particular show more pronounced changes than the other composite groups. The effects of the amounts of substances such as calcium carbonate, sodium bicarbonate, and magnesium carbonate on teeth and restorative materials should be studied in depth with further research.
Association of Neighborhood-Level Disadvantage With Neurofibrillary Tangles on Neuropathological Tissue Assessment
e5485228-831f-4fb0-92c9-7022765c4087
9051985
Pathology[mh]
The social exposome measures all of the social exposures that a person experiences over a lifetime. Researchers are only beginning to understand the role of upstream, neighborhood-level factors in Alzheimer disease and related dementias (ADRD) risk and their association with biological pathways affecting ADRD. Neighborhood disadvantage, a social exposome measure reflecting income, educational level, employment status, and housing in a Census-block group or neighborhood, has been associated with markers of ADRD brain health, including amyloid plaque. Whether this association extends to neurofibrillary pathology is unknown. This study evaluated the association between neurofibrillary tangles and neighborhood disadvantage. This cross-sectional study was conducted using a sample of decedents from 2 Alzheimer’s Disease Research Center (ADRC) brain banks with previously assessed neurofibrillary tangle deposition and neighborhood disadvantage ranking from 1993 to 2016. Before death, decedents were recruited by ADRC brain donor programs and consented to brain donation for research purposes. The institutional review boards of University of Wisconsin and University of California, San Diego exempted the study because it was not human participant research. We followed the STROBE reporting guideline. We abstracted neurofibrillary tangle B scores, per National Institute on Aging and Alzheimer’s Association neuropathological change guidelines, from the standardized Neuropathology Data Set form or original autopsy reports (following established methods and neuropathologist guidance from both ADRCs) to measure Alzheimer disease–associated neurofibrillary pathology. , Twenty-five decedents without neurofibrillary tangle assessment were excluded. Decedents’ last address was geolinked to their statewide ranking of neighborhood disadvantage using a time-concordant area deprivation index, with higher values denoting greater neighborhood disadvantage. Generalized ordered logistic regression with site-level clustered SEs was used to model the ordinal B score adjusted for covariates regularly available across the 24-year data time frame: age, sex, and year of death of 2005 or later (ie, introductory year for standard reporting using the Uniform Data Set). Data were analyzed from May 3 to November 17, 2021, using Stata/MP version 16.1 (StataCorp LLC). The sample of 428 decedents had a mean (SD) age of 80.5 (9.1) years (237 men [55.4%] and 191 women [44.6%]) and tended to be from less disadvantaged neighborhoods (mean [SD] area deprivation index decile rank: 3.8 [2.4]) . Nearly all (95.8%) had neurofibrillary tangles, which was consistent with ADRC brain donation samples. Modeled analysis suggested that, for every decile increase in neighborhood disadvantage, there was a 5% increase in the odds of a higher B score (odds ratio [OR], 1.05; 95% CI, 1.01-1.08) after adjustments . This translated into an estimated 56% increased odds of a higher B score for those in the most disadvantaged neighborhood decile (OR, 1.56; 95% CI, 1.09-2.23) . Sensitivity analysis using Braak staging, instead of B scores, suggested similar estimated odds (OR, 1.04; 95% CI, 1.02-1.06). With these new findings, neighborhood disadvantage has now been found to be associated with neurofibrillary tangles and amyloid plaques, the primary pathological features of Alzheimer disease. 
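The decile-specific estimate is consistent with compounding the per-decile odds ratio across the range of the area deprivation index. Assuming the decile rank entered the model as a linear term, the most- versus least-disadvantaged contrast spans 9 decile increments, which reproduces the reported figure within rounding:

\[ \text{OR}_{\text{decile 10 vs. decile 1}} = \left(\text{OR}_{\text{per decile}}\right)^{9} = 1.05^{9} \approx 1.55. \]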
Study limitations emphasize the need for additional infrastructure, data, and insight to address selection bias in brain donation, underrepresentation of decedents from disadvantaged neighborhoods, and generalizability. Neighborhood disadvantage may serve a role in identifying ADRD biological processes and/or be a marker of related adverse exposures. Therefore, a nuanced understanding is needed of the pathways through which neighborhood conditions may associate or interact with other factors to affect ADRD-related brain changes. Mechanisms linking neighborhood disadvantage with tau accumulations might include multiple and overlapping factors (eg, stress, depression, sleep disruption, and constraints on health behaviors; pollution; and cardiovascular risks). Future work will require coordinated involvement among ADRCs for larger, more generalizable samples and additional data linkages to explore the mediating and moderating risk factors involved. Neuropathological changes in ADRD accumulate over decades. Life-course approaches to describing neighborhood disadvantage exposure should include testing dose-response associations, identifying critical thresholds that place people at elevated risk, pinpointing sensitive life periods, and uncovering factors that mitigate the impact of exposure.
Crisis and Emergency Risk Communication and Emotional Appeals in COVID-19 Public Health Messaging: Quantitative Content Analysis
346b7b02-3015-4908-87cf-e9c476a1197c
11445630
Health Communication[mh]
Background
Singapore effectively managed COVID-19, which is evident from the World Health Organization lauding its “all-of-government” approach . This approach entails collaboration among different government agencies . While COVID-19 is no longer a global health emergency, Singapore continues to experience periodic infection waves . During the pandemic, the Singaporean government charted its response to COVID-19 in stages, as detailed in a white paper . Avenues for public health communication in Singapore include government websites and Facebook (Meta Platforms Inc) pages. These websites serve as a one-stop communications channel, and Facebook is one of Singapore’s most widely used social-networking platforms . However, studies on the government’s use of Facebook for public health communication during the pandemic are limited. Singapore’s success in managing the pandemic can be attributed to its small population, concentrated political authority, high political trust , state-supported media, and the 2003 SARS outbreak experience . Despite this, Singapore faced criticism for the high number of COVID-19 cases in dormitories of migrant workers, due to the lack of communication . Studies have shown that media messages can shape public knowledge, attitudes, and preventive behaviors during pandemics in Singapore. It is worthwhile to study Singapore’s public health communication during COVID-19 as it can highlight areas of improvement and offer insights for other countries in future crises. This study had 4 objectives. First, it aimed to characterize the themes of public messages during the COVID-19 pandemic using the crisis and emergency risk communication (CERC) framework. Second, it aimed to examine how these message themes changed across different pandemic phases. Third, it aimed to identify the types of emotional appeals used. Fourth, it aimed to analyze how emotional appeals changed across the COVID-19 phases.

CERC Framework
CERC is well-suited for evaluating Singapore’s public communication strategies during the COVID-19 pandemic. This is because CERC evolved in stages and involves both risk and crisis communications. CERC consists of 5 stages: precrisis, initial, maintenance, resolution, and evaluation. Communication during the precrisis stage focuses on educating the public about potential adverse events and risks to prepare them for the subsequent stages . During the initial stage, communication messages focus on reducing uncertainty, conveying empathy, and imparting a general understanding of the crisis. The maintenance stage addresses misinformation and reiterates ongoing risks and mitigation strategies . The resolution stage involves communicating how the emergency was handled, while the evaluation stage assesses response effectiveness . The CERC framework assumes that crises develop in a linear way. However, due to the variability of diseases, crises may not follow the sequence of the outlined stages . Although CERC suggests 5 stages, the precrisis stage did not apply to COVID-19 because it was not a known disease. The length of each stage may also vary, as a prolonged crisis state may occur . For example, COVID-19 had a prolonged CERC maintenance stage as the virus mutated several times during the pandemic . This has resulted in repeated tightening and easing of COVID-19 measures in Singapore .

CERC Themes
Drawing on the existing literature , this study categorized the CERC message themes into 4 categories: risk and crisis information , self-efficacy and sense-making, preparations and uncertainty reduction , and advisories and alerts . Risk and crisis information refers to information that educates the public about potential threats . This category consists of a subtheme, pandemic intelligence. It refers to messages containing basic information about the pandemic, including case numbers , to raise awareness of the current situation. The category self-efficacy and sense-making involves messages that help people to understand the situation and reflect their ability to change their behaviors . This category includes 3 subthemes: personal preventive measures and mitigation, social and common responsibility, and inquisitive messaging . Personal preventive measures and mitigation refers to messages about measures or precautions that can be taken to protect the public from COVID-19. Social and common responsibility includes messages on measures or precautions that can be taken at the community level to prevent the spread of COVID-19 or to show care . Inquisitive messaging addresses the public’s questions to better understand the situation . The category preparations and uncertainty reduction includes messages on how to act appropriately during the pandemic . Drawing reference to Malik et al , preparations and uncertainty reduction comprises 4 subthemes: clarification; events, campaigns, and activities; showing gratitude; and reassurance . Clarification refers to messages addressing misunderstandings and untrue claims about the pandemic . Events, campaigns, and activities include messages promoting communication campaigns for awareness, relief, or treatment. Showing gratitude refers to expressing appreciation to those involved in managing the virus, such as frontline workers . Reassurance consists of messages that allay the public’s fears . The category advisories and alerts refers to messages that provide crucial warnings and specific advice about diseases. There are 2 subthemes: risk groups and general advisories and vigilance . The subtheme risk groups refers to messages targeting susceptible groups such as people with preexisting conditions and older adults who are at greater risk of contracting COVID-19 . Messages on general advisories and vigilance include information on what to do in certain situations, such as returning to the workplace.

COVID-19 Phases and Social Media Use in Singapore
The Singapore government segmented the COVID-19 pandemic into 4 phases: early days of fog, fighting a pandemic, rocky transition , and learning to live with COVID-19 , which correspond to the CERC stages . However, empirical investigation is needed to examine whether the message themes were conveyed appropriately across these stages, especially on social media. The CERC framework has been used to evaluate public health communications on social media such as Facebook . Vijaykumar et al found that information disseminated by Singapore-based public health institutions on Facebook was similar in content but differed in focus. The Ministry of Health (MOH) focused on situational updates, and the National Environment Agency (NEA) elaborated on preventive measures. However, the study only focused on public communication by these 2 agencies. To gain a broader understanding of crisis communications in Singapore, this study examined public communication by multiple government agencies in Singapore.
Hence, we ask the following research questions (RQs):
RQ1: To what extent are the CERC message themes present in Singapore’s online public health messaging during the COVID-19 pandemic?
RQ2: How do the CERC message themes change across different phases during the COVID-19 pandemic?
While CERC is extensively studied, there is limited research linking it with emotional appeals, a gap scholars find crucial to address. Meadows et al argued that investigating the emotional tones of the public during different outbreak phases aids in formulating effective public health messages. This is echoed by Xie et al , who found that emotional appeals effectively engaged audiences. In addition to analyzing CERC message themes, this study also aimed to examine the use of emotional appeals in public health communication during COVID-19.

Emotional Appeals
Emotional appeals can persuade people to perform an intended behavior by evoking a specific emotion . They are widely used in health communications ; each type elicits varying responses. For example, people are divided on humor appeals; a few think it undermines the seriousness of the subject, while others find it useful . The choice of emotional appeals depends on the context and the target audience . During the COVID-19 pandemic, key emotional appeals included hope, humor, fear, anger, guilt, and nurturance . Hope appeals emphasize efficacy and can be empowering when paired with actionable advice. During health crises, transparent communication about uncertainties and hopeful messages can enhance support for the measures implemented . The World Health Organization recommends using hope appeals to combat pandemic fatigue . Hope appeals are an effective communication strategy across different cultures. In collectivist countries such as Singapore, hope appeals can focus on emerging stronger from COVID-19 as a community. Humor appeals use techniques such as clownish humor, irony, and satire , aimed at reducing negative emotions and promoting positivity . However, they are also noted for potentially reducing social responsibility and perceived crisis risk . Humor appeals should be used tactfully, especially during the critical phases where increased perceived risk and social responsibility are crucial. Fear appeals are the most widely studied emotional appeals. A message with fear appeals induces fear when a situation is seen as threatening to one’s physical or mental health and is perceived as uncontrollable . It evokes fear about the harm that will befall the audience if they do not adopt the recommended behavior . The arousal triggered would create a desire to avoid the perceived threat and to adopt the suggested behavior, such as mask wearing and vaccination. Upon encountering the message, the audience would evaluate the severity and susceptibility of the threat, and their ability to overcome the threat, and subsequently take the recommended action . Anger appeals motivate people to carry out actions requiring more effort and commitment . The anger activism model suggests that when coupled with a sense of efficacy, a person made to feel anger would feel motivated to perform a behavior . Anger was one of the least used appeals in organizational YouTube videos during COVID-19 . Guilt appeals consist of 2 components: material to evoke guilt and an action to reduce guilt . The material can highlight discrepancies between the audience’s standards and their behavior , which could effectively influence health-related attitudes .
However, excessive guilt can be counterproductive and less persuasive , as shown in the study by Matkovic et al , where guilt appeals failed to influence handwashing intention during the pandemic. Nurturance appeals are defined as appeals that evoke a sense of caretaking, which effectively target parents . Nurturance appeals were the most dominant emotional appeal in advertising materials using COVID-19 as a theme . Given the dynamic nature of a crisis, it is important to use suitable emotional appeals at appropriate times and for effective management of the situation. A few studies focused on how emotional appeals were used in the communication messages during the COVID-19 pandemic (eg, a study by Mello et al ). Hence, we asked the following RQs:
RQ3: What are the types of emotional appeals used in Singapore’s online public health messaging during the COVID-19 pandemic?
RQ4: How does the use of emotional appeals in Singapore’s online public health messaging change across different CERC phases during the COVID-19 pandemic?
The Singapore government segmented the COVID-19 pandemic into 4 phases: early days of fog, fighting a pandemic, rocky transition , and learning to live with COVID-19 , which correspond to the CERC stages . However, empirical investigation is needed to examine whether the message themes were conveyed appropriately across these stages, especially on social media. The CERC framework has been used to evaluate public health communications on social media such as Facebook . Vijaykumar et al found that information disseminated by Singapore-based public health institutions on Facebook was similar in content, but differed in focus. The Ministry of Health (MOH) focused on situational updates and the National Environment Agency (NEA) elaborated on preventive measures. However, the study only focused on public communication by these 2 agencies. To gain a broader understanding of crisis communications in Singapore, this study examined public communication by multiple government agencies in Singapore. Hence, we ask the following research questions (RQs): RQ1: To what extent are the CERC message themes present in Singapore’s online public health messaging during the COVID-19 pandemic? RQ2: How do the CERC message themes change across different phases during the COVID-19 pandemic? While CERC is extensively studied, there is limited research linking it with emotional appeals, a gap scholars find crucial to address. Meadows et al argued that investigating the emotional tones of the public during different outbreak phases aids in formulating effective public health messages. This is echoed by Xie et al who found that emotional appeals effectively engaged audiences. In addition to analyzing CERC message themes, this study also aimed to examine the use of emotional appeals in public health communication during COVID-19. Emotional appeals can persuade people to perform an intended behavior by evoking specific emotions . They are widely used in health communications ; each type elicits varying responses. For example, people are divided on humor appeals; some think they undermine the seriousness of the subject, while others find them useful . The choice of emotional appeals depends on the context and the target audience . During the COVID-19 pandemic, key emotional appeals included hope, humor, fear, anger, guilt, and nurturance . Hope appeals emphasize efficacy and can be empowering when paired with actionable advice. During health crises, transparent communication about uncertainties and hopeful messages can enhance support for the measures implemented . The World Health Organization recommends using hope appeals to combat pandemic fatigue . Hope appeals are an effective communication strategy across different cultures. In collectivist countries such as Singapore, hope appeals can focus on emerging stronger from COVID-19 as a community. Humor appeals use techniques such as clownish humor, irony, and satire , aimed at reducing negative emotions and promoting positivity . However, they are also noted for potentially reducing social responsibility and perceived crisis risk . Humor appeals should therefore be used tactfully, especially during the critical phases where increased perceived risk and social responsibility are crucial. Fear appeals are the most widely studied emotional appeals. A message with fear appeals induces fear when a situation is seen as threatening to one’s physical or mental health and is perceived as uncontrollable .
It evokes fear about the harm that will befall the audience if they do not adopt the recommended behavior . The arousal triggered would create a desire to avoid the perceived threat and to adopt the suggested behavior, such as mask wearing and vaccination. Upon encountering the message, the audience would evaluate the severity and susceptibility of the threat, and their ability to overcome the threat, and subsequently take the recommended action . Anger appeals motivate people to carry out actions requiring more effort and commitment . The anger activism model suggests that when coupled with a sense of efficacy, a person made to feel anger would feel motivated to perform a behavior . Anger was one of the least used appeals in organizational YouTube videos during COVID-19 . Guilt appeals consist of 2 components—material to evoke guilt and an action to reduce guilt . The material can highlight discrepancies between the audience’s standards and their behavior , which could effectively influence health-related attitudes . However, excessive guilt can be counterproductive and less persuasive , as shown in the study by Matkovic et al , where guilt appeals failed to influence handwashing intention during the pandemic. Nurturance appeals are defined as appeals that evoke a sense of caretaking, which effectively target parents . Nurturance appeals were the most dominant emotional appeal in advertising materials using COVID-19 as a theme . Given the dynamic nature of a crisis, it is important to use suitable emotional appeals at appropriate times and for effective management of the situation. A few studies focused on how emotional appeals were used in the communication messages during the COVID-19 pandemic (eg, a study by Mello et al ). Hence, we asked the following RQs: RQ3: What are the types of emotional appeals used in Singapore’s online public health messaging during the COVID-19 pandemic? RQ4: How does the use of emotional appeals in Singapore’s online public health messaging change across different CERC phases during the COVID-19 pandemic? Overview To answer our research questions, we conducted a quantitative content analysis on public Facebook posts and publicly accessible website articles from key Singapore government institutions involved in public health communication during the COVID-19 pandemic. Specifically, we compiled and analyzed content from Gov.sg, representing the Singapore government, as well as institutions, such as the MOH, the Ministry of Sustainability and the Environment, the NEA, and the Health Promotion Board. Ethical Considerations Before commencing data collection for content analysis, we sought approval from the Nanyang Technological University’s Integrity Review Board (IRB-2022-725) in exempt category 4. This category pertained to secondary research using existing or publicly accessible data sets such as those found on social media. The exemption criteria included sources of individually identifiable information that were already in existence or that were publicly available. Obtaining IRB approval ensured that the research adhered to ethical standards, protecting the privacy and rights of individuals whose data were being analyzed. This step was crucial in maintaining the integrity and ethical compliance of the research project. Data Collection and Sampling Upon receiving IRB approval, we used a Python script to crawl Facebook posts containing specified keywords related to COVID-19 from January 1, 2020, to September 30, 2022.
Concurrently, we manually compiled relevant website articles from the same timeframe through keyword searches on the institutions’ websites. These keywords included "2019-nCoV," "SARS-CoV-2," "Sars-CoV-2," "Wuhan Coronavirus," "Wuhan coronavirus," "wuhan coronavirus," "Wuhan virus," "wuhan virus," "Wuhan Virus," "Covid-19," "covid-19," "novel coronavirus," "COVID," "Covid," and "covid." Articles and posts that were not related to public health communication about COVID-19 (such as Facebook posts and website articles that solely focused on situational updates such as the number of cases and clusters, call outs to subscribe for updates, mentions of COVID-19 as a time frame where other activities or programs were the major topics, posts that did not focus on COVID-19, speeches by public figures, and press releases) were excluded. This initial screening yielded a total of 1114 Facebook posts and 85 relevant website articles. The data were then randomly sampled at a 99% confidence level with a 3% margin of error, resulting in the final 696 Facebook posts and 83 website articles selected for detailed analysis. Codebook and Coding Scheme We developed the codebook on the basis of the CERC message themes adapted from previous literature . These themes encompassed (1) pandemic intelligence, (2) personal preventive measures and mitigation, (3) social and common responsibility, (4) inquisitive messaging, (5) clarification, (6) events, campaigns and activities, (7) request for contributions, (8) showing gratitude, (9) reassurance, (10) risk groups, and (11) general advisories and vigilance. In addition, 6 emotional appeals adapted from previous studies were included in the codebook. These emotional appeals included (1) fear appeals, (2) guilt appeals, (3) anger appeals, (4) hope appeals, (5) humor appeals, and (6) nurturance appeals. Each Facebook post, including all text and visual elements, and everything visible on the webpages was coded as a single unit of analysis. Intercoder Reliability We recruited 3 coders to code the posts and articles. Before conducting actual coding, the coders undertook 2 rounds of training, practice coding sessions, intercoder reliability testing, and discussions to refine the codebook. During practice sessions, coders coded the same units of analysis to ensure a common understanding of the codebook. The units of analysis (n=60) for the training and practice sessions included the materials that had not been sampled. After achieving consensus, the coders coded 10% of the data, and intercoder reliability was tested. The process was repeated until we achieved an average Krippendorff α value of 0.78, ranging from 0.70 to 1.00. As this exceeded the 0.70 standard established in the literature , intercoder reliability was considered acceptable. Subsequently, the data were split equally and coded by the coders. Statistical Analyses To answer RQ1 and RQ3, a series of descriptive statistics were conducted using SPSS (version 29; IBM Corp). For RQ2 and RQ4, chi-square tests were performed to examine the relationships among CERC themes, emotional appeals, and COVID-19 phases. Notably, 24 website articles lacking publication dates were excluded from the chi-square tests as we could not classify them into any COVID-19 phases.
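The screening and sampling steps just described lend themselves to a short worked sketch. The snippet below is not the authors' script; it only illustrates, with a hypothetical list of post texts and a few of the keywords reported above, how keyword screening and the 99% confidence, 3% margin-of-error sample size (with a finite-population correction) could be reproduced. The formula gives roughly 695 for 1114 posts and roughly 82 for 85 articles, in line with the reported 696 and 83 (small differences can arise from rounding conventions).

```python
import math

# Hypothetical post texts; the study's corpus came from a Facebook crawl.
posts = [
    "Updated safe management measures under Covid-19 ...",
    "Join our healthy living webinar this weekend ...",
]

# A few of the search keywords reported in the Methods.
KEYWORDS = ["2019-nCoV", "SARS-CoV-2", "Wuhan virus", "Covid-19", "novel coronavirus", "COVID"]

def mentions_covid(text: str) -> bool:
    """Case-insensitive check for any of the COVID-19 keywords."""
    lowered = text.lower()
    return any(keyword.lower() in lowered for keyword in KEYWORDS)

screened = [post for post in posts if mentions_covid(post)]

def sample_size(population: int, z: float = 2.576, margin: float = 0.03, p: float = 0.5) -> int:
    """Cochran's formula with a finite-population correction."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2
    return math.ceil(n0 / (1 + (n0 - 1) / population))

print(len(screened))      # 1 of the 2 hypothetical posts mentions COVID-19
print(sample_size(1114))  # ~695, close to the 696 Facebook posts reported
print(sample_size(85))    # ~82, close to the 83 website articles reported
```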
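For the reliability check described in the Methods, Krippendorff α can be computed with an off-the-shelf package. The sketch below uses made-up codes from 3 coders and assumes the third-party krippendorff package; the authors report only the resulting α values (from 0.70 to 1.00, averaging 0.78), not the tooling they used.

```python
# pip install krippendorff   (assumed third-party package, not named by the authors)
import numpy as np
import krippendorff

# Hypothetical binary codes (1 = theme present, 0 = absent) assigned by the
# 3 coders to the same reliability subsample; np.nan would mark missing codes.
reliability_data = np.array([
    [1, 0, 1, 1, 0, 1, 0, 1],   # coder 1
    [1, 0, 1, 1, 0, 1, 0, 1],   # coder 2
    [1, 0, 1, 0, 0, 1, 0, 1],   # coder 3
])

alpha = krippendorff.alpha(reliability_data=reliability_data,
                           level_of_measurement="nominal")
print(round(alpha, 2))  # acceptable if it reaches the 0.70 threshold cited above
```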
Our sample showed that most of the messages about COVID-19 were communicated by Gov.sg (394/779, 50.6%), followed by the MOH (261/779, 33.5%), NEA (90/779, 11.5%), Ministry of Sustainability and Environment (18/779, 2.3%), and Health Promotion Board (16/779, 2.1%). RQ1 asked about the CERC message themes used by the Singaporean government during the COVID-19 pandemic. Our sample showed that most of the messages disseminated during the pandemic were about personal preventive measures and mitigation (522/779, 67%) followed by general advisories and vigilance (445/779, 57.1%); pandemic intelligence (266/779, 34.1%); social and common responsibility (131/779, 16.8%); risk groups (118/779, 15.1%); and events, campaigns, and activities (105/779, 13.5%). A small number of messages involved showing gratitude (54/779, 6.9%), inquisitive messaging (31/779, 4%), clarification (31/779, 4%), and reassurance (31/779, 4%). Request for contributions (5/779, 0.6%) was least communicated. RQ2 asked how the CERC message themes changed across different phases during the COVID-19 pandemic. The communication message themes changed across the COVID-19 phases. Chi-square tests revealed substantial changes in message themes across the phases, including pandemic intelligence (χ²₃=18.1; P<.001). Specifically, messages on pandemic intelligence were more frequently posted during the maintenance stages—fighting a pandemic and rocky transition—compared with other phases. Similarly, the results showed that message themes such as personal preventive measures and mitigation (χ²₃=29.1; P<.001); events, campaigns, and activities (χ²₃=27.9; P<.001); and general advisories and vigilance (χ²₃=15.5; P<.001) changed significantly across different COVID-19 phases. These message themes were frequently used in Singapore’s online public health messaging during the fighting a pandemic phase and rocky transition phase (ie, maintenance stage). Chi-square tests showed that message themes on social and common responsibility (χ²₃=29.9; P<.001) and showing gratitude (χ²₃=21.0; P<.001) changed across different COVID-19 phases. Messages on social and common responsibility were frequently communicated to the public during the fighting a pandemic period (ie, the maintenance stage), while messages that focused on expressing gratitude were often communicated during the early days of fog (ie, the initial stage) and fighting a pandemic period (ie, the maintenance stage). The message theme on risk groups (χ²₃=17.7; P<.001) also changed across different COVID-19 phases; messages about risk groups were frequently mentioned during the rocky transition period (ie, the maintenance stage).
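The phase-by-theme comparisons reported above are straightforward to reproduce in principle. The sketch below runs the same kind of test with scipy on a made-up 2×4 presence-by-phase table (the authors used SPSS, and the counts here are not the study's data); a 2×4 table has 3 degrees of freedom, which is why the results above are reported as χ²₃.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts of messages with and without a given theme (rows)
# across the 4 Singapore COVID-19 phases (columns).
table = np.array([
    [20, 130, 90, 26],   # theme present
    [80, 190, 160, 59],  # theme absent
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.1f}, p = {p:.3f}")  # dof = 3 for a 2x4 table
```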
RQ3 asked about the types of emotional appeals used in the messages communicated by the Singaporean government to the public during the COVID-19 pandemic. Our data showed that hope (37/97, 38%) and humor (36/97, 37%) appeals were most frequently used in the communication messages during the COVID-19 pandemic, followed by nurturance appeals (17/97, 18%). Anger appeals (4/97, 4%), fear appeals (2/97, 2%), and guilt appeals (1/97, 1%) were used in the messaging strategies at a very low frequency. RQ4 asked how the use of emotional appeals in messages communicated by the Singaporean government changed across different phases of the COVID-19 pandemic. Chi-square tests showed that emotional appeals—fear, anger, humor, and nurturance appeals—changed across phases. Messages containing fear appeals were only disseminated during the learning to live with COVID-19 period (χ²₃=17.4; P<.001). Messages containing anger appeals were used during the fighting a pandemic period and learning to live with COVID-19 period (χ²₃=8.4; P=.04). Humor appeals were used across all the phases of COVID-19 at different levels of frequency (χ²₃=8.3; P=.04). Messages containing nurturance appeals were also mostly communicated to the public during the learning to live with COVID-19 period (χ²₃=49.8; P<.001). Principal Findings This study examined public health communication strategies in Singapore during the COVID-19 pandemic by applying the CERC framework and emotional appeals. We found that the communication strategies used by the Singaporean public health institutions are aligned with the CERC framework. However, our analysis suggested that CERC message themes, such as inquisitive messaging and clarification, can be conveyed more frequently, particularly at the earliest stage of the crisis. This is in line with CERC recommendations; it also helps the public verify the abundance of information available during an infodemic. The COVID-19 phases in Singapore outlined by the government are also aligned with the CERC stages. We found that different emotional appeals were used at various COVID-19 phases in differing situations, which is evident in how nurturance appeals were used to encourage child vaccination, aligned with literature showing that nurturance appeals can effectively target parents. Despite this, certain emotional appeals can be used more frequently at various COVID-19 phases. We observed that Singapore’s communication strategy is aligned with the frameworks of CERC and emotional appeals, with a few areas for improvement as discussed below. Consistent with the study by Malik et al , the findings of this study revealed that Singapore-based public health institutions’ communication themes focused more on personal preventive measures and mitigation as well as general advisories and vigilance. For example, tele-befriending and telecounseling services, such as the Seniors Helpline, were established to help older citizens who faced mental distress during the lockdown period. Overall, the Singapore government effectively communicated the message themes recommended by the CERC framework. This is evident from how the framework recommends informing the public about what they can do to protect themselves, the risks of the disease, and the actions that the public health institutions were taking to manage the situation.
Meanwhile, the request for contribution theme was the one communicated the least, likely because the Singapore-based public health agencies had sufficient resources to tide over the pandemic. To protect individuals and businesses in the country, the Singapore government had issued multiple budgets and grants since the onset of COVID-19. These monetary payouts include one-off as well as recurring cash grants for individuals whose livelihoods were affected by the pandemic . Assistance was also offered to lower-income households. Examples of this include the COVID-19 Recovery Grant, which ensured that citizens of Singapore or permanent residents could receive up to SG $700 (US $535) for 3 months if they faced an income loss of at least 50% . The grants were successful in reducing inequality in Singapore . A shortcoming of the public health institutions’ communication strategies was that messages on clarification were communicated less frequently. This was in line with the existing literature that shows how health care organizations may have insufficient posts addressing misinformation . While steps were taken to clarify misinformation and address the public’s questions, there can be more such messaging as COVID-19 was also an infodemic . Infodemics occur when a large amount of information is rampant, including information that might be inaccurate or confusing . Aligned with Reynolds and Seeger’s argument that communication during the initial phase should aim to reduce uncertainty, the Singapore-based public health institutions can enhance messaging on clarification and inquisitive messaging at the earliest stage of the crisis to prevent outrage and confusion in times of emergency. This is considering the fact that the health institutions would be communicating new information, in the form of pandemic intelligence and general advisories and vigilance, which might lead to increased uncertainty. Separately, the frequency of reassurance messaging can be increased, with the CERC framework encouraging such messaging to be conveyed during the initial and maintenance stages . This can help to assure the public that the health institutions are handling the situation and managing the public’s emotions in times of uncertainty . We found that the communication message themes used by the public health institutions changed across different phases of COVID-19 in Singapore. This finding supported the CERC framework, which suggested that different message themes should be communicated to the public at different stages of a pandemic . For example, we observed that messages on pandemic intelligence were communicated less frequently at the initial stage (ie, early days of fog: January, 2020, to March, 2020) of the COVID-19 pandemic; during this time, there was limited knowledge about the disease. As COVID-19 test kits became available, the Singaporean government could trace the number of cases on a daily basis and better understand the spread of the virus. This enabled them to learn and develop mitigation strategies to control the disease. Hence, there was a greater focus on communicating messages on pandemic intelligence (eg, messages on the kick-off of COVID-19 vaccination) at the maintenance stage (ie, fighting a pandemic: April, 2020, to April, 2021; rocky transition: May, 2021, to November, 2021) than in other stages.
Similarly, as scientists gradually gained more information about the virus, personal preventive measures and mitigation strategies were implemented by the public health institutions and more frequently communicated to the public at the maintenance stage (ie, fighting a pandemic: April, 2020, to April, 2021; rocky transition: May, 2021, to November, 2021). This is in line with CERC’s recommendations to provide more explanations about preventive measures and mitigation strategies during the maintenance stage . Our results showed that positive emotional appeals (eg, hope and humor appeals) were more frequently used in COVID-19 communication strategies. This is in line with the study by Xie et al , which found that positive emotions, such as hope, were commonly used in videos on COVID-19. They also posited that positive emotions can be beneficial to public engagement at the start of a pandemic to balance out the public’s negative emotions. Hence, Singapore-based public health institutions may have taken this approach to neutralize the public’s uncertainty. While other studies acknowledge that positive emotional appeals should be leveraged, they also suggested that negative emotional appeals be used, as both types of messages can engage the public in taking up preventive behaviors . Positive emotional appeals, such as humor appeals, if overused or applied at inopportune times, can backfire, possibly lowering perceived risk and social responsibility; this may also result in the public not internalizing the intended message or not taking it seriously . In addition, emotional appeals vary in effectiveness across demographics. For example, when compared with younger populations , older populations prefer emotional appeals that avoid negative emotional outcomes. Hence, health institutions can consider integrating a mix of emotional appeals for more effective messaging in future public health crises or pandemics. This study found that the emotional appeals used varied with time, with their use being context-specific, depending on the situation and state of the disease. For example, nurturance appeals were not used at the early stage of the COVID-19 communication but were frequently used during the period of learning to live with COVID-19. This coincided with the first shipment of pediatric doses for the vaccination during the third week of December, 2021 , when the government started encouraging parents to bring their children for vaccination. Humor appeals were used with different frequencies across the stages, which could be due to the fluctuating severity of the crisis. Our study revealed that humor appeals were used in less-pressing, culturally appropriate messages, such as those encouraging the public to take up preventive behaviors, especially during the stressful pandemic. For example, a sitcom character most Singapore residents are familiar with, Phua Chu Kang, was used in COVID-19 campaign videos that dealt with responsible behavior during the pandemic, and later, to boost the local vaccination drive. While humor appeals were used in the communication messages across different stages of the COVID-19 pandemic in Singapore, it is recommended that other countries use the same strategy tactfully. This is because there are many factors, such as relevance and timeliness , that could influence the effectiveness of humor appeals. Hence, humor appeals need to be applied with good judgment to avoid unintended outcomes.
By contrast, fear and guilt appeals were less frequently applied in communication messages during the COVID-19 pandemic in Singapore. This demonstrates the Singapore-based health institutions’ careful use of negative emotional appeals in a tense pandemic situation where most people were confined at home during the “circuit breaker” period. Such negative appeals could lead to higher mental stress and compromise social cohesion, if overused. This also explains why fear appeals were used in the later phase of the pandemic (ie, during the “learning to live with COVID-19” phase) when the situation was more relaxed, and most management measures had been eased. Hence, the public health authorities should consider the political and cultural landscape as well as the appropriate junctures when applying emotional appeals in their communication strategies in the future. Implications and Limitations Theoretically, this study contributes to the existing literature on both the CERC model and emotional appeals. Apart from exploring how the CERC model and emotional appeals were applied in Singapore’s public health communication, this study is one of the few examining the relationship between CERC stages and the use of emotional appeals, especially in the context of COVID-19. This study provides insight into how to use a balanced mix of communication strategies for effective public health communications. The practical implications of this study are twofold. First, in the local context, the findings of this study could inform Singaporean public health practitioners in developing more comprehensive messages during an emerging health crisis. Understanding how CERC message themes and emotional appeals were used in the public communication strategies during the COVID-19 pandemic could help the relevant authorities identify their strengths and shortcomings.
First, this study did not collate Facebook posts and website articles from all the public health institutions and only focused on those that provided pressing information about COVID-19 that was applicable to all members of the public. We did not analyze content from government institutions with more targeted messaging because of the large volume of content for analysis. We also did not analyze other media sources, such as television, radio, newspapers, online news, and other social media content, beyond Facebook because of cross-posting of content. As this study might not provide a complete picture of COVID-19 messaging in Singapore, future research should examine social media posts by various government institutions. Second, website articles without publication dates were excluded from the analyses for RQ1 and RQ2, as we were unable to categorize the data into any of the COVID-19 phases in Singapore. Third, we did not analyze social media responses (ie, likes, shares, and comments) because such information was unavailable for website articles. Future research could examine social media responses for a greater understanding of CERC themes and emotional appeals in the context of COVID-19. Fourth, the findings of this study might not be generalizable to countries that are very different from Singapore because of the country’s specific sociopolitical traits such as its high population density and strong central government. Nonetheless, given its exemplary management of COVID-19, its practice is worth documenting to offer useful insights into future pandemic management. While other countries can learn from Singapore’s approach, there may be a need to tailor the communication strategies according to their characteristics. Fifth, this study did not specifically focus on messages containing severity and susceptibility because neither theme was encompassed in the CERC model used in this study. Given that severity and susceptibility are important aspects of risk perception, future research should examine these message themes in relation to the CERC model. In addition, this study did not examine the extent to which messaging conveyed acute risks from COVID-19 (eg, hospitalization and death) and chronic risks from COVID-19 (eg, postacute sequelae of COVID-19). Further studies should be conducted to delve into the differences as these may have impacted public willingness to engage in prevention and mitigation behaviors. Finally, while this study examined CERC themes and emotional appeals used across CERC phases, we did not dive into the interaction between CERC themes and emotional appeals. This is a possible area for future studies. Conclusion This study examined public health messaging during the COVID-19 pandemic in Singapore. The public health authorities in Singapore have taken a strategic and systematic approach to public health communication, coupled with the use of emotional appeals, to encourage the public to engage in protective behaviors.
Video based educational intervention in waiting area to improve awareness about health screening among patients visiting family medicine clinics
57ebc028-da30-4d0d-9961-931bbd77e061
11253394
Family Medicine[mh]
Health screening is an effective strategy for early diagnosis of diseases in asymptomatic individuals, with the goal of preventing complications and death from disease . As reported in the literature, almost 85% of women’s deaths due to cervical cancer in lower middle-income countries were attributed to lower screening rates . Lack of knowledge and awareness remains a major barrier which ultimately limits the utilization of screening services among the general population . Contrary to the system prevailing in developed countries, where 90% of the population has screening coverage through a government-funded health care system, developing countries like Pakistan lack any data reporting the frequency of the population undergoing health screening for diseases like diabetes, hypertension and various cancers. This highlights the importance and role of primary care health providers in making their clients aware of routine health screening. Multiple educational modalities including written leaflets, face-to-face counseling and videos have been utilized to impart health education when a patient visits a health care facility. However, educational videos have proved to be more effective than written materials at enhancing knowledge and modifying health behaviors, especially for people with low health literacy. A meta-analysis has demonstrated the effectiveness of videos in breast self-examination, prostate cancer screening, sunscreen adherence, self-care in patients with heart failure, HIV testing, treatment adherence, and female condom use . A video intervention was successful in enhancing knowledge regarding stroke symptoms and satisfaction with education in admitted stroke patients . Numerous community based educational programs also utilized videos to impart education on inhaler technique, COPD and asthma . A multimedia-based educational program not only increased awareness regarding cervical cancer screening (the proportion of women with good knowledge rose from 2 to 70.5%) but also enhanced the utilization of screening services (from 4.3 to 8.3%) . In our health care system, where no formal screening services are embedded, opportunistic advice regarding health screening remains a real challenge in a busy clinic. The purpose of this study was to assess the feasibility of implementing a video based educational intervention in the clinic waiting area, evaluate patients’ baseline knowledge on health screening of non-communicable and infectious diseases, and assess the impact of video based education on patients’ knowledge regarding health screening. The results of this study will be helpful in designing a randomized controlled trial in future to determine the effectiveness of video based intervention on health screening, which may ultimately affect the utilization of screening services by patients. Study design, setting, inclusion and exclusion criteria It was a pre and post quasi-experimental study conducted in Family Medicine clinics located at the main campus and outreach centers of a tertiary care hospital. A total of 320 participants were approached during the six-month period, of whom 300 gave consent to participate and were enrolled through non-probability consecutive sampling. Patients who were very sick, in severe pain, or vitally unstable, required Emergency or admission referral, were unable to understand Urdu, or refused to participate were excluded from the study.
Educational intervention An 8-minute educational video intervention on health screening was developed to be shown to the participants on a TV screen installed in the waiting area. The content was prepared by a Family Medicine faculty member using recommended preventive care guidelines from the CDC and USPSTF . The video script was written in Urdu (the local language) at a 7th grade reading level in order to accommodate a wide range of literacy levels. Background audio, simple animations and pictorial display of concepts were used to enhance participants’ practical understanding. The concepts addressed in this video included health screening tests and their significance in preventive health care, and information about screening recommendations for the following diseases. Non-communicable diseases: hypertension, diabetes, dyslipidemia. Cancers: breast, cervical and colon cancers. Infections: hepatitis B and C. For each of these diseases, risk factors, available screening tests, and the appropriate age and frequency of screening were displayed. The video was revised based on feedback from people from diverse fields, including patients, their family members, nursing staff, and non-clinical administrative staff, who assessed it for sound effects and understandability of concepts. The final version was verified by two experts from Family Medicine and Public Health. Study instrument The pre- and post-intervention knowledge of the participants was assessed through a semi-structured coded questionnaire administered by an interviewer who was trained in data collection. The questionnaire was designed by the principal investigator after a thorough literature search, and the content was validated by two other family medicine experts who were not part of this study. The questionnaire consisted of three parts including 34 items in all. The first part (Questions 1–9) gathered information about sociodemographics, including age, gender, socioeconomic status, and education level, and comorbid conditions. The second part (Questions 10–29) assessed knowledge about the general concept of health screening tests. Knowledge about each disease screening was assessed through 2–3 questions. To minimize the risk of bias, each question had at least 4 options to choose from. Each correct answer was assigned a score of 1 and wrong answers were recorded as zero. The total score for knowledge related to diabetes, hypertension, high cholesterol, breast cancer, colon cancer and hepatitis B & C was 5, 3, 8, 4, 4 and 5, respectively. The third part (Questions 30–34) assessed utilization of health screening services by patients and barriers related to non-utilization of these services (on a Likert scale). A pilot study was performed on 30 patients to test the questionnaire before collecting the final sample. Cronbach’s alpha was computed for all knowledge components and for the third part of the questionnaire. All Cronbach’s alpha values were ≥ 0.70, and the initial draft was finalized in its original form. According to Bloom’s criteria, knowledge was considered adequate for each component when participants scored at least 80% of the total score in that component . The study questionnaire is attached as supplementary . Data collection procedure Patients and their families were approached in the assessment area prior to their doctor’s visit to confirm eligibility for their participation and their willingness to participate in the study.
Those who gave written consent to participate were interviewed to complete a pre-test questionnaire, and after that they were shown a video on health screening at least once in the waiting area. After the doctor’s consultation they completed the post-test questionnaire before leaving the clinic (Fig. ). Sample size calculation A pilot study was performed on 30 subjects to inform the sample size calculation. NCSS PASS version 11 was used to perform the sample size calculation using the option for tests of two correlated proportions (McNemar’s test). The sample size was separately estimated at a 95% confidence level and 80% power for the diabetes, hypertension, cholesterol, breast cancer, cervical cancer, colon cancer and hepatitis components. The proportions obtained in the pilot study are presented in Table . The highest calculated sample size was 246. However, for better results we enrolled 300 patients. Data analysis Data were entered into IBM SPSS version 20 for statistical analysis. Frequencies and percentages were computed for categorical variables. Median and interquartile range were reported for age after testing normal distribution with the Shapiro-Wilk test. Pre- and post-intervention knowledge adequacy was compared using McNemar’s chi-square test. A two-tailed p-value < 0.05 was taken as statistically significant.
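A brief sketch of how the Bloom's-criteria scoring and the pre-post comparison described above could be reproduced outside SPSS. The maximum scores mirror those reported for the questionnaire; the paired pre and post indicators and the statsmodels call are illustrative assumptions, not the authors' code.

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# Maximum knowledge scores per component, as reported for the questionnaire.
MAX_SCORE = {"diabetes": 5, "hypertension": 3, "cholesterol": 8,
             "breast cancer": 4, "colon cancer": 4, "hepatitis B & C": 5}

def adequate(score: int, component: str, cutoff: float = 0.80) -> bool:
    """Bloom's criterion: adequate knowledge is at least 80% of the component's maximum."""
    return score >= cutoff * MAX_SCORE[component]

print(adequate(4, "diabetes"))  # True: 4 out of 5 is 80%

# Hypothetical paired adequacy indicators (1 = adequate) before and after the video.
pre  = np.array([0, 0, 1, 0, 1, 0, 0, 1, 0, 0])
post = np.array([1, 0, 1, 1, 1, 0, 1, 1, 0, 1])

# 2x2 paired table: rows = pre (adequate / not), columns = post (adequate / not).
table = [
    [int(np.sum((pre == 1) & (post == 1))), int(np.sum((pre == 1) & (post == 0)))],
    [int(np.sum((pre == 0) & (post == 1))), int(np.sum((pre == 0) & (post == 0)))],
]

result = mcnemar(table, exact=True)  # exact binomial test on the discordant pairs
print(result.statistic, result.pvalue)
```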
Results

A total of 300 participants voluntarily took part in the study with written consent. The median age of the study participants was 28 (IQR 23.25–36.75) years. The majority of participants were male (n = 168, 56%). Most of the study subjects had completed secondary school or above (n = 105, 35%) or were graduates (n = 161, 53.7%); a few had religious education (n = 10, 3.3%), 21 (7%) had completed only primary school, and 3 (1%) were illiterate.
About half of the study participants were patients (n = 149, 49.7%) and the rest were attendants (n = 151, 50.3%). Prior to the video-based learning intervention, nearly half of the participants were aware of health screening check-ups (n = 154, 51.3%); following the intervention, there was a significant increase in the proportion of participants who understood health screening check-ups (n = 204, 68%) (p < 0.001) (Table ). There was a significant increase in participants' knowledge on the specific knowledge questions for all of the diseases, including non-communicable diseases (Table ), cancer screening and risk factors (Table ), and infectious disease screening and risk factors (Table ). Figure depicts the frequency of adequate knowledge of the different diseases before and after the intervention. Prior to the intervention, among all diseases, the frequency of adequate knowledge was highest for cholesterol (n = 201, 67%), followed by hepatitis B & C (n = 170, 56.7%), diabetes (n = 89, 29.3%), colon cancer (n = 73, 24.3%), hypertension (n = 40, 13.3%), breast cancer (n = 23, 7.7%) and cervical cancer (n = 5, 1.7%). Following the study intervention, there was a significant increase in the proportion of participants with adequate knowledge related to diabetes (p = 0.045), hypertension (p < 0.001), cholesterol (p < 0.001), cervical cancer (p < 0.001), colon cancer (p < 0.001) and hepatitis B & C (p < 0.001). No significant improvement in breast cancer-related knowledge was observed (p = 0.074). The highest post-intervention increase in knowledge from baseline was observed for hypertension (13.3% versus 63.3%), followed by colon cancer (24.3% versus 59.3%), cholesterol (67% versus 96.7%), hepatitis B & C (56.7% versus 77.3%), diabetes (29.7% versus 48%), cervical cancer (1.7% versus 19%), and breast cancer (7.7% versus 18.3%). Figure displays the participants' attitudes towards health screening check-ups before and after the intervention. Prior to watching the knowledge-based video, 202 (67.3%) respondents considered themselves unaware of health screening; after the intervention, 235 (78.3%) participants reported that they were aware (p < 0.001). Before the intervention, about half of the participants (n = 148, 49.3%) did not prioritize health screening because of busy schedules; this attitude improved significantly after the intervention (n = 163, 54.3%) (p = 0.024). A total of 208 (69.3%) reported that the reason for not availing themselves of screening tests was the cost of the procedures; after the intervention this proportion decreased (n = 199), but the change was not statistically significant (p = 0.150). Before the intervention, 181 (60.3%) participants did not think health screening was essential; following the intervention, there was a significant improvement in this perception (p < 0.001), with 226 (75.3%) considering it important.

Discussion

Video interventions have been reported in the literature as an effective strategy to increase knowledge and awareness regarding screening for isolated diseases such as cervical cancer and breast cancer. Our study adopted a novel approach by implementing a video intervention encompassing a holistic approach to adult health screening. The intervention was designed to provide education regarding the diseases for which screening tests are recommended for the general adult population as per international guidelines.
The majority of participants were formally educated; however, their baseline knowledge regarding the concept of health screening and the risk factors for non-communicable diseases (diabetes, hypertension, dyslipidemia, cancers) and infectious diseases was very low. This is understandable given the traditional illness-based approach prevailing among the general public, who seek health care only when symptoms occur. Another reason could be the absence of established preventive health care programs by the government at the primary care level. Before the video intervention, awareness of the risk factors and screening of hypertension (13.3%) and diabetes (29.7%) was low. These results are comparable with a study conducted among African Americans, which showed an increase in knowledge of diabetes screening after an electronic health intervention . In 2021, a study was conducted in India on a population of 64,427 people aged 45 years and above to assess the awareness, treatment, and control (ATC) of hypertension; it showed low awareness (54.4%) among the population, creating barriers to optimal treatment of hypertension in the general population . This is nevertheless a higher percentage than found in our study, and it is alarming that such a small proportion of individuals in our study were aware of hypertension screening. This could be attributed to the silent nature of the disease, as there are no definite symptoms marking the severity of blood pressure control. Our study showed good baseline knowledge regarding dyslipidemia (67%), which improved significantly after the intervention (96.7%). This is a satisfying statistic; in contrast, our literature search found less than 50% awareness of cholesterol screening . The pre- and post-intervention comparisons of knowledge of hypertension, diabetes and dyslipidemia in our study suggested a substantial knowledge gain. Similar studies have shown an increase in knowledge and improved attitudes in people with hypertension after the use of video presentations . One study also showed an improvement in hypertension control after using multimedia as a teaching tool for the general public . These results imply that educational interventions using multimedia can favorably affect the awareness and acceptability of hypertension screening among the masses and may play a key role in improving knowledge scores regarding both hypertension screening and, later on, its control. Our study participants had good baseline awareness regarding hepatitis B and C screening (56.7%), which improved after the intervention (77.3%). Although there is a paucity of research evaluating the impact of video education on hepatitis screening, two recent studies showed similar results . In the present study, 1.7% of the population were aware of cervical cancer screening before the video intervention, which improved to 19.3% post-intervention. A study including 600 participants in Ghana reported much higher percentages before (84.2%) and after (100%) a video intervention . Other studies also report a significant improvement in cervical cancer screening awareness among women after an educational intervention . The low baseline awareness and the smaller post-intervention increase could be attributable to the majority of male participants in our study, who may have less interest in women-related cancers.
Our study showed that 24.3% of the participants were aware of colorectal screening methods before the intervention, improving to 59.3% post-intervention. These figures are comparable to another study done in Khyber Pakhtunkhwa in 2021, where 32% had no knowledge regarding colorectal cancer screening . Breast cancer is a growing menace internationally, with a reported incidence rate of 76.7% in Pakistan . A study published in 2022 on 774 university students from different universities across all four provinces of Pakistan showed that 44.4% were aware that the recommended age for mammography is 40 years; 29.8% demonstrated an understanding of family history as a recognized risk factor for the development of breast cancer, while the study did not assess the correct mode of breast cancer screening . In our study, 42% of the participants demonstrated an understanding of mammography as a reliable screening tool for breast cancer before the intervention, which improved to 76.3% post-intervention. However, the post-intervention increase in overall knowledge of breast cancer was the lowest among all the diseases. The reason for this lower knowledge may be that the majority of participants in this study were male and might be less concerned about female-related issues. Similar results were found in a systematic review demonstrating increased awareness regarding prevention and screening of different cancers, including breast cancer . We found that implementing the educational video in the outpatient clinic waiting area was feasible. Over 95% of the consented patients completed the intervention. The main challenge encountered was the distraction caused by noise and conversations of other people sitting in the waiting area, which may have compromised participants' attention. The use of headphones to counteract this challenge could be a suitable option in the future. In addition, we aimed to allocate the same video-watching time to each participant; however, this could not be achieved owing to variability in waiting times and the variable pace of consultations. Our study population included both patients and their accompanying family members. The median age of our study population was 28 years (IQR 23.25–36.75). More than 80% of the study population had completed secondary education or above, including graduates. Therefore, we believe that the video's effectiveness cannot be determined for people with low education levels, as their health literacy may be inadequate to understand the concepts presented in this video. We found that this simple video-based intervention influenced perceptions regarding health screening. More than half of the participants (60%) who perceived health screening as non-essential pre-intervention became sensitized to its importance, and there was a significant increase in the number of participants (75%) who subsequently considered it an integral component of health maintenance. However, cost was a barrier to the utilization of health screening that remained unchanged after the video intervention. These findings are in concordance with other studies that investigated the impact of video interventions on improving attitudes towards, and the acceptability of, health screening . Our study has some limitations. Being single-arm and non-randomized, it had no comparison group or comparator intervention (such as written material) with which to assess the most effective approach to health education.
Moreover, there was no control over patients' waiting time or the number of times they watched the video, although it was ensured that they watched it at least once. Another noticeable point is that, in this study, we interviewed patients in a designated private area instead of having them self-administer the questionnaire, which may have influenced the results. In particular, in the Pakistani setting, patients visiting clinics are focused on consulting their doctors, and self-administration could lead to inattention when filling out long questionnaires; interviewing participants is therefore better suited to our setting. The long-term impact of this educational intervention was not determined; it would be worthwhile to explore this by re-evaluating knowledge after 3–6 months. Moreover, we did not control for variability in other sources of education that participants may already have had, owing to their varied backgrounds and health literacy levels, which may confound the results. In addition, our questionnaire did not undergo formal validity testing. The video's impact also needs to be assessed across population differences in gender, ethnicity, age and education level. A randomized trial will clearly be needed to evaluate the efficacy of the video as a tool to improve knowledge regarding health screening. The limitations and findings of this study will be taken into consideration when designing our forthcoming randomized trial. Moreover, given the available evidence that repetition facilitates learning, the video will be shown to participants on multiple occasions, with the opportunity to ask questions, to support long-term knowledge retention.

Conclusion

This study highlighted the pivotal role an educational video intervention in the clinic waiting area can play in improving awareness of health screening among patients and their families. However, the study may have overestimated knowledge scores, since it was carried out in a single hospital setting among patients and their families, whose health-seeking behaviors and health literacy may differ from those of the general population. Further community-based or multicenter randomized controlled trials are warranted to assess the long-term impact of these educational videos on knowledge and utilization of health screening among the adult population.

Below is the link to the electronic supplementary material. Supplementary Material 1
Two-photon all-optical neurophysiology for the dissection of larval zebrafish brain functional and effective connectivity
Understanding the functional connectivity of intricate networks within the brain is a fundamental goal toward unraveling the complexities of neural processes. This longtime focal point in neuroscience requires methodologies to trigger and capture neuronal activity in an intact organism. Critical insights into the complex interplay among large populations of neurons have been provided by electroencephalography and functional magnetic resonance imaging – . These gold standard methods, however, do provide a noninvasive means to detect neuronal activity, but with limited spatial (the former) and temporal resolution (the latter), and lack equally noninvasive possibilities to precisely elicit and control it. Therefore, it is evident that deciphering how individual neurons communicate to shape functional neural circuits on a whole-organ scale demands further technological advances. Over the last few decades, with the advent of optogenetics and the widespread adoption of genetically encoded fluorescent indicators , all-optical methods have gained traction for their ability to simultaneously monitor and manipulate the activity of multiple neurons within the intact brain – . In this framework, the ever-increasing use of the tiny and translucent zebrafish larva as a reliable animal model recapitulating manifold features of vertebrate species physiology , has provided momentum for the development and enhancement of optical technologies aimed at imaging and controlling neuronal activity with light at high spatio-temporal resolution – . On the imaging side, previous high-resolution all-optical investigations of zebrafish have made use of two-photon (2P) point scanning methods – or, more rarely, of one-photon (1P) excitation light-sheet fluorescence microscopy (LSFM) . Compared to point scanning approaches, LSFM , allowing parallelization of the detection process within each frame, enables concurrent high spatio-temporal resolution and extensive volumetric imaging. However, the use of visible excitation in 1P LSFM represents an undesired source of strong visual stimulation for the larva, often requiring tailored excitation strategies at least to prevent direct illumination of the eyes . On the photostimulation side, most advanced all-optical setups typically adopt parallel illumination approaches, making use of spatial light modulators (SLM) or digital micromirror devices to generate multiple simultaneous holographic spots of excitation , – . In computer-generated holography, the input laser power is subdivided among the various spots, resulting in increasing energies released on the specimen as the number of effective targets rises and, consequently, in increasing probability of photodamage . Conversely, scan-based sequential stimulation allows a fraction of the power needed by parallel approaches to be deposited at any time on the sample, regardless of the number of targets. As a drawback, however, scan-based methods typically employ mechanical moving parts that constrain the stimulation sequence speed and thus the maximum temporal resolution achievable. An exception is represented by acousto-optic deflectors (AODs) , which are not affected by mechanical inertia and thus enable discontinuous three-dimensional trajectories with constant repositioning time. In particular, featuring an ultrashort access time (μs range), AODs represent the scanning technology that gets closest to parallel illumination performance. 
Indeed, AODs enable quasi-simultaneous three-dimensional targeting of multiple spots while keeping the global energy delivered low. However, despite their extensive use for fast 3D imaging – , these devices have rarely been employed for photostimulation so far – . In this work, we present an all-optical setup consisting of a light-sheet microscope and a light-targeting system equipped with AODs, both employing nonlinear excitation. The light-sheet microscope enables high spatio-temporal resolution volumetric imaging of the larval zebrafish brain, while the light-targeting system is employed to perform concurrent three-dimensional optogenetic stimulation. Using a double transgenic line pan-neuronally expressing both the green fluorescent calcium indicator GCaMP6s and the red-shifted light-gated cation channel ReaChR , we demonstrate a crosstalk-free experimental approach for all-optical investigation of brain circuitries. Leveraging two-photon excitation and the inertia-free light targeting capabilities of AODs, we validated the system functionality by reconstructing the efferent functional and effective connectivity of the left habenula, a cerebral nucleus mainly composed of excitatory neurons, linking forebrain and midbrain structures.

Results

A crosstalk-free approach for two-photon all-optical investigations in zebrafish larvae

To explore brain functional connectivity in zebrafish larvae, we devised an integrated all-optical 2P system capable of simultaneously recording and stimulating neuronal activity. The setup (Fig. and Supplementary Fig. ) consists of a light-sheet fluorescence microscope and a light-targeting system, specifically designed for fast whole-brain calcium imaging and 3D optogenetic stimulation, respectively. Both optical paths employ pulsed near-infrared (NIR) laser sources for 2P excitation . The 2P LSFM module, employing digitally scanned mode, double-sided illumination, control of excitation light polarization and remote focusing of the detection objective, is capable of recording the entire larval brain (400 × 800 × 200 μm 3 ) at volumetric rates up to 5 Hz (Supplementary Movie and Supplementary Fig. ). On the other hand, the light-targeting system incorporates two pairs of acousto-optic deflectors to move the excitation focus to arbitrary locations inside a 100 × 100 × 100 μm 3 volume, guaranteeing a constant repositioning time (4 μs) independently of the relative distance between sequentially illuminated points, and equal energy delivered independently of the number of targets . To perform simultaneous recording and stimulation of neuronal activity, we employed the pan-neuronal Tg(elavl3:H2B-GCaMP6; elavl3:ReaChR-TagRFP) zebrafish line (Fig. , Supplementary Movie ). Larvae of this double transgenic line express the green fluorescent calcium indicator GCaMP6s inside neuronal nuclei and the red-shifted light-gated cation channel ReaChR (as a fusion protein with the red fluorescent protein TagRFP) on neuronal membranes (Fig. ). We initially investigated the possible presence of crosstalk activation of ReaChR channels due to the excitation wavelength used for functional imaging. To this end, we employed two complementary approaches. First, light-sheet imaging of both double transgenic larvae (ReaChR + ) and GCaMP6s-expressing larvae (ReaChR − , lacking the light-gated channel) was performed for 5 min (volumetric rate: 2.5 Hz, λ ex : 920 nm, laser power at the sample: 60 mW).
To evaluate the level of neuronal activity, we computed the standard deviation (SD) over time for each voxel belonging to the brain (image processing voxel size = 4.4 × 4.4 × 5 μm 3 , see Data analysis for details). We adopted SD as a metric for neuronal activity since we found it more sensitive in discriminating between different conditions with respect to the number of calcium peaks per minute, and equally sensitive to the average peak amplitude, yet not necessitating the setting of arbitrary thresholds (Supplementary Fig. ). No major differences could be observed in the average SD distributions computed over a 5-minute exposure to the imaging laser between the two groups (Fig. ). Indeed, the resulting imaging crosstalk index (calculated as the Hellinger distance between the two average distributions, see Data analysis for details) was extremely low (3.9% ± 4.5%; Fig. ). However, since crosstalk activation of light-gated channels by a spurious wavelength is typically power-dependent , we then investigated whether higher powers of the laser used for imaging could induce a significant effect on ReaChR + larvae. Figure shows the average SD distributions obtained from ReaChR + and ReaChR − larvae at imaging powers ranging from 40 to 100 mW. Despite higher laser powers producing a shift of the distributions towards higher SD values, this shift equally affected the neuronal activity of both ReaChR + and ReaChR − larvae (see also Supplementary Fig. ). The differences between the median values of the SD distributions of the two groups (ReaChR + and ReaChR − ) at the same imaging power were not statistically significant (Fig. , right) and, indeed, the imaging crosstalk index remained essentially constant in the power range tested (Supplementary Fig. ). In addition to this, we investigated the presence of imaging-related crosstalk also from a behavioral standpoint. We performed high-speed tail tracking of head restrained ReaChR − and ReaChR + larvae in absence (OFF) and in presence (ON) of whole-brain light-sheet imaging (Fig. and Supplementary Movies and ). As shown in Fig. , compared to the OFF period, during 920 nm laser exposure (ON) both strains showed a slight but not significant increase in the number of tail beats per minute, suggesting that the power applied (60 mW) was quite well tolerated by the animals. Moreover, the relative number of tail beats during imaging ON was not significantly different between ReaChR + and ReaChR − larvae (Fig. ), providing additional proof of the absence of spurious excitation in ReaChR + larvae by the 920 nm laser used for imaging. After demonstrating the absence of cross-talk activation of ReaChR channels upon 2P light-sheet scanning, we investigated the ability of our AOD-based photostimulation system to effectively induce optogenetic activation of targeted neurons. For this purpose, we selected a stimulation wavelength (1064 nm) that is red-shifted relative to the opsin’s 2P excitation peak (975 nm). By doing so, we increased the separation between the wavelength used for optogenetic stimulation and the 2P excitation peak of GCaMP6s (920 nm), thus further reducing the potential for stimulation-induced artifacts. We thus stimulated ReaChR + and ReaChR − larvae at 1064 nm (laser power at the sample: 30 mW, stimulation volume: 30 × 30 × 30 μm 3 ) while simultaneously recording whole-brain neuronal activity via light-sheet imaging. Larvae expressing the opsin showed strong and consistent calcium transients evoked at the stimulation site (Fig. , inset). 
Conversely, stimulating ReaChR − larvae did not result in any detectable response (Fig. , inset). We quantified the effect of the optogenetic stimulation by computing the distributions of SD values of the voxels inside the stimulation site for ReaChR + and ReaChR − larvae (Fig. , left). The 1064 nm stimulation induced statistically significant optogenetic activation of opsin-expressing neurons in ReaChR + larvae (ReaChR − = 0.0277 ± 0.0017, ReaChR + = 0.0608 ± 0.0077, mean ± sem; Fig. , right). Despite the small stimulation volume with respect to the entire brain size, the effect of the photostimulation was also noticeable in the whole-brain SD distribution, where ReaChR + larvae showed a peak slightly shifted toward higher SD values (Fig. , left), which produced a significantly greater average SD (ReaChR − = 0.0201 ± 0.0007, ReaChR + = 0.0211 ± 0.0006, mean ± sem; Fig. , right). This appreciable difference was due to the high-amplitude calcium transients evoked by the stimulation and by the activation of the neuronal population synaptically downstream of the stimulation site. Figure shows the optogenetic activation indices (calculated as the Hellinger distance between the two average distributions, see Data analysis for details) for the stimulation site and the entire brain (stimulation site = 68.9% ± 0.3%, brain = 8.6% ± 2.1%). To rule out any possible spurious activation effect not related to the optogenetic excitation of ReaChR channels (e.g., sensory perception of the laser stimulus), we also compared the SD distributions of ReaChR − larvae subjected to imaging only (ReaChR − i ) or to simultaneous imaging and stimulation (ReaChR − i+s ). The analysis highlighted no statistically significant effects of the photostimulation in the absence of opsin expression at either the stimulus site (Supplementary Fig. ) or at a brain-wide level (Supplementary Fig. ).

Characterization of calcium transients evoked by 3D optogenetic stimulation

After assessing the absence of opsin crosstalk activation upon light-sheet imaging and verifying the ability of our system to consistently induce optogenetic activation of ReaChR + neurons, we characterized the neuronal responses to identify optimal stimulation parameters. We decided to target the stimulation at an easily recognizable cerebral nucleus mainly composed of excitatory neurons. Neurons having their soma inside the habenulae express vesicular glutamate transporter 2 (VGLUT2, also known as SLC17A6 ), representing a coherent group of excitatory glutamatergic neurons , . We therefore directed the stimulation onto the left habenula, an anatomically segregated nucleus that is part of the dorsal-diencephalic conduction system . We adopted a stimulation volume of 50 × 50 × 50 μm 3 , sufficient to cover the entire habenula (Fig. ). This volume was populated with 6250 points distributed across 10 z-planes ( z step: 5 μm). With a point dwell-time of 20 μs, a complete cycle over all the points in the volume took only 125 ms. We first characterized the calcium transients as a function of the stimulation duration (scan time, Fig. ) in the range of 125 to 625 ms (1–5 iterations over the volume). Figure shows the amplitude of the calcium peaks as a function of the scan time. Increasing scan durations produced a progressive increase in peak amplitude until a plateau was reached between 4 and 5 volume cycles (scan time 500–625 ms). From a kinetic point of view, increasing scan durations led to a significant decrease in the rise time of calcium transients (Fig. ).
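As a quick back-of-the-envelope check of the stimulation timing quoted above (6250 points, 20 μs dwell time, 1–5 volume iterations), the short sketch below reproduces the cycle time, in-plane point density and scan durations. It is an illustrative calculation only, assuming a uniform square grid of points in each plane; it is not code from the authors' control software.

```python
# Back-of-the-envelope check of the AOD stimulation timing described in the text.
dwell_time_s  = 20e-6     # point dwell time: 20 microseconds
points_total  = 6250      # points populating the 50 x 50 x 50 um^3 volume
n_planes      = 10        # z-planes, spaced 5 um apart
plane_side_um = 50        # lateral extent of each plane

cycle_time_s = points_total * dwell_time_s                 # one full pass over the volume
print(f"one volume cycle: {cycle_time_s * 1e3:.0f} ms")    # -> 125 ms

points_per_plane = points_total // n_planes                # 625 points per plane
grid_side = int(points_per_plane ** 0.5)                   # 25 x 25 grid (square-grid assumption)
print(f"in-plane density: {grid_side / plane_side_um:.1f} point/um")   # -> 0.5 point/um

for n_iter in range(1, 6):                                 # 1-5 consecutive iterations
    print(f"{n_iter} iteration(s): {n_iter * cycle_time_s * 1e3:.0f} ms scan time")  # 125-625 ms
```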
Additionally, the decay time of the calcium transients progressively increased with increasing scan time (Fig. ). We also characterized the neuronal response as a function of the 1064 nm excitation power (ranging from 10 to 40 mW, Fig. ). The amplitude of the calcium transients showed a strong linear dependence on the stimulation power (Fig. , R 2 : 0.89). While the rise time did not seem to be affected by the laser intensity (Fig. ), the decay time showed a strong linear proportionality (Fig. , R 2 : 0.82). The duration of calcium transients (Supplementary Fig. ), instead, increased with increasing stimulation power but was not significantly affected by scan time. Given the small variation in rise time, in both cases the overall duration of the calcium transient was largely determined by the decay time trend.

Whole-brain functional circuitry of the left habenular nucleus

After this initial technical validation, we employed our all-optical setup to identify cerebral regions functionally linked to the left habenular nucleus. To this end, we designed the following stimulation protocol (Fig. ). For each zebrafish larva, we performed six trials consisting of 5 optogenetic stimuli (interstimulus interval: 16 s) during simultaneous whole-brain light-sheet imaging. Based on the characterization performed, we adopted a stimulus duration of 500 ms (4 complete consecutive iterations over the 50 × 50 × 50 μm 3 volume, point density: 0.5 point/μm) and a laser power of 30 mW to maximize the neuronal response while keeping the laser intensity low (Supplementary Movie ). First, we evaluated the brain voxel activation probability in response to the optogenetic stimulation of the left habenula (LHb). Figure shows different projections of the whole-brain average activation probability map (Supplementary Movies and ). The LHb, the site of stimulation, predictably showed the highest activation probability values. In addition to the LHb, an unpaired nucleus located at the ventral midline of the posterior midbrain showed an increased activation probability with respect to the surrounding tissue. Then, we segmented the entire larval brain into ten different anatomical regions according to structural boundaries (Fig. , left). By extracting the average activation probability from each region (Fig. , right), we found that the deep midbrain district corresponded to the interpeduncular nucleus (IPN). The IPN is a renowned integrative center and relay station within the limbic system that receives habenular afferences traveling through the fasciculus retroflexus – . Figure shows the average normalized activation probability distributions for voxels inside the LHb, IPN and right habenula (RHb, the region with the highest activation probability after LHb and IPN). LHb and IPN neurons exhibited activation probabilities as high as 100% and 51%, respectively. Notably, despite LHb presenting higher activation probabilities than IPN across larvae, higher LHb probabilities did not necessarily correspond to higher IPN probabilities (Fig. ). Figure shows representative mean Δ F / F signals obtained from the LHb (blue) and IPN (yellow) regions during a stimulation trial. The LHb consistently responded to the photostimulation with high-amplitude calcium transients. The IPN showed lower amplitude activations (~1/10 of the LHb), yet reproducibly following the pace induced by LHb stimulation (as also visible in Supplementary Movie , yellow arrowhead). The coherence between these time traces was confirmed by their cross-wavelet power spectrum, showing the highest density around the optogenetic trigger rate (1/16 Hz, Fig. ; see also Supplementary Fig. ). As a comparison, Supplementary Fig. shows the cross-wavelet power spectral density of the LHb and RHb activities, where null to low coupling levels emerge. We then examined whole-brain functional connectivity during optogenetic stimulation. To this end, we first extracted the neuronal activity from previously segmented brain regions. Figure shows, as an example, a heatmap of neuronal activity over time during a single stimulation trial. The LHb and IPN were apparently the only two regions following the photostimulation trigger (dark red vertical bars). This result is confirmed and generalized by the chord diagram presented in Fig. . This chart presents the average all-against-all correlation between the neuronal activity of different brain regions. The LHb and IPN were the two anatomical districts that showed the strongest functional connectivity during stimulation (Pearson's correlation coefficient = 0.605 ± 0.079, mean ± sem). To explore the causal relationships among the observed interactions between brain regions, what is known as effective connectivity , we analyzed the Granger causality (GC) of their spatially averaged activities . By examining the added predictability of one time series based on the past values of another, GC analysis allows us to draw inferences about directional cause-and-effect relationships between brain activities . In Fig. , the average strength of the directed interaction among brain regions is depicted using the F statistic. The results from GC analysis showed that the activity recorded in the IPN has a strong causal link only with the activity triggered in the LHb (88.83% ± 8.24% of trials are significant for the LHb→IPN direction, while only 2.78% ± 2.78% of them are significant for the opposite direction, IPN→LHb, mean ± sem). Furthermore, Fig. illustrates the significant directional causality links between brain regions. Notably, compared to the naturally occurring interacting pairs, the optogenetically revealed LHb-IPN pair showed increased consistency among trials in the significance of their interaction direction (arrow width; percentage of significant trials for the directed interaction: Th-HB, 61.11% ± 9.29%; C-HB, 61.11% ± 10.24%; C-Th, 36.11% ± 5.12%; mean ± sem). On the other hand, the strength of the causal link (represented by the F statistic and graphically depicted by arrow color) for the stimulated LHb-IPN pair was comparable to that of spontaneously occurring pairs ( F value: LHb-IPN, 12.16 ± 1.29; PT-HB, 12.10 ± 1.48; Th-PT, 11.40 ± 1.64; PT-HB, 13.98 ± 2.33; mean ± sem). Interestingly, among the causal connections highlighted by GC analysis, causality links (albeit to a lesser extent) emerged also between the LHb-RHb and T-RHb pairs. After employing GC analysis to assess the direction of the causality links, we employed partial correlation analysis to assess the directness of the causal connections observed. Partial correlation analysis represents the remaining correlation between two regions after accounting for the influence of all other regions. Results show with a probability of 88.9 ± 7.0% (mean ± sem) that the LHb-IPN link was direct. In contrast, the LHb-RHb pair produced an opposite result (directness probability 30.4 ± 10.9%, mean ± sem), suggesting an indirect connection, while the T-RHb pair was associated with a more ambiguous result (directness probability 41.6 ± 17.1%, mean ± sem). Next, we investigated the seed-based functional connectivity of the left habenular nucleus. To this end, we computed the Pearson's correlation between the average neuronal activity in the LHb (seed) and the activity in each brain voxel. Figure shows different projections of the average functional connectivity map of the LHb (Supplementary Movie ). In addition to LHb neurons, which exhibited an expected high self-correlation, IPN neurons showed visibly higher functional connectivity with respect to other brain regions. This result is confirmed by the analysis of the average correlation coefficient of the different regions (Fig. ), where the IPN was the only region presenting a statistically significant functional connectivity with the LHb. Figure shows the average normalized distributions of correlation coefficients computed from voxels inside the LHb, IPN and RHb. With respect to RHb, which had a distribution basically centered at 0 with a short tail towards negative correlation values, neurons in the LHb and IPN showed functional connectivity values as high as 100% and 65%, respectively. In order to visually isolate the neuronal circuit underlying LHb stimulation, we set a threshold on the correlation coefficient. Based on the results shown in Fig. , we chose a threshold of 0.12 as the highest value separating regions showing significantly higher correlation with the seed activity (namely, LHb and IPN). Figure shows the binarized functional connectivity map of the left habenular nucleus in larval zebrafish (Supplementary Movie ).
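To make the analysis chain above concrete, the sketch below shows one way to compute the three region-level measures described in this section — Pearson functional connectivity, Granger causality (F statistic) and partial correlation — from region-averaged ΔF/F traces. It is a minimal illustration under assumed inputs (a `traces` table with one column per region and an arbitrary lag order of 2), not the authors' actual pipeline.

```python
# Minimal sketch (not the authors' pipeline) of region-level functional and
# effective connectivity from region-averaged dF/F traces sampled at the imaging rate.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

def functional_connectivity(traces: pd.DataFrame) -> pd.DataFrame:
    """All-against-all Pearson correlation between region-averaged activity traces."""
    return traces.corr(method="pearson")

def granger_f(traces: pd.DataFrame, source: str, target: str, maxlag: int = 2):
    """F statistic and p-value for 'source Granger-causes target' at the chosen lag."""
    # grangercausalitytests expects a 2-column array: [effect, putative cause]
    data = traces[[target, source]].to_numpy()
    res = grangercausalitytests(data, maxlag=maxlag, verbose=False)
    f_stat, p_val, _, _ = res[maxlag][0]["ssr_ftest"]
    return f_stat, p_val

def partial_correlation(traces: pd.DataFrame) -> pd.DataFrame:
    """Partial correlation between each pair of regions, conditioning on all the others."""
    prec = np.linalg.pinv(traces.corr().to_numpy())        # precision (inverse correlation) matrix
    d = np.sqrt(np.diag(prec))
    pcorr = -prec / np.outer(d, d)
    np.fill_diagonal(pcorr, 1.0)
    return pd.DataFrame(pcorr, index=traces.columns, columns=traces.columns)

# Hypothetical usage with assumed region labels (columns = regions, rows = time points):
# traces = pd.DataFrame(dff_region_means, columns=["LHb", "RHb", "IPN", "Th", "PT"])
# fc   = functional_connectivity(traces)         # e.g. fc.loc["LHb", "IPN"]
# f, p = granger_f(traces, source="LHb", target="IPN")
# pc   = partial_correlation(traces)             # directness of the LHb-IPN link
```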
Discussion

Dissecting brain functional and effective connectivity requires advanced technology for "reading" and "writing" neuronal activity. Here, we have presented the application of an all-optical 2P system intended for simultaneous imaging and optogenetic control of whole-brain neuronal activity in zebrafish larvae. Our method employs light-sheet microscopy to perform functional imaging, ensuring comprehensive mapping of the entire brain at a significantly improved temporal resolution compared to conventional 2P point-scanning imaging techniques. To elicit precise photoactivation within the larval brain, our light-targeting unit utilizes two pairs of AODs, enabling the displacement of the focal volume to arbitrary locations. Admittedly, the utilization of AODs for optogenetics has been restricted to 1P photostimulation in 2D – owing to the drop in transmission efficiency along the optical axis, which hinders a homogeneous 2P volumetric excitation. However, as we demonstrated in a previous work , by properly tuning the trains of chirped radio frequency (RF) signals that drive AODs, it is feasible to enhance the uniformity of energy delivery when shifting the focus of the excitation beam. This enhancement has allowed us to proficiently execute optogenetic stimulation of specific targets over a volumetric range of 100 × 100 × 100 μm 3 .
Notably, an intriguing aspect of our approach is that, owing to the use of remote focusing of the detection objective and of AODs for stimulation-light defocusing, the localization of the photostimulation volume remains entirely independent of the sequential acquisition of different brain planes, thus affording greater flexibility in our experimental investigations. As previously mentioned, our setup exploits 2P excitation both for imaging and for optogenetic stimulation. On the imaging side, the use of NIR light to produce the sheet of light leads to a significant reduction of the common striping artifacts that could otherwise severely hinder the interpretation of functional data. Nevertheless, due to the nonlinear nature of its excitation and the need to elongate the axial point spread function (PSF) of the illumination beam to produce the light sheet (thus reducing photon density), 2P LSFM is also typically prone to a low signal-to-noise ratio. As a result, despite a voxel size (2.2 × 2.2 × 5 μm³) that is only 30–35% of the average diameter of a neuronal nucleus (6–7 μm), we did not achieve consistent detection of single neurons throughout the entire brain. On the photostimulation side, the use of a nonlinear interaction between light and matter enables precise optical confinement of the stimulation volume, without resorting to the narrower genetic control of opsin expression that is typically required when using 1P excitation. In addition to these aspects, the exclusive use of NIR light as an excitation source, in contrast to visible lasers, dramatically diminishes unwanted and uncontrolled visual stimulation, since these wavelengths are scarcely perceived by most vertebrate species, including zebrafish. Nevertheless, we observed that 2P light-sheet imaging can elicit a power-dependent increase in the neuronal activity of zebrafish larvae. Despite not significantly affecting zebrafish behavior, this effect, which may be attributed to non-visual sensory perception of the excitation light, underscores once more the importance of keeping the overall energy applied to the sample low. To the best of our knowledge, this is the first time that a fully 2P all-optical setup employs light-sheet microscopy for rapid whole-brain imaging and AODs for 3D optogenetic stimulation. With the aim of establishing an all-optical paradigm for investigating the functional and effective connectivity of the larval zebrafish brain, we considered different sensor/actuator pairs and eventually opted for the GCaMP6s/ReaChR couple. The green calcium reporter GCaMP6s represents a reliable indicator that has undergone extensive evaluation. On the other hand, the actuator ReaChR, in comparison with other red-shifted opsins, has a slow channel-closing mechanism which is particularly suitable for both sequential photostimulation approaches and 2P excitation. A crucial aspect in all-optical studies lies in the separation between the excitation spectra of the proteins used for stimulating and for revealing neuronal activity. Previous research has demonstrated that the slow channel closing of ReaChR makes this opsin more susceptible to crosstalk activation when scanning the 920 nm imaging laser at power levels exceeding 60 mW. However, in our work, we did not observe a significant increase in cross-activation even at power levels as high as 100 mW. This divergence can be attributed to the distinctive excitation features of 2P light-sheet imaging compared to 2P point-scanning imaging.
In digitally scanned 2P LSFM, the use of low numerical aperture excitation objectives (to obtain a stretched axial illumination PSF, continuously scanned to produce the sheet of light) results in lower intensities (and thus lower photon density) in comparison to point-scanning methods, for equal laser powers. It is worth noting that, despite the negligible crosstalk, 2P light-sheet imaging may still lead to subthreshold activation of ReaChR+ neurons (at 920 nm the opsin retains approximately 25% of the peak action cross-section), potentially resulting in altered network excitability. Previous studies have employed 1030 nm pulsed lasers to stimulate ReaChR. The results of our work demonstrate the feasibility of photostimulating ReaChR at 1064 nm, a wavelength red-shifted by almost 100 nm compared to the ReaChR 2P absorption peak (975 nm). Furthermore, the use of the 1064 nm wavelength for photostimulation, which is red-shifted with respect to the tail of the 2P excitation cross-section of GCaMP6s, accounts for the absence of fluorescence artifacts potentially caused by calcium indicator excitation at the wavelength employed for optogenetic intervention. The characterization of the kinetic features of the calcium transients elicited by optogenetic stimulation, which served as a benchmark for identifying the optimal excitation configuration, highlighted two interesting aspects. First, we observed a linear dependence of calcium peak amplitude on the stimulation power applied. This behavior suggests that increasing power produces a proportional increment in the firing rate of ReaChR+ neurons. Second, we observed a decrease in the calcium transient rise time in response to longer stimulation durations. This result may be attributed to the fact that ReaChR has a channel off rate (τ-off) of 140 ms, enabling it to integrate photons beyond the duration of a single volume iteration (125 ms). Supporting this hypothesis is the fact that, after two iterations over the stimulation volume (250 ms and beyond), the rise time remains constant. As the system allows accurate identification of groups of neurons functionally connected with the stimulated ones, we exploited the setup to explore the efferent connectivity of the left habenula. The habenulae are bilateral nuclei located in the diencephalon that are highly conserved among vertebrates and connect brain regions involved in diverse emotional states such as fear and aversion, as well as in learning and memory. As in mammals, the habenulae in zebrafish are highly connected hubs receiving afferents from the entopeduncular nucleus, hypothalamus, and median raphe, in addition to left-right asymmetric inputs. The habenula can be divided into dorsal (dHb) and ventral (vHb) portions (equivalent to the mammalian medial and lateral habenula, respectively), each exhibiting exclusive efferent connections. Specifically, the dHb projects to the IPN while the vHb projects to the median raphe. As a consequence, optogenetic stimulation of the entire LHb should lead, in principle, to the activation of both the IPN and the raphe. However, in our experiments, we observed a high probability of activation, a strong correlation and a causal link only within the IPN population of neurons. This apparent discrepancy can be explained by the fact that, at the larval stage, the vHb represents only a small fraction of the overall habenular volume.
As a result, the limited number of vHb neurons would possess a reduced number of connections with the median raphe, resulting in weak downstream communication. Furthermore, as described by Amo and colleagues, although vHb neurons terminate in the median raphe, no direct contact with serotonergic neurons is observed, suggesting the presence of interneurons that may bridge the link, similar to what is observed in mammals. This inhibitory connection is consistent with the absence of activation of the raphe, which we observed upon left habenular stimulation. Notably, we did not observe any activation in regions downstream of the IPN either. Although adult zebrafish exhibit IPN habenular-recipient neurons projecting to the dorsal tegmental area or griseum centrale, our results corroborate the structural observations of Ma and colleagues from a functional standpoint. Indeed, using anterograde viral labeling of postsynaptic targets, Ma et al. highlighted that in larval zebrafish the habenular-recipient neurons of the IPN do not emanate any efferent axon. The LHb and the IPN show high interindividual variability in terms of average activation probability but lower variability in terms of correlation. This is because larvae may exhibit slightly different opsin expression levels, which result in greater variance in the amplitude of the evoked calcium transients and, consequently, in the activation probability (i.e., the probability of exceeding an arbitrary amplitude threshold). Conversely, the strength of functional connections (i.e., the degree of correlation) appears not to depend on the amplitude of the evoked neuronal activity. This aspect is also confirmed by the high cross-wavelet power spectral density in a narrow bandwidth centered on the frequency of the triggered optogenetic stimulus, which we observed in the average activity time traces extracted from the LHb and IPN. Functional connectivity refers to the statistical correlations that signify synchronous activity between brain regions, without necessarily implying a direct causal interaction. Effective connectivity, on the other hand, takes a step further by seeking to understand the causal influence, and the direction of the interaction, that one neural population has over another. To delve into the realm of effective connectivity we applied Granger causality analysis. GC results confirmed the presence of a causal link between the LHb and the IPN, with the activity in the latter predicted with high consistency only by the activity triggered in the former. Notably, the magnitude of the causal link strength (F statistic) for the triggered LHb-IPN pair is very similar to that of naturally occurring pairs, underscoring the efficacy of our methodology in probing brain connectivity. In addition, partial correlation analysis revealed that the link we observed between the LHb and IPN is a direct one, with the interaction between the two not mediated by any other region, a result consistent with the presence of an anatomical connection between the LHb and IPN via the fasciculus retroflexus. Notably, GC analysis also revealed weaker connections between the LHb-RHb and T-RHb pairs. Regarding the former, results from partial correlation analysis highlighted that the link between the two habenulae is most probably indirect. Indeed, no direct connections between the left and right habenulae are known to date, and a crossed feedback circuit passing through the monoaminergic system has been hypothesized.
Concerning the T-RHb connection, it is known that in zebrafish a small subset of bilateral pallial neurons sends asymmetric innervations which, passing through the stria medullaris and the habenular commissure, selectively terminate in the RHb. Despite this direct anatomical connection, partial correlation analysis produced an ambiguous result regarding the directness of this pair of regions. This outcome is probably due to the limited number of telencephalic cells contacting the RHb, whose activity could have been overshadowed by averaging the activity over the entire telencephalon. In conclusion, we employed optogenetic stimulation to map the whole-brain functional connectivity of the left habenula efferent pathway in zebrafish larvae. This application has showcased the remarkable capabilities of our 2P setup for conducting crosstalk-free all-optical investigations. The use of AODs for precisely addressing the photostimulation is currently a very active topic in systems neuroscience, as evidenced by recent conference contributions. Owing to their discontinuous scanning and constant access time, these devices indeed enable a random-access modality. This feature gives AODs the native capability to perform rapid sequential excitation over multiple sparsely distributed cellular targets, a feature recently sought after also by SLM adopters. Indeed, the rapid sequential stimulation enabled by AODs represents an invaluable tool for studies aiming to replicate physiological neuronal activation patterns. Future efforts will be devoted to further expanding the volume addressable with AOD scanning while concurrently improving the uniformity of energy delivery. Furthermore, leveraging transgenic strains that express the actuator under more selective promoters (such as vglut2 for glutamatergic and gad1b for GABAergic neurons) will undoubtedly help produce accurate inferences on network structure, thus boosting the quest towards a comprehensive picture of zebrafish brain functional connectivity. On the imaging side, technical improvements will be made to increase image contrast while maintaining low laser power on the sample. This advancement will enable the use of automated segmentation algorithms for single-neuron detection. Cell-wise analyses will then make it possible to refine the reconstruction of neuronal effective connectivity, capturing the nuanced differences between individual cells. Together, nonlinear light-sheet microscopy and 3D optogenetics with AODs, along with the employment of larval zebrafish, offer a promising avenue for bridging the gap between microscale resolution and macroscale investigations, enabling the mapping of whole-brain functional/effective connectivity at previously unattainable spatio-temporal scales.

Optical setup
All-optical control and readout of zebrafish neuronal activity is achieved through a custom system that combines a 2P dual-sided illumination LSFM for whole-brain calcium imaging and an AOD-based 2P light-targeting system for 3D optogenetic stimulation (Supplementary Fig. and Fig. ). The two systems have been slightly modified with respect to the previously published versions in order to optically couple them. Briefly, the 2P light-sheet imaging path is equipped with a pulsed Ti:Sa laser (Chameleon Ultra II, Coherent), tuned at 920 nm.
After a group delay dispersion precompensation step, the near-infrared beam is adjusted in power and routed to an electro-optical modulator (EOM) employed to switch the light polarization orientation between two orthogonal states at a frequency of 100 kHz. A half-wave plate and a quarter-wave plate are used to control the light polarization plane and to pre-compensate for polarization distortions. The beam is then routed to a hybrid pair of galvanometric mirrors (GMs). One is a fast resonant mirror (CRS-8 kHz, Cambridge Technology) used to digitally generate the virtual light-sheet by scanning the beam (at 8 kHz) across the larva along the rostro-caudal direction. The second GM is a closed-loop mirror (6215H, Cambridge Technology) used to displace the light-sheet along the dorso-ventral direction. The scanned beam is relayed by a scan lens and a tube lens into a polarizing beam splitter, which diverts the light alternately into either of the two excitation arms, according to the instantaneous polarization state imposed by the EOM. In order to maximize fluorescence collection, a half-wave plate placed after the beam splitter in one of the two arms rotates the light polarization plane so that light coming from both excitation paths is polarized parallel to the table surface. Through a twin relay system, the beams are ultimately routed into the excitation objectives (XLFLUOR4X/340/0,28, Olympus). The excitation light is focused inside a custom fish-water-filled imaging chamber, heated to 28.5 °C. The fine positioning of the sample under the detection objective is performed with three motorized stages. The fluorescence emitted by the sample is collected with a water-immersion objective (XLUMPLFLN20XW, Olympus, NA = 1). Finally, a relay system brings the collected signal to an electrically tunable lens (ETL; EL-16-40-TC-VIS-5D-C, Optotune), which performs remote axial scanning of the detection objective focal plane in sync with the light-sheet closed-loop displacement. The collected signal is filtered (FF01-510/84-25, Semrock) to select the green emission. The filtered light reaches an air objective (UPLFLN10X2, Olympus, NA = 0.3), which demagnifies the image onto a subarray (512 × 512 pixels) of an sCMOS camera (ORCA-Flash4.0 V3, Hamamatsu) operating at a 16-bit integer gray-level depth. The final magnification of the imaging system is 3×, with a resulting pixel size of 2.2 μm. Below the transparent PMMA bottom of the imaging chamber, a high-speed CMOS camera (Blackfly S USB3, FLIR) equipped with a varifocal objective lens (employed at 50 mm; YV3.3x15SA-2, Fujinon) is positioned to perform behavioral imaging (tail deflections) during light-sheet imaging. Illumination for behavioral imaging is provided by an 850 nm LED (M850L3, Thorlabs) positioned at an angle above the imaging chamber. A bandpass filter (FF01-835/70-25, Semrock) is placed in front of the objective lens to block high-intensity light from the 920 nm light-sheet (see Supplementary Fig. ). Recordings are performed using a 300 × 300 pixel subarray of the camera chip, covering the entire larval body. This configuration provides sufficient magnification (pixel size: 15.4 μm) and contrast for live tail tracking. The 3D light-targeting system employs a 1064 nm pulsed laser (FP-1060-5-fs Fianium FemtoPower, NKT Photonics, Birkerød, Denmark) as an excitation source. The output power (max.
5 W) is attenuated and conveyed to a half-wave plate, which is employed to adjust the polarization of the beam, before the first AOD stage (DTSXY-400 AA Opto Electronic, Orsay, France) is reached. The output beam is then coupled with the second AOD stage through two 1:1 relay systems. From the exit of the second stage, by means of a 1:1 relay system, the beam is routed to a pair of galvanometric mirrors (GVS112, Thorlabs). The scanned beam is then optically coupled with a scan lens (AC254-100-B, Thorlabs) and a tube lens (F = 300 mm, in turn formed by two achromatic doublets, AC254-150-C-MLE, F = 150 mm, Thorlabs, a custom combination chosen to avoid aberrations).

Optical characterization of the system
The detailed optical characterization of the 2P light-sheet system was described in a previous work of our group. Summarizing, each of the light sheets coming from the two excitation arms has a transversal full width at half maximum (FWHM) at waist of 6 µm and a longitudinal FWHM of 327 µm. The lateral FWHM of the detection PSF is 5.2 µm. Herein, we describe the optical performance of the AOD-based light-targeting system used for optogenetic stimulation. When using AODs to move the beam away from its native focus, the illumination axial displacement, or defocus, has a linear relation with the chirp parameter α, i.e., the rate of frequency change of the driving radio waves. We thus measured the axial displacement of the focused beam as a function of α by illuminating a fluorescent solution (Sulforhodamine 101; S7635, Sigma-Aldrich) and localizing the maximum fluorescence peak in the volume as a function of α, which ranged from −1 MHz/µs to 1 MHz/µs (step size 0.1 MHz/µs). For each chirp configuration, the ETL in the detection path was used to acquire a 200-µm-deep stack (step size: 1 µm) centered at the nominal focal plane of the illumination objective. Supplementary Fig. shows the axial position of the fluorescence intensity peak as a function of the chirp addressed, following the expected linear trend. We evaluated the conversion coefficient from the slope of the linear fit, which was 50.44 ± 3.45 µm/(MHz/µs) (mean ± sd). We also measured the amount of energy released on the sample as a function of the chirp parameter or, equivalently, as a function of the time spent illuminating axially displaced targets. Indeed, the beam spends slightly different periods illuminating spots located in different z-planes, as the effective frequency ramping time is inversely proportional to the chirp parameter α imposed on the RF signals driving the AODs. As explained in detail in a previous work, we partially compensated for this non-uniformity in the distribution of power deposited along the axial direction by repeatedly triggering equal frequency ramps within the desired dwell time (here, 20 µs per point), using what we called multi-trigger modality. With respect to the conventional single-trigger modality, we effectively multiplied the minimum energy deposited on different focal planes, while keeping a stable dwell time. Supplementary Fig. shows in black the light transmission distribution collected as a function of the chirp parameter in the usual single-trigger modality, and in blue the distribution obtained with our multi-trigger approach.
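The linear chirp-to-defocus relation described above makes the conversion between a desired axial displacement and the chirp parameter a one-line calculation. The sketch below is illustrative (the helper function and plane spacing are ours) and uses the measured conversion coefficient of 50.44 µm per MHz/µs.

```python
# Minimal sketch of the chirp-to-defocus conversion described above.
# The coefficient is the measured slope (50.44 um per MHz/us); the helper
# function and the example planes are illustrative, not the authors' code.
UM_PER_MHZ_PER_US = 50.44

def chirp_for_defocus(dz_um: float) -> float:
    """Return the AOD chirp parameter alpha (MHz/us) needed to displace the
    focus axially by dz_um micrometers."""
    return dz_um / UM_PER_MHZ_PER_US

# Example: the 11 z-planes of a 50-um-deep stimulation volume (5-um steps).
planes_um = range(-25, 30, 5)
chirps = [round(chirp_for_defocus(dz), 3) for dz in planes_um]
print(chirps)  # roughly -0.496 ... 0.496 MHz/us
```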
We then measured the point spread function (PSF) of the light-targeting system using subdiffraction-sized fluorescent beads (TetraSpeck microspheres, radius 50 nm; T7279, Invitrogen) embedded in agarose gel (1.5%, w/v) at a final concentration of 0.0025% (vol/vol). The measurements were performed over a field of view of 100 × 100 μm², with raster scans of 500 × 500 points. The objective was moved axially over a 200 μm range (z step: 1 μm) and the emitted signal was conveyed to and collected on an auxiliary photomultiplier tube positioned downstream of the fluorescence-collecting objective. The radial and axial intensity profiles of 25 beads were computed using the open-source software ImageJ and fitted with Gaussian functions in Origin Pro 2021 (OriginLab Corp.) to estimate the FWHM. Supplementary Fig. shows, as an example, the raw fluorescence distributions of 5 beads and the Gaussian fit corresponding to the average FWHM, plotted in red and black for the radial and axial PSF, respectively. We found them to be FWHMr = 0.81 ± 0.06 µm and FWHMa = 3.79 ± 0.66 µm (mean ± sd). This measurement was performed by driving the AODs with stationary RF signals. To evaluate possible spatial distortions of the illumination arising away from the nominal focal plane of the objective, we repeated the same PSF measurement for different chirps or, in other words, for different AOD-controlled axial displacements (80 µm range, step size of 20 µm). The average FWHM obtained for the bead intensity distribution is shown in Supplementary Fig. . The radial PSF of the system remains approximately constant as a function of the chirp parameter. A small change is due to the chromatic dispersion affecting the laser beam interacting with the crystal. The deflection angle induced by the AODs on the incident beam is frequency and wavelength dependent. This means that a broadband laser is readily dispersed spatially by the crystal and that the frequency variations can slightly affect this distortion. Moreover, the axial PSF tends to become slightly oblong with increasing axial displacement. This effect is attributable to the temporal dispersion affecting a short-pulsed laser beam interacting with the crystal. This temporal broadening reduces the axial 2P excitation efficiency, generating a larger axial PSF. The effect is more evident when a chirp is applied to the RF signals driving the AODs, since under these conditions the beam reaches the objective back pupil in a non-collimated state. Future efforts will be devoted to the compensation of chromatic aberration and temporal dispersion, for example by employing a highly dispersive prism upstream of the AODs.

Zebrafish lines and maintenance
The double Tg(elavl3:H2B-GCaMP6s; elavl3:ReaChR-TagRFP) zebrafish line was obtained by outcrossing the Tg(elavl3:H2B-GCaMP6s) and the Tg(elavl3:ReaChR-TagRFP) lines on the slc45a2 b4/- heterozygous albino background, which we previously generated. The double transgenic line expresses the fluorescent calcium reporter GCaMP6s (nucleus) and the red-shifted light-activatable cation channel ReaChR (plasma membrane) in all differentiated neurons. ReaChR is expressed as a fusion peptide with the red fluorescent protein TagRFP to ensure its localization. Zebrafish strains were reared according to standard procedures and fed twice a day with dry food and brine shrimp nauplii (Artemia salina), both for nutritional and environmental enrichment.
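The bead-based PSF estimate reduces to fitting a Gaussian to each intensity profile and converting the fitted width to a FWHM. The snippet below is a hedged sketch of that step (the authors used Origin; here SciPy is used instead, and the profile is synthetic stand-in data).

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative sketch of the PSF estimation step: fit a 1D Gaussian to a bead
# intensity profile and convert sigma to FWHM. The profile is synthetic; in
# practice it would be the radial or axial profile extracted from the bead stack.
def gaussian(x, amp, mu, sigma, offset):
    return amp * np.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) + offset

x = np.arange(0, 20, 0.2)                       # position in micrometers
profile = gaussian(x, 1.0, 10.0, 1.6, 0.05)     # stand-in for measured data
profile += np.random.normal(0, 0.01, x.size)    # a little measurement noise

popt, _ = curve_fit(gaussian, x, profile, p0=(1.0, 10.0, 2.0, 0.0))
fwhm = 2 * np.sqrt(2 * np.log(2)) * abs(popt[2])  # FWHM = 2*sqrt(2 ln 2)*sigma
print(f"FWHM ~ {fwhm:.2f} um")
```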
For the experiments, we employed N = 20, 5 dpf Tg(elavl3:H2B-GCaMP6s; elavl3:ReaChR-TagRFP) and N = 13, 5 dpf Tg(elavl3:H2B-GCaMP6s) larvae, both of which were on the slc45a2 b4/b4 homozygous albino background. Zebrafish larvae used in the experiments were maintained at 28.5 °C in fish water (150 mg/L Instant Ocean, 6.9 mg/L NaH2PO4, 12.5 mg/L Na2HPO4, 1 mg/L methylene blue; conductivity 300 μS/cm, pH 7.2) under a 14/10 light/dark cycle, according to standard protocols. Experiments involving zebrafish larvae were carried out in compliance with European and Italian laws on animal experimentation (Directive 2010/63/EU and D.L. 4 March 2014, n.26, respectively), under authorization n.606/2020-PR from the Italian Ministry of Health.

Zebrafish larvae preparation
To select calcium reporter/opsin-expressing larvae for use in the experiments, 3 dpf embryos were subjected to fluorescence screening. The embryos were first lightly anesthetized with a bath in tricaine (160 mg/L in fish water; A5040, Sigma-Aldrich) to reduce movement. Using a stereomicroscope (Stemi 508, Carl Zeiss) equipped with LEDs for fluorescence excitation (for GCaMP6s: blue LED, M470L3; for TagRFP: green LED, M565L3, both from Thorlabs) and fluorescence filters to block the excitation light (for GCaMP6s: FF01-510/84-25; for TagRFP: FF01-593/LP-25, both from Semrock), embryos were selected according to the presence of bright green/red fluorescent signals in the central nervous system. Screened embryos were transferred to a Petri dish containing fresh fish water and kept in an incubator at 28.5 °C until 5 dpf. Zebrafish larvae were mounted as previously described. Briefly, each larva was transferred into a reaction tube containing 1.5% (w/v) low-gelling-temperature agarose (A9414, Sigma-Aldrich) in fish water, kept fluid on a heater set at 38 °C. Using a plastic pipette, larvae were then placed on a microscope slide inside a drop of melted agarose. Before gel polymerization, their position was adjusted with a pair of fine pipette tips so that the dorsal portion faced upwards. To avoid movement artifacts during the measurements, larvae were paralyzed by a 10-min treatment with 2 mM d-tubocurarine (93750, Sigma-Aldrich), a neuromuscular blocker. For tail-free preparations, upon gel polymerization, the agarose caudal to the swimming bladder was removed using a scalpel. In this case, no paralyzing agent was applied. Mounted larvae were then placed inside the imaging chamber filled with fish water and thermostated at 28.5 °C for the entire duration of the experiment.

Structural imaging to evaluate expression patterns in double transgenic zebrafish larvae
Confocal imaging of a 5 dpf Tg(elavl3:H2B-GCaMP6s; elavl3:ReaChR-TagRFP) larva on an albino background was performed to evaluate the spatial expression of the two proteins. The larva was mounted in agarose as described above and deeply anesthetized with tricaine (300 mg/L in fish water). We employed a commercial confocal microscope (Ti2, Nikon) equipped with two continuous-wavelength lasers emitting at 488 and 561 nm for GCaMP6s and TagRFP excitation, respectively. Imaging was performed using a 10× objective, allowing the entire head of the animal to fit into the field of view. Using a piezo-electric motor (PIFOC, Physik Instrumente - PI), the objective was moved to 182 consecutive positions (z step: 2 μm) to acquire the volume of the larval head.

Simultaneous whole-brain and behavioral imaging
Head-restrained larvae, capable of performing wide tail deflections, were imaged from below the 2P LSFM imaging chamber using a dedicated high-speed camera (see Optical setup for details). Images were streamed at 300 Hz via a USB3 connection to a workstation running a custom tool for live tail-movement tracking, developed using the open-source Python Stytra package. The larval tail was divided into 9 segments, and the sum of their relative angles was employed to quantify tail deflection. Tail movements of both ReaChR+ and ReaChR− larvae were tracked for 200 s. During the first half, 2P LSFM imaging was off (imaging OFF). During the second half, larvae were subjected to whole-brain light-sheet imaging (imaging ON) with the same parameters described in the previous section. Each larva underwent 3 consecutive 200-s simultaneous whole-brain/behavioral recordings (inter-measurement interval less than 1 min).

Simultaneous whole-brain imaging and optogenetic stimulation
Whole-brain calcium imaging was performed at 2.5 Hz (a volumetric rate more than sufficient considering the typical exponential decay time constant of the nuclear-localized version of the GCaMP6s sensor, τ: 3.5 s) with 41 stacked z-planes spanning a depth of 200 μm. An interslice spacing of 5 μm was chosen because it coincides with the half width at half maximum of the detection axial PSF. Before each measurement, the scanning amplitude of the resonant galvo mirror was tuned to produce a virtual light-sheet with a length matching the size of the larval brain in the rostro-caudal direction. The laser wavelength was set to 920 nm to optimally excite GCaMP6s fluorescence. Unless otherwise stated, the power of the 920 nm laser at the sample was set to 60 mW. Optogenetic stimulation was performed at 1064 nm with a laser power at the sample of 30 mW (unless otherwise specified). Before each experimental session, the 1064 nm stimulation laser was finely aligned to the center of the camera field of view. Then, by means of the galvo mirrors present in the stimulation path, the offset position of the stimulation beam was coarsely displaced in the x-y direction toward the center of the area to be stimulated. During the optogenetics experiments the stimulation volume was covered by discontinuously scanning the beam focus via the two pairs of AODs. A typical volume of 50 × 50 × 50 μm³ was covered with 6250 points (point x-y density: 1 point/0.25 μm²; z step: 5 μm) with a point dwell time of 20 μs (overall time: 125 ms). The medial plane of the stimulation volume (chirp = 0 MHz/μs, null defocus) was adjusted to overlap with the medial plane of the LHb. Unless otherwise stated, each stimulus consisted of four complete cycles of the entire volume, lasting 500 ms. Each stimulation trial consisted of 100 s of whole-brain calcium imaging, during which 5 optogenetic stimuli (interstimulus interval: 16 s, chosen on the basis of the characterization experiments so as to trigger activation events only after the end of the previous calcium transient) were applied at the same volumetric site. Six trials were performed on each larva, with an intertrial interval ranging from 1 to 3 min. Overall, each larva was imaged for 10 min, during which it received 30 stimuli.

Data analysis
Preprocessing
Whole-brain calcium imaging data were processed as follows.
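As a quick sanity check, the stimulation timing quoted above follows directly from the scan parameters. The arithmetic sketch below uses only values stated in the text; the variable names and the onset of the first stimulus are ours and purely illustrative.

```python
# Back-of-the-envelope check of the stimulation timing described above.
points_per_volume = 6250
dwell_time_s = 20e-6                 # 20 us per point
cycles_per_stimulus = 4

volume_scan_s = points_per_volume * dwell_time_s   # 0.125 s per volume cycle
stimulus_s = cycles_per_stimulus * volume_scan_s   # 0.5 s per stimulus

interstimulus_s = 16
first_onset_s = 10                   # hypothetical offset of the first stimulus
onsets = [first_onset_s + i * interstimulus_s for i in range(5)]
print(volume_scan_s, stimulus_s, onsets)           # 0.125  0.5  [10, 26, 42, 58, 74]
```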
Images composing the hyperstacks were first 2 × 2 binned (method: average) in the x and y dimensions to obtain a quasi-isotropic voxel size (4.4 × 4.4 × 5 μm³). Then, employing a custom tool written in Python 3, we computed the voxel-wise ΔF/F₀ of each volumetric recording after background subtraction. F₀ was calculated using FastChrom's baseline estimation method.

Quantification of imaging crosstalk and optogenetic activation extent/specificity
To quantify crosstalk during imaging, we first considered different metrics to evaluate neuronal activity levels (Supplementary Fig. ). We computed the standard deviation (SD) over time, the number of calcium peaks per minute, and the average peak amplitude of each voxel composing the larval brain during 5 min of whole-brain calcium imaging (Supplementary Fig. ). For automatic calcium peak identification, we set the following thresholds: minimum peak prominence 0.05; minimum peak FWHM 2.5 s; minimum peak distance 5 s. We found the SD to have better sensitivity in discriminating between the different conditions than the number of peaks per minute (Supplementary Fig. ). These results mirrored those obtained by adopting the average amplitude of calcium peaks (Supplementary Fig. ) as an activity metric. We thus employed the SD over time as a proxy for neuronal activity levels, since its results do not depend on predefined thresholds. Therefore, the distribution of SD values calculated for each brain was first normalized with respect to the total number of voxels and then pooled (method: average) according to the larval strain (ReaChR+ and ReaChR−). Similarly, the normalized distributions of SD values for ReaChR+ and ReaChR− larvae subjected to 100 s of whole-brain imaging during which they received 5 photostimulations (1064 nm) were calculated to evaluate the effect of the optogenetic stimulation. Imaging crosstalk and optogenetic stimulation indices were calculated using the Hellinger distance as a measure of dissimilarity between two probability distributions P and Q:

$$H(P,Q)=\sqrt{1-\sum_{i=1}^{n}\sqrt{P_{i}Q_{i}}}$$

The errors in the Hellinger distances were calculated according to error propagation theory as follows:

$$\Delta H=\sqrt{\sum_{i=1}^{n}\frac{Q_{i}^{2}}{4H^{2}}\,\Delta P_{i}^{2}+\frac{P_{i}^{2}}{4H^{2}}\,\Delta Q_{i}^{2}}$$

Finally, the normalized distributions of SD values for ReaChR− larvae exposed either to imaging only (100 s) or to imaging and photostimulation (100 s and 5 stimuli at 1064 nm) were calculated to evaluate the specificity of the effect observed.

Quantification of tail movements during whole-brain light-sheet imaging
Tail deflection time traces (i.e., the sum of the relative tail-segment angles) were processed to detect and count the number of tail beats. In detail, deflection peaks were counted as tail beats if they exceeded an absolute threshold of 20°.
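The two expressions above translate directly into code. The helper functions below are an illustrative transcription of these formulas (they are ours, not the authors' analysis code), applied to stand-in normalized histograms.

```python
import numpy as np

# Hellinger distance between two probability distributions P and Q, and its
# propagated error, as defined in the equations above (illustrative helpers).
def hellinger(p, q):
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return np.sqrt(1.0 - np.sum(np.sqrt(p * q)))

def hellinger_error(p, q, dp, dq):
    p, q, dp, dq = (np.asarray(v, dtype=float) for v in (p, q, dp, dq))
    h = hellinger(p, q)
    return np.sqrt(np.sum((q**2 / (4 * h**2)) * dp**2 + (p**2 / (4 * h**2)) * dq**2))

# Example with two normalized SD-value histograms (stand-in numbers).
P = np.array([0.2, 0.5, 0.3])
Q = np.array([0.1, 0.4, 0.5])
print(hellinger(P, Q))
```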
Consecutive tail deflections that did not come back to the resting position for at least 0.5 s were considered part of the same movement. The relative number of tail beats during imaging ON (Fig. ) was calculated for each trial of each larva by dividing the number of tail movements during the imaging ON period by that quantified during the imaging OFF period. To combine behavioral and brain activity recordings (Fig. ), the average fluorescence time trace of the hindbrain acquired at 2.5 Hz was first interpolated to match the frequency of the behavioral recordings (300 Hz). Then, ΔF/F₀ was calculated as previously described.

Characterization of stimulation-induced calcium transients
To characterize neuronal activation as a function of the stimulation parameters (scan time and laser power), we first extracted the voxel time series averaged over the entire stimulation site (i.e., the left habenula) from the 4D ΔF/F₀ hyperstacks. Time traces were windowed to isolate and align the three stimulation events contained in a single trial. Isolated calcium transients were analyzed using the peak analyzer function in Origin Pro 2021 (OriginLab Corp.) to obtain peak amplitude, rise/decay time (i.e., time from baseline to peak and time from peak to baseline, respectively) and duration values. Pooled peak duration data were obtained by first averaging the three events of the same larva (intra-individual) and then averaging data between larvae (inter-individual).

Activation probability and correlation maps
Using a custom Python tool, we calculated the probability of each voxel composing the brain being active in response to the optogenetic stimulation. For each stimulation event, a voxel was considered active if its change in fluorescence in a 2 s time window after the stimulation exceeded three standard deviations above its baseline level (2 s pre-stimulation). Only events in which the voxels inside the stimulation volume met the activation criterion were considered effective optogenetic stimulations. By iterating this process over all the stimulation events performed (on the same site of the same larva), we calculated the activation probability of each voxel as the number of times the voxel exceeded the threshold divided by the total number of valid stimulations. Employing a second Python tool, we then computed activity correlation maps showing the Pearson's correlation coefficient between each voxel and the activity extracted from the stimulation site (seed). The 3D maps of correlation and activation probability obtained were subsequently aligned. First, the acquired 4D hyperstacks were time averaged. Second, the resulting 3D stack of each larva was registered to a reference brain. Nonrigid image registration was performed using the open-source software Computational Morphometry Toolkit (CMTK 3.3.1, https://www.nitrc.org/projects/cmtk/ ) and the ImageJ user interface, employing the command string (-awr 01 -X 52 -C 8 -G 80 -R 3 -A "--accuracy 1.6" -W "--accuracy 0.4"). The calculated morphing transformations were ultimately applied to the corresponding 3D maps. Following the zebrafish brain atlases, the volumetric regions of interest (ROIs) used in the analysis were manually drawn onto the reference brain (employing ImageJ), based on anatomical boundaries. The 10 volumetric ROIs were then used to extract from each map the voxel-wise distributions of activation probability/correlation coefficient values used for further analyses. The binarized functional connectivity map shown in Fig.
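The activation criterion described above lends itself to a compact implementation. The sketch below is one possible reading of that criterion (post-stimulus peak compared against baseline mean plus three baseline SDs); the function names, array shapes and synthetic trace are ours and purely illustrative.

```python
import numpy as np

# Illustrative sketch of the voxel activation criterion: a voxel counts as
# active if its dF/F in the 2-s window after a stimulus exceeds the mean + 3*SD
# of the 2-s pre-stimulus window.
rate_hz = 2.5
win = int(2 * rate_hz)                     # 2-s window = 5 volumes

def voxel_active(trace, stim_idx, n_sd=3.0):
    pre = trace[stim_idx - win:stim_idx]
    post = trace[stim_idx:stim_idx + win]
    return post.max() > pre.mean() + n_sd * pre.std()

def activation_probability(trace, stim_indices):
    hits = [voxel_active(trace, s) for s in stim_indices]
    return float(np.mean(hits))

# Example on a synthetic voxel trace with stimuli every 16 s.
trace = np.random.rand(250) * 0.05
stims = [int((10 + 16 * i) * rate_hz) for i in range(5)]
print(activation_probability(trace, stims))
```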
was obtained after applying a threshold on the Pearson's correlation coefficient to the average correlation map shown in Fig. . The 0.12 value adopted represented the correlation coefficient threshold separating significant from non-significant correlations among brain regions (see Fig. ).

Cross-wavelet power spectrum analysis
The possible coupling between the delineated brain ROIs and the stimulation site was also characterized in the spectral domain by quantifying and inspecting their cross-wavelet power spectral density (CPSD). The wavelet transforms of the average activity signals extracted from each ROI were computed using the Morlet mother wavelet, adopting a central frequency f₀ = 1 Hz as the time-frequency resolution parameter, and 256 voices per octave for fine frequency discretization. Spurious time-boundary effects were addressed by first applying a zero-padding scheme to the original time series, and then isolating the so-called cone of influence, i.e., the time-frequency region where boundary distortions in the CPSD estimates are negligible.

Granger causality analysis
The causal link between the activity of different brain regions was explored by analyzing their Granger causality. GC analysis among the ΔF/F₀ time series of brain regions was performed in R, with the "lmtest" library. To select an appropriate lag order, we computed both the Akaike (AIC) and Bayesian (BIC) information criteria of the complete autoregressive model for each comparison (each trial and each possible pair of regions) for lag orders from 1 to 8 (0.4–3.2 s). Then, for each comparison we selected the lag order associated with the minimum value of the information criteria. Finally, we computed the mode of this list and used this unique lag order for every comparison in the final GC analysis. The mode values based on AIC and BIC coincided: a lag order of 2, which corresponds to a 0.8 s lag. For each larva, trial, pair of region activities and causality direction, we computed the average F statistic of the tests. Finally, multiplicity correction of the p-values was performed with a false discovery rate approach using the Benjamini–Hochberg method (GC analysis results are reported in Supplementary Data ). The F statistic was presented in Fig. as the average value over all pairs having at least two significant trials. The F statistic in the graph of Fig. was presented as arrows color-coded according to the average F value found for each connection between brain regions. The direction of each arrow indicates the direction of the causal interaction, while the arrow width represents the proportion of significant trials over the total. Only causal links having at least 33% of significant trials were depicted (see the thresholded matrix in Supplementary Fig. ).

Partial correlation analysis
In order to gain insight into the directness of the interactions between brain regions, we analyzed the partial correlation between pairs of region-wise mean ΔF/F₀ time series, aiming to capture their residual coupling after the influence of all other regions was accounted for. Pairwise partial correlation coefficients were obtained as described by Han and colleagues. In detail, the partial correlation between a pair of brain regions A and B (i.e., LHb-IPN, LHb-RHb, and T-RHb) was evaluated as the Pearson's correlation coefficient between the regressed time series ΔF/F₀_A,R and ΔF/F₀_B,R, suitably corrected for the contribution of the mean activity signals of all other regions.
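The pairwise GC test described above (performed by the authors in R with lmtest::grangertest) has a close Python equivalent in statsmodels, shown below as a hedged sketch: the traces are synthetic stand-ins, and lag order 2 mirrors the 0.8-s lag selected in the text.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

# Hedged Python stand-in for the R-based GC test described above; `lhb` and
# `ipn` are synthetic dF/F traces in which the IPN lags the LHb by 2 samples.
rng = np.random.default_rng(0)
lhb = rng.standard_normal(250)
ipn = 0.6 * np.roll(lhb, 2) + 0.4 * rng.standard_normal(250)

# Test "LHb Granger-causes IPN": the second column is the putative cause.
res = grangercausalitytests(np.column_stack([ipn, lhb]), maxlag=2)
f_stat, p_value, _, _ = res[2][0]["ssr_ftest"]
print(f"LHb -> IPN: F = {f_stat:.2f}, p = {p_value:.3g}")
```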
These time series were estimated by multiple regression on the original traces ΔF/F₀_A and ΔF/F₀_B, through the evaluation of the Moore-Penrose pseudoinverse of the remaining regions' time-series matrix C:

$$\beta_{A}=C^{+}\cdot\Delta F/F_{0,A}$$

$$\beta_{B}=C^{+}\cdot\Delta F/F_{0,B}$$

where C⁺ is the Moore-Penrose pseudoinverse matrix:

$$C^{+}=(C^{T}C)^{-1}C^{T}$$

here computed using the Python SciPy library. The regressed time series were then obtained as:

$$\Delta F/F_{0,A,R}=\Delta F/F_{0,A}-C\cdot\beta_{A}$$

$$\Delta F/F_{0,B,R}=\Delta F/F_{0,B}-C\cdot\beta_{B}$$

The directness of the mutual interaction between two brain regions was finally inferred from the presence of both a statistically significant Pearson's correlation and a statistically significant partial correlation coefficient. When only the Pearson's correlation is significant, the interaction is defined as indirect; when only the partial correlation is significant, we are observing what is defined as a pseudo-correlation. Results of the partial correlation analysis can be found in Supplementary Data .

Statistics and reproducibility
To guarantee reproducibility of the findings and avoid bias, the larvae employed in the experiments never belonged to a single batch of eggs. No a priori sample size calculation was performed. The sample size employed was justified by the high degree of consistency in the results obtained from different larvae. The expression patterns of GCaMP6s and ReaChR were evaluated in N = 1 ReaChR+ larva by confocal imaging.
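The regression-and-residual procedure above can be written compactly with a pseudoinverse. The following is an illustrative sketch of that computation with random stand-in data (the helper name and array shapes are ours, not the authors' code).

```python
import numpy as np
from scipy.stats import pearsonr

# Partial correlation between regions A and B after regressing out all other
# regions (matrix C of shape (n_timepoints, n_other_regions)), following the
# equations above.
def partial_corr(a, b, others):
    c_pinv = np.linalg.pinv(others)        # Moore-Penrose pseudoinverse C+
    a_res = a - others @ (c_pinv @ a)      # dF/F0_A,R
    b_res = b - others @ (c_pinv @ b)      # dF/F0_B,R
    return pearsonr(a_res, b_res)

rng = np.random.default_rng(1)
t = 250
others = rng.standard_normal((t, 8))       # stand-in traces of the other regions
a = 0.5 * others[:, 0] + rng.standard_normal(t)
b = 0.5 * others[:, 0] + rng.standard_normal(t)
r, p = partial_corr(a, b, others)
print(f"partial r = {r:.3f}, p = {p:.3g}")
```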
Crosstalk activation of ReaChR by 920 nm excitation light-sheet imaging was evaluated on N = 3 ReaChR+ and N = 3 ReaChR− larvae in the brain activity experiment, and on N = 4 ReaChR+ and N = 4 ReaChR− larvae in the combined brain/behavioral activity experiment. The effect of optogenetic stimulation was evaluated on N = 6 ReaChR+ and N = 6 ReaChR− larvae. Characterization of optogenetically induced calcium transients as a function of stimulation settings was performed on N = 4 ReaChR+ larvae (n = 3 calcium transients per larva). The activation probability, correlation, and causality were evaluated on N = 6 ReaChR+ larvae (n = 30 stimulations per larva). OriginPro 2021 (OriginLab Corp.) was used to carry out all the statistical analyses. Unless otherwise stated, results were considered statistically significant if their corresponding p-value was less than 0.05 (*P < 0.05; **P < 0.01; ***P < 0.0001). Both intergroup and intragroup statistical comparisons of imaging crosstalk (Fig. and Supplementary Fig. ) were performed using two-way ANOVA (factors: zebrafish strain, imaging power), followed by post-hoc comparisons with Tukey's method. Two-way ANOVA and Tukey's post-hoc comparisons were also employed to quantify the statistical significance of differences in tail beats between the imaging OFF and ON conditions (Fig. ; factors: zebrafish strain, imaging presence). For intergroup statistical evaluations of both activation probability (Fig. ) and Pearson's correlation coefficient (Fig. ), we first verified that the data were normally distributed using the Shapiro-Wilk test (see Supplementary Fig. for test results) and then performed one-way ANOVA (factor: brain region), followed by post-hoc comparisons employing Tukey's method. Statistical comparisons of the relative number of tail beats during 920 nm imaging (Fig. ) and of the median SD values used to evaluate the effect of optogenetic stimulation (Fig. and Supplementary Fig. ) were performed using unpaired t tests. Statistical comparisons of the average distributions of SD (Fig. ) and Pearson's correlation coefficient (Fig. ) values were performed with the two-sample Kolmogorov-Smirnov test (KS test), applying the Bonferroni correction (α = 0.05/3 = 0.01667 in both cases).

Reporting summary
Further information on research design is available in the Reporting Summary linked to this article.
The second GM is a closed-loop mirror (6215H, Cambridge Technology) used to displace the light-sheet along the dorso-ventral direction. The scanned beam is driven by a scan lens and a tube lens into a polarizing beam splitter, which diverts the light alternatively into either of the two excitation arms, according to the instantaneous polarization state imposed by the EOM. In order to maximize fluorescence collection, after the beam splitter in one of the two arms a half-wave plate is used to rotate the light polarization plane so that light coming from both the excitation paths is polarized parallel to the table surface . Through a twin relay system, the beams are ultimately routed into the excitation objectives (XLFLUOR4X/340/0,28, Olympus). The excitation light is focused inside a custom fish water-filled imaging chamber, heated to 28.5 °C. The fine positioning of the sample under the detection objective is performed with three motorized stages. The fluorescence emitted by the sample is collected with a water-immersion objective (XLUMPLFLN20XW, Olympus, NA = 1). Finally, a relay system brings the collected signal to an electrically tunable lens (ETL; EL-16-40-TC-VIS-5D-C, Optotune) which performs remote axial scanning of the detection objective focal plane in sync with the light-sheet closed-loop displacement. The signal collected is filtered (FF01-510/84-25, Semrock) to select green emission. The filtered light reaches an air objective (UPLFLN10X2, Olympus, NA = 0.3), which demagnifies the image onto a subarray (512 × 512 pixels) of an sCMOS camera (ORCA-Flash4.0 V3, Hamamatsu) working at 16-bit depth of integer gray levels. The final magnification of the imaging system is 3×, with a resulting pixel size of 2.2 μm. Below the transparent PMMA bottom of the imaging chamber, a high-speed CMOS camera (Blackfly S USB3, FLIR) equipped with a varifocal objective lens (employed at 50 mm; YV3.3x15SA-2, Fujinon) is positioned to perform behavioral imaging (tail deflections) during light-sheet imaging. Illumination for behavioral imaging is provided by an 850 nm LED (M850L3, Thorlabs) positioned at an angle above the imaging chamber. A bandpass filter (FF01-835/70-25, Semrock) is placed in front of the objective lens for blocking high-intensity light from the 920 nm light-sheet (see Supplementary Fig. ). Recordings are performed using a 300 × 300 pixels subarray of the camera chip, covering the entire larval body. This configuration allows to achieve sufficient magnification (pixel size: 15.4 μm) and contrast to enable live tail tracking. The 3D light-targeting system employs a 1064 nm pulsed laser (FP-1060-5-fs Fianium FemtoPower, NKT Photonics, Birkerød, Denmark) as an excitation source. The output power (max. 5 W) is attenuated and conveyed to a half-wave plate, which is employed to adjust the polarization of the beam, before the first AOD stage (DTSXY-400 AA Opto Electronic, Orsay, France) is reached. The output beam is then coupled with the second AOD stage through two 1:1 relay systems. From the exit of the second stage, by means of a 1:1 relay system, the beam is routed to a pair of galvanometric mirrors (GVS112, Thorlabs). The scanned beam is then optically coupled with a scan lens (AC254-100-B, Thorlabs) and a tube lens ( F = 300 mm, in turn formed by two achromatic doublets - AC254-150-C-MLE, F = 150 mm by Thorlabs, so customized to avoid aberrations). 
The excitation light is finally deflected by a dichroic mirror (DMSP926B, Thorlabs) toward the back pupil of the illumination objective, which is also employed by the imaging system for fluorescence detection. The detailed optical characterization of the 2P light-sheet system was described in a previous work of our group . Summarizing, each of the light sheets coming from the two excitation arms has a transversal full width at half maximum (FWHM) at waist of 6 µm and a longitudinal FWHM of 327 µm. The lateral FWHM of the detection PSF is 5.2 µm. Herein, we describe the optical performance of the AOD-based light-targeting system used for optogenetic stimulation. When using AODs to move the beam away from its native focus, the illumination axial displacement—or defocus—has a linear relation with the chirp parameter α, i.e., the rate of frequency change of the driving radio waves . We thus measured the axial displacement of the focused beam as a function of α by illuminating a fluorescent solution (Sulforhodamine 101; S7635, Sigma-Aldrich) and localizing the maximum fluorescent peak in the volume as a function of α, which ranged from −1 MHz/µs to 1 MHz/µs (step size 0.1 MHz/µs). For each chirp configuration, the ETL in detection path was used to obtain a 200-µm deep stack (step size: 1 µm) centered at the nominal focal plane of the illumination objective. Supplementary Fig. shows the axial position of the fluorescent intensity peak as a function of the chirp addressed, following an expected linear trend. We evaluated the conversion coefficient from the slope of the linear fit, which was 50.44 ± 3.45 µm/MHz/µs (mean ± sd). We also measured the amount of energy released on the sample as a function of the chirp parameter or, basically, as a function of the time spent illuminating axially displaced targets. Indeed, the beam would spend slightly different periods lighting spots displaced in different z -planes as the effective frequency ramping time is inversely proportional to the chirp parameter α imposed on the RF signals driving the AODs. As explained in detail in a previous work , we partially recovered this non-uniformity in the distribution of power deposited along the axial direction by repeatedly triggering equal frequency ramps within the desired dwell time (here, 20 µs each point), using what we called multi-trigger modality. With respect to the conventional single-trigger modality, we effectively multiplied the minimum energy deposited on different focal planes, while keeping a stable dwell time. Supplementary Fig. shows in black the usual light transmission distribution collected as a function of the chirp parameter (single-trigger modality) and in blue the distribution obtained with our multi-trigger approach. We then measured the point spread function (PSF) of the light-targeting system using subdiffraction-sized fluorescent beads (TetraSpeck microspheres, radius 50 nm; T7279, Invitrogen) embedded in agarose gel (1.5%, w/v) at a final concentration of 0.0025% (vol/vol). The measurements were performed on a field of view of 100 × 100 μm 2 , performing raster scans of 500 × 500 points. The objective was moved axially covering a 200 μm range ( z step: 1 μm) and the emitted signal was conveyed and collected on an auxiliary photomultiplier tube positioned downstream of the fluorescence-collecting objective. The radial and axial intensity profiles of 25 beads were computed using the open-source software ImageJ and fitted with Gaussian functions in Origin Pro 2021 (OriginLab Corp.) 
to estimate FWHM. Supplementary Fig. shows, as an example, the raw fluorescence distributions of 5 beads and the Gaussian fit corresponding to the average FWHM, plotted in red and black for the radial and axial PSF, respectively. We found them to be FWHMr = 0.81 ± 0.06 µm and FWHMa = 3.79 ± 0.66 µm (mean ± sd). This measurement was performed by driving the AODs with stationary RF signals. To evaluate the eventual illumination spatial distortions arising away from the nominal focal plane of the objective, we repeated the same PSF measurement for different chirps or, in other words, for different AOD-controlled axial displacements (80 µm range, step size of 20 µm). The average FWHM obtained for the bead intensity distribution is shown in Supplementary Fig. . The radial PSF of the system remains approximately constant as a function of the chirp parameter. A small change is due to the chromatic dispersion affecting the laser beam interacting with the crystal. The deflection angle induced by the AODs on the incident beam is frequency and wavelength dependent. This means that a broadband laser is straightforwardly spatially dispersed by the crystal and that the frequency variations can slightly affect this distortion. Moreover, the axial PSF tends to become slightly oblong with increasing axial displacement. This effect is attributable to the temporal dispersion affecting a short-pulsed laser beam interacting with the crystal. This temporal broadening reduces the axial 2P excitation efficiency, generating a larger axial PSF. This effect is more evident when a chirp is applied to the RF signals driving the AODs. Under these conditions, the beam reaches the objective back-pupil in a non-collimated state. Future efforts will be devoted to the compensation of chromatic aberration and temporal dispersion, for example employing a highly dispersive prism upstream of AODs . The double Tg(elavl3:H2B-GCaMP6s; elavl3:ReaChR-TagRFP) zebrafish line was obtained from outcrossing the Tg(elavl3:H2B-GCaMP6s) , and the Tg(elavl3:ReaChR-TagRFP) , lines on the slc45a2 b4/- heterozygous albino background , which we previously generated. The double transgenic line expresses the fluorescent calcium reporter GCaMP6s (nucleus) and the red-shifted light-activatable cation channel ReaChR (plasma membrane) in all differentiated neurons. ReaChR is expressed as a fusion peptide with the red fluorescent protein TagRFP to ensure its localization. Zebrafish strains were reared according to standard procedures , and fed twice a day with dry food and brine shrimp nauplii ( Artemia salina ), both for nutritional and environmental enrichment. For the experiments, we employed N = 20, 5 dpf Tg(elavl3:H2B-GCaMP6s; elavl3:ReaChR-TagRFP) and N = 13, 5 dpf Tg(elavl3:H2B-GCaMP6s) , both of which were on the slc45a2 b4/b4 homozygous albino background. Zebrafish larvae used in the experiments were maintained at 28.5 °C in fish water (150 mg/L Instant Ocean, 6.9 mg/L NaH2PO4, 12.5 mg/L Na2HPO4, 1 mg/L methylene blue; conductivity 300 μS/cm, pH 7.2) under a 14/10 light/dark cycle, according to standard protocols . Experiments involving zebrafish larvae were carried out in compliance with European and Italian laws on animal experimentation (Directive 2010/63/EU and D.L. 4 March 2014, n.26, respectively), under authorization n.606/2020-PR from the Italian Ministry of Health. To select calcium reporter/opsin-expressing larvae for use in the experiments, 3 dpf embryos were subjected to fluorescence screening. 
The embryos were first slightly anesthetized with a bath in tricaine (160 mg/L in fish water; A5040, Sigma-Aldrich) to reduce movement. Using a stereomicroscope (Stemi 508, Carl Zeiss) equipped with LEDs for fluorescence excitation (for GCaMP6s: blue LED, M470L3; for TagRFP: green LED, M565L3, both from Thorlabs) and fluorescence filters to block excitation light (for GCaMP6s: FF01-510/84-25; for TagRFP: FF01-593/LP-25, both from Semrock), embryos were selected according to the presence of brighter green/red fluorescent signals in the central nervous system. Screened embryos were transferred to a Petri dish containing fresh fish water and kept in an incubator at 28.5 °C until 5 dpf. Zebrafish larvae were mounted as previously described . Briefly, each larva was transferred into a reaction tube containing 1.5% (w/v) low-gelling temperature agarose (A9414, Sigma-Aldrich) in fish water, kept fluid on a heater set at 38 °C. Using a plastic pipette, larvae were then placed on a microscope slide inside a drop of melted agarose. Before gel polymerization, their position was adjusted using a couple of fine pipette tips so that the dorsal portion faced upwards. To avoid movement artifacts during the measurements, larvae were paralyzed by a 10-min treatment with 2 mM d-tubocurarine (93750, Sigma-Aldrich), a neuromuscular blocker. For tail-free preparations, upon gel polymerization, agarose caudal to the swimming bladder was removed using a scalpel. In this case, no paralyzing agent was applied. Mounted larvae were then placed inside the imaging chamber filled with fish water and thermostated at 28.5 °C for the entire duration of the experiment. Confocal imaging of a 5 dpf Tg(elavl3:H2B-GCaMP6s; elavl3:ReaChR-TagRFP) larva on an albino background was performed to evaluate the spatial expression of the two proteins. The larva was mounted in agarose as described above and deeply anesthetized with tricaine (300 mg/L in fish water). We employed a commercial confocal microscope (Ti2, Nikon) equipped with two continuous-wave lasers emitting at 488 and 561 nm for GCaMP6s and TagRFP excitation, respectively. Imaging was performed using a 10× objective, allowing the entire head of the animal to fit into the field of view. Using a piezo-electric motor (PIFOC, Physik Instrumente - PI), the objective was moved to 182 consecutive positions (z step: 2 μm) to acquire the volume of the larval head. Head-restrained larvae, capable of performing wide tail deflections, were imaged from below the 2P LSFM imaging chamber using a dedicated high-speed camera (see Optical setup for details). Images were streamed at 300 Hz via a USB3 connection to a workstation running a custom tool for live tail movement tracking, developed using the open-source Python Stytra package . The larval tail length was divided into 9 segments, and the sum of their relative angles was employed to quantify tail deflection. Tail movements of both ReaChR + and ReaChR − larvae were tracked for 200 s. During the first half, 2P LSFM imaging was off (imaging OFF). During the second half, larvae were subjected to whole-brain light-sheet imaging (imaging ON) with the same parameters described in the previous section. Each larva measured underwent 3 consecutive 200-s simultaneous whole-brain/behavioral recordings (inter-measurement interval less than 1 min).
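To make the tail-deflection metric concrete, the sketch below shows one way the sum of relative segment angles could be computed from the tracked tail points in Python. This is a minimal illustration, not the actual tracking code: the array layout, function name, and the assumption of 10 tracked points (yielding 9 segments) are ours and do not necessarily reflect the Stytra output format.

```python
import numpy as np

def tail_deflection_deg(points_xy):
    """Sum of relative tail-segment angles, frame by frame.

    points_xy : float array of shape (n_frames, n_points, 2) holding the
        tracked tail points ordered from the swim bladder to the tail tip
        (10 points give the 9 segments used here).
    Returns a (n_frames,) array with the total deflection in degrees.
    """
    segments = np.diff(points_xy, axis=1)                 # (n_frames, 9, 2)
    angles = np.arctan2(segments[..., 1], segments[..., 0])
    # Angle of each segment relative to the previous one; the first segment
    # serves as the reference and contributes zero.
    rel = np.diff(angles, axis=1, prepend=angles[:, :1])
    # Wrap to (-pi, pi] to avoid spurious 2*pi jumps between segments.
    rel = (rel + np.pi) % (2 * np.pi) - np.pi
    return np.degrees(rel.sum(axis=1))
```

Deflection traces obtained in this way can then be compared between the imaging OFF and imaging ON halves of each recording.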
Whole-brain calcium imaging was performed at 2.5 Hz (a volumetric rate more than sufficient considering the typical exponential decay time constant of the nuclear-localized version of the GCaMP6s sensor, τ: 3.5 s ) with 41 stacked z-planes spanning a depth of 200 μm. An interslice spacing of 5 μm was chosen because it coincides with the half width at half maximum of the detection axial PSF. Before each measurement, the scanning amplitude of the resonant galvo mirror was tuned to produce a virtual light-sheet with a length matching the size of the larval brain in the rostro-caudal direction. The laser wavelength was set to 920 nm to optimally excite the GCaMP6s fluorescence. Unless otherwise stated, the power at the sample of the 920 nm laser was set to 60 mW. Optogenetic stimulation was performed at 1064 nm with a laser power at the sample of 30 mW (unless otherwise specified). Before each experimental session, the 1064 nm stimulation laser was finely aligned to the center of the camera field of view. Then, by means of the galvo mirrors present in the stimulation path, the offset position of the stimulation beam was coarsely displaced in the x-y direction toward the center of the area to be stimulated. During the optogenetic experiments, the stimulation volume was covered by discontinuously scanning the beam focus via the two pairs of AODs. A typical volume of 50 × 50 × 50 μm 3 was covered with 6250 points (point x-y density: 1 point/0.25 μm 2 ; z step: 5 μm) with a point dwell time of 20 μs (overall time: 125 ms). The medial plane of the stimulation volume (chirp = 0 MHz/μs, null defocus) was adjusted to overlap with the medial plane of the LHb. Unless otherwise stated, each stimulus consisted of four complete cycles of the entire volume, lasting 500 ms. Each stimulation trial consisted of 100 s of whole-brain calcium imaging, during which 5 optogenetic stimuli (interstimulus interval: 16 s, based on the characterization experiments performed, in order to trigger activation events only after the end of the previous calcium transient) were applied at the same volumetric site. Six trials were performed on each larva, with an intertrial interval ranging from 1 to 3 min. Overall, each larva was imaged for 10 min during which it received 30 stimuli. Preprocessing Whole-brain calcium imaging data were processed as follows. Images composing the hyperstacks were first 2 × 2 binned (method: average) in the x and y dimensions to obtain a quasi-isotropic voxel size (4.4 × 4.4 × 5 μm 3 ). Then, employing a custom tool written in Python 3, we computed the voxel-wise ΔF/F 0 of each volumetric recording, after background subtraction. F 0 was calculated using FastChrom’s baseline estimation method . Quantification of imaging crosstalk and optogenetic activation extent/specificity To quantify crosstalk during imaging, we first considered different metrics to evaluate neuronal activity levels (Supplementary Fig. ). We computed the standard deviation (SD) over time, the number of calcium peaks per minute, and the average peak amplitude of each voxel composing the larval brain during 5 min of whole-brain calcium imaging (Supplementary Fig. ). For automatic calcium peak identification, we set the following thresholds: minimum peak prominence 0.05, minimum peak FWHM 2.5 s, minimum peak distance 5 s. We found the SD to have improved sensitivity in discriminating between diverse conditions compared to the number of peaks per minute (Supplementary Fig. ).
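As a minimal sketch of the per-voxel activity metrics compared above, the following Python snippet computes the SD over time, the number of peaks per minute, and the average peak amplitude for a single-voxel ΔF/F0 trace, converting the stated time thresholds into samples at the 2.5 Hz volumetric rate. The function name and input are illustrative assumptions, and the width criterion only approximates the FWHM constraint used here.

```python
import numpy as np
from scipy.signal import find_peaks

RATE_HZ = 2.5  # volumetric imaging rate

def voxel_activity_metrics(dff, rate_hz=RATE_HZ):
    """Activity metrics for a single-voxel dF/F0 trace (1D array)."""
    sd = float(np.std(dff))
    peaks, _ = find_peaks(
        dff,
        prominence=0.05,                           # minimum peak prominence
        width=2.5 * rate_hz,                       # minimum width of 2.5 s, in samples
        rel_height=0.5,                            # width measured at half prominence (approx. FWHM)
        distance=max(1, int(round(5 * rate_hz))),  # minimum peak distance of 5 s
    )
    minutes = dff.size / rate_hz / 60.0
    peaks_per_min = len(peaks) / minutes
    mean_amplitude = float(np.mean(dff[peaks])) if peaks.size else 0.0
    return sd, peaks_per_min, mean_amplitude
```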
These results reflected those observed by adopting the average amplitude of calcium peaks (Supplementary Fig. ) as an activity metric. We thus employed SD over time as a proxy of neuronal activity levels since its results do not depend on predefined thresholds. Therefore, the distribution of SD values calculated for each brain was first normalized with respect to the total number of voxels and then pooled (method: average) according to the larval strain (ReaChR + and ReaChR − ). Similarly, the normalized distributions of SD values for ReaChR + and ReaChR − larvae subjected to 100 s of whole-brain imaging during which they received 5 photostimulations (1064 nm) were calculated to evaluate the effect of the optogenetic stimulation. Imaging crosstalk and optogenetic stimulation indices were calculated using the Hellinger distance as a measure of dissimilarity between two probability distributions P and Q: $$H(P,Q)=\sqrt{1-\sum_{i=1}^{n}\sqrt{P_i Q_i}}$$ The errors in the Hellinger distances were calculated according to error propagation theory as follows: $$\Delta H=\sqrt{\sum_{i=1}^{n}\frac{Q_i^2}{4H^2}\,\Delta P_i^2+\frac{P_i^2}{4H^2}\,\Delta Q_i^2}$$ Finally, normalized distributions of SD values for ReaChR − larvae exposed either to imaging (100 s) only or to imaging and photostimulation (100 s and 5 stimuli at 1064 nm) were calculated to evaluate the specificity of the effect observed. Quantification of tail movements during whole-brain light-sheet imaging Tail deflection (i.e., the sum of relative tail segment angles) time traces were processed to detect and count the number of tail beats. In detail, deflection peaks were considered as tail beats if they exceeded an absolute threshold of 20°. Consecutive tail deflections that did not come back to the resting position for at least 0.5 s were considered part of the same movement. The relative number of tail beats during imaging ON (Fig. ) was calculated for each trial of each larva by dividing the number of tail movements during the imaging ON period by that quantified during the imaging OFF period. To combine behavioral and brain activity recordings (Fig. ), the average fluorescence time trace of the hindbrain acquired at 2.5 Hz was first interpolated to match the frequency of behavioral recordings (300 Hz). Then, ΔF/F 0 was calculated as previously described. Characterization of stimulation-induced calcium transients To characterize neuronal activation as a function of stimulation parameters (scan time and laser power), we first extracted the voxel time series averaged over the entire stimulation site (i.e., left habenula) from 4D ΔF/F 0 hyperstacks. Time traces were windowed to isolate and align the three stimulation events contained in a single trial. Isolated calcium transients were analyzed using the peak analyzer function in Origin Pro 2021 (OriginLab Corp.)
to obtain peak amplitude, rise/decay time (i.e., time from baseline to peak and time from peak to baseline, respectively) and duration values. Pooled peak duration data were obtained by first averaging three events of the same larva (intra-individual) and then averaging data between larvae (inter-individual). Activation probability and correlation maps Using a custom Python tool, we calculated the probability of each voxel composing the brain to be active in response to the optogenetic stimulation. For each stimulation event, a voxel was considered active if its change in fluorescence in a 2 s time window after the stimulation exceeded three standard deviations above its baseline level (2 s pre-stimulation). Only events in which the voxels inside the stimulation volume met the activation criterion were considered effective optogenetic stimulations. By iterating this process for all the stimulation events performed (on the same site of the same larva), we calculated the activation probability of each voxel as the number of times the voxel exceeded the threshold divided by the total number of valid stimulations. Employing a second Python tool, we then computed activity correlation maps showing Pearson’s correlation coefficient between each voxel and the activity extracted from the stimulation site (seed). The 3D maps of correlation and activation probability obtained were subsequently aligned. First, the acquired 4D hyperstacks were time averaged. Second, the resulting 3D stack of each larva was registered to a reference brain. Nonrigid image registration was performed using the open source software Computational Morphometry Toolkit (CMTK 3.3.1, https://www.nitrc.org/projects/cmtk/ ) and the ImageJ user interface , employing the command string (-awr 01 -X 52 -C 8 -G 80 -R 3 -A “--accuracy 1.6” -W “--accuracy 0.4”). The calculated morphing transformations were ultimately applied to the corresponding 3D maps. Following the zebrafish brain atlases , , the volumetric regions of interest (ROIs) used in the analysis were manually drawn onto the reference brain (employing ImageJ), based on anatomical boundaries. The 10 volumetric ROIs were then adopted to extract from each map the voxel-wise distribution of activation probability/correlation coefficient values used for further analyses. The binarized functional connectivity map shown in Fig. was obtained after applying a threshold on Pearson’s correlation coefficient to the average correlation map shown in Fig. . The 0.12 value adopted represented the correlation coefficient threshold separating significant from non-significant correlations among brain regions (see Fig. ). Cross-wavelet power spectrum analysis The possible coupling between the delineated brain ROIs and the stimulation site was also characterized in the spectral domain by quantifying and inspecting their cross-wavelet power spectral density (CPSD) . The wavelet transforms of the average activity signals extracted from each ROI were computed using the Morlet mother wavelet, adopting a central frequency f 0 = 1 Hz as time-frequency resolution parameter, and 256 voices per octave for fine frequency discretization. Spurious time-boundary effects were addressed by first applying a zero-padding scheme to the original time series, and then isolating the so-called cone of influence, i.e., the time–frequency region where boundary distortions in the CPSD estimates are negligible . 
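Before turning to the causality analysis, the activation-probability and seed-correlation computations described above can be summarized in the following Python sketch. The 3-SD criterion over 2-s windows follows the description given here, but the array names, shapes, and the way a stimulation is validated (any stimulated voxel responding) are illustrative assumptions rather than the exact implementation of the custom tools.

```python
import numpy as np

RATE_HZ = 2.5            # volumetric imaging rate
WIN = int(2 * RATE_HZ)   # 2-s window, in volumes

def activation_probability(dff, stim_frames, stim_mask):
    """Voxel-wise probability of responding to the optogenetic stimulation.

    dff         : dF/F0 array of shape (n_voxels, n_timepoints)
    stim_frames : indices of the volumes at which stimuli were delivered
    stim_mask   : boolean array (n_voxels,) marking voxels inside the
                  stimulation volume, used to validate each stimulation
    """
    active_counts = np.zeros(dff.shape[0])
    valid_stims = 0
    for t in stim_frames:
        baseline = dff[:, t - WIN:t]
        post = dff[:, t:t + WIN]
        # Active if the post-stimulus signal exceeds baseline mean + 3 SD
        threshold = baseline.mean(axis=1) + 3 * baseline.std(axis=1)
        active = post.max(axis=1) > threshold
        # Count the event only if the stimulated voxels themselves responded
        if active[stim_mask].any():
            valid_stims += 1
            active_counts += active
    return active_counts / max(valid_stims, 1)

def seed_correlation_map(dff, seed_mask):
    """Pearson correlation of every voxel with the mean seed (stimulation site) trace."""
    seed = dff[seed_mask].mean(axis=0)
    dff_c = dff - dff.mean(axis=1, keepdims=True)
    seed_c = seed - seed.mean()
    num = dff_c @ seed_c
    den = np.sqrt((dff_c ** 2).sum(axis=1) * (seed_c ** 2).sum())
    return num / den
```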
Granger causality analysis The causal link between the activity of different brain regions was explored by analyzing their Granger causality . GC analysis among ΔF/F 0 time series of brain regions was performed in R, with the “lmtest” library . To select an appropriate lag order, we computed both the Akaike (AIC) and Bayesian (BIC) information criteria of the complete autoregressive model for each comparison (each trial and each possible pair of regions) for lag orders from 1 to 8 (0.4–3.2 s). Then, for each comparison we selected the lag order associated with the minimum value of the information criteria. Finally, we computed the mode of this list and used this unique lag order value for every comparison of the final GC analysis. The mode values based on both AIC and BIC were the same: a lag order equal to 2, which corresponds to a 0.8 s lag. For each larva, trial, pair of regions, and causality direction, we computed the average F statistic value of the tests. Finally, multiplicity correction for the p-values was performed with a false discovery rate approach using the Benjamini–Hochberg method (GC analysis results are reported in Supplementary Data ). The F statistic was presented in Fig. as average values of all pairs having at least two significant trials. The F statistic in the graph of Fig. was presented as arrows color-mapped according to the average F value found between brain regions’ connections. The direction of each arrow indicates the direction of the causal interaction, while the arrow width represents the proportion of significant trials over the total. Only causal links having at least 33% of significant trials were depicted (see thresholded matrix in Supplementary Fig. ). Partial correlation analysis In order to gain insight into the directness of the interactions between brain regions, we analyzed the partial correlation between pairs of region-wise mean ΔF/F 0 time series, aiming to capture their residual coupling after the influence of all other regions was accounted for . Pairwise partial correlation coefficients were obtained as described by Han and colleagues . In detail, the partial correlation between a pair of brain regions (i.e., LHb-IPN, LHb-RHb, and T-RHb), A and B, was evaluated as the Pearson’s correlation coefficient between the regressed time series ΔF/F0,A,R and ΔF/F0,B,R , suitably corrected for the contribution of all other regions’ mean activity signals.
These time series were estimated by multiple regression on the original traces ΔF/F0,A and ΔF/F0,B , through the evaluation of the Moore-Penrose pseudoinverse of the remaining regions’ time series matrix, C: $$\beta_{A}=C^{+}\cdot\Delta F/F_{0,A}$$ $$\beta_{B}=C^{+}\cdot\Delta F/F_{0,B}$$ where C + is the Moore-Penrose pseudoinverse matrix: $$C^{+}=(C^{T}C)^{-1}C^{T}$$ here computed using the Python SciPy library . The regressed time series were then obtained as: $$\Delta F/F_{0,A,R}=\Delta F/F_{0,A}-C\cdot\beta_{A}$$ $$\Delta F/F_{0,B,R}=\Delta F/F_{0,B}-C\cdot\beta_{B}$$ The directness of the mutual interaction between two brain regions was finally detected from the presence of both statistically significant Pearson’s and partial correlation coefficients. When only the Pearson’s correlation is significant, the interaction is defined as indirect; when only the partial correlation is significant, the observed relationship is defined as a pseudo-correlation . Results of the partial correlation analysis can be found in Supplementary Data .
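A compact Python sketch of this regression-based partial-correlation procedure, using SciPy's Moore-Penrose pseudoinverse as in the text, is given below; the dictionary-based input and the function name are assumptions made for illustration.

```python
import numpy as np
from scipy.linalg import pinv
from scipy.stats import pearsonr

def partial_correlation(traces, a, b):
    """Partial correlation between regions a and b, controlling for all others.

    traces : dict mapping region name -> 1D dF/F0 time series (equal length)
    a, b   : names of the two regions of interest
    Returns the partial correlation coefficient and its p-value.
    """
    others = [r for r in traces if r not in (a, b)]
    # Columns of C are the mean traces of all remaining regions
    C = np.column_stack([traces[r] for r in others])
    C_pinv = pinv(C)                      # Moore-Penrose pseudoinverse
    beta_a = C_pinv @ traces[a]
    beta_b = C_pinv @ traces[b]
    resid_a = traces[a] - C @ beta_a      # regressed (residual) time series
    resid_b = traces[b] - C @ beta_b
    return pearsonr(resid_a, resid_b)
```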
To guarantee reproducibility of the findings and avoid bias, the larvae employed in the experiments never belonged to a single batch of eggs. No a priori sample size calculation was performed. The sample size employed was justified by the high degree of consistency in the results obtained from different larvae. The expression patterns of GCaMP6s and ReaChR were evaluated in N = 1 ReaChR + larva by confocal imaging.
Crosstalk activation of ReaChR by 920 nm excitation light-sheet imaging was evaluated on N = 3 ReaChR + and N = 3 ReaChR − larvae in the brain activity experiment, and N = 4 ReaChR + and N = 4 ReaChR − larvae in the combined brain/behavioral activity experiment. The effect of optogenetic stimulation was evaluated on N = 6 ReaChR + and N = 6 ReaChR − larvae. Characterization of optogenetically induced calcium transients as a function of stimulation settings was performed on N = 4 ReaChR + larvae ( n = 3 calcium transients per larva). The activation probability, correlation, and causality were evaluated on N = 6 ReaChR + larvae ( n = 30 stimulations per larva). OriginPro 2021 (OriginLab Corp.) was used to carry out all the statistical analyses. Unless otherwise stated, results were considered statistically significant if their corresponding p -value was less than 0.05 (* P < 0.05; ** P < 0.01; *** P < 0.0001). Both intergroup and intragroup statistical comparisons of imaging crosstalk (Fig. and Supplementary Fig. ) were performed using two-way ANOVA (factors: zebrafish strain, imaging power) followed by post-hoc comparisons with Tukey’s method. Two-way ANOVA and Tukey’s post-hoc comparisons were also employed to quantify the statistical significance of tail beats between imaging OFF and ON conditions (Fig. ; factors: zebrafish strain, imaging presence). For intergroup statistical evaluations of both activation probability (Fig. ) and Pearson’s correlation coefficient (Fig. ), we first verified the normality of the data distribution using the Shapiro-Wilk test (see Supplementary Fig. for test results) and then performed one-way ANOVA (factor: brain region), followed by post-hoc comparisons employing Tukey’s method. Statistical comparisons of the relative number of tail beats during 920 nm imaging (Fig. ) and of the median SD values used to evaluate the effect of optogenetic stimulation (Fig. and Supplementary Fig. ) were performed using unpaired t tests. Statistical comparisons of the average distributions of SD (Fig. ) and Pearson’s correlation coefficient (Fig. ) values were performed with the two-sample Kolmogorov-Smirnov test (KS test), applying the Bonferroni correction ( α = 0.05/3 = 0.01667, in both cases). Further information on research design is available in the Reporting Summary linked to this article. Supplementary material accompanying this article: Peer Review File; Supplementary Information; Description of Additional Supplementary Files; Supplementary Data 1–3; Supplementary Movies; Reporting Summary.
Adapting an Adolescent and Young Adult Program Housed in a Quaternary Cancer Centre to a Regional Cancer Centre: Creating Equitable Access to Developmentally Tailored Support
In North America, the adolescent and young adult (AYA) population is defined as those between the ages of 15 and 39 years . This period in life is marked by significant developmental milestones such as pursuing education or employment, establishing romantic and sexual relationships, family planning, and a deepened self-discovery . A cancer diagnosis can derail these developmental and health trajectories . Although a cancer diagnosis inherently disrupts life for patients of all ages, cancer has an augmented impact on AYA patients, especially regarding fertility preservation, diagnosis and treatment of mental health concerns, a pause in education or career life goals, sexual health, and premature death . In Canada, the current landscape of AYA care includes the delivery of oncology support at adult cancer centers, most of which have limited resources or programming dedicated to the unique needs of this population . For patients who live in rural communities or who identify with marginalized populations, the presence of and access to specialized AYA services within cancer care is further limited. Moreover, psychosocial care at large urban centers is often provided by hospital employees, whereas patients at community centres often rely on external psychosocial support in a fee-for-service model , further highlighting inequities. AYA oncology programs must prioritize accessibility for patients of diverse backgrounds , acknowledging the importance of factors such as gender identity, sexuality, race/ethnicity, religion, socio-economic status, immigration status, and physical location. Additionally, consideration of historically oppressed members of the LGBTQ+, Indigenous, and Black communities highlights the interconnectedness of social and personal identities with the behaviors and perceptions of patients . Drawing on an intersectionality framework allows clinicians to remain cognizant of the compounding impact of an individual’s unique identity in their cancer care . The purpose of this paper is to describe the development of an AYA oncology program at a community-based cancer center via a novel collaboration with an established program at a larger quaternary cancer center. A quaternary cancer center is a specialized healthcare facility that not only offers comprehensive cancer treatment and research services but also serves as a referral center, providing expertise in complex and advanced cases, often involving cutting-edge therapies and experimental treatments. This program expansion pathway may serve as a pilot model to inform further expansion of AYA oncology care for patients regardless of their jurisdiction. We also discuss the importance of recognizing diversity and the intersectionality of identity and program development. It is our hope that this paper offers unique insights into expanding AYA supportive care access and adapting the nature of the support offered to patients’ unique identities. 2.1. Social Determinants of Health (SDH) Social determinants of health are the conditions in which individuals are born, grow, live, work, and age, encompassing a range of economic, social, cultural, and environmental factors. These determinants play a crucial role in shaping health outcomes, influencing access to resources, opportunities, and services, and contributing to health inequities within populations .
The social determinants of health play a crucial role in shaping individuals’ well-being, influencing factors such as access to education, economic opportunities, and healthcare, which collectively impact the overall health of the general population . Furthermore, disparities in social determinants can contribute to an increased risk of cancer among certain groups, highlighting the need for addressing social inequities to enhance cancer prevention and control efforts. Social determinants of health significantly impact the cancer experience by influencing factors such as access to timely and quality healthcare, economic opportunities, education, and social support, thereby contributing to disparities in cancer prevention, diagnosis, treatment, and overall health outcomes among diverse populations . Marginalized groups such as Indigenous, LGBTQ+, and Black people often harbor mistrust of the healthcare system due to their perceptions of discrimination and racism in healthcare settings, creating systemic barriers to routine screenings, potentially delaying diagnosis , and increasing the risk of being diagnosed with late-stage cancer with poorer prognoses . Furthermore, the quality of life of marginalized people navigating a cancer experience is disproportionately impacted by their SDH. In addressing the unique needs of AYAs facing cancer, our focus is on modifying social determinants of health related to education, economic opportunities, and social support. By targeting these determinants, we aim to enhance AYAs’ access to tailored support, improve educational and vocational outcomes, and foster a supportive social environment, ultimately contributing to improved overall well-being and health outcomes during and after cancer treatment. With this knowledge, it is our responsibility to continue developing and expanding the AYA program as an intervention to address these systemic issues. 2.2. Intersectionality Intersectionality (Crenshaw) serves as a framework in healthcare for understanding health inequities . Although AYA cancer patients all share this “cancer patient” label, they also represent a group composed of unique, compounding, and intersectional identities. Defining the AYA population solely in terms of age without accounting for cultural diversity can perpetuate implicit bias . An effective and sustainable AYA program should not only meet patients’ medical and developmental needs but also meet needs based on their identity. Building a program with an awareness that social and personal identities are interconnected will help healthcare providers understand why patients and families perceive and cope with circumstances regarding their AYA cancer experience differently, and thus be more inclusive in their care . For meaningful change to occur, there is a need for an awareness and prioritization of intersectionality across all stakeholders involved in cancer care, including researchers, direct practice clinicians, educators, policymakers, funders, and health organizations . Social location refers to a person’s position in society based on socially constructed factors such as gender, class, education, employment, socioeconomic status, identity, geographic location, mental health, and disability . Access to quality cancer care can be further impacted by individuals’ SDH ; specifically, individuals’ social location may lead to inequities . Both concepts can negatively impact patients’ health outcomes across the cancer continuum from routine screening to survivorship .
Health equity seeks to reduce inequalities and increase access to care that is conducive to health . 2.3. Interdisciplinary Health Approach An interdisciplinary approach to AYA cancer care involves collaboration among healthcare professionals from various disciplines to address the unique physical, emotional, social, spiritual, and developmental needs of individuals in this age group who are diagnosed with cancer. The AYA stage of life is a time when individuals explore their relationship with social constructs such as religion, culture, and race. A multidisciplinary approach addresses the holistic needs of individuals in this age group, fostering improved treatment outcomes, quality of life, and long-term well-being . This approach recognizes individuals’ unique identity and provides an opportunity to modify inequalities related to SDH. We refer our patients within PM to spiritual care practitioners on site; however, such a program is not in place at the Stronach Regional Cancer Centre at Southlake. It is our goal to build connections and local capacity with cultural and spiritual groups in the community to offer specialized spiritual care.
3.1. Development of the Princess Margaret Cancer Centre (PM) AYA Program The PM is an example of a well-resourced quaternary care centre , and acts as a referral center for complex cases accepting local, national, and international patients. The AYA program is one of the many specialized services offered at PM and was founded in 2014 after identifying a gap in care for this subpopulation. The program was developed by a medical oncologist with a special interest in the AYA population who then hired and trained a clinical nurse specialist (CNS) to execute the program . The program was developed to optimize the supportive care for AYA patients through a biopsychosocial lens , incorporating biological, psychological, social, behavioral, and systemic processes impacted by a patient’s disease whilst highlighting factors influencing behaviors and perceptions .
The CNS conducts consultations with patients in the form of a 45-min phone call, video call, or in-person appointment. These consultations are semi-structured and conversational in nature, discussing common documented concerns for AYAs navigating cancer. The consultation and its contents are dynamic in nature and ever-changing to accommodate the evolving needs of AYAs and patient feedback. Currently, through these consultations, the CNS collaborates with patients and their families to identify their unmet needs and provides education on common concerns, including fertility, sexual health, fatigue, returning to work, and wellness. The CNS then arranges ongoing follow-up and triages to appropriate resources, both internally at PM and externally through community-based organizations . The PM AYA program developed internal and external referral pathways to specialized clinics related to coping, fertility, mobility, survivorship, and peer connection. These referral pathways were established by the CNS ‘knocking’ on doors and identifying providers who have an interest/expertise in supportive care for the young person. These referral pathways continue to develop to meet the evolving needs of AYAs. The CNS has also had additional training in sexual health , enabling her to offer counseling advocated by the American Society of Clinical Oncology and Cancer Care Ontario . Examples of sexual health counseling include assessment of sexual well-being, impotence, climacteric symptoms (in those with ovaries), and resulting impacts on relationships. Additionally, the CNS assesses a patient’s ability to engage in activities of daily living and can provide education regarding non-pharmacological techniques to manage fatigue and dietary modifications needed while undergoing treatment. In addition to providing direct clinical care to AYA patients, the CNS supports the professional development of frontline nurses and allied staff. By providing education sessions and just-in-time teaching opportunities, the CNS builds capacity for clinicians to engage in discussion regarding AYA patients’ unique and developmental needs, such as fertility, sexual health, and body image . The CNS also collaboratively develops clinical pathways to meet the changing needs of AYA patients and engages in research to advance AYA oncology care in Canada. Patients have reported that receiving AYA supportive care improves patient satisfaction with cancer information, social support, sexual health, fertility, physical appearance, and navigating work and school life . 3.2. Our Team With continued professional development and recruitment of additional interdisciplinary team members, the current “domains” of AYA supportive care include fertility preservation, sexual health, spirituality, finances, school, work, symptom management, diet, exercise, sleep, relationships, parenting, coping, peer connection, and practical support. The flow of patients through the AYA program, including which actions are carried out by specific team members, is shown in . Patients can access the AYA program through a provider or self-referral. A large focus of the program is facilitating peer connections within the AYA community through virtual meetups, a virtual book club, yoga classes, cooking classes, art therapy, a strong social media presence, and in-person special events .
The discharge process for AYAs from our program is thoughtfully tailored, with continuous assessment of the need for ongoing support and readiness for discharge during each patient interaction. While discharge signifies the conclusion of one-on-one clinical support from our social workers (SW) and clinical nurse specialists (CNS), the discharged patient maintains a connection with the AYA community and retains access to our services as necessary in the future. Our flexible approach allows patients to re-engage with the program at any time, ensuring ongoing support and assistance whenever needed. An interdisciplinary AYA advisory committee was also created to expand the perspectives and expertise informing the program . The advisory committee consists of psychologists, a pediatric nurse, adult and pediatric medical oncologists, a physician’s assistant, a music therapist, a PM hospital foundation representative, researchers, AYA palliative care team providers, and a patient representative . Collaboration with community leaders in AYA oncology care and/or external clinical support services, cultural community groups, patient support and advocacy groups, researchers, and educators provides diverse perspectives and ensures a holistic understanding of the broader societal impact of the AYA program. The committee helps stimulate ongoing improvements to programming at various levels including direct practice, advocating for equitable funding allocation, and research and policy change with the AYA patient perspective at its centre. The patient representative shares personal experiences navigating cancer and helps validate service and research gaps and prioritize future initiatives.
The PM Cancer Care Network developed a partnership with the Stronach Regional Cancer Centre (SRCC) at Southlake. Through a hub-and-spoke organizational design, an opportunity arose to develop the first community-based AYA program to address patient needs outside the Greater Toronto Area. Specifically, this model allows assets, such as resources and specialized clinicians, to be housed primarily at an anchor establishment (the hub) while enriching the services and support currently available at local cancer centres (the spokes). Despite seeing roughly 100 patients per year at SRCC, we are still learning about the population’s characteristics and needs. This collaboration ensures that AYA patients can access comprehensive and tailored oncological support within their local community, bridging the gap between regional and quaternary care.

4.1. Securing Funding for Program Expansion

The first step of establishing an AYA program involves identifying a local AYA champion to be the program medical director (MeD). The MeD will then begin the engagement of top stakeholders, including hospital executives, followed by patient partners and existing supportive care personnel. The MeD will oversee the development of the everyday operations of the program and, importantly, lobby for funding to support hiring a clinical nurse specialist and the initial phases of program development. Initially, program funding may rely solely on philanthropy, with gradual expansion to include hospital-based funding. This pivot requires the measurement of program impact and success; thus, measurement tools should be established upfront, even prior to program initiation. The success of the PM AYA Oncology program is evident from initial evaluation efforts regarding patient satisfaction with information provided by primary oncology providers (POP) in crucial domains such as cancer information, social supports, and school/work, and the incremental benefit of the AYA-dedicated team care. An added value was perceived in essential domains like school/work, social support, physical appearance, sexual health, and fertility from the team dedicated to AYA care. Further, anecdotal testimonies from patients highlight the importance of an AYA program in fostering a patient community, providing support for their developmental concerns (e.g., fertility and sexual health), and providing support for system navigation, self-advocacy, and empowerment. We will collect data at both the “hub” and “spokes” to ensure our efforts are addressing patients’ needs in different communities.
At PM, we were able to collect data on patient volumes as well as on the impact of program implementation on patient satisfaction, patients’ perception of the AYA program’s added value, and patients’ perception of the need for improvement in care delivery (i.e., fertility preservation).

4.2. Conducting an Environmental Scan

Once funding for the new AYA program was secured and a CNS was hired, a formal needs assessment of the current state of AYA oncology care at the regional cancer center was conducted. This assessment was then paired with an environmental scan conducted by the CNS to analyze the organization’s external and internal environments impacting patient experiences at the regional cancer center. The environmental scan was conducted by drawing information from person sources, such as key hospital stakeholders and community partners, and non-person sources, such as databases and internet searches.

Non-person sources. A literature search was conducted to learn about AYA program structures within the province of Ontario and within Canada. AYA programs across Canada consist of some combination of the following members of an interdisciplinary team: a medical oncologist, a clinical nurse specialist, psychiatrists and social workers, researchers, a program coordinator, a school/work transitions counsellor, spiritual care, rehabilitation medicine, palliative care, and a radiation oncologist.

Person sources. Key hospital stakeholders included the cancer program director and medical director, the heads of medical and radiation oncology, the diversity, equity, and inclusion (DEI) department, an Indigenous navigator, members of the psychosocial oncology department, members of the information and technology department, unit managers and educators, and corporate communications. Summarizing the meetings led to the development of a list of key program priorities and existing available resources. Key community partners included disease-specific community organizations as well as community rehabilitation services and fertility clinics. The CNS collated a list of recommended resources for common AYA concerns in the form of a “resource navigator” document to be distributed to every patient who meets with the AYA CNS. These resources include psychosocial support options, peer connection opportunities, sexual health, body image and fertility support, wellness and rehabilitation programs, work, school, and financial supports, and supports for the caregivers and children of AYA patients.

4.3. Identifying Service Gaps

The environmental scan was useful in analyzing the cancer center’s existing internal and external resources that could be leveraged to provide fulsome AYA oncology care. Additionally, various service gaps were identified, including inequitable access to developmentally tailored support for AYAs navigating cancer treatment and life after treatment across jurisdictions. Findings were presented to the MeD and the hospital executive, wherein three initial program priorities were agreed upon: (a) to offer consistency in discussing fertility preservation and referring patients to specialized clinics, (b) to provide developmentally tailored psychosocial support for common AYA concerns such as sexual health and body image, and (c) to improve overall clinician education and recognition of unique AYA needs and to empower clinicians to lead conversations related to developmental milestones and the impact of cancer.
A strategic plan for addressing these initial program priorities was developed in the form of a program logic model. The program logic model serves as a foundational framework for describing and evaluating the adaptation process of the AYA program by systematically addressing identified service gaps and establishing key priorities. It enables a structured approach to strategic planning, clearly defining objectives such as consistency in discussing fertility preservation, tailored psychosocial support, and enhanced clinician education. The model provides a roadmap for the implementation of these priorities, guiding the adaptation process to ensure a comprehensive and developmentally appropriate AYA oncology program.

4.4. Maintaining Connections to the Princess Margaret Cancer Centre AYA Program

A key component of the expansion of the AYA program to the regional cancer centre is a partnership with a larger cancer centre. This partnership is grounded in a hub-and-spoke model, whereby PM serves as the hub providing the core staff, programming, and resources to patients in the regional centre, which is the spoke. PM provides training to the staff and providers at the spoke site, so there is continuity in understanding and best practices for AYA needs. Having this model as the framework for expanding the AYA program will ensure that a certain standard of care is met regardless of the capacity to support AYA patients internally, and it provides access to key specialized central resources associated with a resource-rich centre.

4.5. Communication and Referrals

A seamless and inexpensive method of establishing communication and referrals involves setting up a program email address such as aya@ [insert hospital domain]. Brochures and social media posts were also created to advertise the newly formed AYA program and circulated through a featured hospital-wide communication. Clinical rounds were organized to educate clinicians and allied staff within each department at the cancer center on the uniqueness of AYA needs, to empower the staff to learn and increase comfort with these conversations, and to refer patients to the AYA program for additional support. Notably, an automated referral process was developed so that all new AYA patients will have the opportunity to meet the CNS at diagnosis, eliminating referral bias and practice variability, and addressing the overall lack of AYA patient referrals to supportive care resources.

4.6. Evaluation and Intersectionality

Within 7 days of initial contact, patients are sent a patient satisfaction survey. The purpose of this evaluation is to show how our team is addressing the known service gaps experienced by AYAs navigating cancer. We hope to collect data demonstrating that our program meets patients’ unique developmental needs and that our team provides information beyond what was provided by AYA patients’ primary oncology teams, in order to identify areas of strength and areas that require attention. Additionally, the current hospital patient advisory committee is actively recruiting an AYA patient partner to help inform future program development. Meetings with the DEI team and Indigenous navigator were helpful in modifying screening tools (i.e., to include ‘moon cycle’ when enquiring about menses).

4.7. Challenges Faced

Through the development of our AYA program, we encountered several challenges, beginning with a notable lack of awareness of the specific needs of AYA patients.
Establishing crucial connections within the community for support, creating pathways, and refining the referral system remain ongoing challenges. The collaborative development of consultations is hindered by gaps in services outside of the resource-rich “hub”. This has required a focused approach to address, enhance, and standardize the support available to AYA patients across jurisdictions. Moreover, securing sustainable funding is a critical hurdle that requires strategic planning for the program’s continued growth and success.

4.8. Future Initiatives

More work is needed to fill gaps in services. For example, the hospital does not have a cancer rehabilitation program, and meetings have begun to strategize on how to offer rehab services to oncology patients. Moving forward, we will complement input from the feedback surveys and patient partner with focus groups to further address intersectionality for this community. In addition, the unique palliative care needs of AYAs must be addressed. Currently, AYA patients at PM can be referred to the AYA Supportive Care clinic to address the physical and emotional symptom burden caused by dying as a young person. Specifically, this service helps patients find meaning in the face of a life-limiting illness and provides support for patients’ grief over the life they will never live. Our ongoing objective involves embedding a palliative care physician within our team to enhance the comprehensive and specialized care we provide our patients.
This paper focuses on the formation of a partnership to enable the expansion of an AYA program from a resource-rich quaternary cancer centre to a community-based cancer centre. We emphasize the importance of recognizing diversity and intersectionality in AYA care, considering factors such as gender identity, sexuality, race/ethnicity, religion, socio-economic status, immigration status, and physical location. The paper highlights the role of SDH and intersectionality in shaping an individual’s cancer experience and influencing cancer risk among marginalized groups. Additionally, it discusses the interdisciplinary health approach in AYA cancer care, emphasizing the need for comprehensive support for young patients’ physical, emotional, social, and developmental needs. The program development process involves identifying service gaps, securing funding, conducting an environmental scan, and maintaining connections with a larger cancer centre. Challenges faced include a lack of awareness about AYA patients’ specific needs, and future initiatives include addressing gaps in services, enhancing palliative care, and incorporating patient feedback and focus groups to address intersectionality within the community.

An accessible AYA program that considers all aspects of a patient’s social location in conjunction with their diagnosis can be crucial in preventative health measures. We are modifying inequalities by bringing AYA cancer programming directly to patient communities, which requires a local champion, funding, and engagement of stakeholders to move forward. AYA oncology programming endeavors to reduce health inequalities by recognizing and addressing SDH; implementing initiatives that focus on promoting health, addressing challenges, and preventing diseases, especially among marginalized populations; and working collaboratively with communities, healthcare providers, and stakeholders to develop and implement strategies and programs that ensure equitable access to healthcare resources regardless of social location. Developing AYA regional cancer programs provides an opportunity to modify these conditions, reducing these health inequalities and helping to provide all AYA patients with the same opportunities within their cancer care.

To maximize our program’s impact and to ensure sustainability, stakeholder engagement beyond clinicians and service providers is needed. Specifically, engagement from policymakers and the government is imperative in securing funding to enable growth of our AYA program within Ontario and to extend the reach of our program.
The government recognizes the importance of addressing health inequities; therefore, it is our duty to elevate the need for expanded AYA care to the policy agenda. This will enable equitable access to developmentally tailored support that recognizes patients’ intersectionality and will greatly improve outcomes for AYAs in Canada.

In conclusion, this paper outlines the development and expansion of an AYA oncology program, emphasizing the importance of addressing the unique needs of this population. The paper discusses the unique impact of a cancer diagnosis on AYA patients’ developmental milestones, particularly regarding future family planning, sexual health, mental health, education, and social disparities. Our paper highlights the benefit of a hub-and-spoke model to expand the support available to AYA patients outside of the Greater Toronto Area. We emphasize the significance of recognizing social determinants of health, incorporating intersectionality, and engaging diverse stakeholders to reduce health inequalities and ensure equitable access to tailored AYA care across jurisdictions. The expansion to a regional cancer centre is described, addressing challenges and outlining future initiatives to further enhance AYA oncology programming. Ultimately, the manuscript underscores the need for ongoing efforts, including policy engagement, to promote health equity and improve outcomes for AYAs in Canada.

Through the process of expanding the AYA program outside of Princess Margaret Cancer Centre, we have learned the importance of environmental scans. The environmental scan highlighted the resources already embedded locally that could be leveraged for the AYA program. It also highlighted the importance of fostering relationships with stakeholders both within the partner hospital site and in the community, and it was fundamental in highlighting the gaps in services available to patients in their community. Further, prioritizing the recruitment of patient partners from diverse backgrounds and lived experiences is crucial in the adaptation of an AYA program to meet the needs of the local patient population. Our team endeavors to continue the efforts in recruiting eligible patient partners that are reflective of the local patient population that we serve. At this point, it appears that the fundamental needs of the local patient population are consistent with those of AYA patients in Toronto, though we may have to adapt our approach as we move into more remote regions and patient needs diversify. A final lesson learned is the need for a flexible approach to psychosocial care and the need to continue adapting our approach and program offerings based on the ever-changing social, emotional, cultural, spiritual, and physical development of the AYA population we serve.
Current practice and barriers for transition of care (TOC) in pediatric surgery: perspectives of adult surgeons from different subspecialties
72c9f0e6-4f72-466a-afeb-05afc036bfa5
11774974
Pediatrics[mh]
The transition of care (TOC) from pediatric to adult care providers is defined as ‘a purposeful, planned process that addresses the medical, psychosocial, and educational/vocational needs of adolescents and young adults who have had prior congenital pathologies as they move from pediatric/child-centered to an adult-oriented healthcare system’. The advancement of neonatal resuscitation, neonatal intensive care, and surgical expertise has now enabled children with congenital abnormalities to achieve lifespans well into adulthood. However, the nature of their illness requires long-term follow-up, as they continue to have risks of developing sequelae throughout their lives. Recent studies reveal that most of these surgical conditions could significantly reduce patient quality of life in the long term. The potential morbidities and mortality trigger the need for a structured TOC for these patients. The Royal College of Surgeons of England Children’s Surgical Forum acknowledged transitional care as “a process, not a single event” where every National Health Service (NHS) trust should have a policy and an identified lead. Despite international recognition and the urgency for a better framework, few healthcare systems and pediatric centers have practiced the TOC model due to challenges such as limited human resources, funding, and facilities.

Currently, no clear guideline or policy addresses the TOC for adolescents with complex surgical conditions that manifest in childhood in Malaysia. A locally conducted survey revealed that most of the pediatric surgeons managed their patients well into adulthood. This is less than ideal, and the study identified the need to develop protocols, ideally within the next 3–5 years. That study, however, only focused on the current practices and perspectives of pediatric surgeons in Malaysia. The perspectives of adult surgeons from different subspecialties should be investigated to better understand how to bridge the divide. The identification of barriers and gaps would assist in developing appropriate referral pathways for the continued care of these patients. Thus, our study aimed to identify possible barriers in the TOC of pediatric surgical patients to adult care and to help develop a proper referral pathway.

The study was performed between December 2023 and March 2024. This was a cross-sectional quantitative study involving an online self-administered questionnaire that was distributed among qualified surgeons managing adult patients in Malaysia. All respondents were welcomed regardless of years of experience or institution. Purposeful sampling was performed to target surgeons from subspecialties such as breast and endocrine surgery, colorectal surgery, general surgery, hepatopancreaticobiliary surgery, upper gastrointestinal surgery, and vascular surgery. There were no existing validated questionnaires about this subject. Thus, the questionnaire was developed in stages. At the initial stage, a literature review of TOC was performed by the researchers to develop the preliminary questionnaire (1st draft). Discussions were held among the authors, which included both pediatric and adult surgeons. Valuable opinions were received from the authors of a Malaysia-based study, “Transition of Care in Paediatric Surgery: Current Practices and Perspectives of Pediatric Surgeons in Malaysia”.
Then, the draft was sent to a focus group composed of 7 consultant surgeons from different subspecialties to obtain expert opinions for face and content validity (2nd draft). This focus group consisted of a plastic surgeon, a hepatobiliary surgeon, a neurosurgeon, a colorectal surgeon, and 3 upper gastrointestinal surgeons. They were experts in their respective fields of surgery and had more than 10 years of service in government and private healthcare settings in Malaysia. The questionnaire was finalized after this validation process.

The final questionnaire consisted of three parts. The first part identified the respondents according to specialty, current position, years of experience, and institution. The second part identified current practices and barriers experienced by surgeons. The third part determined the perspectives of surgeons in developing a proper TOC pathway. The questions were designed to require a “yes” or “no” answer. In the second part, surgeons were required to choose or list the factors that, based on their experience and perception, necessitated TOC. The third part involved responses that utilized a Likert Scale of Agreement (1—strongly disagree; 2—disagree; 3—neither agree nor disagree; 4—agree; 5—strongly agree) to evaluate surgeons’ responses to some statements and the perceived barriers to a smooth TOC. There was a segment for respondents to share their experiences and opinions in the final part of the questionnaire.

Statistical analysis was performed using the Statistical Package for the Social Sciences (SPSS), Version 26.0 (IBM Corp., Armonk, New York, USA). Descriptive analysis was performed. Categorical data were presented as frequency and percentage, while numerical data were presented as mean and standard deviation or median and interquartile range. Graphical representations of data (Likert scale) were included for better visualization.

The survey was disseminated via email to surgeons who fulfilled the inclusion criteria: being a qualified surgeon practicing in a university hospital or a Ministry of Health (MOH) hospital. Surgeons were identified through professional networks and directories to ensure eligibility. In total, 90 eligible surgeons were identified and received the survey. 57 responses were needed to achieve a 95% confidence level and an 8% margin of error based on Yamane’s formula. This study was approved by the Malaysia Research Ethics Committee (MREC), NMRR ID-23–02119-UPG (IIR). Informed consent was received from all respondents.

There was a total of 60 respondents who participated in the survey, with a response rate of 67%. The majority (67%) were general surgeons, followed by upper gastrointestinal surgeons (13%), hepatopancreaticobiliary and colorectal surgeons (8% each), and orthopedic and vascular surgeons (2% each). Among the respondents, there were 3 Heads of Services (HOS) in the Ministry of Health (MOH) and 13 Heads of Department in their respective institutions. 95% worked in government hospitals (42% in state, 30% in major specialist, 8% in minor specialist, and 15% in district hospitals), while 3% worked in university hospitals and 2% in private hospitals. The mean duration of experience as practicing surgeons among the respondents was 7 years. More than half (62%) had experience managing referrals from pediatric surgeons, and the majority had managed between 1 and 5 cases.
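As context for the minimum sample size cited in the methods above, and assuming the commonly used form of Yamane’s simplified sample-size formula (the paper names the formula but does not show the calculation), the target of 57 follows from the 90 eligible surgeons (N) and the 8% margin of error (e):

$$
n \;=\; \frac{N}{1 + N e^{2}} \;=\; \frac{90}{1 + 90 \times (0.08)^{2}} \;\approx\; 57
$$

This worked figure is an illustration only; the values N = 90 and e = 0.08 are taken directly from the study description above.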
28% of the respondents reported that the cases were co-managed by both adult and pediatric surgeons, 20% of the cases were solely managed by adult surgeons, 10% were managed by adult surgeons who had pediatric surgery experience, and the remaining 42% were solely managed by pediatric surgeons. Collaborations often involved co-managing the primary condition and surveillance for potential long-term complications.

From the respondents’ perspectives, 17–18 years was the most appropriate age for TOC. The majority of the surgeons (63.3%) agreed that TOC should be started when patients are older than 17 years old (Fig. ). All surgeons agreed that involvement of pediatric surgeons was essential for a successful and smooth TOC. Four-fifths of the respondents asserted that pediatric surgeons were needed to participate in patient care after the referral. The majority of the respondents (93%) agreed that TOC is beneficial for the continued care of these patients, and 97% of the respondents felt that TOC was needed in Malaysia.

Among all the important factors that necessitated the TOC, increasing age, manifestation of adult comorbidities, and independence to make decisions and care for oneself remained the 3 major factors based on surgeons’ previous experiences and perspectives. This was followed by hospital policies, stable disease processes, patient requests, and non-compliance. Marriage, pregnancy, and admission to college or university appeared to be lower priorities from the perspectives of respondents (Fig. ).

The degree of agreement among the respondents was further explored. A strong agreement was observed towards the need for family support to ensure a successful TOC (63% strongly agree, 22% agree). For patients with complex surgical conditions, 23% of the respondents believed that it was insufficient for the conditions to be managed solely by pediatric surgeons familiar with that field (Fig. ). Regarding the barriers to a smooth TOC from pediatric to adult care providers, 81.67% of the respondents agreed that the absence of proper guidelines on TOC was the major barrier (20% strongly agree, 46.67% agree). This was followed by the lack of adult care providers familiar with pediatric surgical conditions (78.33%) and a lack of TOC support staff (75%). Poor record management and hospital policies were also of concern (38.33% and 21.67% strongly agree, respectively) (Fig. ).

Other barriers to TOC suggested by respondents
In the open comment section, other barriers suggested by respondents included:
Fear of litigation or medicolegal implications for the management of complex conditions.
Patients’ psychological fear of transitioning to an unfamiliar adult care environment.
Reluctance of pediatric care providers to release care of patients to the adult care provider.
Reluctance of adult care providers to take over care from a pediatric care provider.
Limited knowledge of the latest advancements in disease management and hospital policies.
Lack of, or poor, communication between pediatric and adult surgeons.
Lack of understanding/knowledge among adult surgeons in the management of congenital diseases.
Lack of experienced adult care providers in managing patients with a background of congenital abnormalities.
Lack of case-to-case management.
Parents may benchmark the standard of care based on experience with the previous pediatric care provider.
Movement of specialists between hospitals.
The adult surgeon is one of the fundamental stakeholders in ensuring a smooth and successful TOC. The results of this study are important in reflecting the current TOC practices and challenges experienced by adult surgeons in Malaysia. The exploration of barriers was helpful in driving the direction of model development and implementation. Significantly, there was a clearly demonstrated need for a structured and algorithmic TOC model. The low number of referred cases managed by adult surgeons (1–5 cases in our study) revealed that TOC was not practiced widely among adult surgeons, which was consistent with the previously mentioned study that focused on pediatric surgeons in Malaysia. With the advancement of technologies and the improvement of healthcare quality, the number of patients who achieve longer lifespans is expected to expand in the coming years.

Pediatric surgery is a relatively new specialty in Malaysia. Starting from 2010, pediatric surgery is no longer a subspecialty of general surgery, i.e., medical graduates who aspire to specialize in pediatric surgery may enter the specialization program without the prerequisite of being a qualified adult general surgeon. This direct path may be favorable to most aspiring pediatric surgeons, but the concern that pediatric surgeons have less experience in handling adult surgical problems persists.

The key factors influencing TOC in our study included increasing age, adult comorbidities, and patient independence. Age remained the main factor that necessitates the TOC based on surgeons’ experiences and opinions. In one of the most commonly used TOC models, ‘Ready, Steady, Go’, the pediatric team plans the transition in stages. Young patients start visiting the transition clinics, which consist of both pediatric and adult teams, from 16 to 18 years old. This allows formal communication and collaboration between surgeons, and young patients might highlight their ongoing issues to the adult team. Some studies suggested starting the TOC in early adolescence. From the respondents’ perspectives in our study, 17–18 years was deemed the most appropriate age for the TOC. We acknowledged that the transition from pediatric to adult care is a gradual process that occurs at varying speeds depending on the conditions and individual circumstances. Our finding likely reflected the perspectives of the adult general surgeons who participated in the study.
However, evidence in the literature highlighted that transition planning should begin earlier, around the age of 12 years, to allow sufficient time for a gradual and effective transition process. This discrepancy might point to a gap in awareness among adult general surgeons about the challenges and nuances of transition, which are more commonly navigated by pediatricians and pediatric surgeons. In addition, this age recommendation by adult surgeons in our study might be due to local challenges, such as the shortage of experts in adolescent care and the lack of adolescent-friendly facilities.

The development of adult comorbidities was another paramount factor that demanded TOC based on our study. The literature revealed that some patients moved on to lead a disease-free life post-surgery, while some struggled with chronic symptoms, including physical and psychological disturbances, even after a successful surgery. Some disease-specific long-term morbidities included dysphagia in esophageal atresia, constipation in anorectal malformation and Hirschsprung’s disease, portal hypertension in biliary atresia, and pulmonary impairment in congenital diaphragmatic hernia. There were also case series reporting the development of carcinomas in patients with a history of esophageal atresia and anorectal malformations in their 40s and 30s, respectively. This was concerning, as regular follow-ups and disease surveillance might enable earlier detection and provide a better prognosis. Common adult comorbidities such as diabetes mellitus and hypertension were suggested to develop earlier in this patient population than in the general population.

Almost all surgeons agreed that the TOC was beneficial to patient care and needed in Malaysia. This implied the support and interest of adult surgeons in improving the care of pediatric patients with complex surgical conditions. In addition, all surgeons claimed that the involvement of pediatric surgeons would lead to a successful and smooth TOC. This aligned with the pediatric surgeons’ perspectives, in which 84% expressed the obligation to provide consultations and care even after patients had been transferred to adult care. In Malaysia, pediatric surgeons often continue to follow up with their patients into adulthood, as there is no formal handover process in place. Adopting and adapting TOC models from other countries might expedite the formation of an appropriate framework in Malaysia. The readiness of adolescents to navigate adult healthcare settings independently should be prioritized. The transitional care framework introduced by the North American Society for Paediatric Gastroenterology, Hepatology, and Nutrition (NASPGHAN), which eventually matched each adolescent with adult care providers, might be studied and modified. This model ensured individualized and comprehensive care to patients.

Nevertheless, there were multiple barriers to TOC. In this study, adult surgeons perceived poor record management as the major barrier to a smooth TOC, followed by the absence of a proper guideline, lack of adult care providers familiar with pediatric surgical conditions, hospital policies, lack of TOC support staff, lack of awareness among healthcare providers, and ongoing active surgical problems in patients. By acknowledging and recognizing these barriers, they could be tackled accordingly. There were some limitations to this study.
The sample size of this study was small; it might not comprehensively represent adult surgeons from different subspecialties and different institutions. In addition, the responses were not stratified based on the specific congenital abnormalities managed by adult surgeons. Without this stratification, the specific referral pathways currently practiced by surgeons could not be identified and compared. Another limitation was that the questionnaire was not formally validated or assessed for internal reliability and consistency prior to its use. The robustness of the collected data and the generalizability of the findings might therefore be reduced.

Following this study, which highlighted the need for a structured transition of care model in Malaysia, we recommend initiating deeper discussions between adult and pediatric surgeons. These discussions should aim to develop models such as combined clinics and multidisciplinary approaches to ensure a seamless TOC. A multidisciplinary team (MDT) might involve a core team (i.e., pediatric surgeons, adult surgeons, nursing, and imaging teams) and an extended team (i.e., anesthesiologist, psychologist/counselor, social workers, and nutritionist). This multidisciplinary approach serves to promote collaboration, create awareness among involved parties, and bridge the gaps. Several issues have to be addressed, including the availability of resources (i.e., trained healthcare personnel, medical equipment, and the country’s health expenditure), the acknowledgement of the need for TOC by each discipline involved, and the quantification of that need. The ideal model should be patient-centered, case-specific, and collaborative, ensuring that the unique needs of each patient are fulfilled.

This study revealed that there is no predominant TOC guideline used in Malaysia, and many cases are only seen when emergency interventions are required. Most surgeons identified 17–18 years as the appropriate age for TOC, emphasizing the importance of individualized and gradual transition processes. Key factors driving TOC include age, adult comorbidities, and patient independence, while the involvement of pediatric surgeons is critical for successful TOC. Our study acknowledged the urgent need for a TOC framework to be implemented in Malaysia. Collaborative efforts among pediatric and adult surgeons are important to bridge the divide and ensure effective transitions. The major barriers must be addressed, including the absence of guidelines, the lack of experienced adult care providers, and poor record management. Adopting multidisciplinary approaches and tailoring TOC models to the local context would be constructive and practical. Future formal discussions and meetings should be organized between the pediatric and adult teams to review the concept and approach of TOC in Malaysia. The findings of this study, along with previously published studies focusing on pediatric surgeons’ perspectives, provide valuable insights for developing patient-centered, collaborative, and comprehensive TOC guidelines.

Below is the link to the electronic supplementary material. Supplementary file1 (DOCX 19 KB)
Child standardized patients in pediatric OSCEs: a feasibility study for otoscope examination among undergraduate students in Rwanda
82448fdd-fb1f-4884-9538-e00b2dffa011
11684121
Pediatrics[mh]
The accuracy and thoroughness of otoscope examinations are critical for diagnosing and managing pediatric ear conditions. Otoscope examination is a fundamental clinical skill that medical students must master, particularly in the context of pediatric care, where ear pathologies are common. Pediatric ear conditions, such as otitis media, external otitis, and cerumen impaction, are prevalent and can significantly affect a child’s health and quality of life. Early and accurate diagnosis through otoscope examination is essential for effective treatment and management. Physicians should also be proficient in performing otoscopy and conducting ear examinations during any physical exam. This skill is especially critical in pediatric patients, as otitis media is a common cause of fever, a frequent complaint in this age group. However, traditional training methods, including assessment, often rely on adult standardized patients or simulated models and may not provide sufficient exposure for evaluating students’ competence with pediatric patients. These limitations can potentially compromise the training and assessment of competency of medical students when they encounter real pediatric patients in clinical practice.

In our medical school, the pediatric and ENT curriculum includes comprehensive teaching of ear examination, clinical reasoning, and communication in pediatrics through a combination of training models, supervised clinical examinations of real patients, and simulation-based OSCE sessions during formative assessments. Students have the opportunity to practice essential skills, such as ear examinations, in both simulated and real-patient settings under supervision. Although we were confident that otoscopy skills were being adequately taught during the pediatric and ENT clerkships, we wanted to assess students’ ability to integrate and apply their knowledge and skills to patients similar to those they would encounter in clinical practice. Additionally, we sought to evaluate the feasibility and effectiveness of using pediatric standardized patients (SPs) to assess the approach to a patient presenting with ear pain.

Medical education has increasingly recognized the importance of standardized patients (SPs) in clinical skills training. SPs offer a realistic and controlled environment for students to practice and refine their clinical skills. Studies have shown that SPs enhance clinical skills, diagnostic accuracy, and communication abilities. However, the use of adult SPs may not adequately prepare students for pediatric examinations due to anatomical and behavioral differences between adults and children. Incorporating children as SPs in medical education presents unique challenges. Ensuring the safety, comfort, and ethical treatment of child SPs is paramount. Additionally, training children to consistently portray clinical scenarios requires specialized approaches. Despite these challenges, the potential benefits of using child SPs include more realistic training experiences for students, improved diagnostic accuracy, and better preparation for real-world clinical practice. Utilizing children as SPs can bridge the gap in pediatric clinical training. By practicing on child SPs, students can gain experience in handling pediatric patients, recognizing pediatric-specific symptoms and signs, and communicating effectively with both children and their guardians. This hands-on experience is invaluable for building confidence and competence in pediatric care.
The efficacy of SPs in medical education has been well studied. It has been demonstrated that SPs improve the clinical skills and diagnostic accuracy of students. When trained well, SPs were found to improve the exams’ reliability and validity. Using SPs has become an effective method for evaluating clinical competence in Objective Structured Clinical Examinations (OSCE). Otoscopic skills assessment of students on manikins and video otoscopes has been studied. However, research specifically focusing on the use of child SPs in otoscope examination training is limited. This study aims to fill this gap by evaluating the feasibility and effectiveness of using pediatric SPs in OSCEs for otoscope examination.

The feasibility measured in this study primarily focused on multiple dimensions:
Technical Feasibility - Evaluating whether children could effectively serve as standardized patients for otoscopic examinations, including their ability to simulate conditions reliably and tolerate the procedure without discomfort.
Logistical Feasibility - Assessing the practicality of recruiting and training children and their guardians, ensuring standardization across cases, and managing their participation in a controlled assessment environment.
Educational Feasibility - Determining whether using children as standardized patients provides meaningful, realistic training for students and aligns with the educational goals of preparing them for pediatric practice.

Hence, we evaluated the competency of final-year medical students in performing otoscopy using a handheld otoscope, the reliability of children’s performance as SPs, and their guardians’ satisfaction with their children participating in OSCEs as child SPs. We also compared students’ self-assessments to their actual performance, as self-assessment data can provide educators with valuable insights into students’ self-perceptions, revealing trends and guiding feedback strategies.

Study design and settings
This was a descriptive cross-sectional study involving quantitative and qualitative methods, conducted at the University of Global Health Equity (UGHE) campus during the final MBBS qualifying exit OSCE examinations. The exit qualifying clinical exam is designed as an integrative clinical assessment, combining OSCE stations that evaluate skills across multiple specialties. The otoscopic examination station was structured to assess competencies relevant to both Pediatrics and ENT, reflecting the crossover skills required in real-world clinical practice when managing pediatric patients with ear-related conditions. This dual focus highlights the importance of interdisciplinary knowledge and its application in clinical scenarios. The exam included a station where students were presented with a clinical vignette describing a child with ear pain and fever and were instructed to perform the most relevant part of the physical exam. Students were evaluated not only for their technical ability but also for their communication skills, attitude, and behavior, providing a comprehensive assessment of their clinical competencies.

Study participants
This study included all (thirty) final-year medical students, and the guardians and children who participated as SPs in the final qualifying exit examination.
All students, guardians, and children who participated in the OSCE were eligible and were enrolled in the study after providing consent.

Sample size
The sample size for the quantitative analysis consisted of all 30 medical students who participated in the pediatric OSCE component of the exit examinations. For the qualitative analysis, the sample included five SP children and their respective guardians who participated in the OSCE exams. These participants were selected for focus group discussions (FGDs) to gather in-depth insights into their experiences and perspectives.

Participant recruitment and data collection
Participants were recruited, and data were collected, in a two-phase approach.

Recruitment of guardians and SP children. Prior to the examination date, guardians of children aged five to eight years were approached with a detailed explanation of the study’s purpose, procedure, and potential risks and benefits. Guardians who consented to participate allowed their children to serve as SPs during the pediatric OSCE. These SPs and their guardians were then invited to participate in FGDs following the exam. Standardized patients (SPs), consisting of children aged five to eight years, and their guardians were recruited from the community surrounding the university campus. These SPs and their guardians were brought to the campus for a screening and training session prior to the examination day. During the session, two pediatricians and one general practitioner, including two who are fluent in the local language, provided the children and their guardians with a detailed explanation of their roles as SPs in the OSCE and emphasized their right to stop the exam at any point. The pediatricians then performed an otoscopic examination on each child’s ears to ensure their suitability for the role. For each child, each ear was examined one after the other following standard techniques. The children were also taught, through role play, how to express themselves if there was any discomfort. Of the 6 SPs examined, 5 had normal otoscopic findings and were selected to participate in the exam. One child was found to have otitis media with effusion and consequently was not selected for the examination. This child was referred to the local clinic for further evaluation and treatment. For guardians, training included how to respond to student inquiries, mirroring typical parental concerns and emotional responses that students would encounter in real scenarios. This helped create a realistic, emotionally resonant interaction, aligning the assessment closely with real-world pediatric care.

Recruitment of medical students. Following the completion of the OSCE, all 30 medical students were approached, received a comprehensive explanation of the study, and were invited to participate. Each student provided informed consent before participating. The students were then asked to complete a structured questionnaire designed to assess their satisfaction with the examination process and their perceived self-efficacy in pediatric otoscopy in the context of interacting with child SPs.

Quantitative method: Structured questionnaires were distributed to the students after they completed their final examinations. The questionnaire assessed their satisfaction level with the use of child SPs in the OSCE on a five-point Likert scale. It also assessed their perceived self-efficacy, defined as one’s expectations of his/her ability to perform various tasks successfully and achieve desired goals.
This was assessed by asking about their confidence and perceived ability to manage pediatric patients, and was also rated on a five-point Likert scale with options from “Very Low” to “Very High”. Finally, the students' performance at the otoscope station of the OSCE was assessed using a standardized checklist. This checklist evaluated technical ability, communication skills, confidence, and diagnostic accuracy. Each of these aspects was rated on a five-point Likert scale (Very Good, Good, Average/Fair, Poor, Very Poor). The performance data were extracted anonymously, de-identified, and compiled for analysis.

Qualitative method: The qualitative component of the study involved focus group discussions (FGDs) with the guardians and child SPs who participated in the OSCE. Separate FGDs were conducted with the guardians and the SP children. The discussions with the guardians focused on their satisfaction with the examination process and their assessment of the medical students' performance in interacting with their children. The children's FGDs were designed to be age-appropriate and aimed at understanding their comfort levels and experiences during the OSCE. The sessions were audio-recorded and transcribed verbatim afterward. The transcripts were analyzed thematically to identify recurring themes related to satisfaction, the perceived quality of student performance, and any challenges encountered during the examination process.

Data analysis
For the quantitative part of the study, data were entered and analyzed using SPSS version 29. The demographic variable (age) was summarized using the mean with standard deviation. A frequency table was used to summarize the students' responses on perceived self-efficacy and their recommendations on using children as SPs. Furthermore, the Likert scale responses were graded from 5 to 1 as follows [Strongly agree = 5; Agree = 4; Neutral = 3; Disagree = 2; Strongly disagree = 1] and were used to derive the mean response for students' self-efficacy, while the extracted examination raw score, converted to a percentage, was used for the students' actual performance. We compared the scores from the OSCE station with students' self-efficacy using a paired t-test and explored the association between the two variables using a Pearson correlation test. For the qualitative data analysis, the focus group discussions with guardians and child SPs were recorded and transcribed, then coded to identify recurrent themes and patterns. Key themes related to satisfaction, comfort, understanding, and preparation for the examination were extracted and reported.
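To illustrate the quantitative comparison described above, the following sketch shows how Likert-coded self-efficacy scores and percentage OSCE scores could be compared with a paired t-test and a Pearson correlation. The study itself used SPSS version 29; the Python code and all numeric values below are purely illustrative assumptions, not the study data.

```python
# Minimal illustrative sketch of the analysis described above (placeholder data, not the study's).
import numpy as np
from scipy import stats

# Likert grading used in the study (shown for reference; individual item responses are not reproduced here).
likert = {"Strongly agree": 5, "Agree": 4, "Neutral": 3, "Disagree": 2, "Strongly disagree": 1}

# Hypothetical per-student values: mean self-efficacy rating expressed as a percentage,
# and the OSCE station raw score converted to a percentage.
self_efficacy_pct = np.array([72.0, 68.0, 80.0, 75.0, 64.0, 76.0])
osce_score_pct = np.array([82.5, 77.5, 90.0, 85.0, 75.0, 80.0])

# Paired t-test: does actual performance differ from perceived self-efficacy?
t_stat, p_paired = stats.ttest_rel(osce_score_pct, self_efficacy_pct)

# Pearson correlation: are self-efficacy and performance associated?
r, p_corr = stats.pearsonr(self_efficacy_pct, osce_score_pct)

print(f"mean difference = {(osce_score_pct - self_efficacy_pct).mean():.2f} percentage points")
print(f"paired t-test: t = {t_stat:.2f}, p = {p_paired:.3f}")
print(f"Pearson correlation: r = {r:.3f}, p = {p_corr:.3f}")
```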
General characteristics: students
The mean age of the students was 23.7 (SD 0.8) years, with a minimum of 22 and a maximum of 25 years, and 63.3% were female.

Students' evaluation of the station on the ear examination of a child (standardized patient)
Of the 30 students that took part in this study, 29 (96.7%) reported that they thought they performed well at the station, and all were able to visualize the auditory canal and ear drum. Twenty-six (86.7%) were able to identify whether the ear findings were normal or abnormal.
Also, all the students agreed that the children used as standardized patients were very cooperative, 80.0% agreed that they were ideal for the station, and 83.3% would recommend using children as SPs for pediatric examinations (Table ).

Students' self-efficacy and performance at the OSCE station
The mean (standard deviation) of the students' perceived self-efficacy (one's expectations of one's ability to perform various tasks successfully) was 72.5 (6.8)%, with a minimum of 52.0% and a maximum of 80.0%, while the mean performance at the station was 81.67 (5.7)%, with a minimum of 72.5% and a maximum of 92.5%. The mean performance (examination score) at the OSCE station was slightly higher than the students' perceived self-efficacy (mean difference = 8.717, p < 0.001). There was a good positive correlation between students' self-efficacy and actual performance (Pearson correlation coefficient, r = 0.493, p = 0.006).

Students' perceptions of using children as standardized patients
Of the 30 students, 11 gave additional comments, which were mostly positive, with 8 of the 11 agreeing that having children as SPs is either good or great and that they were very cooperative. One candidate suggested a need for adequate training of the child SPs, as the child SP could not indicate which ear was to be examined. At the same time, another student recommended frequent changes of the children.

Themes identified
Overall satisfaction: All parents and children expressed satisfaction with their experience. Parents appreciated the opportunity for their children to participate, viewing it as beneficial. They believed their participation had contributed to the development of future doctors. Children generally found the examination experience positive and likened it to a normal or enjoyable activity.
Confidence and Performance of Medical Students: Parents' Perspective: While parents were generally impressed by the students' performance, they identified a lack of confidence in some students, despite their evident knowledge and skills. Parents suggested that building more confidence could improve the students' performance in future exams. Parent No 3: “I was impressed by their knowledge, but I think they could perform even better if they believed in themselves more.” Children's Perspective: The children did not express concerns about the students' performance but did note that in some instances, explanations were not provided before the examination. Child No 2: “The check-up was fine, but sometimes they didn't tell me what they were going to do before they did it.”
Communication: Parents' Perspective: Parents were pleased with the way students handled the children, but there were slight concerns about a need for better communication and consistency in practices. Parent No 1: “Overall, the examination was good, but sometimes it felt like there wasn't a clear explanation of what the students were doing.” Children's Perspective: Some children noted that not all students provided clear explanations before proceeding with the examination. While most children felt reassured, one child did experience some discomfort.
Suggestions for Improvement: Parents: Focus on building student confidence to improve their interaction and examination process. Children: Emphasized that students should not be afraid and should ensure clear communication before performing any procedures.
Statement: The qualitative analysis of the interviews with parents and children who took part in an otoscopic examination performed by medical students during the final-year OSCE reveals overall satisfaction with the experience. Parents appreciated the practical experience, recognizing the importance of such training for future healthcare professionals. However, a recurrent theme was the students' need for greater confidence during the exams, as noted by both parents and children. While communication was generally adequate, with most students explaining the procedures, there were isolated instances where this was not consistent. Moving forward, teaching should emphasize repeated supervised practice of otoscopy until students gain the necessary confidence, as well as the importance of communication skills before and during otoscopy in children. The findings also highlighted that some of the students need further practice to gain more confidence and need to be taught about the importance of communication. Also, worthy of note is that none of the children who took part in this study experienced any form of injury during the examination.
Standardized patients are an important part of clinical teaching and examination in undergraduate medical training, but there are limited data on the use of children as standardized patients, especially in sub-Saharan Africa. The findings from this study highlight the potential benefits and challenges associated with using child standardized patients (SPs) in Objective Structured Clinical Examinations (OSCEs). Our study shows that there is a correlation between the competency of medical students in performing otoscopy on child SPs and their perceived self-efficacy. This is particularly important in pediatric care, where the anatomical and behavioral differences between children and adults require specialized training. This observation may not be unexpected, as the students' hands-on experience with children during their training allowed them to develop the necessary skills to handle pediatric patients more effectively, improving diagnostic accuracy and confidence.
Among the challenges identified with pediatric SPs are the consistency and reliability of child SPs in portraying clinical scenarios. While adult SPs are typically well-trained to simulate medical conditions consistently, the variability in children's behavior and responses can introduce inconsistencies in the examination process. However, the study's results indicate that with proper training and support, child SPs can reliably simulate pediatric conditions, offering a valuable educational tool for training and assessing medical students. The reliability of child SPs for otoscopic examination observed in this study supports the role of OSCEs in the pediatric training of undergraduate students, as observed by Paul and colleagues in the USA. The study also explored the satisfaction levels of students, child SPs, and their guardians. Overall, students reported high levels of satisfaction with the use of child SPs, noting that the experience better prepared them for real-world clinical practice. Guardians and child SPs also expressed positive feedback, though a few concerns were raised regarding the potential discomfort experienced by the children. Ensuring the safety and comfort of child SPs is paramount, and the study's methodology included strategies such as thorough explanations, role-playing, and regular breaks to mitigate these risks. The ethical implications of using children as SPs were carefully considered in this study. The safety, comfort, and psychological well-being of the child SPs were prioritized, with measures in place to address any potential discomfort or anxiety. While there are inherent challenges in involving children in medical education, the study suggests that these can be effectively managed with appropriate protocols and support. This study underscores the potential benefits and challenges of incorporating children as standardized patients (SPs) in clinical training and OSCEs for medical students, particularly in the context of pediatric otoscopy. Our findings suggest that the use of child SPs enhances medical students' skills, confidence, and preparedness for real-world pediatric care, demonstrating a positive correlation between hands-on experience and perceived self-efficacy. Despite concerns about variability in children's behavior and the potential discomfort for child SPs, with adequate training, support, and ethical safeguards, child SPs can reliably simulate clinical scenarios. This contributes significantly to the effectiveness of pediatric training, enriching the overall educational experience while ensuring the safety and well-being of the children involved.

Limitations and future research
Despite the positive outcomes, the study has some limitations. The sample size was relatively small, and the findings may not be generalizable to all medical education settings in sub-Saharan Africa. Additionally, the study primarily focused on the feasibility and initial outcomes of using child SPs, leaving room for further research to explore long-term impacts on clinical competency and patient care. Future research should consider larger, more diverse cohorts and explore the integration of child SPs across different clinical skills and specialties. Longitudinal studies could provide insights into how early exposure to pediatric patients through SPs influences the development of clinical skills and confidence in pediatric care over time.
The study demonstrated that incorporating child SPs in OSCEs is feasible and can enhance the realism and educational value of pediatric otoscopy training, providing medical students with more accurate and relevant experiences. The children and their guardians were generally positive about their roles as child SPs. With careful planning and ethical considerations, the use of child SPs could play a crucial role in enhancing pediatric clinical education and ultimately improving the quality of care provided to pediatric patients.
Force Degradation of Intermaxillary Latex Elastics: Comparative In Vitro and In Vivo Study
c80ddf59-ea06-4466-b6f5-ff487934b87c
11783226
Dentistry[mh]
Introduction
Intermaxillary elastics (IE) are an integral part of orthodontic treatment with fixed appliances or clear aligners (Topouzelis and Palaska ; Thurzo et al. ). While inserted in the patient's mouth, they generate a force of designated magnitude and direction. Properties of IE should include flexibility, low cost, and a capacity for returning to their original form. Orthodontic elastics are made of natural latex or a synthetic elastomer based on polyurethane. As synthetic non‐latex IE are not the focus of this research, they will not be mentioned any further. Natural latex is chemically cis‐1,4‐polyisoprene, an unsaturated hydrocarbon originating from the Brazilian rubber tree ( Hevea brasiliensis ), whose structure and molecular weight may vary depending on the plant species, region, or season (Wong ). The flexibility and strength are improved during a process called vulcanization—cross‐linking occurs in the presence of sulfur and other compounds (Bokobza ; Perrella and Gaspari ). As natural latex is very sensitive to ozone, stabilizers, antioxidants, and anti‐ozone agents are further added during the production of IE to give the latex the desired properties (Sambataro et al. ). During the manufacture of the IE, steel rods of varying widths are dipped into the vat of the material—the more times the dipping occurs, the thicker the latex layer and therefore the resulting latex tubing will be. The tubes are then cut to the desired width, resulting in elastic rings of defined diameter and thickness. Depending on the width and thickness of the latex layer, the IE are then divided by manufacturers into various sizes and into “light,” “medium,” and “heavy” categories according to the force exerted (Sambataro et al. ). It is a well‐known fact that the amount of force exerted by the elastics decreases over time, as every elastomeric material undergoes creep and stress relaxation (Santos et al. ). Plastic deformation of the polymer under load unravels cross‐links, which, together with chemical degradation, leads to deterioration of the mechanical properties. Furthermore, if large, excessive forces are used, the chains may slip on each other, resulting in the permanent deformation of the material. Natural latex is very sensitive to ozone, solar radiation, ultraviolet radiation, or free radical‐producing systems—unsaturated double bonds are broken at the molecular level and the polymer chain is weakened. After the individual pieces of IE are produced, the ozone‐permeable surface is increased and bond breakage occurs more rapidly—latex elastics should not be used after the expiry date precisely because of their reduced durability (Wong ). The force degradation of orthodontic latex IE is highest after initial elongation and then slows gradually during the following hours. However, the rate of the force decrease differs widely between studies—the decrease in force within the first 24 h ranges from 14.2% to 90.2% (Yang et al. ; Oliveira et al. ). The rate of force degradation differs between manufacturers and between different sizes and strengths of the IE (Oliveira et al. ; Dubovská et al. ; Wang et al. ; Qodcieh et al. ), as well as with the conditions in which the IE are stored between the measurements (Kardach et al. ). The degradation of force can also be influenced by the design of the study, although few studies have examined the force degradation of elastics while worn by the patient (Yang et al. ; Wang et al. ; Qodcieh et al. ; Pithon et al. ).
This study aimed to compare the force degradation of one specific type of latex IE in vitro in a controlled humid environment and in vivo in patients' mouths stretched to the precise diameter. Methods In vivo and in vitro measurements of force degradations of IE were performed. Based on our previous research, the 3/16″ medium Dentaurum (Dentaurum, Ispringen, Germany) IE were selected for investigation as they had the closest initial force to the declared force of 1.255 N when prestretched and stretched to three times diameter (Wang et al. ). According to the Safety Data Sheet of Dentaurum GmbH & Co. KG, these elastics are made of Natural rubber (Caoutchouc) together with Sulfur, Zinc Oxide, Age Resistor, and Vulcanization Accelerator (Dentaurum GmbH & Co. , ). Ethical approval for the study was obtained from the local ethics committee. The study was conducted following the Declaration of Helsinki and current local legal regulations. Two hundred pieces of elastics 3/16″ medium from Dentaurum were analyzed from five different batches of packaging 20 pieces each in vivo and in vitro. A total of 1000 measurements were made. All elastics were within their use‐by date, delivered by the manufacturer no later than 2 weeks before the measurement, manufactured no later than 2 months before the measurement, and, after being received from the manufacturer, stored in sealed plastic containers in a dark environment. All elastics were subjected to “prestretching” immediately before time 0 measurement— they were stretched to three times the original diameter, according to the recommendations by Proffit and Liu (Proffit et al. ; Liu, Wataha, and Craig ). For an in vivo examination, 10 volunteers were recruited and written informed consent was obtained from all the participants. Every participant acquired 10 IE 3/16″ medium from Dentaurum (Dentaurum, Ispringen, Germany) and wore one pair for 2 days while coming in for the measurements at a given interval. None of them were undergoing orthodontic treatment, all were Angle Class 1 in the first permanent molars. Beforehand dental scans were taken in all of them using the 3Shape scanner, and virtual models were created (Figure ). On the virtual models, the distance from the upper canine to the lower dental arch was measured to achieve exact distances equaling three times the diameter of the 3/16″ elastics—that is, 14.4 mm. An aligner template made on a 3D‐printed model was used to accurately glue orthodontic buttons. The buttons were glued in the patient's mouth with an orthodontic adhesive in a standard manner according to the premade template and the distance between the two buttons was re‐measured (Figure ). Passive stabilizing aligners were fabricated on the 3D‐printed model and inserted into the participant's mouth (Figure ). The study participants were instructed on how to properly remove and insert the aligners and IE. They were also advised to remove both before eating or drinking and keep their appliance removals to a minimum. To maintain the study's accuracy, the participants were asked to continue their normal eating habits. After prestretching, each IE force was measured one by one at time 0 before the first insertion into the participant's mouth with a force meter from the company “ScienceCube” set on the exact distance, calibrated before each set of measurements. The force meter was connected to the portable data logger “LabQuest3” from the Vernier company (Figure ). 
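As a quick worked check of the stretch distance used above (this arithmetic is our illustration and is not spelled out in the paper), the 14.4 mm separation between attachment points corresponds to three times a nominal diameter of about 4.8 mm commonly quoted for 3/16″ elastics; the exact imperial conversion of 3/16 in gives a very similar value:

```latex
3 \times 4.8\,\mathrm{mm} = 14.4\,\mathrm{mm},
\qquad
3 \times \tfrac{3}{16}\,\mathrm{in} = 3 \times 4.76\,\mathrm{mm} \approx 14.3\,\mathrm{mm}.
```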
The right elastic was always inserted first, followed by the left elastic 10 min afterward. Elastics were measured five times: at time 0 and at 2, 8, 24, and 48 h. Patients came 10 min before the time limit, and elastics were measured directly after removal from the oral cavity and were inserted right back. In total, each patient wore 10 elastics (5 pairs, each pair worn for 48 h). After the measurements were completed, the buttons were removed from the participant's mouth and residual orthodontic adhesive was cleared in a standard manner. For the in vitro examination, 100 IE 3/16″ medium from Dentaurum (Dentaurum, Ispringen, Germany) were measured five times: at time 0 and at 2, 8, 24, and 48 h. To standardize the stretching conditions in vitro between measurements, a 3D model of a board with spurs was created in the “Rhinoceros 3D” program and subsequently printed using the “Prusa i3 MKS+” printer. The distance between the spurs corresponded to three times the diameter of the Dentaurum IE, i.e. 14.4 mm. A batch of five elastics was subjected to the experiment each time. Simulation of the oral environment in the laboratory was made possible using the Ivoclar Vivadent Cultura incubator at a constant temperature of 37°C and a controlled humid environment, where the stretched elastics were stored between the measurements (Figure ). The conditions in the incubator were continuously monitored using a precision thermometer and a humidity sensor. The force of the elastics was measured individually with a force meter from the company “ScienceCube” set on the exact distance, calibrated before each set of measurements. The force meter was connected to the portable data logger “LabQuest3” from the Vernier company. The measurements were repeated after 2, 8, 24, and 48 h, and results were recorded in newtons. Statistical processing of the collected data from both examinations was performed using the statistical software IBM SPSS Statistics for Windows, Version 23.0 (IBM Corp., Armonk, NY). The collected data were analyzed by Shapiro–Wilk normality tests, which showed a normal force distribution. Further statistical processing was performed using parametric methods, which were validated with nonparametric tests. The use of parametric methods is appropriate given the relatively large range of samples. Data were presented using means and standard deviations. A comparison of two independent sets was performed using a two-sample t-test.

Results
Shapiro–Wilk normality tests showed that the distribution of forces is normal for most parameters. Parametric methods were used for processing and validated by non-parametric tests. The use of parametric methods is appropriate due to the relatively large range of samples. A total of 1000 measurements were made across the two groups, in vivo and in vitro. Data are presented using means and standard deviations in Table . A comparison of two independent sets was performed using a two-sample t-test. All tests were performed at a significance level of 0.05; differences were considered statistically significant if the p value was less than 0.05. At time 0, there was no statistically significant difference between the in vivo and in vitro IE. At all other times, the force was statistically significantly higher in the in vitro mode. At 2 h, the in vitro force was 1.08 N and the in vivo force was 1.03 N. At 8 h, the difference in force was 0.05 N; the in vitro force was 1 N, and the in vivo force was 0.95 N. At 24 h, the difference in force was 0.07 N and at 48 h it was 0.09 N.
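A minimal sketch of how the residual forces and percent force degradation reported here could be derived from raw readings, with a two-sample t-test of in vitro versus in vivo force at each time point, follows. The study performed this analysis in SPSS; the Python code and all force values below are invented placeholders, not the measured data.

```python
# Illustrative reconstruction of the force-degradation comparison (placeholder data, not the study's).
import numpy as np
from scipy import stats

times_h = [0, 2, 8, 24, 48]

# Force readings in newtons: one row per elastic, one column per measurement time point.
in_vivo = np.array([
    [1.30, 1.03, 0.95, 0.85, 0.80],
    [1.28, 1.02, 0.94, 0.84, 0.79],
])
in_vitro = np.array([
    [1.29, 1.08, 1.00, 0.93, 0.90],
    [1.30, 1.09, 1.01, 0.92, 0.89],
])

# Percent force degradation of each elastic relative to its own time-0 force.
deg_vivo = 100 * (1 - in_vivo / in_vivo[:, [0]])
deg_vitro = 100 * (1 - in_vitro / in_vitro[:, [0]])

# Two-sample t-test of in vitro vs. in vivo force at each time point (alpha = 0.05).
for i, t in enumerate(times_h):
    t_stat, p = stats.ttest_ind(in_vitro[:, i], in_vivo[:, i])
    print(f"{t:>2} h: in vitro {in_vitro[:, i].mean():.2f} N, in vivo {in_vivo[:, i].mean():.2f} N, "
          f"degradation {deg_vitro[:, i].mean():.1f}% vs {deg_vivo[:, i].mean():.1f}%, p = {p:.3f}")
```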
With a longer time, the residual force difference was higher for IE in vitro than for elastics used in vivo. The force degradation was significantly higher for IE in vivo. In the first 2 h, there was a 20.58% decrease in force in vivo and only 16.38% in vitro, the force degradation being the greatest. The decrease in force averaged over this time interval was 10.29%/h for elastics in vivo and 8.19%/h for elastics in vitro. Over the next 8 h, the force decreased by 26.78% in vivo and 22.83% in vitro. After 24 h, there was a decrease in force of 34.81% for in vivo and 28.32% for in vitro. After 48 h, there was a decrease of 38.56% for in vivo and 30.78% for in vitro. The force degradation after the first 2 h was significantly reduced—below 1.08%/h. At time 0 h, the differences were not statistically significant. At all other times, the force is statistically significantly higher in the in vitro setting (Figure ). The measurement error was checked using Dahlberg's formula ( D ). Since the calculation of the Dahlberg error does not account for the magnitude of the measured values, it is more appropriate to use the relative Dahlberg error (RDE), which is obtained by dividing the Dahlberg error by the average of the corresponding measured values, to compare the accuracy of the measurements for individual parameters. After multiplying by 100, it is given as a percentage. The degree of absolute agreement between the first and control measurements was verified by calculating the intraclass correlation coefficient (ICC). The occurrence of systematic error was verified by paired t ‐test. The RDE was 2.59% for in vivo measurements and 1.29% for in vitro measurements. The higher measurement error for the in vivo study setting is probably due to the more complex measurement after extraction of the IE from the patient's mouth than from the 3D model and therefore risks a longer delay. The ICC values were 0.980 in vivo and 0.993 in vitro, representing a perfect match (Table ). If the ICC values are greater than 0.75 and the RDE is less than 8%, the measurement is considered sufficiently accurate (Proffit et al. ). The paired t ‐test revealed no systematic error ( p values were greater than 0.05). Bland–Altman plots were used to graphically represent the error (Figure ). These graphs are used to reveal the possible dependence of measurement errors on the magnitude of the measured value. Therefore, we can conclude that our measurements are sufficiently accurate, the errors are random and there are no significant trends. In all participants but one, the distance three times the diameter of the IE extended from the upper canine to the lower second premolar; in one participant, the distance corresponded to the standard insertion points—the upper canine and the lower first molar. Various studies have been conducted on the force degradation of IE. However, there is inconsistency in the results due to the use of different types of elastics and experimental methods, which makes it difficult to compare the findings (Young ). In the present study, we used elastics from one manufacturer, Dentaurum (Dentaurum, Ispringen, Germany) size of 3/16″, and a strength of “medium.” The physical properties of these elastics were examined in two environments: in vivo and in vitro, when stretched three times their diameter. These elastics were chosen because, in preliminary in vitro study their initial force was closest to the value declared by the manufacturer (Dubovská et al. ). 
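For reference, the Dahlberg error (D) and relative Dahlberg error (RDE) used in the reliability analysis above are conventionally defined as follows (a standard formulation; the paper does not spell the formulas out), where d_i is the difference between the first and repeated measurement of the i-th elastic, n is the number of double measurements, and x̄ is the mean of the corresponding measured values:

```latex
D = \sqrt{\frac{\sum_{i=1}^{n} d_i^{2}}{2n}},
\qquad
\mathrm{RDE} = \frac{D}{\bar{x}} \times 100\%.
```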
The present study found that the greatest force decrease in the strength of IE was observed in the first 2 h of use, which confirms the results of previous studies (Liu, Wataha, and Craig ; Kanchana and Godfrey ; Barrie and Spence ). However, some researchers have reported the greatest force decrease after 24 h, suggesting that the elastics should be replaced once a day in clinical practice (Oliveira et al. ; Pithon et al. ). Our findings contradict this recommendation, as we observed that the force decrease between 0 and 24 h was around one‐third of their initial force, and the force decrease was the lowest between 24 and 48 h in the monitored period. This is consistent with other studies that found that the force remained relatively constant for a few days after the 24 h force degradation (Wang et al. ; Beattie and Monaghan ; Lopes Nitrini et al. ; Gangurde, Hazarey, and Vadgaonkar ; Andreasen and Bishara ). Liu, Wataha, and Craig in their research concluded that the force of IE remained almost stable after 1 day because the structural changes in the elastomer caused by repeated stretching were not cumulative (Liu, Wataha, and Craig ). While the force of IE decreases at a slow rate after 1 day, mechanical damage to the elastics occurs after several hours of use in a patient's mouth (Wang et al. ; Gurdán et al. ). Therefore, regular replacement of elastics is essential to maintain their initial force. Some authors recommend replacing them every 6 h, while others suggest intervals of 12 h (Castroflorio et al. ; Gioka et al. ). Our results suggest that replacing the IE every 8 h might be advisable to maintain the level of force between 75% and 100% all the time. It is important to consider various factors when evaluating the force decrease of IE, including the time aspect, environmental factors such as salivary alkalinity, temperature changes, stretching repetition and intensity (Liu, Wataha, and Craig ; Beattie and Monaghan ; Kanchana and Godfrey ; Hwang and Cha ). Storing intermaxillary elastic packages in an unsuitable environment could alter the elastomer structure and affect their properties (Wong ). It is important to use, for study purposes, IE within the expiry date and properly store and utilize IE from different batches, which was not always followed in previous studies. This is consistent with the general characteristics of elastomer degradation, where temperature, fluids, chemicals, and UV radiation can degrade the elastomeric structure. Saliva and bacteria can infiltrate the molecular structures on the latex rubber surface, resulting in discoloration and expansion (Kanchana and Godfrey ; Brantley et al. ). Therefore, the medium in which elastics are tested and the study design should greatly affect the force decrease. Only two studies from the 1980s did not find any differences in force decrease for different environments (Andreasen and Bishara ). Later studies found that force degradation was higher in distilled water after 8 and 24 h than in dry air conditions (Qodcieh et al. ; López et al. ). Wong stated that greater force degradation was observed in wet conditions than in dry conditions of the same temperature. Many authors have used artificial saliva as a suitable medium, with the force degradation being around 25% within 24 h (Yang et al. ; Wang et al. ; Kardach et al. ). However, the composition of artificial saliva solutions varies throughout the studies, which may account for the differences in the results while using this medium. 
Interestingly, in the study of Oliveira et al. , where one batch of stretched elastics was stored at room temperature in artificial saliva, degradation of slightly less than 10% in 24 h compared to a dry environment was shown, raising the question whether it is the higher temperature that has a greater effect on degradation than the saliva itself (Perrella and Gaspari ). There are not many studies focusing on force degradation of the IE in clinical settings. Although the exact force decrease results differ, all concluded that force degradation in in vivo settings is higher (Yang et al. ; Wang et al. ; Qodcieh et al. ). Our study found that the IE worn by patients had a 5% higher mean force degradation in 48 h than when stored in an incubator at a constant temperature of 37°C and a controlled humid environment. Different conditions affect the force degradation of IE, so any future studies on the subject should be designed in vivo. It is important to note that the magnitude and dynamics of stretching can have an impact on the results of force degradation (Castroflorio et al. ; Klabunde and Grünheid ). Manufacturers recommend that IE should be stretched three times their diameter to achieve the declared force. In our research, we found that when the stretching was measured to exactly three times the diameter, the insertion points were typically on the upper canine and lower second premolars for all patients except one. The same conclusion was also reached by Castroflorio et al. . As the standard insertion points for orthodontic treatment with fixed appliances or aligners are the upper canine and lower first molars, it is clear that without measuring the force with a force meter in each clinical case, clinicians cannot be certain of the force generated by IE. Therefore, clinicians must bear in mind that simulated force degradation models may not be accurate in clinical situations. Limitations of the study: It is important to note that there are some limitations to this study. The in vivo research was performed on 10 participants who wore 10 IE gradually (5 pairs for 2 days each). The intensity of force degradation may have been affected by individual differences in eating habits, oral parafunctions, and the duration of time that the elastics were inserted. Additionally, the in vitro investigation of the elastics was conducted in batches of 10 and measured gradually, resulting in a time difference between the first and last one, which could have influenced the results. The stretching distance of IE varies for each patient when using standard insertion points. As a result, the initial force may differ in each case. The force of IE decreases exponentially in both in vivo and in vitro settings, with the highest decrease in force occurring in the first 2 h. However, in vivo, the force degradation was higher by 5% on average, and the initial force dropped to three‐quarters after 8 h, gradually decreasing further. The clinician must consider the force decrease when advising the patient of the time interval for changing the elastics. Conceptualization: Ivana Dubovská and Iva Voborná. Methodology: Ivana Dubovská. Validation: Soňa Chamlarová and Klaudia Portašíková. Investigation: Barbora Ličková, Lucie Ptáčková, Klaudia Portašíková, and David Sluka. Resources: Ivana Dubovská and Klaudia Portašíková. Data curation: Barbora Ličková, Lucie Ptáčková, David Sluka, and Klaudia Portašíková. Writing–original draft preparation: Barbora Ličková, Wanda Urbanová, and Ivana Dubovská. 
Writing–review and editing: Ivana Dubovská, Iva Voborná, Wanda Urbanová, and David Sluka. Visualization: David Sluka and Soňa Chamlarová. Supervision: Soňa Chamlarová and Iva Voborná. Project administration: Ivana Dubovská and Wanda Urbanová. Funding acquisition: Iva Voborná. All authors have read and agreed to the published version of the manuscript. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results. Ethical approval for the study was obtained from the Ethics Committee of the University Hospital and the Faculty of Medicine Palacky University in Olomouc, Czech Republic (approval number: 136/23). The authors declare no conflicts of interest.
Use, applicability, and dissemination of patient versions of clinical practice guidelines in oncology in Germany: a qualitative interview study with healthcare providers
63a52416-1426-4ed1-9dac-19a5ee61a1db
10913627
Internal Medicine[mh]
Several studies have shown that, particularly in the field of oncology, patients’ need for medical information is high but often unmet . A recent overview of systematic reviews found that hospitals provide inadequate health information to patients, which can lead to poor quality patient care and low patient satisfaction . In 2001, the Institute of Medicine (IOM) issued recommendations for achieving high quality care in six areas of healthcare, including patient-centredness . The emphasis was placed on the patient as a complex individual rather than on a simple treatment approach. The IOM recommended six aspects of patient-centred healthcare, notably the provision of information, communication and patient education . Providing health information and advice to patients enables healthcare providers to meet patients’ requirement of information needs and meet the IOM standards for patient-centred care , as well as supporting patients’ autonomy and empowering them to engage in healthier behaviour . To fulfil patients’ information needs, healthcare providers need to familiarise themselves with the relevant health information and distribute it. However, they may only recommend certain health information that they subjectively consider valid. Efforts are already being made to improve the communication skills of healthcare professionals, also in the field of oncology . These efforts focus on how to best communicate information to patients, but do not specifically include evidence-based patient information . Clinical practice guidelines (CPGs) provide evidence-based recommendations and up-to-date knowledge about the prevention, diagnostic procedures, treatments or follow-up policies of specific medical conditions . CPGs mainly address healthcare providers and help them make decisions regarding appropriate patient care . However, their content is written in medical language and might be difficult for laypersons to understand. Moreover, the general concept of CPGs (i.e. methods/procedures to develop recommendations) is difficult for many patients and non-professionals to comprehend . Although CPGs are mainly designed for healthcare providers, patients, their relatives, friends, and other non-professionals involved in patient care sometimes use them as information sources . To make the concepts and content of CPGs more accessible to patients and other laypersons, several international organisations have developed patient versions of CPGs (PVGs) and translated their recommendations into common speech that the average layperson would understand . The German Guideline Program in Oncology (GGPO) develops PVGs in the field of oncology to address the information needs of oncological patients in Germany. To date, the GGPO has provided several PVGs for various oncological diseases. These PVGs provide information on the diagnosis, treatment and follow-up care and address various patient populations in different disease stages (early/metastazised). In addition, the GGPO provides PVGs for cross-sectional oncological topics, such as psycho-oncology, early detection, supportive care and palliative care. The PVGs of the GGPO are distributed online (in PDF format) and as print brochures. PDF versions are accessible via the GGPO and German Cancer Aid websites , and brochures can be ordered through German Cancer Aid . Both options are free of charge. A list of all existing oncological PVGs in Germany can be found on the GGPO website . 
As healthcare providers play a key role in the dissemination of health information, it is important to know their assessment of PVGs and adapt information materials accordingly. However, to the best of our knowledge, information regarding healthcare providers’ perspectives on oncological PVGs is scarce. This study aims to investigate the use, applicability, and dissemination of PVGs in oncology from the perspective of healthcare providers in Germany. Design This study is part of a large multi-phase study (AnImPaLLO project) investigating the (inter-)national role and applicability of PVGs in oncology in order to derive recommendations for the development, dissemination, and implementation of PVGs in Germany. The study was conducted in two main modules: Module 1 investigated the applied methods and approaches on development and dissemination of PVGs, and Module 2 conducted separate semi-structured interviews and joint focus groups with national healthcare providers and patients to focus on a national perspective on the implementation and dissemination of PVGs. Detailed information can be found in the protocol . The project was set up in cooperation with relevant stakeholders (hereafter, project partners) that are involved in development of patient versions in Germany: the GGPO, the Association of the Scientific Medical Societies in Germany - Institute for Medical Knowledge Management (AWMF-IMWi), the German Agency for Quality in Medicine (ÄZQ), and two German self-help groups focusing on prostate cancer (Bundesverband Prostatakrebs Selbsthilfe [BPS]) and cancer in women (Frauenselbsthilfe Krebs–Bundesverband [FSH]). The present study focuses on semi-structured interviews targeting the perspectives of healthcare providers in Germany. We followed the consolidated criteria for reporting qualitative research (COREQ) checklist to report our study (Supplement : COREQ-checklist). Recruitment Healthcare providers directly involved in the care of cancer patients (e.g. physicians, psycho-oncologists, nurses), aged 18 years or older and with sufficient knowledge of German language were recruited. There were two ways of recruitment. First, participants were recruited via an online survey to analyse their awareness and the role of PVGs in oncology. The survey was conducted between April and June 2021 by the AWMF-IMWi and was not part of the AnImPaLLO project . After completing the survey, healthcare providers were asked whether they would like to take part in the AnImPaLLO interviews and, if interested, provided their email address for recruitment purposes. The authors then contacted the interested survey participants. Second, project partners published calls for study participation via the Internet (e.g. newsletters, websites, social media) or flyers. After creating a list of all existing centres using Microsoft Excel, we also contacted a randomised national sample of certified and non-certified oncology centres in Germany. A new numeration of the centres was created by assigning a random number to each centre using the RAND function, and the first 50 centres on the list were contacted. A central organisation (the German Cancer Society) certifies oncology centres and recognises inpatient and outpatient facilities that form a network (centre) to improve the treatment of oncology patients through cooperative efforts. Information on certification of oncology centres in Germany can be found on the OnkoZert website . Relevant hospital units of certified and non-certified centres (e.g. 
outpatient clinic, psycho-oncology) were contacted by telephone to recruit medical providers who were directly involved in patient care. If the telephone approach was unsuccessful or impossible, the relevant hospital units were contacted via email. Moreover, we asked the participants whether they could pass on information about the study to colleagues to recruit more participants (snowball recruitment method). Recruitment ended when saturation was reached , indicating no additional analytical themes. Data collection Data were collected via telephone interviews with one author (MB), female researcher and, at the time, doctoral candidate at Witten/Herdecke University. The interviewer was trained in advance in qualitative interviews and analyses. The first contact between the participants and the interviewer occurred prior to the interviews when detailed information about the study (background, duration of the interview, and intention to publish the results), along with privacy statements, was provided to each participant. There was no relationship between the interviewer and participants. Participants were informed that she was a researcher at Witten/Herdecke University. The interview guide was designed prior to conducting the interviews but without any existing framework and consisted of three main sections: (1) general information, (2) general questions about PVGs, and (3) questions about specific PVGs, and was reviewed and modified by the project team (Supplement : interview guide). Two pre-test interviews were conducted, which did not result in any changes to the interview guide. Participants received one version of the PVG for discussion with the interviewer. PVG version selection was based on the oncological field in which the participants worked at or were the most involved. All interviews were conducted via telephone between October and December 2021 using a recording device. Field notes were not taken during the interviews. The interviewer joined the interview from the workplace or home office, and participants were free to choose a convenient place and timeframe. Consequently, the presence of other people cannot be ruled out. Repeat interviews were not conducted. Data processing During the recording of the interviews, the author (MB) avoided bringing up personal details of the interviewees to prevent them from being recorded. An external agency was hired to transcribe the audio files of the semi-structured interviews verbatim. Subsequently, two authors (MB and SW) checked the quality of the transcripts and, if necessary, removed personal details to prevent any conclusions regarding individual participants. The final transcripts were then assigned IDs that were available to the researchers. Participants were not asked to provide feedback on their findings or transcripts. Data analysis Based on the interview guide, data codes were developed and divided into 11 groups with several sub-codes: (1) general information about participants, (2) questions about PVGs in general and questions about specific PVGs, (3) general judgments, (4) design and presentation, (5) comprehensibility, (6) format, (7) trust in PVG, (8) content, (9) impact of patient version, (10) dissemination of PVG to the patient, and 11) perception of specific topics. The rules of code specification were defined for each code before analysing the interviews (Supplement : data coding system). Due to personnel changes within the project team, the interviewer (MB) was not available to conduct the interview analysis. 
Therefore, the analysis was performed by two other authors (SW and JB). MAXQDA software (version 2022) was used to perform the interview analysis according to Mayring’s content analysis method . To structure the text into codes (categories) and sub-codes (sub-categories), both deductive (a priori defined data codes derived from the interview guide and used on the whole document) and inductive approaches (development of additional themes and sub-codes for the pre-existing material, used at the code level) were used. The interviews were analysed in a two-step process. First, two authors (SW and JB) coded five out of 20 transcripts independently and met to discuss and reach a consensus on the a priori defined data codes. Afterwards, they split the remaining sample of interviews and carried out the analysis independently with ongoing consultation meetings (deductive approach). Second, the final sample of data codes was split in half, and each author independently generated sub-codes for the selected data codes for the entire interview set. The sub-codes were presented, discussed, and agreed upon by the two authors during ongoing meetings (inductive approach). Further issues that arose during the independent coding process were discussed by the two authors during consultation meetings. Subsequently, the coding framework with the final categories and sub-categories was reviewed by a team of authors (SW, JB, SBl, and MN), and minor editorial modifications were made.
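To make the two-step coding procedure more concrete, the following minimal sketch shows one way an a priori (deductive) code list and inductively added sub-codes could be represented and tallied across transcripts. This is purely illustrative: the actual analysis was carried out in MAXQDA, and the code names, sub-codes and coded segments below are invented stand-ins rather than the study's coding framework.

```python
# Illustrative sketch only; the study coded transcripts in MAXQDA, not in Python.
# All code names and segments below are hypothetical.
from collections import Counter

# Deductive (a priori) codes derived from the interview guide
codebook = {
    "general_judgments": [],
    "comprehensibility": [],
    "dissemination": [],
}

# Inductive sub-codes added while working through the transcripts
codebook["dissemination"] += ["awareness_barrier", "hidden_costs", "timing_of_handover"]
codebook["comprehensibility"] += ["plain_language", "recognisability_of_recommendations"]

# Hypothetical coded segments: (transcript_id, code, sub_code)
segments = [
    ("ID02", "dissemination", "awareness_barrier"),
    ("ID12", "dissemination", "timing_of_handover"),
    ("ID05", "comprehensibility", "plain_language"),
    ("ID14", "dissemination", "timing_of_handover"),
]

# Tally how often each sub-code occurs and in how many transcripts it appears,
# a simple way to see which themes recur across the interview set.
segment_counts = Counter(sub for _, _, sub in segments)
transcripts_per_subcode = {
    sub: {tid for tid, _, s in segments if s == sub} for _, _, sub in segments
}

for sub, n in segment_counts.most_common():
    print(f"{sub}: {n} segment(s) in {len(transcripts_per_subcode[sub])} transcript(s)")
```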
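The centre randomisation described under Recruitment (assigning each centre a random number with Excel's RAND function and contacting the first 50 on the re-ordered list) can likewise be expressed in a few lines of code. The sketch below is a hedged illustration of that sampling logic with an invented centre list and a fixed seed; it is not the project's actual spreadsheet or centre data.

```python
# Sketch of the RAND-based ordering described under Recruitment: give each centre
# a random key, sort by it, and contact the first 50. Centre names are placeholders.
import random

centres = [f"Centre_{i:03d}" for i in range(1, 301)]  # hypothetical list of oncology centres

rng = random.Random(2021)                      # fixed seed only so the sketch is reproducible
keyed = [(rng.random(), c) for c in centres]   # analogous to adding an Excel RAND() column
keyed.sort()                                   # re-order the list by the random key

to_contact = [c for _, c in keyed[:50]]        # the first 50 centres after randomisation
print(to_contact[:5])
```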
Overall, 36 healthcare providers showed an interest in participating in the study. A total of 16 participants (44%) did not participate, either because interviewers were unable to reach interested healthcare providers or because they were no longer working in the field of oncology. The remaining 20 healthcare providers participated in the semi-structured telephone interviews. Two of them were recruited via the initial AWMF-IMWi survey. Most participants worked as physicians ( n = 5; 25%) or psycho-oncologists ( n = 9; 45%) and in certified clinics ( n = 12; 60%). Participants were predominantly female ( n = 14; 70%), had a mean age of 51 years (range: 33–71), and an average of 15 years of professional experience with oncological patients (range: 3–31). Table lists the participants’ characteristics. The average duration of the interviews was 34 min (range: 20–49 min).
Most participants were aware of the existence of PVGs ( n = 15) and stated that they provided them to their patients in either print or online formats. A few participants reported not providing PVGs to patients unless specifically asked. In the study protocol, we planned a study population of approximately 25 participants . However, the actual number of participants was lower ( n = 20) because no additional results were obtained in the course of conducting the last interviews. Therefore, we assumed content saturation had been achieved and stopped recruitment. Impact on healthcare Most participants felt that PVGs had a positive impact on healthcare for patients, healthcare providers, and their relationships. Some participants highlighted the positive impact of PVGs on patients’ confidence in treatment, as well as on their relatives and friends. Two participants expressed the following statements: ‘So from a psycho-oncological point of view, I would say it can really give security because it gives clarity and information’. (ID05) ‘I believe that just as it is now, it is very helpful for […] the patient. […] If the spouse reads it, the girlfriend, whatever. Who then simply has this knowledge, in order to act as a stable person, vicariously convey hope, confidence […]’. (ID06 ) Several participants expressed that PVGs improved the patients’ knowledge of their disease, explaining that this could lead to targeted questions from patients during physician–patient talks and generally improved physician–patient communication. According to the participants, PVGs not only hold a preparatory role for patients in terms of general communication with healthcare providers but also for healthcare providers involved in patient care. PVGs’ structured content helps participants refresh their medical knowledge and prepare for questions or content that might be interesting and important for patients. Simultaneously, wording in the PVGs serves as guidance for participants regarding word choice during conversations with patients. Many participants found the content of the PVGs to be comprehensible. Therefore, existing information needs could be met by the PVG. But some participants have said that patients´ information needs are different or may even be unknown or difficult for patients to understand. However, not all information needs seem to be met. ‘And I believe that people can’t even formulate what information they need. Because it doesn’t occur to them that they have to lie in bed and vomit, and you’ve forgotten to put a spit bag in front of them. (…) They cannot demand this information. And this information is missing.’ (ID03) . But given the amount of information already included in the PVGs, an information overload should be avoided. ‘And that’s such an overload of information that I think it’s sometimes too much for the condition the patients are in at the time.’ (ID03) . Thus, many participants would not include any further patient histories in the PVG, to avoid further increasing the volume of PVGs. Additionally, it was noted that self-help groups are a more suitable medium for sharing individual information. According to a few participants who work as physicians, the reference to patient-relevant information in PVGs facilitates physician–patient talks, particularly considering the limited duration of these talks. 
However, some participants raised concerns about the impact of PVGs on physician–patient talks because patients may not perceive PVG content regarding treatment as recommendation but rather as a mandate. It was pointed out that this has not happened yet, but it could be a possible scenario. One participant stated: ‘But if the patient who arrives with the guideline and says, “Now you have to do this and this”, it can lead to a problematic relationship’. (ID15) Dissemination A major barrier to the dissemination of PVGs is the lack of knowledge regarding their existence. One participant stated: ‘What I think makes it difficult, is that they [PVGs] are little known. At least in my experience few people know about them’. (ID02) Other barriers mentioned were fear of hidden costs for ordering brochures and the number of other information brochures sent to clinics and hospitals from other providers. In contrast, some participants suggested that the unrequested delivery of PVGs to clinics and hospitals could improve awareness and, subsequently, the dissemination of PVGs. Furthermore, participants suggested that the cost neutrality of PVGs should be displayed more clearly. In addition to the structural barriers listed above, individual-person-related barriers were mentioned. One participant noted language barriers or the intellectual capacity of the patients in this context. ‘When a 60-year-old with a high degree of language barrier comes to me, I don’t hand him a guideline [PVG]. In addition, there are patients who […] don’t have the, let’s say, intelligence […] to be able to deal with such information’. (ID12) According to participants, healthcare providers, especially physicians, play the most important role in delivering PVGs to patients, followed by self-help groups and other information sources such as social media. In addition, participants were asked about the appropriate timing for handing over PVGs. While the answers varied, the majority found the time of diagnosis to be the most convenient. One physician explained as follows: ‘When the diagnosis is made […]. Especially in the beginning, they [patients] need a lot of information and sometimes want to know a lot’. (ID14) In contrast, another participant reported that patients might experience emotional shock and distress immediately after diagnosis and recommended against early confrontation through PVGs. However, according to most participants, dissemination during treatment seemed appropriate. Participants were asked about the influence of PVGs compared to other information sources in Germany. The majority compared PVGs to another source of information for oncological patients of German Cancer Aid called ‘Blaue Ratgeber’ in German, noting that while the former was more detailed, the latter had better distribution to hospitals and medical practices. Moreover, participants found PVGs and patient information from the German Cancer Aid to be complementary as they vary in detail. Examples of patient information of German Cancer Aid can be found on its website ( https://www.krebshilfe.de/informieren/ueber-krebs/infothek/infomaterial-kategorie/die-blauen-ratgeber/ ). Other topics Participants had mixed opinions on PVG designs. Some found that the colours (pastel-red/orange) were friendly and neutral, whereas others wanted more vivid colours to enliven the text. In addition, participants had mainly positive impressions of the graphics, info boxes, and the text structure, while the majority criticised the large volume of PVGs. 
The majority preferred the brochure format for PVGs. Furthermore, participants found that patient age played an important role in the preference for format as younger patients may prefer the PDF-version of PVGs while older patients preferred the brochure. According to the participants, the overall comprehensibility of PVGs was good owing to the restricted use of medical words in favour of plain language. Many participants were aware of medical recommendations and recognised them in the text. However, the majority assumed that the recommendations were not recognisable or comprehensible to patients. According to the majority, PVG content included important information, although some participants thought information about patient self-care or aspects of complementary medicine was lacking and criticised the lack of up-to-date information of certain aspects (e.g. medications). The lack of up-to-date information had influences on the perceived trust in the content. As an option for improvement, the participants suggested living guidelines, which aim to optimise the guideline development process by updating individual recommendations as soon as new relevant evidence becomes available . Hence, the participants suggested that PVGs could also be adapted to their living status. Table provides additional information.
According to healthcare providers, PVGs seem to impact the relationship between patients and healthcare professionals and patients’ medical knowledge of their disease. The relevant aspects of the interviews with healthcare professionals are discussed below. Positive impact on patients’ health literacy The study results demonstrated that healthcare providers believe PVGs can positively influence patients’ health literacy (HL). For instance, healthcare providers mentioned that patients who were provided with PVGs were better informed. This is in line with previous and recent literature describing the positive impact of evidence-based information on patients with HL . HL is described as the ability to ‘obtain, process, and understand basic health information and services in order to be able to make appropriate health decision’ . It is heterogeneous among patients because its level depends on individual factors (e.g. education and culture) . Two systematic reviews found that patients with low HL were more likely to obtain their health information from friends and family, television, or social media, whereas patients with high HL were more likely to turn to medical professionals . Additionally, a high level of HL has been associated with the ability to identify the trustworthiness and validity of health information . It is particularly important for healthcare providers to educate patients with low HL, and refer them to valid and evidence-based information, including that contained in PVGs.
Nevertheless, PVGs are helpful for every patient and should be distributed regardless of their HL level. Reliable and validated health information positively influences not only patients’ HL but also shared decision-making . However, further research is needed to investigate PVG’s influence on patient knowledge and whether it increases informed decision-making. PVGs improve communication between healthcare providers and patients In addition to PVGs’ capacity to support patients, this study showed that PVGs can function as useful tools for healthcare providers. According to participants, the use of PVGs in preparation for physician–patient conversations positively impacted general physician–patient communication in terms of structuring important topics and word choice. However, the preparatory role of patient information for healthcare providers in communication with patients has not yet been discussed in the recent literature, and further research is required in this area. Participants also positively highlighted the time-saving aspects of PVGs for medical appointments. Recent literature found no significant associations between the use of health information during medical appointments and time-saving effects owing to the poor quality of the included studies . Thus, further research is required to address the impact of evidence-based health information on the duration of medical appointments. PVGs might not only help healthcare providers prepare for communication with patients but also invite patients and healthcare providers to communicate more with patients about important aspects of treatment. Info boxes included in PVGs, such as questions before an operation, invite patients to engage in conversations with healthcare providers, which might facilitate more regular communication. Constant communication between healthcare providers and patients has been found to positively affect patient trust in healthcare providers, treatments, and health information . Additionally, patients endorse healthcare providers’ references to reliable and clear literature when time is taken to discuss and answer their questions . Consequently, providing patients with PVGs and communicating about their content might improve patients’ trust in healthcare providers and, subsequently, the use and applicability of PVGs in patient care. Furthermore, the results showed that patients do not always comprehend the intentions of the recommendations; specifically, that they are not mandatory for healthcare providers. According to the results of this study, patients’ misunderstanding of content can negatively affect general communication. Although the methods and intentions of the recommendations have already been described in PVGs, a clearer explanation and presentation are needed so that the content is fully comprehensible for patients. The presentation of PVG and explanation of its content could be part of a comprehensive inclusion in the healthcare provider’s communication with patients. Further research is needed on how the PVG can be actively used by healthcare providers in their communication with patients, e.g. through didactic training. Limited awareness of PVGs among healthcare providers Naturally, in order to hand out PVGs to patients, healthcare providers must first know that they exist. Although healthcare providers see themselves as some of the main providers of PVGs for patients, their knowledge of the existence of PVGs remains limited. 
Alternative information materials, such as patient information from the German Cancer Aid , are better known to healthcare providers and used more frequently in healthcare. However, even though the participants suggested ways to raise awareness of PVGs (e.g. automatic distribution of brochures in inpatient and outpatient settings, promotion on social media, and delivery through self-help groups), further research is needed to determine appropriate approaches. Brochure distribution in inpatient and outpatient settings involves significant organisational effort and logistic challenges, such as retraining staff and producing, storing, and mailing PVGs. However, according to the participants, hospitals and medical practitioners have already received a significant amount of information. Consequently, the additional PVGs may be overlooked or discarded. Furthermore, hospitals and medical practitioners should ensure that brochures are updated so that patients receive the most current information. Additionally, mentioning PVGs in newsletters of relevant institutions, medical congresses, or other public events might be good options for raising awareness of their existence. In addition to participants’ limited awareness, fear of hidden costs seemed to impact the limited dissemination of PVGs in healthcare. This barrier might be addressed by displaying the cost neutrality of PVGs more clearly. Policy-makers and PVG creators should consider efficient and cost-effective approaches to improve the awareness and dissemination of PVGs in healthcare. Dissemination of PVGs Most participants favoured distributing PVGs around the time of diagnosis. This is in line with findings of a qualitative study, which found that oncological patients require relevant health information from a very early start . Only one interviewed participant in our study recommended dissemination at a later stage (e.g. during treatment). Which is also in line with the international literature. According to a systematic review, especially around the time of diagnosis, patients are confronted with negative emotions such as fear and distress, which might hinder accurate understanding health information. Therefore, the authors suggest avoiding possible barriers (e.g. stress and anxiety) when distributing health information to patients . Another systematic review found that patients prefer health information be provided after diagnosis (e.g. during treatment) or be on demand . Although the results of the current study show that most healthcare providers favour distributing PVGs to patients at an early stage, patients’ individual circumstances should be considered. Consequently, patients’ mental states and desire for health information should be key factors for healthcare providers when distributing PVGs. Overall, coping mechanisms and need of information are highly individual, hence there is no one-fits-all solution for all patients. Individual perceptions of design and format diversity The assessment of the colours, design, volume, and format of the PVGs was based on participants’ individual perceptions. Some favoured the colours used because they radiated calmness, while others suggested the use of vivid colours to emphasise content. However, healthcare providers preferred the printed version of PVGs over the PDF version because the haptic format serves as a good tool for interacting with patients. 
From the healthcare providers’ point of view, patients’ preferences regarding the format of PVGs are heterogeneous and individual because younger patients may favour the PDF format while older patients may prefer the printed version. This is not in line with the results of previous literature as the preferred format of health information (web-based or print) was not significantly associated with patients’ age . Nevertheless, it should be noted that younger patients use the internet significantly more frequently than older patients . The volume of the printed versions of the PVGs was perceived as sufficient by some and overwhelming by others. To address the perceived overwhelmingness, the content of printed PVGs can be produced in a staggered manner. Chapters can be issued in separate brochures. However, publishing the content of PVGs in separate brochures would not necessarily increase the awareness or dissemination of PVGs and may decrease clinics or practices’ willingness to order or store them. Further research is needed to address possible ways to individualise the formats of PVGs to include patient-relevant content without exaggerating the volume of PVGs and provoke overwhelmingness. One solution could be individualisation in the digital context. PVGs as apps could support individualisation by showing users only selected content or by changing the language of the content (e.g. foreign language or plain language). This could address target groups that are deterred from reading the PVGs due to the high volume, a language barrier, or an intellectual barrier. Moreover, representatives from patient organisations are involved in the development of PVGs and published PVGs can be evaluated via a feedback form included in the PVG. These channels can be used to adapt the PVGs to the needs of practice. Missing up-to-datedness of content Missing up-to-date information (e.g. medications) limits participants’ trust in the content. Living CPGs may be an option to update content more frequently . Because PVGs are based on CPGs, their status can also be converted to a living status once the underlying CPG is adapted to a living CPG. On the one hand, living PVGs can positively impact dissemination and trust in content; on the other, they might add too much content to the already overwhelming volume of PVGs. In addition, the implementation of updated content would lead to a large number of updated versions of PVGs that would have to be published. Hence, living guidelines (CPGs and PVGs) involve a significant amount of organisational effort, which is time consuming and requires significant personnel deployment and monetary resources . One solution may be the continuous updating of single chapters or specific content, such as information on medications . In addition to missing up-to-date information, participants noted that relevant content was missing in specific PVGs, such as information about patient self-care, nausea treatment, and complementary medicine (see Table ). Some of the missing aspects were addressed in the specific PVG and might have been overlooked by participants. Furthermore, missing information can be found in additional PVGs (e.g. complementary medicine) provided by the GGPO. Limitations and strengths The change in researcher personnel during the study period is a limitation of this study. The two researchers in charge of analysing the results (SW and JB) were not involved in planning the study or conducting the interviews. 
This was addressed through constant communication with the interviewer (MB). One strength was the inclusion of a broad range of participants in terms of profession, thus representing a wide range of professions involved in the care of oncology patients. Furthermore, we included participants with a broad range of experiences and discussed different PVGs with different participants to gain an overview of the topic of PVGs as they differ in content and design. The results of this study should be considered in the context of further studies (qualitative interviews with oncological patients and mixed focus groups) that have also been conducted as part of the AnImPaLLO-project and have yet to be published. Together, they provide a comprehensive view of the topic of PVGs.
Overall, participants had a generally positive impression of PVGs. PVG content and its comprehensibility positively impacted their applicability, especially in the context of physician–patient talks, while limited awareness and missing up-to-date information on specific content seemed to hinder the use and dissemination of PVGs in healthcare. Additionally, the use of alternative patient information appeared to be more common, with limited effects on the use and dissemination of PVGs. Although participants highlighted the time-saving aspects of PVGs in medical appointments, further research should address this discrepancy because the existing literature is of poor quality. Furthermore, policy-makers and PVG creators should consider efficient approaches to raise awareness of PVGs among healthcare providers, and improve their use and dissemination. To ensure successful implementation of PVGs in healthcare, training of healthcare providers on how best to communicate the contents of PVG to patients might be helpful.
Moreover, the possible individualisation of formats and frequent updates of specific content based on living CPGs should be considered to improve the general applicability and use of PVGs in healthcare. In conclusion, further research is needed to investigate whether PVGs impact patient knowledge and informed decision-making.
Exploring proteomic immunoprofiles: common neurological and immunological pathways in multiple sclerosis and type 1 diabetes mellitus
05d152f6-0f4b-4a1f-b51b-80cd71abe41a
11789306
Biochemistry[mh]
Type 1 diabetes mellitus (T1DM) and multiple sclerosis (MS) are severe autoimmune diseases with strong negative impacts on patients and profound implications for the health care system and society at large. Interest in the study of both diseases has increased, owing to the rapid rise in the prevalence of these pathologies over the past decades (Maahs et al. ; Walton et al. ; Gregory et al. ). Some studies have demonstrated that T1DM and MS share certain features and involve organ-specific mechanisms affecting various tissue targets (Pozzilli et al. ). In the case of T1DM, the immune system attacks pancreatic β-cells, leading to a failure in normal insulin production and eventually affecting glucose homeostasis (DiMeglio et al. ). Glucose regulation in these patients, which can be monitored by measuring HbA1c levels, is crucial in the progression of the inflammatory process and, consequently, in β-cell destruction (Bending et al. ) and the development of future peripheral complications, such as neurological complications or diabetic retinopathy, among others (Melendez-Ramirez et al. ; Galiero et al. ; Perais et al. ). In MS, the autoimmune process induces a reactive response against antigenic elements of the central nervous system (CNS), leading to substantial disability in most patients (Cotsapas et al. ), which can be monitored with the Expanded Disability Status Scale (EDSS) and is also related to the inflammatory process (Mungan et al. ). Previous data have provided evidence of the connection between demyelination, tissue injury and inflammation in all states of MS (Lassmann ). The use of several omics technologies, such as high-throughput proteomics, constitutes an optimal approach to explore novel biomarkers, providing an enhanced understanding of disease mechanisms, insights into aetiology, and multifactorial pathophysiological processes (Zhi et al. ; Del Boccio et al. ). These advancements could contribute significantly to the development of therapeutic tools. Studies on the molecular basis of both autoimmune diseases, T1DM and MS, have demonstrated the involvement of various pathways, including autophagy, inflammation and degeneration, among others (Bending et al. ; Ruiz et al. ; Canet et al. ; Al-kuraishy et al. ). Evidence supports the roles of similar pathways and comparable responses that contribute to the pathogenic mechanisms of the diseases (Handel et al. ; Pozzilli et al. ). Given the inherent difficulty in obtaining pancreatic β-cells from patients with T1DM and CNS tissue from patients with MS, there is a pressing need to develop noninvasive sampling techniques capable of accurately reflecting disease status. Because immune cells initiate the autoimmune and inflammatory processes against the corresponding target organs, peripheral blood mononuclear cells (PBMCs) have emerged as ideal candidates for identifying new and potential biomarkers for both T1DM and MS. In addition, studying the proteomic signatures of both diseases could uncover underlying correlations, providing insight into their causality. In this study, a label-free quantitation (LFQ) proteomics approach was applied to identify common and differentially expressed proteins in PBMCs from patients with T1DM and MS. Additionally, we identified potential correlations and differences between these conditions. By leveraging this advanced analytical technique, we aimed to deepen our understanding of shared pathophysiological mechanisms and contribute to the discovery of new biomarkers.
Experimental design

An overview of the experimental workflow is shown in Fig. A.

Study participants

In this study, 18 patients who were diagnosed with T1DM or relapsing–remitting multiple sclerosis (henceforth MS, n = 9 per condition) were recruited between 2020 and 2022 from Puerta del Mar University Hospital of Cadiz. In addition, another 9 volunteers from the same area, of the same age range and ethnicity as the patients, with no history of neurological, psychiatric or immunological diseases, were recruited as a healthy group (H). Some of the characteristics of the study population (sex and age) are graphically represented in Fig. B. The patient recruitment process was carried out by qualified neurologists and endocrinologists from the same hospital, following the most widely used diagnostic criteria at the time of diagnosis: the ADA criteria (Care and Suppl ) for T1DM and the McDonald criteria (Polman et al. ) for MS. The following inclusion criteria were used for patients with T1DM: aged over 18 years; more than 2 years from disease onset; positive results for glutamic acid decarboxylase and tyrosine phosphatase auto-antibodies; and continuous insulin treatment with adequate glucose management, as reflected by HbA1c levels (Fig. D). The criteria for patients with MS were as follows: aged over 18 years; between 2 and 7 years from disease onset; mild physical disability (Fig. D); and freedom from relapses and steroid treatment for at least 2 months prior to the study, consistent with the remission phase. In patients with MS, blood samples were routinely collected in the middle or at the end of their respective treatment cycles.

PBMC isolation

Peripheral blood samples were collected in EDTA tubes, and PBMCs were isolated by a standard density gradient using Histopaque-1077 (Sigma-Aldrich, Missouri, USA). Briefly, whole blood was taken from the EDTA tubes, diluted 1:1 in phosphate-buffered saline (PBS) and carefully layered over the same volume of Histopaque. The tubes were centrifuged for 25 min at 1900 rpm with slow acceleration and no brake to avoid disrupting the layers. After centrifugation, the interface layer was harvested and transferred to a new tube, and the cells were washed twice in PBS (centrifugation: 10 min at 1500 rpm). The cells were counted using Trypan blue staining, ensuring a high viability rate (96–98%) at the time of freezing. After the last centrifugation, the supernatant was removed and the resulting cell pellet was snap-frozen and stored at −80 °C until protein extraction.

Proteomic analysis

For the proteomic analysis, a volume of 250 μL of extraction buffer (7 M urea, 2 M thiourea, 0.4% CHAPS, 200 mM DTT) was added to each pellet for cell lysis and protein extraction. The samples were sonicated and centrifuged (13,000 rpm for 15 min), and the supernatants were transferred to another tube, where proteins were precipitated overnight with a 5X volume of acetone at −20 °C. Next, the samples were centrifuged (13,000 rpm for 15 min), the supernatants were discarded, and the pellets were resuspended in 100 μL of extraction buffer. The protein content was quantified following the Bradford method, and 50 μg of protein from each sample was digested with trypsin (GOLD, Promega) following the FASP method with minor modifications (Wiśniewski et al. ). The resulting peptides were dried by vacuum centrifugation, resuspended in 0.1% trifluoroacetic acid, and desalted and concentrated using reverse-phase microcolumns (C18 OMIX, Agilent).
Then, 200 ng of peptides from each sample was loaded onto an EVOSEP ONE (Evosep) coupled online to a PASEF-enabled timsTOF Pro (Bruker) tandem mass spectrometer. Data-dependent acquisition (DDA-PASEF) was applied with the 30 SPD method. The raw mass spectra were analysed using the freely available MaxQuant software (v1.6) for protein identification and quantification (Cox and Mann ). To identify differential proteins between conditions, Perseus software (Tyanova et al. ) ( https://www.maxquant.org/perseus/ ) was then employed to carry out the LFQ analysis, considering proteins identified with at least one unique peptide at an FDR of 1% (PSM level).

Data processing and statistical rationale

Protein identification and quantification were conducted using PEAKS software (Bioinformatics Solutions Inc., Waterloo, Canada). Searches were executed against a database that included canonical human UniProt/SwissProt entries, excluding isoforms. The precursor and fragment tolerances were set at 20 ppm and 0.05 Da, respectively. The PEAKS Q module within the PEAKS software was used for area-based label-free protein quantification. The data were uploaded onto the Perseus platform (Tyanova et al. ) for further analysis. The data were log2-transformed, filtered to retain proteins with at least 70% valid values, and missing values were imputed from a normal distribution; samples were categorically annotated to define conditions. Rows were also filtered on categorical columns to eliminate proteins identified only by site, reverse hits and potential contaminants. Two-sample Student's t-tests were used for differential expression analysis with a p-value threshold of 0.05. Differential protein expression was evaluated by comparing the healthy volunteer group against the T1DM and MS patient groups, and by comparing the patient groups against each other to determine possible differences between the diseases. To discriminate differentially expressed proteins (DEPs) between groups (patients with T1DM or MS and healthy controls), the p-value was set at < 0.05; proteins were considered significantly up- or downregulated when −log10(p-value) > 1.3 and log2(fold change) > 1 for upregulated or < −1 for downregulated proteins, respectively. ROC curve analysis was used to evaluate the predictive value of selected proteins using SPSS software, reporting sensitivity and specificity percentages, the AUC and 95% confidence intervals. A p-value < 0.05 was considered to indicate statistical significance. Additional data processing and graphing were performed using Prism 8, R, Perseus, Circos and the image repository Smart Medical Art. The functional roles of proteins were analysed using Ingenuity Pathway Analysis (IPA) (Qiagen) and Reactome ( https://reactome.org/ ), and functional protein association networks were constructed with STRING software ( https://string-db.org/ ). The functional analysis considered the IPA classification of canonical pathways and of disease and function categories.
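For readers who wish to see the selection logic in computational form, the following is a minimal, illustrative sketch of the thresholds described above (two-sample t-test, −log10(p-value) > 1.3 and |log2 fold change| > 1). The published analysis was carried out in Perseus; the function below, its data layout and its column names are assumptions made purely for illustration.

```python
# Illustrative sketch of the DEP selection criteria (the published analysis used
# Perseus); `intensities` is assumed to hold LFQ intensities with proteins as rows
# and sample identifiers as columns.
import numpy as np
import pandas as pd
from scipy import stats

def select_deps(intensities: pd.DataFrame, group_a: list, group_b: list,
                p_cutoff: float = 0.05, log2fc_cutoff: float = 1.0) -> pd.DataFrame:
    """Return proteins with -log10(p) above the cutoff and |log2 fold change| above 1."""
    log2 = np.log2(intensities)                                    # log2-transform intensities
    _, p = stats.ttest_ind(log2[group_a], log2[group_b], axis=1)   # two-sample t-test per protein
    log2fc = log2[group_a].mean(axis=1) - log2[group_b].mean(axis=1)
    res = pd.DataFrame({"p_value": p, "log2_fc": log2fc}, index=intensities.index)
    res["neg_log10_p"] = -np.log10(res["p_value"])
    keep = (res["neg_log10_p"] > -np.log10(p_cutoff)) & (res["log2_fc"].abs() > log2fc_cutoff)
    return res[keep]
```

Note that −log10(0.05) ≈ 1.3, so the p-value and −log10(p-value) thresholds quoted in the text are equivalent.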
An LFQ proteomic analysis was carried out to investigate the differential proteomic expression of PBMCs from patients with T1DM and MS in the remission phase, compared with healthy volunteers. On average, a total of 2476 proteins were identified across the comparisons of patients with T1DM and MS vs. healthy volunteers, of which 674 proteins showed significant differences (−log10 p-value > 1.3) across all the comparisons. Notably, the proteome profiles of the groups of patients with T1DM and MS were different from those of healthy controls, as indicated by the principal component analysis (Fig. A) and the hierarchical clustering (Fig. B) representations. Full information regarding identification and normalized protein intensities for the T1DM and MS vs. healthy comparisons can be found in Supplementary Table S1.
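As a rough, generic illustration of how such group-level separation can be explored, the sketch below applies principal component analysis and hierarchical clustering to a samples-by-proteins intensity matrix; the synthetic data shape (27 samples × 2476 proteins) and the clustering parameters are assumptions, not the settings used to produce the figures.

```python
# Generic sketch of PCA and hierarchical clustering on a proteomics matrix;
# the data here are synthetic placeholders (27 samples x 2476 proteins).
import numpy as np
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
X = rng.normal(size=(27, 2476))           # stand-in for log2 LFQ intensities after imputation

pcs = PCA(n_components=2).fit_transform(X)               # per-sample coordinates on PC1/PC2
tree = linkage(X, method="average", metric="euclidean")  # sample-level dendrogram
clusters = fcluster(tree, t=3, criterion="maxclust")     # cut the tree into three groups

print(pcs.shape, clusters)
```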
Differentially expressed proteins between T1DM and MS

Among the 674 DEPs, we focused on those with fold change rates ≥ 2, identifying a total of 136 DEPs that were up- or downregulated between patients with T1DM or MS and the healthy control group (H) (Fig. A–C). Notably, the number of DEPs found in patients with T1DM was greater (64.9%) than that found in patients with MS (35.1%), and in both cases more proteins were downregulated than upregulated. Additionally, the number of upregulated proteins in patients with MS vs. H was greater than that in patients with T1DM vs. H (21 vs. 10 proteins, respectively), whereas more downregulated proteins were found in patients with T1DM vs. H (88 proteins) than in patients with MS vs. H (32 proteins) (Fig. C). Furthermore, we found 15 common proteins (12.1%) with differential expression in both diseases compared with healthy controls, 2 of which were upregulated and 13 of which were downregulated; these proteins presented similar expression patterns in both cases (Fig. E). Additionally, the protein transthyretin (TTR) was downregulated in both the T1DM and MS vs. H comparisons, and it was also significantly downregulated in patients with T1DM compared with patients with MS (Fig. F). Based on the information obtained from the Reactome Pathway Database (Supplementary Table 2), the 15 common DEPs identified in PBMCs from patients with T1DM and MS were involved in immune system activity (BTF3, TTR, CD59, CSTB), diseases of the neuronal system (TTR), signal transduction (STMN1, LAMTOR5), and the metabolism of nucleotides (RPS21), proteins (TTR, ENAM, CD59, RPS21, SRP9) and RNA (SRSF10, RPS21) (see Supplementary Table 2).

Compared with healthy controls, differentially expressed proteins in patients with T1DM and MS are connected in several networks

A relationship network analysis revealed strong connections among the 136 DEPs identified in the two diseases, T1DM and MS, compared with healthy controls (Fig. ). Indeed, the analysis showed that nodes related to T1DM (purple) were closely connected with those related to MS (orange), rather than forming two separate networks, one for each disease. Furthermore, some of the commonly altered proteins found in patients with T1DM and MS vs. H (green) were highly connected with dysregulated proteins in both comparisons (see Supplementary Table 3).

T1DM and MS share several functional and disease-related pathways

According to the functional analysis performed with IPA, the protein changes detected in patients with T1DM or MS vs. healthy controls were related to several significant canonical pathways, such as immunological, neurological, and cellular trafficking or signalling pathways, among others (Fig. A). Thus, many of the protein changes observed in both diseases were associated with the immune system (approximately 50% of them), including immunological or inflammatory functions, with a significant presence of proteins related to apoptosis or cell death of immune cells (mostly lymphocytes, i.e., CD247, CD59, GSK3B, ITPR1, or PRKCQ), as shown in Table . In addition, other processes, such as membrane trafficking and metabolic or signalling pathways, were also found to be commonly linked to the DEPs observed in patients with T1DM and MS compared with healthy controls (Fig. A). Furthermore, in T1DM many DEPs were related to lipid metabolism (33% T1DM, 0% MS), whereas in MS the majority were related to signalling functions (14% T1DM, 60% MS) (Fig. B).
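The overlap analysis described above (15 proteins differentially expressed in both the T1DM vs. H and MS vs. H comparisons, with concordant directions of change) can be expressed as a simple set operation. The sketch below uses invented protein identifiers and fold-change values purely for illustration and does not reproduce the study data.

```python
# Placeholder sketch of the DEP-overlap logic; identifiers and fold changes are
# invented for illustration and do not reproduce the study's results.
t1dm_deps = {"PROT_A": -1.8, "PROT_B": -1.2, "PROT_C": 1.4}   # protein -> log2 FC (T1DM vs. H)
ms_deps   = {"PROT_A": -1.1, "PROT_C": 1.2, "PROT_D": -1.6}   # protein -> log2 FC (MS vs. H)

common = set(t1dm_deps) & set(ms_deps)                         # dysregulated in both diseases
same_direction = {p for p in common
                  if (t1dm_deps[p] > 0) == (ms_deps[p] > 0)}   # concordant up/down regulation

print(f"{len(common)} common DEPs, {len(same_direction)} with the same direction of change")
```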
Notably, 10–15% of the altered proteins were also involved in neurological pathways in both diseases (Fig. A). Indeed, a greater number of proteins related to the neurological system were found in patients with T1DM than in patients with MS vs. H. In this sense, although T1DM is not considered a neurological disorder, it has already been related to specific neurological complications and predisposes individuals to develop other neurological disorders (Chou et al. ; Ding et al. ; Jin et al. ). In T1DM, some of the annotated terms were related to myelin dysregulation (ARHGEF6, CNP, NADK2, PSAP, SLC25A12) and other neurological disorders, such as Alzheimer's disease (i.e., POA2, APOC1, APOC3, CNP, GSK3B, PON1, PSAP, SELENBP1) or movement disorders (Table ). The analysis also revealed common functional annotations between the two diseases: several canonical pathways, such as those related to CXCR4, CDC42, FAK, axonal guidance or insulin secretion, were shared among these pathologies (Fig. C). There were also some common annotations related to disease and function, most of which concerned cell death processes, such as necrosis or apoptosis, and specifically the apoptosis of T cells (Fig. D).

The predictive value of TTR for the differential diagnosis of autoimmune diseases: T1DM and MS

To evaluate the diagnostic efficacy of specific proteins, receiver operating characteristic (ROC) curves were constructed using the common or differential protein sets (Table ) previously identified in Fig. E–F. In the data obtained from the cohort, proteins such as TTR, SRSF10, RPS21, SRP9, DBI, BTF3, GZMK, RO60, STMN1, ENAM and CD59 had significant prognostic value in both the T1DM and MS groups. Furthermore, the proteins CSTB and RASSF2 presented significant predictive value for differentiating between T1DM and MS (Table ). These findings indicate that CSTB and RASSF2 have good diagnostic value as biomarkers for autoimmune disease and for distinguishing between T1DM and MS, which otherwise present similar autoimmune profiles.
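As a hedged illustration of the ROC analysis described above (which was performed in SPSS), the sketch below computes the AUC and the sensitivity and specificity at the Youden-optimal threshold for a single hypothetical biomarker; the intensity values and group labels are synthetic.

```python
# Synthetic ROC example for a single candidate biomarker (not the study data;
# the published analysis was performed in SPSS).
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(42)
intensity = np.concatenate([rng.normal(1.0, 0.4, 9),     # e.g., 9 patients with MS
                            rng.normal(1.7, 0.4, 9)])    # e.g., 9 patients with T1DM
labels = np.concatenate([np.zeros(9), np.ones(9)])        # 0 = MS, 1 = T1DM

fpr, tpr, thresholds = roc_curve(labels, intensity)
auc = roc_auc_score(labels, intensity)
best = np.argmax(tpr - fpr)                               # Youden's J statistic
sensitivity, specificity = tpr[best], 1 - fpr[best]
print(f"AUC = {auc:.2f}, sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```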
In this study, we have described the involvement of the immune system through common signalling pathways in both T1DM and MS, despite their differing pathophysiologies, providing a new perspective on the molecular basis underlying these diseases. The connections observed between the proteins involved and the mechanisms activated in both immunological and neurological contexts allow differential and/or shared profiles to be established for the diagnosis and progression of T1DM and MS. We employed an LFQ proteomics approach to identify DEPs in PBMCs from individuals diagnosed with T1DM or MS compared with those from healthy controls, and we have discerned the potential correlations and differences existing between these two conditions. Both autoimmune diseases presented a high number of common proteins involved in the development of T1DM and MS; however, our results identified two specific proteins with significant profile changes compared with the healthy population and with potential diagnostic value for differentiating the two pathologies: CSTB and RASSF2.

The co-occurrence of different autoimmune diseases has been a matter of interest in several studies as a way of understanding the autoimmune process (Cojocaru et al. ; Fidalgo et al. ). In the case of T1DM, there is a threefold greater risk of developing MS as a comorbidity than in the general population (Bechtold et al. ). This increased risk could be connected with the common relationship of both diseases with T-cell mediated autoimmunity. T-cell responses appear to be less organ-specific than might be anticipated from the two different conditions, as cross-reactivity between their tissues has been demonstrated. Indeed, T cells from patients with T1DM show reactivity against pancreatic islet and CNS antigens, and this phenomenon occurs similarly in patients with MS (Winer et al. ; Banwell et al. ). This fact, combined with previous evidence (Marrosu et al. ; Zoledziewska et al. ; Pozzilli et al. ), points to a plausible correlation between the two diseases and to how alterations in the immune system significantly impact them. The data analysed in this work reflect an established connection between immune system modulation and neurological involvement. Furthermore, comprehending these correlations might contribute to explaining causality in both T1DM and MS. In this autoimmune and inflammatory context, it seems reasonable to study PBMCs, as precursors of immune cells, to unravel the underlying mechanisms involved. In agreement with previous studies showing the existence of a differential transcriptional expression profile in PBMCs between healthy controls and those with T1DM or MS when analysed separately (Brynedal et al. ; Safari-Alighiarloo et al. ), our investigation revealed a discernible differential proteomic profile not only between healthy controls and individuals afflicted with either T1DM or MS but also between the two patient groups.
Indeed, the results reflect a specific protein expression pattern for each pathological group, despite their similar autoimmune origin. These findings could contribute to the identification of potential protein biomarkers (CSTB and RASSF2) that are either shared between T1DM and MS or specific to each disease. Furthermore, patients with T1DM presented a greater number of DEPs than patients with MS did, with the former showing nearly twice as many DEPs as the latter (98 vs. 53 proteins, respectively). This phenomenon may be linked to the course of the disease, as evidenced by the discrepancy observed in previous studies in the number of dysregulated genes within PBMCs during relapse and remission periods in MS (Brynedal et al. ). Such fluctuation periods have also been described in T1DM as part of the immunomodulatory process that occurs during β-cell destruction, resulting in a distinct signature throughout T1DM progression. Moreover, some investigations have suggested that this process contributes to a continuous relapsing–remitting profile of β-cell mass and to variations in the destructive autoreactive response (von Herrath et al. ; Van Belle et al. ; van Megen et al. ). The oscillations in the immunological and inflammatory processes during the course of each disease could influence the abundance of DEPs present in PBMCs at each time point of these pathologies.

The analysis of the commonly dysregulated proteins between the two pathologies revealed their relationships with immune system activity, diseases of the neuronal system, signal transduction, the metabolism of nucleotides, and protein and RNA processing. With respect to immune system activity, we found a connection with two different pathways: interactions with the butyrophilin (BTN) family and neutrophil degranulation (Supplementary Table S2). The BTN family has been linked to both stimulatory and inhibitory effects on cells within the immune system, especially T lymphocytes (Malinowska et al. ). More specifically, BTF3 downregulation (Fig. F) has been connected with the inhibition of transcription and protein synthesis in apoptotic K562 cells and is involved in the regulation of apoptosis in animal models (Jamil et al. ), suggesting that BTF3 downregulation could compromise cell viability in the target organs of both T1DM and MS. The recruitment and activity of different immune cells during T1DM and MS contribute to disease development. Furthermore, the role of neutrophils has been proposed to be crucial in the onset and progression of both diseases, because of their capacity for degranulation and heightened production of reactive oxygen species (ROS) in target tissues (Huang et al. ; De Bondt et al. ). ROS production is related to the death of pancreatic β-cells in T1DM (Obeagu and Obeagu ) and has also been associated with demyelination and damage to astrocytes and axons in MS (Larochelle et al. ). There is substantial evidence of oxidative and nitrosative stress in patients with MS, as demonstrated by elevated serum levels of ascorbic acid, nitrites, and malondialdehyde compared with those in the healthy population. These findings suggest that increased lipid peroxidation is a consequence of exacerbated ROS production (Rispoli et al. ). Lipid peroxidation exerts its pathological effects by modifying specific proteins in patients with MS, which leads to the generation of autoantibodies against these lipid peroxidation-modified proteins (Gonzalo et al. ).
Furthermore, oxidative and nitrosative stress have been associated with increased disability in patients with MS (Kallaur et al. ). In contrast, our results indicate a downregulation of RPS21, which may indicate an activation of autophagy processes (Al-kuraishy et al. ), potentially linked to the remitting phase of MS. Notably, we detected lower levels of TTR protein in PBMCs from patients with both diseases than in those from healthy controls. TTR plays a role in various neuronal processes, including the transport of retinol and thyroid hormones (Ueda ), which could be affected in both diseases (Lehmensiek et al. ; Forga et al. ). Furthermore, TTR has been shown to play roles in oligodendrocyte development and the process of myelination, as evidenced by the hypermyelination observed in TTR-null mice (Alshehri et al. ). Microstructural abnormalities in the white matter of the brain have been found in patients with T1DM, suggesting the presence of injury in myelinated fibres or axonal degeneration (Toprak et al. ; Muthulingam et al. ). Because the absence or low levels of TTR are associated with enhanced and faster remyelination, TTR might act as a modulator that, when absent, allows for improved remyelination during the remission phase in patients with MS (Pagnin et al. ). Notably, TTR levels were significantly lower in patients with T1DM than in patients with MS, underscoring the potential importance of this protein in the nervous system. This reduction in TTR could be associated with the development of diabetic retinopathy (DR), and TTR has been proposed as a potential marker for the diagnosis and treatment of DR (Sun et al. ), supporting its crucial role in the development of neurological alterations.

Additionally, we observed alterations in the ERBB4 signalling pathway in both diseases (Supplementary Table 2). There is some evidence of reduced ERBB4 expression in the immune cells of patients with MS, suggesting that this protein is involved in the proliferation of oligodendrocyte progenitor cells, the differentiation of oligodendrocytes and remyelination (Tynyakov-Samra et al. ). However, evidence regarding the involvement of this pathway in the typical neurological alterations of T1DM is currently limited. Furthermore, it is noteworthy that our analysis of the pathways implicated in each disease, with respect to neurological functions, revealed several pathways linked with myelin dysregulation in T1DM (Table ). These include dysmyelination, abnormal brain myelination or hypomyelination of the brain in T1DM, as well as axonal guidance in both patients with T1DM and patients with MS (Fig. C). Although dysregulation of axonal guidance and myelin metabolism has been extensively studied in MS (Berg et al. ; Lemus et al. ), the evidence in T1DM is limited; therefore, further research is warranted to elucidate potential myelin-related dysregulations in T1DM. Among the proteins associated with these pathways, ribosomal protein S21 (RPS21), which is related to dysregulation of the translation process, was downregulated (Wang et al. ; Pöll et al. ). Notably, this protein is important not only because of its dysregulation in both diseases, but also because it presented the greatest number of connections (11) in the network analysis. Moreover, the presence of numerous connections between proteins highlights their possible involvement in several pathways essential for normal cellular function.
Specifically, RPS21 dysregulation could be related to alterations in several pathways, such as ER stress, and consequently it may play a role in the regulation of autophagic processes in both diseases. The expression of this protein has not been studied before in either of the two pathologies, underscoring the importance of investigating its role and implications in both.

Additionally, we analysed the altered pathways by incorporating all the dysregulated proteins in each disease separately. With respect to functional pathways, our results revealed that a high proportion of proteins were associated with lipid metabolism in T1DM, as shown in Fig. (33% in T1DM, 0% in MS), whereas signalling functions predominated in MS (14% in T1DM, 60% in MS). Lipid metabolism is related to a switch in metabolic signatures in T cells and macrophages (Catarino et al. ; Villoria-González et al. ) during differentiation and activation (Endo et al. ). Similar processes may be implicated in both diseases, as has been previously studied in MS (Pompura et al. ). However, further evidence is needed to understand their role during the relapse and remission periods in MS and the underlying mechanisms involved in T1DM. Notably, the primary signalling functions analysed were involved in survival and apoptotic processes. These results emphasize the critical role of controlling cell viability during the immune response in both chronic autoimmune diseases.

In addition, we observed alterations in several pathways, including mTORC1-mediated signalling, the cellular response to starvation and the GCN2 response to amino acid deficiency, all of which have been linked to autophagy activation (Hamasaki et al. ; Singh and Cuervo ; Masson ). Autophagy plays an important role in cell survival under certain conditions, such as inflammation, neurodegeneration and starvation (Chatterjee et al. ). It has been suggested that prolonged and excessive exposure to glucose and fatty acids might block the natural adaptive mechanisms, such as autophagy, by which β-cells protect themselves from the toxicity and stress associated with the T1DM environment (Marasco and Linnemann ). In the context of MS, neuronal loss may be linked to the dysfunction of autophagy in neurons, as well as in surrounding cells such as microglia and oligodendrocytes; these cell types are involved in myelin debris clearance and remyelination, respectively, and their impairment potentially contributes to neuronal death (Misrielal et al. ). In addition to autophagy dysregulation in target tissues, dysregulation of autophagy has also been identified in the PBMCs of patients with both diseases (Canet et al. ; Al-kuraishy et al. ). This promotes autophagy activation in PBMCs, protecting them from an inflammatory environment (Chatterjee et al. ) and assisting in their survival through protein turnover associated with cell death (Botbol et al. ). The survival of these immune cells may trigger a sustained immune response. Moreover, we identified several dysregulated pathways associated with various stages of protein translation, such as EIF2 signalling, potentially linked to the presence of endoplasmic reticulum (ER) stress in PBMCs, which plays a crucial role in autophagy activation (Deegan et al. ). EIF2 has been associated with ER stress, as it prevents ribosome assembly, leading to the global downregulation of protein translation (B'chir et al. ).
This finding is consistent with the downregulation of the RPS21 protein in our patients with T1DM and MS, potentially indicating a decrease in protein translation due to autophagy activation in PBMCs. Moreover, our analysis revealed numerous dysregulated pathways associated with immune cell apoptosis and cell death, linked mainly to lymphocyte apoptosis. As mentioned previously, the activation of autophagy mitigates cellular stress, thereby preventing apoptosis and promoting cell survival. There is evidence of apoptosis in PBMCs in both patients with T1DM (Hu et al. ) and patients with MS (Mandel et al. ), supporting the notion that increased tolerance of autoimmune cells to apoptosis underlies the pathogenesis of these diseases.

Some limitations should be considered. First, the influence of disease-modifying therapies (DMTs) in patients with MS represents a potential confounding factor. Different DMTs have varying mechanisms of action, ranging from immunomodulation to selective immune cell depletion, and may influence proteomic profiles (Oreja-Guevara et al. ). To mitigate long-lasting effects on the immune system, blood samples from patients with MS were always obtained in the middle or at the end of the corresponding treatment cycles, when a gradual reconstitution or complete recovery of the immune system is assumed. In addition, we included patients with T1DM, who were not receiving immunomodulatory or immunosuppressive treatments, as a comparator group. This allowed us to identify proteins and pathways that remain consistently dysregulated under both conditions, potentially reflecting robust autoimmune-related mechanisms rather than treatment-induced effects. Second, our study is limited by a small sample size (n = 9 per group), which, although sufficient to reveal significant proteomic differences, may reduce the generalizability of our findings. Larger cohorts will be necessary to validate the identified biomarkers and ensure their reproducibility across diverse patient populations and clinical settings. Third, although bioinformatics tools such as Reactome and IPA enabled us to integrate and interpret the high volume of proteomic data, these analyses primarily highlight associations between proteins and specific pathways. Such tools indicate pathway involvement but do not provide information regarding pathway activation or suppression. Future experimental studies, including functional assays, are needed to delineate the direction and impact of these pathways in autoimmune conditions such as MS and T1DM. Notwithstanding these drawbacks, our study may represent an important step toward comprehending both common and disease-specific mechanisms in T1DM and MS. It highlights key proteins and pathways that may serve as potential biomarkers of immune dysregulation and neuroinflammatory processes, laying the groundwork for future studies aimed at confirming and expanding upon these findings.

During T1DM and MS, similar immune and neurological processes occur, with distinct pathological implications and differential protein expression. The identification of specific expression patterns for common proteins in both autoimmune diseases suggests that the underlying mechanisms involved in the immune response are linked to the development of various neurological complications, accompanied by dysregulation of the autophagy pathway. Thus, our results reveal that two of the proteins common to T1DM and MS, CSTB and RASSF2, are potential biomarkers for differentiating between these autoimmune diseases.