This article presents the experiences and insights gained from designing and delivering a series of Massive Open Online Courses (MOOCs) at the Federal University of São Paulo's Outreach Program, with a focus on integrating artificial intelligence (AI) tools into course development, delivery, and assessment. We followed the Kolb Experiential Learning Cycle to continuously enhance the quality of the courses and adopted a convergent parallel mixed-methods approach for data collection and analysis. Our findings are based on a detailed analysis of the data, reflecting on what was successful, what challenges were encountered, and what improvements could be made. The article also suggests further enhancements, such as more inclusive course design and wider adoption of AI-driven instructional and marketing tools. These findings contribute to the ongoing dialogue about improving MOOCs through AI, offering practical insights for educators in the AI and OER community.
Brazil's educational system faces numerous challenges, including high dropout rates, disparities between public and private schools, and low test scores (Guilherme et al., 2024). In global education rankings, particularly in math and science, Brazil lags behind developed countries (OECD, 2021). This systemic problem stems from several contributing factors, such as inadequate teacher training, underinvestment in schools, and uneven resource distribution (Castro, 2001).
To help improve this situation, professors from the Federal University of São Paulo have offered free massive open online extension courses. Our goal is to provide educational opportunities to everyone interested, especially those who may not be able to afford paid courses or attend face-to-face classes. However, creating and delivering these courses present numerous challenges. These challenges range from the bureaucratic processes involved in proposing and approving the courses to the final evaluation of students' learning outcomes.
In this article, we present the methods we followed, the challenges we encountered, what worked well, what didn't, and the lessons we've learned so far. We also explain how we utilized AI in each phase and speculate on its potential future use. Our aim is to provide a useful roadmap for those facing similar challenges in the AI+OER community.
We did not have any financial resources or staff to support the creation and delivery of MOOCs. Our university support was very limited, providing only a system that allowed us to promote the courses on the Outreach Website and manage the certificates upon course completion. The university did not provide a virtual learning environment. As a result, we had to rely mostly on free tools. Whenever we used paid software, we covered the costs ourselves.
From 2022 to 2024, we offered five free-of-charge massive open online courses. Table 1 presents the courses offered, their learning goals, and the target audience.
Table 1: The courses offered
Course | Year/Month | Goal | Target audience |
---|---|---|---|
Introduction to Data Science (with R) | 2022/January | To present concepts of Data Science through practical, hands-on programming challenges. | Anyone interested in learning basic concepts of Data Science. |
Introduction to R Programming Language | 2022/February | To provide learners with concepts of data analysis, visualization, and statistical modeling, focusing on hands-on experience with real-world data. | Teachers, students, and professionals involved with data analysis. |
Introduction to Scratch Programming Language | 2022/July | To introduce children to the basics of programming and problem-solving using Scratch. | Children and K-12 educators. |
Visual Thinking | 2024/February | To help learners develop skills to communicate ideas through visual methods. | Educators, students, designers, and business professionals. |
Systems Thinking | 2024/June | To equip learners with the skills to understand and analyze complex systems. | Educators, students, business leaders, and managers. |
Each MOOC adhered to the OER principles of openness, reusability, adaptability, and accessibility. We followed the Creative Commons guidelines, allowing anyone to distribute, remix, adapt, or build upon our work, even for commercial purposes.
On the course website, we explicitly stated that the course was designed to be openly accessible to all and encouraged users to share the materials and reuse them in various contexts. All materials were freely available, not only during the course but also after its completion. We actively promoted the reuse of all the educational resources we created.
We developed open textbooks, allowing students to download the materials for free. For example, in the systems thinking course, we made our comic book (Arantes do Amaral, 2019) and case study book (Arantes do Amaral, 2012) freely available. Similarly, for the R programming and Data Science courses, we provided free access to our book (Arantes do Amaral, 2021), specifically designed to support the course.
In the programming-related courses, all programming exercises and code examples were made available, allowing anyone to use, modify, and redistribute them.
Additionally, all videos created for the courses were accessible on both the course website and our YouTube channel. This enabled learners to freely share the content on social media or other platforms as they wished. To ensure accessibility, all videos included closed captions, making them easier to follow for learners with learning difficulties.
Each MOOC was designed as a project composed of four phases (Figure 1). During Phase 1 (Course Planning and Launch), we formulated the course and obtained all necessary university approvals. In Phase 2 (Course Creation), we designed and structured the course based on the target audience's needs. In Phase 3 (Course Delivery), we ran the course, collecting data about students' learning and their perceptions of the course quality. In the final phase (Phase 4, Knowledge Sharing), we reflected on our learning. We analyzed the data, identified what worked and what didn’t, and made improvements for the next course. We also published the lessons learned in peer-reviewed journals.
Phase 1 consists of two macro processes: submission and approval, and course promotion.
For each course we proposed, we had to follow a rigid approval process. This involved creating the course proposal and submitting it to the Outreach Committee. The Committee would then analyze the proposal and usually request improvements. This bureaucratic process typically took about four weeks to complete.
Once the course was approved, it was publicized on the University’s Outreach Program website. This website lists and provides details of all approved outreach courses and allows interested individuals to enroll.
To attract the desired target audience, we took several actions, such as creating course information packets and advertising the courses on social media. We typically created all promotional materials using Adobe Sketchbook, a free software tool. We also tried AI image generators, such as DALL-E, to create images integrated with text. However, the results were not very satisfactory: it produced interesting images but often failed to render the accompanying text correctly.
We spent a significant amount of time promoting the course on social media, particularly by searching for relevant groups on Facebook that might be interested in participating. We advertised the courses in groups for teachers, educators, and university social groups. We also promoted the courses on LinkedIn and other news outlets. It would be highly beneficial if social media platforms had embedded AI tools to help identify groups that match our interests. Additionally, we contacted outreach programs at several public universities, asking them to help promote our courses.
This was a time-consuming process, as it required the approval of administrators from various social media groups, outreach programs, and newspapers. Not all of our efforts were successful, as several group administrators did not agree to publish our advertisements. We also did not succeed in advertising the course in the media. Nevertheless, despite these obstacles, we successfully achieved a high number of enrollees, as discussed later in this article.
Phase 2 consists of three macro processes: 1) understanding the target audience, 2) course design, and 3) course adjustments.
At the end of the enrollment period, we downloaded the list of students from the university's Outreach system. This provided us with an Excel file containing limited information about the enrollees: their names, city of residence, email, education level, undergraduate field (if any), and any disabilities.
This information alone was not very useful. To create a meaningful course, we needed to better understand the enrollees—not just who they were, but also their interests, expectations, and context. To achieve this, we used two Design Thinking concepts: Personas and Empathy Maps.
Personas were fictional characters representing groups of enrollees with similar characteristics (Li and Xiao, 2022). Empathy Maps provided insights into the Personas' expectations regarding the course (Liu et al., 2022).
Both tools required us to collect additional data. Therefore, as soon as we received the list of enrollees, we sent them questionnaires with open-ended and closed-ended questions about their demographics, needs, feelings, and experiences. We used free tools like Google Forms to collect the data and R Studio to analyze it.
To create the Personas, we gathered data on demographics, challenges, motivations, and skills. For the Empathy Maps, we collected information about what the enrollees expected to learn, their prior experiences with MOOCs, what they had heard about the course topic, what they already knew, and what they hoped to achieve through the course.
Once we collected the data, we analyzed it. For quantitative data, we developed R programs, and to speed up the process, we used ChatGPT 4.0 to generate the code. For qualitative data, we also used ChatGPT 4.0, prompting it to identify the recurrent themes from the answers to the qualitative questions.
The use of AI was very helpful: it allowed us to clearly identify specific clusters of people with distinct needs and desires. Understanding these clusters was crucial for designing the course effectively.
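To make this clustering step concrete, here is a minimal R sketch of how enrollee survey data can be grouped into candidate Personas. It is illustrative only: the CSV file name and column names are hypothetical, and k-means is a stand-in for the thematic grouping we actually obtained from ChatGPT.

```r
# Minimal sketch: grouping enrollee survey responses into candidate Personas.
# The file name and column names are hypothetical.
enrollees <- read.csv("enrollee_survey.csv")

# Two illustrative numeric features: self-rated prior experience (1-5)
# and weekly hours available for study.
dat <- na.omit(enrollees[, c("prior_experience", "weekly_hours")])

set.seed(42)                           # reproducible cluster assignment
km <- kmeans(scale(dat), centers = 3)  # e.g., three candidate Personas

dat$persona <- factor(km$cluster)

# Mean profile per cluster: the starting point of a Persona description.
aggregate(cbind(prior_experience, weekly_hours) ~ persona,
          data = dat, FUN = mean)
```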
The course design was the most complex macro process, as it involved multiple subprocesses—some related to pedagogy and others to IT. Each process is outlined succinctly in the following sections.
For each course, we defined learning objectives that outlined the skills we wanted our learners to develop upon completing the course. Using Personas and Empathy Maps helped us gain a clear understanding of the learners' motivations, preferences, previous skill levels, and learning environment constraints. Based on this understanding, we determined the course content, defining the core modules and customizing exercises to meet the needs of each Persona.
In all courses, we used active learning, challenging students to learn by doing through problem-solving exercises based on real-world problems. We incorporated both problem-based and project-based learning approaches.
We also fostered peer learning and collaboration. The course’s Facebook groups encouraged students to engage with each other, discuss content, collaborate, and share knowledge.
We designed the courses to ensure strong teacher presence, using free tools such as Facebook to interact with students and provide timely, constructive feedback.
Each course had core content, along with customized quizzes and exercises. The core content included essential material such as key concepts, theories, and required readings. We created video lectures using free tools: Loom for recording and YouTube for hosting. The videos were short, typically between 5 and 10 minutes each.
The exercises were designed based on what we learned from the Personas and Empathy Maps. For example, in programming courses, we challenged students to develop programs related to real-world problems. These challenges were not graded; instead, students completed the exercises and afterward compared their solutions with those we presented in video format. This process allowed students to evaluate their own progress. Additionally, we were available to address any questions or doubts students had.
Next, we structured the courses by breaking them into modules. Each course lasted one month, with four modules—one per week. We aimed to balance the workload across the modules, providing approximately the same amount of video lectures and exercises in each.
The course structure allowed for self-paced learning: all modules were available from the beginning of the course. This approach enabled students to move through the content at their own pace, helping them balance their studies with other commitments.
We considered several platforms for hosting our courses. Moodle, an open-source learning environment, was one option, but we lacked the technical staff to install and maintain it. Using a paid service like MoodleCloud wasn’t feasible due to budget constraints.
Google Classroom was another possibility, but its student limit (500 per class) would have required splitting students into smaller groups, making management difficult. After evaluating these options, we ultimately chose Google Sites. We selected this tool for several reasons: it was free, easy to use, had a customizable structure, and integrated with other Google services such as Google Drive and Google Forms. It also allowed us to embed YouTube videos on the course website. Although Google Sites doesn’t have a built-in discussion forum, we addressed this by using Facebook to facilitate course discussions.
At the beginning of each module, typically on Monday mornings, we sent an email to the students. In this email, we provided detailed instructions, including the module goals, required readings or videos, exercises, and additional study materials. We also explained the course rules and the actions students needed to complete in order to receive certificates at the end of the course.
We visited the discussion forum (Facebook group) several times a day to answer students' questions, help them stay on track, and foster engagement and collaboration. At the end of the course, we administered the final exam and sent a form for students to evaluate the course and their own performance.
Managing the course was very time-consuming and required significant effort. Many students had questions about the exercises and course content. However, many of these questions arose because students either didn’t take the time or didn’t have the patience to read the instructions or watch the videos where everything was explained. As a result, they sent us numerous unnecessary questions, which required us to repeat information that had already been provided.
We assessed the students’ learning in different ways. In programming-related courses (such as Data Science, R, and Scratch), we asked questions that required students to develop a computer program as a response. In the Visual Thinking course, we required students to submit a visual model. In the Systems Thinking course, we asked questions related to the course content. We used the following rubric to evaluate the students’ learning (Table 2).
Table 2. Rubric used to evaluate the students’ learning
Score range | Achievement level | Description |
---|---|---|
0-3.9 | Low | Demonstrates a limited understanding of key concepts. Struggles to apply concepts to problems or tasks. Shows little evidence of critical thinking or synthesis of ideas. |
4-7.9 | Medium | Demonstrates a basic understanding of key concepts. Can apply concepts in familiar contexts but may struggle with unfamiliar situations. Demonstrates emerging critical thinking but may rely on surface-level understanding. |
8-10 | High | Shows a thorough and in-depth understanding of key concepts. Consistently applies concepts effectively in both familiar and unfamiliar contexts. Exhibits advanced critical thinking, connecting ideas creatively and drawing insightful conclusions. |
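In practice, applying the rubric amounts to mapping a 0-10 score onto one of the three achievement levels. A minimal R sketch of that mapping, using made-up example scores:

```r
# Map numeric assessment scores (0-10) to the rubric categories of Table 2.
scores <- c(2.5, 4.0, 7.9, 8.2, 10)           # made-up example grades

achievement <- cut(scores,
                   breaks = c(0, 4, 8, 10),    # 0-3.9 | 4-7.9 | 8-10
                   labels = c("Low", "Medium", "High"),
                   right = FALSE,              # intervals closed on the left
                   include.lowest = TRUE)      # keep the top score (10) in "High"

table(achievement)  # distribution across achievement levels, as in Figure 3
```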
In late 2022, with the introduction of ChatGPT, we noticed that some students began using this tool to complete their final assignments. To identify whether a student's response was generated by AI, we used ZeroGPT for detection.
Since these courses were part of an Outreach program, no formal grading was applied. Students who successfully completed the required assignments received a course certificate, even if we perceived their achievement level as “low.”
At the end of each MOOC, we reflected on our own learning and then wrote articles to share our findings. As non-native English speakers, we used ChatGPT 4.0 to help improve the grammar of these articles. So far, we have published three articles. The first (Arantes do Amaral & Onusic, 2022) presents our findings from the course "Introduction to Data Science." The second discusses the process of creating Personas and Empathy Maps for the "Introduction to Scratch" course (Arantes do Amaral et al., 2022). The third outlines the lessons learned from delivering the same course (Arantes do Amaral, 2023). Additionally, two more articles, detailing the insights gained from subsequent MOOCs, are currently under review by international journals.
We followed the Kolb Experiential Learning Cycle (Figure 2), which outlines how teachers can continuously learn through the experience of delivering courses, reflecting on that experience, and improving the next course based on these reflections.
We followed a convergent parallel mixed-methods design (Creswell & Plano Clark, 2018). In this design, qualitative and quantitative data are collected simultaneously and analyzed independently. Afterward, the results are compared, contrasted, and interpreted.
A total of 2,145 people participated in the courses. Ages ranged from 12 to 80 years old. Students from all Brazilian states took part, with the state of São Paulo having the highest number of enrollees (50% to 60% of the participants in each course). The majority of participants were teachers and students.
Data was collected one week before the beginning of the course and again at the end.
The first data collection aimed to gather demographic information about the students and understand their expectations. This data was used to create Personas and Empathy Maps, which helped us adjust the course to better meet expectations.
The second data collection aimed to evaluate the students' learning, which could include answers to a final exam or a visual model, depending on the course. We also collected data related to students' self-perception of their learning. We used electronic questionnaires with both open-ended and closed-ended questions.
We analyzed all the collected data using a mixed-methods approach. For quantitative data analysis, we used descriptive statistics, creating R code for this purpose. To speed up the process, we used ChatGPT 4.0 to refine the code and create customized functions.
For qualitative data analysis, we followed the five steps of qualitative analysis (Yin, 2015). Given the large number of responses, we also used ChatGPT 4.0 to organize recurring themes into categories.
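We did this through the ChatGPT interface, but the same step can be scripted. The sketch below calls the OpenAI chat completions API from R; the example answers and prompt wording are illustrative, and an API key is assumed to be set in the OPENAI_API_KEY environment variable.

```r
# Sketch: asking a chat model to group open-ended answers into recurring themes.
library(httr)
library(jsonlite)

answers <- c("I want to automate my spreadsheets",
             "I need statistics for my master's thesis",
             "I am curious about data science careers")

prompt <- paste0("Identify the recurring themes in these course-expectation ",
                 "answers and assign each answer to one theme:\n",
                 paste(sprintf("- %s", answers), collapse = "\n"))

resp <- POST(
  url  = "https://api.openai.com/v1/chat/completions",
  add_headers(Authorization = paste("Bearer", Sys.getenv("OPENAI_API_KEY"))),
  content_type_json(),
  body = toJSON(list(
    model    = "gpt-4",
    messages = list(list(role = "user", content = prompt))
  ), auto_unbox = TRUE)
)

themes <- content(resp)$choices[[1]]$message$content
cat(themes)  # the model's proposed theme categories
```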
For each course, we collected data on the enrollees, non-starters, and completers. Based on this information, we calculated the dropout rates, considering both all enrolled students and only the starters (Table 3).
Table 3. Dropout Rates
Course | Enrollees | Non-Starters | Completers | Dropout rate (all enrolled) | Dropout rate (starters only) |
---|---|---|---|---|---|
Introduction to Data Science (with R) | 845 | 163 (19.3%) | 323 | 61.8% | 52.7% |
Introduction to R Programming Language | 953 | 346 (36.3%) | 354 | 62.9% | 41.7% |
Introduction to Scratch Programming Language | 354 | 141 (39.8%) | 84 | 76.3% | 60.6% |
Visual Thinking | 2000 | 1013 (50.6%) | 359 | 82.0% | 63.6% |
Systems Thinking | 766 | 432 (56.3%) | 183 | 76.2% | 45.2% |
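As a sanity check, both dropout measures in Table 3 can be recomputed directly from the enrollment counts; the short R sketch below also makes each definition explicit.

```r
# Recompute the two dropout measures of Table 3 from the raw counts.
courses <- data.frame(
  course       = c("Data Science", "R", "Scratch",
                   "Visual Thinking", "Systems Thinking"),
  enrollees    = c(845, 953, 354, 2000, 766),
  non_starters = c(163, 346, 141, 1013, 432),
  completers   = c(323, 354, 84, 359, 183)
)

courses$starters <- courses$enrollees - courses$non_starters

# Dropout over everyone who enrolled vs. only those who actually started.
courses$dropout_all      <- 1 - courses$completers / courses$enrollees
courses$dropout_starters <- 1 - courses$completers / courses$starters

print(courses[, c("course", "dropout_all", "dropout_starters")], digits = 3)
```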
Due to financial restrictions, we used free tools whenever possible. However, to improve the process, we also utilized paid AI tools. Table 4 presents the tools we used.
Table 4. IT Tools Used
Tool | Application | Cost |
---|---|---|
Google Groups | Creation of list of emails | Free of charge |
Google Sites | Creation of the virtual learning environment | Free of charge |
Google Forms | Creation of questionnaires and quizzes | Free of charge |
Gmail | Communication | Free of charge |
Adobe Sketchbook | Creation of all course art (images, posters) | Free of charge |
Facebook | Used as the discussion forum for the courses | Free of charge |
Loom | Creation of the video lectures and the feedback videos for students | Free of charge |
YouTube | To host the video lectures | Free of charge |
RStudio | To analyze quantitative data | Free of charge |
ChatGPT | To analyze qualitative data, create R code, and support the creation of Personas and Empathy Maps | Paid |
DALL-E | To create images | Paid |
ZeroGPT | To detect text generated by AI models | Paid |
Figure 3 presents our evaluation of the students' learning, based on the rubric developed (Table 2). For each course, it shows the percentage of students whose learning we evaluated as "Low," "Medium," or "High."
Figure 4 presents students' perceptions of their own learning. For each course, it shows the percentage of students who evaluated their learning as "Low," "Medium," or "High."
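Charts like Figures 3 and 4 are straightforward to produce with the same free tools we used for analysis, for example with ggplot2 in R. A minimal sketch follows; the two courses and percentages below are placeholders, not the actual figure data.

```r
# Sketch of a Low/Medium/High comparison chart in the style of Figures 3 and 4.
library(ggplot2)

df <- data.frame(
  course = rep(c("Course A", "Course B"), each = 3),
  level  = factor(rep(c("Low", "Medium", "High"), 2),
                  levels = c("Low", "Medium", "High")),
  pct    = c(20, 60, 20, 10, 55, 35)          # placeholder percentages
)

ggplot(df, aes(x = course, y = pct, fill = level)) +
  geom_col() +                                 # stacked bars by default
  labs(x = NULL, y = "Students (%)", fill = "Achievement level") +
  theme_minimal()
```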
Regarding the dropout rates in our courses, when considering all students who enrolled, we observed relatively high rates, ranging from 61.8% to 82% (Table 3). However, research by Mehrabi et al. (2022) has shown that dropout rates in MOOCs are typically higher, between 80% and 90%. Additionally, there is still no consensus on how dropout rates should be measured (Papadakis, 2023). The debate centers on whether all enrolled students should be considered or only those who actually start the course. When excluding non-starters, our dropout rates improve, ranging from 41.7% to 63.6%.
We used nine free IT tools and three paid AI tools. The free tools were used to create course content, structure the course, host the content, and manage communication with the students. The AI tools helped us analyze the collected data and detect AI-generated responses. Using AI tools alongside Design Thinking concepts allowed us to better understand the students' profiles and expectations, and helped us analyze the large amount of data collected during each course. This aligns with other researchers, who have pointed out that AI not only powers MOOCs (Fauvel et al., 2018) but can also reduce instructor workload while boosting learner engagement (Dao et al., 2021).
The data (Figure 3) on professors' perceptions of students' learning revealed that learning outcomes were generally classified as "medium" in all courses, with only a small percentage of students demonstrating "high" learning. The data also showed that students faced more difficulties in programming-related courses.
On the other hand, the data on students' perceptions of their own learning (Figure 4) revealed that most students rated their learning as "high." It also indicated that students felt they learned less in programming-related courses.
Comparing both sets of data (Figures 3 and 4), we speculate that perceptions of learning differ significantly between students and teachers. While students tend to rate their own learning very positively, teachers view the outcomes with more reservation. This finding aligns with other scholars (Prosser & Trigwell, 1997), who noted that students may perceive their progress more positively, while teachers adopt a more critical perspective based on assessments and learning outcomes.
We can affirm that our goal of providing free, large-scale courses worked very well. We were able to offer a variety of courses to thousands of students from all Brazilian states, with 1,303 successfully completing the courses and earning certificates (Table 3). The majority of students reported that they enjoyed the courses and learned effectively. Therefore, we can conclude that our efforts are helping to provide educational opportunities to Brazilian students, teachers, and the general public.
The combination of free tools and paid AI tools was highly effective. Google Sites helped organize the course modules in an easy and intuitive way. Loom allowed us to create videos quickly and efficiently. Using YouTube to host the videos was also very helpful. Google Forms enabled us to create questionnaires, making it easy to collect both qualitative and quantitative data. RStudio allowed us to perform data analysis and visualization, while ChatGPT 4.0 sped up the development of R code. ChatGPT 4.0 also helped us analyze large amounts of qualitative data, accurately identifying recurring themes and facilitating the creation of Personas and Empathy Maps. ZeroGPT helped us detect AI-generated text in the assignments. Moreover, ChatGPT 4.0 assisted in improving the grammar of the journal articles we wrote, reducing the need for paid proofreading services.
The publication of our findings in peer-reviewed journals was also successful. It allowed us to deeply reflect on each course, learn what worked and what didn’t, and share the lessons learned with the broader community.
In the courses delivered after 2022, we instructed students not to use AI tools to answer exam questions. However, this approach was ineffective, as nearly all students used AI tools to varying extents. To identify AI-generated responses, we employed ZeroGPT. Despite its usefulness, ZeroGPT had certain limitations. It occasionally produced false positives, flagging human-written text as AI-generated, and false negatives, failing to detect AI-generated content. Additionally, some responses were labeled as "Mixed," without clarifying whether AI was used solely for grammar correction or to significantly enhance the content.
In the first large-scale course we taught, we tried using Gmail as a communication tool. This didn't work at all, as Gmail has a daily sending limit of 500 emails. To overcome this problem, we switched to Google Groups.
The use of Facebook as a discussion forum also didn’t work as expected. Only a small percentage of students participated by asking questions and sharing knowledge. Most participants remained quiet, only reading others' questions and the professors’ responses.
We also spent a lot of time answering students who didn’t have the patience to carefully read the detailed instructions we emailed them each week. We had to repeat instructions multiple times, which was somewhat stressful.
We also noticed that a small percentage of enrollees were not genuinely interested in learning. They enrolled in the courses only to obtain certificates. Some of them were public school teachers who used the certificates to gain promotions in their careers.
We also acknowledge the need to expand our team. We understand that relying on a single professor to handle all the work is not very effective. Throughout the courses, we made several attempts to build a team by inviting other professors to join, asking IT technicians for help, and even engaging students to contribute. Unfortunately, without financial incentives or rewards, we could not motivate them to participate. Nevertheless, we will continue these efforts in future versions.
Lastly, we realized that our MOOCs should be more inclusive. We should have put more effort into creating course materials that are accessible to people with disabilities.
For the next versions of the MOOCs, we plan to review and correct the automatic captions for the videos hosted on YouTube, and create transcripts for the video content. In addition, we plan to review the course materials, formatting them to be accessible to screen readers by including descriptions for all images and charts for visually impaired learners, and incorporating free text-to-speech software. We also intend to review the course interfaces to ensure easy navigation by removing non-essential elements.
If we had ample financial resources, we would consider using AI-powered social media marketing platforms to promote the courses. We could also use AI-driven instructional design and content development tools, such as Articulate 360 and Knewton Alta, to create content. Additionally, AI-powered learning management tools, such as Canvas LMS or Moodle with AI plugins, could be used to provide automatic feedback, create quizzes, and manage grading. Furthermore, AI-powered research and writing assistance tools, such as Grammarly and Quillbot, would support the process of knowledge sharing (Figure 5). More than that, we could hire a team to assist us. This team could include web designers, AI experts, an inclusivity specialist, and IT technicians.
Creating massive courses with limited financial resources and minimal IT support from the university is very challenging. It requires a significant amount of time and effort in all phases of the project, from creating to delivering each MOOC. Additionally, it is costly, as we had to cover the expenses of using AI tools such as ChatGPT 4.0 and ZeroGPT.
The use of AI tools was helpful in almost all phases of the project. They assisted us in analyzing both quantitative and qualitative data from the questionnaires.
However, we are unsure if this experience could be easily replicated. It requires the person in charge to have a wide range of skills: information technology (to create the virtual learning environment and manage multiple software tools), data science (to collect and analyze course data), project management (to deliver the course and handle administrative tasks), instructional design (to select the most appropriate teaching and learning approaches), and marketing (to create promotional materials and distribute them on social media). It also demands that the person have enough time, patience, and energy to handle all the issues that arise during the process.
We hope this article captures the attention and curiosity of professionals working on educational MOOC projects. We also hope that this work leads to the creation of national and international partnerships with universities, companies, and funding organizations. Support from various institutions could enhance our efforts, improve the quality of the courses, and create educational opportunities for people in socially vulnerable situations. We hope this reflection is helpful to the AI+OER community.
Arantes do Amaral, J. A. A. (2012). Desvendando sistemas. Kindle Direct Publisher.
Arantes do Amaral, J. A. A. (2019). The cartoon guide to system dynamics. Kindle Direct Publisher.
Arantes do Amaral, J. A. A. (2021). Pesquisa em Educação: uma abordagem prática. Kindle Direct Publisher.
Arantes do Amaral, J. A., Meister, I. P., Lima, V. S. S., & Garbe, G. G. (2022). Using Design Thinking Tools to Inform the Design Process of a Massive Open Online Course on Using Scratch for Teachers. Journal of Design Thinking, 3(1), 123-134. https://doi.org/10.22059/jdt.2022.350059.1085
Arantes do Amaral, J. A., & Onusic, L. M. (2022). The Challenges of Providing Massive Online Extension Courses: A Case Study. Anatolian Journal of Education, 7(2), 123-134. https://doi.org/10.29333/aje.2022.7210a
Arantes do Amaral, J. A. (2023). Using Scratch to Teach Coding in Massive Online Open Courses: A Systemic Analysis. Journal of Problem Based Learning in Higher Education, 11(3), 130-144. https://doi.org/10.54337/ojs.jpblhe.v11i3.7390
Castro, C. de M. (2001). Educação superior e eqüidade: inocente ou culpada?. Ensaio: Avaliação e Políticas Públicas em Educação, 9(30), 109-122. https://doi.org/10.1590/S0104-40362001000100006
Creswell, J. W., & Plano Clark, V. L. (2018). Designing and conducting mixed methods research (3rd ed.). SAGE Publications.
Dao, X. Q., Le, N. B., & Nguyen, T. M. T. (2021, March). AI-powered MOOCs: Video lecture generation. In Proceedings of the 2021 3rd International Conference on Image, Video and Signal Processing (pp. 95-102).
Fauvel, S., Yu, H., Miao, C., Cui, L., Song, H., Zhang, L., & Leung, C. (2018, July). Artificial intelligence powered MOOCs: A brief survey. In 2018 IEEE International Conference on Agents (ICA) (pp. 56-61). IEEE. https://doi.org/10.1109/ICA.2018.00014
Guilherme, A. A., de Araujo, J. M., Silva, L., & de Oliveira Brito, R. (2024). Two ‘Brazils’: Socioeconomic status and education performance in Brazil. International Journal of Educational Research, 123, 102287. https://doi.org/10.1016/j.ijer.2023.102287
Li, L., & Xiao, J. (2022). Persona profiling: A multi-dimensional model to study learner subgroups in Massive Open Online Courses. Education and Information Technologies, 27(5), 5521–5549. https://doi.org/10.1007/s10639-021-10829-0
Liu, S. F., Wu, Y. C., Chang, C. F., & Liu, G. Z. (2022). Exploring the design of online teaching courses based on the needs of learners. In G. Bruyns & H. Wei (Eds.), With design: Reinventing design modes. IASDR 2021 (pp. 2930–2945). Springer, Singapore. https://doi.org/10.1007/978-981-19-4472-7_205
Mehrabi, M., Safarpour, A. R., & Keshtkar, A. (2022). Massive open online courses (MOOCs) dropout rate in the world: a protocol for systematic review and meta-analysis. Interdisciplinary Journal of Virtual Learning in Medical Sciences, 13(2), 85-92. https://doi.org/10.30476/IJVLMS.2022.94572.1138.
OECD. (2021). Education in Brazil: An International Perspective. OECD Publishing. https://doi.org/10.1787/60a667f7-en
Papadakis, S. (2023). MOOCs 2012-2022: An overview. Advances in Mobile Learning Educational Research, 3(1), 682-693. https://doi.org/10.25082/amler.2023.01.017
Prosser, M., & Trigwell, K. (1997). Relations between perceptions of the teaching environment and approaches to teaching. British Journal of Educational Psychology, 67(1), 25-35. https://doi.org/10.1111/j.2044-8279.1997.tb01224.x
Yin, R. K. (2015). Qualitative research from start to finish. Guilford Publications.