Summary: This study evaluates “I Learn with Prompt Engineering”, a self-paced, self-regulated elective course designed to equip university students with prompt engineering skills for working effectively with large language models (LLMs), foster self-directed learning, and strengthen academic English proficiency through generative AI applications. By integrating prompt engineering concepts with generative AI tools, the course supports autonomous learning and addresses critical gaps in language proficiency and market-ready skills. The study also examines EnSmart, an AI-driven tool powered by GPT-4 and integrated into the Canvas LMS, which automates the generation and grading of academic test content and delivers real-time, human-like feedback. Performance assessments, structured questionnaires, and surveys were used to evaluate the course’s impact on prompting skills, academic English proficiency, and overall learning experience. Results showed significant improvements in prompt engineering skills: accessible patterns such as “Persona” proved highly effective, whereas advanced patterns such as “Flipped Interaction” posed challenges. Gains in academic English were most pronounced among students with lower initial proficiency, although engagement and practice time varied. Students valued EnSmart’s intuitive integration and grading accuracy but noted limitations in question diversity and adaptability. The high final success rate indicates that careful course design, grounded in Panadero’s four dimensions of self-regulated learning, can enable successful autonomous learning. The findings highlight generative AI’s potential to enhance autonomous learning and task automation while underscoring the need for human oversight to ensure ethical and effective implementation in education.
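The paper does not publish EnSmart’s code or the exact prompts used in the course, but the “Persona” pattern mentioned above is easy to illustrate. The sketch below is a minimal, purely illustrative example of that pattern, assuming the OpenAI Python client as the interface to GPT-4; the prompt wording, model choice, and sample paragraph are the author’s hypothetical stand-ins, not material from the study.

```python
# Illustrative sketch only: not the study's actual EnSmart implementation.
# Shows the "Persona" prompt pattern, which the study found most accessible:
# the prompt assigns the model an explicit role before stating the task,
# shaping the tone, vocabulary, and style of its feedback.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Persona pattern: role assignment first, task second. (Hypothetical wording.)
persona_prompt = (
    "Act as an academic English writing tutor. "
    "Review the paragraph below, point out grammar and register problems, "
    "and suggest a revised version suitable for a university essay."
)

# A hypothetical student submission for the tutor persona to review.
student_paragraph = (
    "In my opinion AI is very good tool because it help students to do "
    "many works faster and learn more better."
)

response = client.chat.completions.create(
    model="gpt-4",  # the study used GPT-4; any chat model would work here
    messages=[
        {"role": "system", "content": persona_prompt},
        {"role": "user", "content": student_paragraph},
    ],
)

print(response.choices[0].message.content)
```

By contrast, the “Flipped Interaction” pattern that students found harder reverses the usual roles: the prompt instructs the model to ask the learner questions one at a time and only produce an answer or assessment after gathering responses, which demands more planning from the prompt writer.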