

A Comparative Analysis of Large Language Models for Automated Course Content Generation from Books
Large Language Models (LLMs) have emerged as powerful tools for extracting course topics from textbooks in today's fast-paced educational landscape. Harnessing Knowledge Graphs to visualize the interrelationships among these topics further enhances the informativeness of the extracted content. This paper presents a comprehensive comparative study that explores and assesses the effectiveness of different LLMs in extracting, identifying, and summarizing course topics from textbooks and in generating knowledge graphs to visualize topic interdependencies. We also present a methodology for knowledge graph development that incorporates specialized models, GPT-2, Falcon 7B, and Llama-2-7b-chat-hf, fine-tuned on the tables of contents of eight books. In addition, we evaluated Llama3, Llama3.1, Gemma, and Mistral-Nemo as zero-shot models. Our findings show that Llama3 achieves the best performance among the zero-shot models on the following criteria: quality of content, correctness, clarity, and overall rating. Among the fine-tuned models, GPT-2 Large excels in generating meaningful content, while GPT-2 Base is the most efficient. Challenges in knowledge graph integration were addressed by representing table-of-contents data as knowledge graphs, providing more meaningful insights. This research advances knowledge representation by demonstrating the value of LLMs for knowledge graph construction and data balance optimization.
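As a minimal illustration of the kind of representation the abstract refers to (a sketch, not the authors' implementation), the snippet below builds a small knowledge graph from a book's table of contents using networkx. The topic names and the "covers"/"precedes" relations are hypothetical assumptions introduced for illustration only.

```python
# Minimal sketch: representing a book's table of contents as a knowledge graph.
# Chapter/topic names and the "covers"/"precedes" relations are illustrative
# assumptions, not the dataset or schema used in the paper.
import networkx as nx

toc = {
    "Machine Learning": ["Supervised Learning", "Unsupervised Learning"],
    "Supervised Learning": ["Linear Regression", "Decision Trees"],
    "Unsupervised Learning": ["Clustering", "Dimensionality Reduction"],
}

graph = nx.DiGraph()
for parent, subtopics in toc.items():
    for child in subtopics:
        # Hierarchical edge: a chapter "covers" its subtopics.
        graph.add_edge(parent, child, relation="covers")

for parent, subtopics in toc.items():
    for first, second in zip(subtopics, subtopics[1:]):
        # Ordering edge between sibling topics, capturing the reading
        # order implied by the table of contents.
        graph.add_edge(first, second, relation="precedes")

print(graph.number_of_nodes(), "topics,", graph.number_of_edges(), "relations")
```

A graph built this way can then be visualized or queried to surface topic interdependencies across chapters; the paper's pipeline would additionally rely on an LLM to extract and normalize the topic labels before constructing such a graph.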