
An Actor-Critic Hierarchical Reinforcement Learning Model for Course Recommendation

This repository is the official implementation of An Actor-Critic Hierarchical Reinforcement Learning Model for Course Recommendation.

Online learning platforms provide diverse course resources, but this often results in information overload. Learners want to find courses suited to their knowledge level and preferences quickly and accurately, so effective course recommendation plays a key role in helping them select appropriate courses and improving the efficiency of online learning. However, when a user is enrolled in multiple courses, existing course recommendation methods struggle to accurately recommend the target course most relevant to the user because of noisy courses in the enrollment history. In this paper, we propose a novel reinforcement learning model named Actor-Critic Hierarchical Reinforcement Learning (ACHRL). The model incorporates the actor-critic method to construct the profile reviser, which removes noisy courses and enables effective personalized course recommendation. Furthermore, we propose a policy gradient based on the temporal difference error to reduce the variance of the training process, speed up the convergence of the model, and improve recommendation accuracy. We evaluate the proposed model on two real datasets, and the experimental results show that it significantly outperforms existing recommendation models (improving 3.77% to 13.66% in terms of HR@5).
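The core idea of the TD-error-based policy gradient can be sketched as a minimal actor-critic loop. Everything below is an illustrative assumption (the toy two-state environment, the tabular critic, the learning rates), not the paper's actual code; it only shows how the critic's temporal difference error replaces the high-variance Monte Carlo return in the actor's update:

```python
# Hypothetical sketch: actor-critic with a TD-error policy gradient.
# The 2-state toy MDP and all hyperparameters are illustrative, not ACHRL's.
import numpy as np

rng = np.random.default_rng(0)

n_states, n_actions = 2, 2
theta = np.zeros((n_states, n_actions))  # actor: softmax policy parameters
V = np.zeros(n_states)                   # critic: tabular state values
alpha_actor, alpha_critic, gamma = 0.1, 0.1, 0.9

def policy(s):
    """Softmax policy over actions for state s."""
    logits = theta[s] - theta[s].max()
    p = np.exp(logits)
    return p / p.sum()

def step(s, a):
    """Toy environment: action 0 is the 'target' course (reward 1),
    action 1 plays the role of a noise course (reward 0)."""
    reward = 1.0 if a == 0 else 0.0
    return reward, int(rng.integers(n_states))

for _ in range(5000):
    s = int(rng.integers(n_states))
    p = policy(s)
    a = int(rng.choice(n_actions, p=p))
    r, s_next = step(s, a)
    # TD error stands in for the sampled return in the policy gradient,
    # which is the variance-reduction idea described in the abstract.
    td_error = r + gamma * V[s_next] - V[s]
    V[s] += alpha_critic * td_error               # critic update
    grad_log = -p                                 # d/dtheta of log softmax
    grad_log[a] += 1.0
    theta[s] += alpha_actor * td_error * grad_log  # actor update

print("P(target action | state 0):", policy(0)[0])
```

With these settings the actor quickly learns to prefer the rewarding action, because the TD error is near zero for the good action once the critic converges and consistently negative for the noise action.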

1. Requirements

To install requirements:

pip install -r requirements.txt

2. Data preparation

The dataset is available at http://moocdata.cn/data/course-recommendation

3. Run

To run the model in the paper, run this command:

python train.py

License

Our code is released under the MIT license.

BibTeX

@Article{electronics12244939,
AUTHOR = {Liang, Kun and Zhang, Guoqiang and Guo, Jinhui and Li, Wentao},
TITLE = {An Actor-Critic Hierarchical Reinforcement Learning Model for Course Recommendation},
JOURNAL = {Electronics},
VOLUME = {12},
YEAR = {2023},
NUMBER = {24},
ARTICLE-NUMBER = {4939},
URL = {https://www.mdpi.com/2079-9292/12/24/4939},
ISSN = {2079-9292},
ABSTRACT = {Online learning platforms provide diverse course resources, but this often results in the issue of information overload. Learners always want to learn courses that are appropriate for their knowledge level and preferences quickly and accurately. Effective course recommendation plays a key role in helping learners select appropriate courses and improving the efficiency of online learning. However, when a user is enrolled in multiple courses, existing course recommendation methods face the challenge of accurately recommending the target course that is most relevant to the user because of the noise courses. In this paper, we propose a novel reinforcement learning model named Actor-Critic Hierarchical Reinforcement Learning (ACHRL). The model incorporates the actor-critic method to construct the profile reviser. This can remove noise courses and make personalized course recommendations effectively. Furthermore, we propose a policy gradient based on the temporal difference error to reduce the variance in the training process, to speed up the convergence of the model, and to improve the accuracy of the recommendation. We evaluate the proposed model using two real datasets, and the experimental results show that the proposed model significantly outperforms the existing recommendation models (improving 3.77% to 13.66% in terms of HR@5).},
DOI = {10.3390/electronics12244939}
}
