Add Dynamic Recurrent Basket Model (DREAM) #590
base: master
Conversation
from tqdm.auto import trange

class Wloss(nn.modules.loss._Loss):
What is Wloss? Could the name be more informative, or is it an acronym used by the authors?
@tqtg I am not quite sure what the acronym Wloss stands for either. The current implementation is based on this implementation: https://github.com/liming-7/A-Next-Basket-Recommendation-Reality-Check/tree/main/methods/dream
The authors of that implementation also point to this repo as the running source code: https://github.com/yihong-chen/DREAM
However, these two implementations differ quite a bit in their naming conventions, and I haven't read through them thoroughly. Let's take some time to check the validity of both implementations and whether they reflect the idea in the paper.
Glancing at the paper, I only see BPR loss being used, with no further details. I think it's better to rely on the original implementation, which contains two variants (BPR and reordered BPR).
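For reference, a minimal BPR loss sketch in PyTorch. The pairwise scoring interface and class name here are assumptions for illustration, not the DREAM authors' code:

```python
import torch.nn as nn
import torch.nn.functional as F

class BPRLoss(nn.Module):
    """Minimal Bayesian Personalized Ranking loss: pushes scores of observed
    (positive) items above scores of sampled (negative) items."""

    def forward(self, pos_scores, neg_scores):
        # pos_scores and neg_scores have the same shape; minimize the
        # negative log-sigmoid of the score margin between the two.
        return -F.logsigmoid(pos_scores - neg_scores).mean()
```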
self.loss_fct = nn.BCELoss()
self.p_loss_fct = Wloss(self.loss_uplift, 1)
self.n_loss_fct = Wloss(1, self.loss_uplift)
self.meta_loss_fct = nn.MSELoss()
I think meta_loss_fct is not used anywhere during training.
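Judging from the constructor calls above (Wloss(self.loss_uplift, 1) and Wloss(1, self.loss_uplift)), Wloss looks like a binary cross-entropy with separate rescaling weights for positive and negative targets. A minimal sketch under that assumption, as a guess at the intent rather than the referenced implementation:

```python
import torch.nn as nn
import torch.nn.functional as F

class Wloss(nn.modules.loss._Loss):
    """Assumed shape of Wloss: BCE with different weights for 1/0 targets."""

    def __init__(self, pos_weight, neg_weight):
        super().__init__()
        self.pos_weight = pos_weight  # weight applied where target == 1
        self.neg_weight = neg_weight  # weight applied where target == 0

    def forward(self, pred, target):
        # Element-wise weights: uplift positive or negative terms depending
        # on how the loss is constructed (p_loss_fct vs. n_loss_fct).
        weights = target * self.pos_weight + (1 - target) * self.neg_weight
        return F.binary_cross_entropy(pred, target, weight=weights)
```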
@lthoang are we actively working on this?
Yes. I will get back to this model next week.
Description
Related Issues
#579
Checklist:
- README.md (if you are adding a new model).
- examples/README.md (if you are adding a new example).
- datasets/README.md (if you are adding a new dataset).