
Answers to some questions about training and sampling #195

Open
MilkTeaAddicted opened this issue Oct 11, 2024 · 10 comments



MilkTeaAddicted commented Oct 11, 2024

I've noticed that many issues here go unanswered and the author isn't really maintaining the repo, so here are my answers based on my own experimental results.

  1. Why are the sampled images completely black?
    Something went wrong during training; most likely the training has not run for enough steps. Train longer.

  2. How many training epochs are appropriate?
    In my experiments, the results start to improve at around 60,000 epochs.

  3. The code has no convergence criterion; how do I stop training?
    You have to stop it manually. The figure from the previous answer is a reasonable stopping reference.

  4. The code saves three models; which one should I sample with?
    Sampling with emasavemodel.pt gives better results (a minimal loading sketch follows this list).
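For item 4, here is a minimal sketch of how one might load the EMA checkpoint before sampling, assuming the usual guided_diffusion helpers this repo is built on; the checkpoint filename is just a placeholder for whatever your own training run wrote, and the defaults should be overridden to match the flags you trained with.

```python
# Minimal sketch (assumed helpers/filenames, not the repo's exact sampling script):
# build the model the same way the scripts do, then load the EMA weights.
import torch

from guided_diffusion.script_util import (
    create_model_and_diffusion,
    model_and_diffusion_defaults,
)

args = model_and_diffusion_defaults()          # override to match your training flags
model, diffusion = create_model_and_diffusion(**args)

# Placeholder path: use whichever EMA checkpoint your run actually saved.
state = torch.load("results/emasavemodel.pt", map_location="cpu")
model.load_state_dict(state)
model.eval()                                   # EMA weights usually sample more cleanly
```

You would then pass this model into the same sampling loop that scripts/segmentation_sample.py uses, rather than the raw (non-EMA) checkpoint.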


WuJunde (Collaborator) commented Oct 11, 2024

Greatly appreciate your contribution.

ErfanNourian commented Oct 30, 2024

@MilkTeaAddicted
Thank you for clarifying. Can you mention which dataset you worked on? I trained on BRATS2020 for 85K steps and still got black samples.

@MilkTeaAddicted (Author)


Hello, my dataset is ISIC2016, trained for 60,000 epochs. The result of integrating multiple sampled images is shown below:
[image]
The images generated on each run are different; the one on the far right is the integrated (fused) result.
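The thread doesn't say how the fused image on the right was produced, so purely as an illustration, here is a hypothetical sketch that fuses several saved sample masks by pixel-wise averaging and thresholding; the file names are placeholders.

```python
# Hypothetical fusion sketch: average several stochastic samples of the same
# input and threshold the mean to get a single mask. File names are placeholders.
import numpy as np
from PIL import Image

sample_paths = ["sample_0.png", "sample_1.png", "sample_2.png", "sample_3.png"]
masks = [np.asarray(Image.open(p).convert("L"), dtype=np.float32) / 255.0
         for p in sample_paths]

mean_mask = np.mean(masks, axis=0)             # per-pixel agreement across runs
fused = (mean_mask > 0.5).astype(np.uint8) * 255

Image.fromarray(fused).save("fused_mask.png")
```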

@2039551625

Hello, could you please help me with this problem? I'd be very grateful.
Traceback (most recent call last):
  File "E:\deep_learning\Segmentation\MedSegDiff-master\scripts\segmentation_sample.py", line 214, in <module>
    main()
  File "E:\deep_learning\Segmentation\MedSegDiff-master\scripts\segmentation_sample.py", line 123, in main
    sample, x_noisy, org, cal, cal_out = sample_fn(
  File "E:\deep_learning\Segmentation\MedSegDiff-master\guided_diffusion\gaussian_diffusion.py", line 565, in p_sample_loop_known
    for sample in self.p_sample_loop_progressive(
  File "E:\deep_learning\Segmentation\MedSegDiff-master\guided_diffusion\gaussian_diffusion.py", line 650, in p_sample_loop_progressive
    out = self.p_sample(
  File "E:\deep_learning\Segmentation\MedSegDiff-master\guided_diffusion\gaussian_diffusion.py", line 444, in p_sample
    out = self.p_mean_variance(
  File "E:\deep_learning\Segmentation\MedSegDiff-master\guided_diffusion\respace.py", line 90, in p_mean_variance
    return super().p_mean_variance(self._wrap_model(model), *args, **kwargs)
  File "E:\deep_learning\Segmentation\MedSegDiff-master\guided_diffusion\gaussian_diffusion.py", line 324, in p_mean_variance
    self._predict_xstart_from_eps(x_t=x, t=t, eps=model_output)
  File "E:\deep_learning\Segmentation\MedSegDiff-master\guided_diffusion\gaussian_diffusion.py", line 348, in _predict_xstart_from_eps
    assert x_t.shape == eps.shape
AssertionError

@MilkTeaAddicted (Author)


I don't quite understand your error; I never hit it on ISIC. I also checked your call stack, and it's the same code path I ran.

@2039551625

I ran sampling on the DRIVE dataset. I tried printing the shapes: x_t is torch.Size([1, 1, 64, 64]) and eps is torch.Size([1, 2, 64, 64]). I also printed the values in eps and found that its two channels are identical, which confuses me.


@MilkTeaAddicted (Author)

Hmm, I haven't done any vessel-segmentation work, but since your eps has shape torch.Size([1, 2, 64, 64]), how about splitting it along the second dimension? Adjusting the input like that shouldn't be much of a problem (rough sketch below).
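To make the suggestion above concrete, here is a rough sketch of splitting the two-channel output and keeping only the first chunk as eps. The diagnosis in the comments (a learned-variance head doubling the output channels, as guided_diffusion does when learn_sigma is enabled) is my assumption, not something verified in this thread; checking that the sampling script uses the same flags as training may be the cleaner fix.

```python
# Rough sketch of the suggested workaround (shapes taken from the thread).
# If the network outputs 2*C channels while x_t has C, split along dim=1 and
# keep the first chunk as eps; in guided_diffusion the second chunk is usually
# the learned-variance head when learn_sigma is enabled.
import torch as th

model_output = th.randn(1, 2, 64, 64)          # stand-in for the network output
x_t = th.randn(1, 1, 64, 64)

C = x_t.shape[1]
eps, var_values = th.split(model_output, C, dim=1)
assert x_t.shape == eps.shape                  # the check in _predict_xstart_from_eps now holds
```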

