Questions about the new version #103
Comments
Yeah, I have the same problem as you. Another question: the paper says the attention mechanism uses layer norm, but I found that the code actually uses instance norm.
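To make the norm observation concrete, here is a minimal numpy sketch (not the repository's code) of the difference for an (N, C, H, W) feature map: layer norm, in the UNet-style sense, normalizes each sample over all of (C, H, W), while instance norm normalizes each (sample, channel) slice over the spatial dims only. The shapes are illustrative assumptions.

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # normalize each sample over all non-batch dims (C, H, W)
    mu = x.mean(axis=(1, 2, 3), keepdims=True)
    var = x.var(axis=(1, 2, 3), keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def instance_norm(x, eps=1e-5):
    # normalize each (sample, channel) slice over spatial dims (H, W) only
    mu = x.mean(axis=(2, 3), keepdims=True)
    var = x.var(axis=(2, 3), keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

x = np.random.randn(2, 4, 8, 8)
# Instance norm zeroes every per-(sample, channel) mean; layer norm does not.
print(np.abs(instance_norm(x).mean(axis=(2, 3))).max() < 1e-6)  # True
```

So checking which axes the normalization statistics are computed over is one quick way to tell which norm a given module really implements.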
+1 I also noticed the two problems above. FFParser is removed by default in the new version, which does not match the flow chart in the MedSegDiff-V2 paper. I would like to know the reason.
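For readers unfamiliar with the module being discussed: as described in the MedSegDiff paper, the FF-Parser filters a feature map in the frequency domain. The sketch below is an assumption-laden illustration of that idea, not the repository's implementation; in practice the weight map is a learnable parameter rather than the placeholder used here.

```python
import numpy as np

def ff_parser(x, weight):
    # x: (C, H, W) feature map; weight: same shape, learnable in practice
    freq = np.fft.fft2(x, axes=(-2, -1))      # spatial FFT per channel
    freq = freq * weight                      # frequency-domain modulation
    out = np.fft.ifft2(freq, axes=(-2, -1))   # back to the spatial domain
    return out.real

x = np.random.randn(4, 16, 16)
w = np.ones_like(x)                           # identity filter for the demo
y = ff_parser(x, w)
print(np.allclose(y, x))  # True: an all-ones map leaves the features unchanged
```

With a learned `weight`, the module can attenuate or amplify specific frequency bands, which is why its removal in the new code is a visible architectural change, not just a refactor.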
Hello, I found that the v2 SS-Former does not seem to be in the code. The SS-Former in the v2 paper takes two inputs, yet the qkv attention modules in the code are all single-input. Have you found where the SS-Former module is?
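The signature mismatch the comment describes can be sketched as follows: a single-input (self) attention derives Q, K, and V from one tensor, while a two-input (cross) attention, as the SS-Former is drawn in the v2 paper, takes queries from one stream and keys/values from another. Shapes and projections below are illustrative assumptions, not the repo's code.

```python
import numpy as np

def softmax(a, axis=-1):
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # scaled dot-product attention over token rows
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def self_attention(x):
    return attention(x, x, x)              # one input: Q, K, V from x

def cross_attention(x, context):
    return attention(x, context, context)  # two inputs: K, V from context

x = np.random.randn(10, 32)     # e.g. tokens from one stream
ctx = np.random.randn(20, 32)   # e.g. tokens from the other stream
print(self_attention(x).shape)        # (10, 32)
print(cross_attention(x, ctx).shape)  # (10, 32)
```

A quick grep for attention `forward` methods taking two tensors (versus one) is therefore an easy way to check whether the two-stream block from the paper made it into the code.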
I have the same problem. |
Hello author, the old version of MedSegDiff used the FF transform, but the UNet you rewrote no longer has it. Is that because the original did not work well? Then the code no longer corresponds to the paper; may I ask the reason?