
Add sync data for to_global function #10433

Open
wants to merge 2 commits into
base: master
Conversation

hanwen-sun

Add a sync_data parameter to the to_global function. When calling from Python, sync_data can be set to true or false, with a default of true. Correspondingly, add a sync_data parameter to the local_to_global function, and update the corresponding docstrings.
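A minimal sketch of the intended call pattern. The stand-in `to_global` below only mimics the signature change described in this PR (a keyword-only `sync_data` defaulting to `True`); it is not the real `oneflow.Tensor.to_global`, and the `placement`/`sbp` parameters are included only because the existing API takes them.

```python
# Hypothetical stand-in illustrating the new keyword described in the PR;
# not the real oneflow implementation.
def to_global(local_data, placement=None, sbp=None, *, sync_data=True):
    """Return a dict standing in for a global tensor.

    When sync_data is True (the default), the real implementation would
    synchronize the local data across ranks before building the global
    tensor; False skips that synchronization.
    """
    return {"data": local_data, "synced": sync_data}

# Default keeps the pre-existing behavior (data is synchronized).
t = to_global([1, 2, 3])
print(t["synced"])  # True

# Callers that already know their data is consistent can opt out.
t = to_global([1, 2, 3], sync_data=False)
print(t["synced"])  # False
```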

@hanwen-sun hanwen-sun requested a review from clackhan February 6, 2024 08:13
@hanwen-sun hanwen-sun marked this pull request as ready for review February 6, 2024 08:20
@hanwen-sun hanwen-sun changed the title Add sync data for to global function Add sync data for to_global function Feb 6, 2024
github-actions bot commented Feb 7, 2024

Speed stats:
GPU Name: NVIDIA GeForce RTX 3080 Ti 

❌ OneFlow resnet50 time: 43.6ms (= 4359.1ms / 100, input_shape=[16, 3, 224, 224])
PyTorch resnet50 time: 57.8ms (= 5776.1ms / 100, input_shape=[16, 3, 224, 224])
✔️ Relative speed: 1.33 (= 57.8ms / 43.6ms)

OneFlow resnet50 time: 26.6ms (= 2655.9ms / 100, input_shape=[8, 3, 224, 224])
PyTorch resnet50 time: 37.5ms (= 3753.8ms / 100, input_shape=[8, 3, 224, 224])
✔️ Relative speed: 1.41 (= 37.5ms / 26.6ms)

OneFlow resnet50 time: 19.0ms (= 3790.2ms / 200, input_shape=[4, 3, 224, 224])
PyTorch resnet50 time: 36.2ms (= 7235.1ms / 200, input_shape=[4, 3, 224, 224])
✔️ Relative speed: 1.91 (= 36.2ms / 19.0ms)

OneFlow resnet50 time: 17.8ms (= 3561.8ms / 200, input_shape=[2, 3, 224, 224])
PyTorch resnet50 time: 31.7ms (= 6330.7ms / 200, input_shape=[2, 3, 224, 224])
✔️ Relative speed: 1.78 (= 31.7ms / 17.8ms)

OneFlow resnet50 time: 17.3ms (= 3451.9ms / 200, input_shape=[1, 3, 224, 224])
PyTorch resnet50 time: 29.0ms (= 5792.2ms / 200, input_shape=[1, 3, 224, 224])
✔️ Relative speed: 1.68 (= 29.0ms / 17.3ms)

OneFlow swin dataloader time: 0.200s (= 40.028s / 200, num_workers=1)
PyTorch swin dataloader time: 0.128s (= 25.597s / 200, num_workers=1)
Relative speed: 0.639 (= 0.128s / 0.200s)

OneFlow swin dataloader time: 0.059s (= 11.782s / 200, num_workers=4)
PyTorch swin dataloader time: 0.032s (= 6.441s / 200, num_workers=4)
Relative speed: 0.547 (= 0.032s / 0.059s)

OneFlow swin dataloader time: 0.032s (= 6.302s / 200, num_workers=8)
PyTorch swin dataloader time: 0.017s (= 3.315s / 200, num_workers=8)
Relative speed: 0.526 (= 0.017s / 0.032s)

❌ OneFlow resnet50 time: 49.1ms (= 4909.8ms / 100, input_shape=[16, 3, 224, 224], ddp, world size=2)
PyTorch resnet50 time: 65.8ms (= 6579.5ms / 100, input_shape=[16, 3, 224, 224], ddp, world size=2)
✔️ Relative speed: 1.34 (= 65.8ms / 49.1ms)

OneFlow resnet50 time: 36.7ms (= 3672.3ms / 100, input_shape=[8, 3, 224, 224], ddp, world size=2)
PyTorch resnet50 time: 46.7ms (= 4672.6ms / 100, input_shape=[8, 3, 224, 224], ddp, world size=2)
✔️ Relative speed: 1.27 (= 46.7ms / 36.7ms)

OneFlow resnet50 time: 28.3ms (= 5651.1ms / 200, input_shape=[4, 3, 224, 224], ddp, world size=2)
PyTorch resnet50 time: 39.8ms (= 7950.2ms / 200, input_shape=[4, 3, 224, 224], ddp, world size=2)
✔️ Relative speed: 1.41 (= 39.8ms / 28.3ms)

OneFlow resnet50 time: 25.0ms (= 4998.9ms / 200, input_shape=[2, 3, 224, 224], ddp, world size=2)
PyTorch resnet50 time: 38.7ms (= 7744.2ms / 200, input_shape=[2, 3, 224, 224], ddp, world size=2)
✔️ Relative speed: 1.55 (= 38.7ms / 25.0ms)

OneFlow resnet50 time: 24.0ms (= 4792.2ms / 200, input_shape=[1, 3, 224, 224], ddp, world size=2)
PyTorch resnet50 time: 36.1ms (= 7222.9ms / 200, input_shape=[1, 3, 224, 224], ddp, world size=2)
✔️ Relative speed: 1.51 (= 36.1ms / 24.0ms)
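The figures in the bot's report follow a simple pattern: per-iteration time is total time divided by iteration count, and relative speed is the PyTorch time divided by the OneFlow time (so values above 1 favor OneFlow). The arithmetic for the first resnet50 line can be checked directly:

```python
# Reproduce the arithmetic behind one line of the speed stats above
# (OneFlow 4359.1 ms / 100 iters vs. PyTorch 5776.1 ms / 100 iters).
def per_iter_ms(total_ms, iters):
    return total_ms / iters

oneflow_ms = per_iter_ms(4359.1, 100)
pytorch_ms = per_iter_ms(5776.1, 100)
relative = pytorch_ms / oneflow_ms

print(round(oneflow_ms, 1))  # 43.6
print(round(pytorch_ms, 1))  # 57.8
print(round(relative, 2))    # 1.33
```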
